The Bedrock of Fairness

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T06:00:14.000Z · LW · GW · Legacy · 103 comments

Followup to: The Moral Void

Three people, whom we'll call Xannon, Yancy and Zaire, are separately wandering through the forest; by chance, they happen upon a clearing, meeting each other.  Introductions are performed.  And then they discover, in the center of the clearing, a delicious blueberry pie.

Xannon:  "A pie!  What good fortune!  But which of us should get it?"

Yancy:  "Let us divide it fairly."

Zaire:  "I agree; let the pie be distributed fairly.  Who could argue against fairness?"

Xannon:  "So we are agreed, then.  But what is a fair division?"

Yancy:  "Eh?  Three equal parts, of course!"

Zaire:  "Nonsense!  A fair distribution is half for me, and a quarter apiece for the two of you."

Yancy:  "What?  How is that fair?"

Zaire:  "I'm hungry, therefore I should be fed; that is fair."

Xannon:  "Oh, dear.  It seems we have a dispute as to what is fair.  For myself, I want to divide the pie the same way as Yancy.  But let us resolve this dispute over the meaning of fairness, fairly: that is, giving equal weight to each of our desires.  Zaire desires the pie to be divided {1/4, 1/4, 1/2}, and Yancy and I desire the pie to be divided {1/3, 1/3, 1/3}.  So the fair compromise is {11/36, 11/36, 14/36}."

Zaire:  "What?  That's crazy.  There's two different opinions as to how fairness works—why should the opinion that happens to be yours, get twice as much weight as the opinion that happens to be mine?  Do you think your theory is twice as good?  I think my theory is a hundred times as good as yours!  So there!"

Yancy:  "Craziness indeed.  Xannon, I already took Zaire's desires into account in saying that he should get 1/3 of the pie.  You can't count the same factor twice.  Even if we count fairness as an inherent desire, why should Zaire be rewarded for being selfish?  Think about which agents thrive under your system!"

Xannon:  "Alas!  I was hoping that, even if we could not agree on how to distribute the pie, we could agree on a fair resolution procedure for our dispute, such as averaging our desires together.  But even that hope was dashed.  Now what are we to do?"

Yancy:  "Xannon, you are overcomplicating things.  1/3 apiece.  It's not that complicated.  A fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on.  What if we'd all been raised in a society that believed that men should get twice as much pie as women?  Then we would split the pie unevenly, and even though no one of us disputed the split, it would still be unfair."

Xannon:  "What?  Where is this 'fairness' stored if not in human minds?  Who says that something is unfair if no intelligent agent does so?  Not upon the stars or the mountains is 'fairness' written."

Yancy:  "So what you're saying is that if you've got a whole society where women are chattel and men sell them like farm animals and it hasn't occurred to anyone that things could be other than they are, that this society is fair, and at the exact moment where someone first realizes it shouldn't have to be that way, the whole society suddenly becomes unfair."

Xannon:  "How can a society be unfair without some specific party who claims injury and receives no reparation?  If it hasn't occurred to anyone that things could work differently, and no one's asked for things to work differently, then—"

Yancy:  "Then the women are still being treated like farm animals and that is unfair.  Where's your common sense?  Fairness is not agreement, fairness is symmetry."

Zaire:  "Is this all working out to my getting half the pie?"

Yancy:  "No."

Xannon:  "I don't know... maybe as the limit of an infinite sequence of meta-meta-fairnesses..."

Zaire:  "I fear I must accord with Yancy on one point, Xannon; your desire for perfect accord among us is misguided.  I want half the pie.  Yancy wants me to have a third of the pie.  This is all there is to the world, and all there ever was.  If two monkeys want the same banana, in the end one will have it, and the other will cry morality.  Who gets to form the committee to decide the rules that will be used to determine what is 'fair'?  Whoever it is, got the banana."

Yancy:  "I wanted to give you a third of the pie, and you equate this to seizing the whole thing for myself?  Small wonder that you don't want to acknowledge the existence of morality—you don't want to acknowledge that anyone can be so much less of a jerk."

Xannon:  "You oversimplify the world, Zaire.  Banana-fights occur across thousands and perhaps millions of species, in the animal kingdom.  But if this were all there was, Homo sapiens would never have evolved moral intuitions.  Why would the human animal evolve to cry morality, if the cry had no effect?"

Zaire:  "To make themselves feel better."

Yancy:  "Ha!  You fail at evolutionary biology."

Xannon:  "A murderer accosts a victim, in a dark alley; the murderer desires the victim to die, and the victim desires to live.  Is there nothing more to the universe than their conflict?  No, because if I happen along, I will side with the victim, and not with the murderer.  The victim's plea crosses the gap of persons, to me; it is not locked up inside the victim's own mind.  But the murderer cannot obtain my sympathy, nor incite me to help murder.  Morality crosses the gap between persons; you might not see it in a conflict between two people, but you would see it in a society."

Yancy:  "So you define morality as that which crosses the gap of persons?"

Xannon:  "It seems to me that social arguments over disputed goals are how human moral intuitions arose, beyond the simple clash over bananas.  So that is how I define the term."

Yancy:  "Then I disagree.  If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."

Zaire:  "And the murderer says, 'I am in the right, you are in the wrong'.  So what?"

Xannon:  "How does your statement that you are in the right, and the murderer is in the wrong, impinge upon the universe—if there is no one else present to be persuaded?"

Yancy:  "It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right.  I can distinguish between things I merely want, and things that are right—though alas, I do not always live up to my own standards.  The murderer is blind to the morality, perhaps, but that doesn't change the morality.  And if we were both blind, the morality still would not change."

Xannon:  "Blind?  What is being seen, what sees it?"

Yancy:  "You're trying to treat fairness as... I don't know, something like an array-mapped 2-place function that goes out and eats a list of human minds, and returns a list of what each person thinks is 'fair', and then averages it together.  The problem with this isn't just that different people could have different ideas about fairness.  It's not just that they could have different ideas about how to combine the results.  It's that it leads to infinite recursion outright—passing the recursive buck.  You want there to be some level on which everyone agrees, but at least some possible minds will disagree with any statement you make."

Xannon:  "Isn't the whole point of fairness to let people agree on a division, instead of fighting over it?"

Yancy:  "What is fair is one question, and whether someone else accepts that this is fair is another question.  What is fair?  That's easy: an equal division of the pie is fair.  Anything else won't be fair no matter what kind of pretty arguments you put around it.  Even if I gave Zaire a sixth of my pie, that might be a voluntary division but it wouldn't be a fair division.  Let fairness be a simple and object-level procedure, instead of this infinite meta-recursion, and the buck will stop immediately."

Zaire:  "If the word 'fair' simply means 'equal division' then why not just say 'equal division' instead of this strange additional word, 'fair'?  You want the pie divided equally, I want half the pie for myself.  That's the whole fact of the matter; this word 'fair' is merely an attempt to get more of the pie for yourself."

Xannon:  "If that's the whole fact of the matter, why would anyone talk about 'fairness' in the first place, I wonder?"

Zaire:  "Because they all share the same delusion."

Yancy:  "A delusion of what?  What is it that you are saying people think incorrectly the universe is like?"

Zaire:  "I am under no obligation to describe other people's confusions."

Yancy:  "If you can't dissolve their confusion, how can you be sure they're confused?  But it seems clear enough to me that if the word fair is going to have any meaning at all, it has to finally add up to each of us getting one-third of the pie."

Xannon:  "How odd it is to have a procedure of which we are more sure of the result than the procedure itself."

Zaire:  "Speak for yourself."

 

Part of The Metaethics Sequence

Next post: "Moral Complexities"

Previous post: "Created Already In Motion"

103 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Doug_S. · 2008-07-03T06:20:11.000Z · LW(p) · GW(p)

That is a very interesting dialogue.

It does not seem to come to any definite conclusion, instead simply presenting arguments and leaving the three participants in the dialogue with beliefs that are largely unchanged from their original position.

I am unable to come up with anything of substance to add, other than praise, but I feel compelled to comment anyway.

comment by Roland2 · 2008-07-03T06:27:43.000Z · LW(p) · GW(p)

At first I tend to side with Zaire. The pie should be divided according to everyone's needs. But what if Zaire has a bigger body and generally needs to eat more? Should he always get more? Should the others receive less and be penalized because Zaire happens to be bigger? This is not easy, sigh...

comment by Nanani2 · 2008-07-03T06:28:52.000Z · LW(p) · GW(p)

Does the point of the story have anything to do with the object of desire switching from a pie to a cake and back again?

Replies from: DSimon
comment by DSimon · 2010-12-16T05:36:52.621Z · LW(p) · GW(p)

The cake is a... well, you know.

Replies from: GLaDOS
comment by GLaDOS · 2011-05-13T20:33:36.806Z · LW(p) · GW(p)

Does the point of the story have anything to do with the object of desire switching from a pie to a cake and back again?

He realized that anyway, this cake is great. It's so delicious and moist.

Replies from: Alicorn
comment by Alicorn · 2011-05-13T20:44:27.960Z · LW(p) · GW(p)

Why are you still talking? There's science to do.

Replies from: GLaDOS
comment by GLaDOS · 2011-05-13T20:52:56.300Z · LW(p) · GW(p)

And make a neat gun?

I wish I could. I did not have access to enough qualified test subjects. But LW has plenty of people who are still alive so I'm GLaD to be here!

comment by Eneasz · 2008-07-03T06:29:29.000Z · LW(p) · GW(p)

I'm about to make a naked assertion with nothing to back it up, just to put it out there.

The purpose of morality is to prevent such an argument from ever occurring. If the moral engine of society is working correctly, then all its members will have a desire for everyone to get an equally sized portion of the pie (in this example). If there is a Zaire who believes he should get 1/2 of the pie, then there was a malfunction when morality was being programmed into him. This malfunction will lead to conflict.

View it like you would view programming a friendly AI. The purpose is to program the AI with desires that will motivate it to help humanity, and to have a strong aversion to destroying humanity. If this goal is not reached, there was a failure by the programmers. I think it's been said on this blog that if you create an AI without having made it friendly you've already lost and the game is over. It's not quite as drastic if you fail with humans, but the principle is the same. If a friendly Human Intelligence is not programmed with the desires that will help to keep humanity thriving, then there was a failure by its programmers (parents/society/teachers/whoever).

Why is it that human morality is this confusing and mysterious realm that no one seems to be able to fathom when AI morality is straightforward? Is it just that humans can easily see the goal of one (an AI that desires to help rather than hurt humanity) and for some reason can't see the goal of the other (a human that desires to help rather than hurt humanity)?

Replies from: taryneast
comment by taryneast · 2011-07-24T17:12:32.314Z · LW(p) · GW(p)

I think it's more complex than that.

Zaire's argument is that some people actually need more of "the pie" than others. Equal portions aren't necessarily fair, in that situation.

For example: would it be fair if every person on the globe got an equal portion of diabetic insulin? No, obviously not. We disproportionately give insulin to diabetics, because that is fairer than distributing it equally amongst all people (regardless of their health situation).

The disagreement here is between two perfectly understandable concepts of fairness. Both of them make sense in different ways. I see no easy solution to this myself.

Replies from: Brilliand
comment by Brilliand · 2015-08-11T00:36:49.654Z · LW(p) · GW(p)

Diabetics pay for their insulin. If someone needs more resources than others do, they need to earn those extra resources in some way.

Replies from: taryneast
comment by taryneast · 2015-08-15T02:14:20.178Z · LW(p) · GW(p)

I'd lay a high likelihood that you have quite a few more advantages than the kind of person I'm thinking of. You probably have your fair number of disadvantages too, but you've (through being lucky enough to have good health, intelligence, time and/or money for education and maybe good friends/family for support) been able to overcome those "on your own" (except for the aforementioned support)... which means you are categorically not the kind of person I'm thinking of when I am talking about people that need more support than others.

Some people need extra, and those people do try to pay for their extra... but even so... some of them will still not be able to, due to circumstances that aren't their fault.

Do you condemn them to death?

Replies from: Brilliand
comment by Brilliand · 2015-08-18T17:31:55.690Z · LW(p) · GW(p)

At least in some cases, yes. I don't agree with the "every sentient mind has value" view that's so common around here; sentient minds are remarkably easy to create, using the reproduction method. Giving a share of resources to every human according to their needs rewards producing as many children as possible, and not caring if they're a net drain on resources. I would prefer to reward a K-selection strategy, rather than an r-selection strategy.

The various advantages you list aren't simply a matter of chance; they're things I have because my parents earned the right to have children who live.

Replies from: taryneast
comment by taryneast · 2015-09-07T05:46:24.745Z · LW(p) · GW(p)

"sentient minds are remarkably easy to create"

I'm not sure I agree with this. It takes quite a lot of resources (time, energy etc) to create sentient minds at present... certainly to bring them to any reasonable state of maturity. After which, the people that put that time and effort in quite often get very attached to that new sentient mind - even if that mind is not a net-productive citizen.

The strategy that you choose to follow in how to divide up resources to sentient minds may be based on what you perceive to be their net-productivity... and maybe you feel a strong need to push your ideas on others as "oughts" that you think they should follow (eg that people ought to earn every resource themselves)... but it's pretty clear that other people are following other strategies than your preferred one.

As a counter-example, a very large number of people (not including myself here) follow that old adage of "from each according to his abilities, to each according to his needs", which is just about the exact opposite of your own.

Replies from: Brilliand, Brilliand
comment by Brilliand · 2015-09-09T23:01:57.579Z · LW(p) · GW(p)

It's a lot of resources from the perspective of a single person, but I was thinking at a slightly larger scale. By "easy", I mean that manageable groups of people can do it repeatedly and be confident of success. Really, the fact that sentient minds can be valued in terms of resources at all is sufficient for my argument. (That value can then be ignored when assessing productivity, as it's a sunk cost.)

You seem to be looking in the wrong place with your "that people ought to earn every resource themselves" example - my opinion is that the people who have resources should not give those resources to people who won't make good use of them. That the people who lack resources will then have to earn them if they're to survive is an unavoidable consequence of that (and is my real goal here), but those aren't the people that I think ought to be changing things.

As for what strategies people actually follow, I think most people do what I'm saying they should do, on an individual level. Most people protect their resources, and share them only with those who they expect to be able to return the favor. On the group level, though, people lose track of how much things actually cost, and support things like welfare that help people regardless of whether they're worth the cost of keeping alive.

Replies from: taryneast, CCC
comment by taryneast · 2015-09-10T00:41:56.411Z · LW(p) · GW(p)

"whether they're worth the cost of keeping alive." and this highlights the differences in our views.

our point of difference is in this whole basis of using practical "worth" as The way of deciding whether or not a person should live/die.

I can get trying to minimise the birth of new people that are net-negative contributors to the world... but from my perspective, once they are born - it's worth putting some effort into supporting them.

Why? Because it's not their fault they were born the way they are, and they should not be punished because of that. They need help to get along.

Sometimes - the situation that put them in their needy state occurred after they were born - and again is still not their fault.

Another example to point out why I feel your view is unfair to people: Imagine somebody who has worked all their lives in an industry that has given amazing amounts of benefit to the world.. but has only just now become obsolete. That person is now unemployed and, due to being near retirement age, unemployable. It's an industry in which they were never really paid very well, and their savings don't add up to enough to cover their ongoing living costs for very long.

Eventually, there will come a time when the savings run out and this person dies of starvation without our help.

I consider this not to be a fair situation, and I'd rather my tax-dollars went to helping this person live a bit longer, than go to the next unnecessary-war (drummed up to keep the current pollies in power).

Replies from: nyralech, Brilliand, Brilliand
comment by nyralech · 2015-09-10T04:18:14.752Z · LW(p) · GW(p)

I consider this not to be a fair situation, and I'd rather my tax-dollars went to helping this person live a bit longer, than go to the next unnecessary-war (drummed up to keep the current pollies in power).

I think this shows the underlying problem. You would also rather have all your tax money go to give a cute little puppy more food than it will ever need, simply because war is a terrible alternative.

But that doesn't mean it's the best thing you can do with your money, or even anywhere near that standard. And neither is, one could argue, giving money to an obsolete person in a country where the cost of living is very high compared to other countries in the world.

Replies from: taryneast
comment by taryneast · 2015-09-10T06:38:19.324Z · LW(p) · GW(p)

If I were magically put in charge of distributing the next year's federal budget - I would still allocate resources to domestic welfare (supporting others that, through no fault of their own, have fallen on times of hardship), even though a larger portion went to foreign aid.

comment by Brilliand · 2015-09-14T23:08:36.991Z · LW(p) · GW(p)

When someone is born who is a net-negative contributor to the world... it was their parents' doing. They carry their parents' genes; it's a very appropriate punishment for their parents' misdeed to let the child die. It comes very close to being a direct reversal of the original mistake, in fact.

It does sometimes happen that someone otherwise capable of being productive is accidentally stripped of their resources, and ideally they should get some help to get back on their feet - this seems like an ideal use case for a loan. In general, someone will have to make the call that they're worth saving, and I do grant that some people in dire straits are worth saving.

In your example of the old man, it appears to me that he was cheated earlier in life; you postulate that he actually produced a very great benefit to others, and it seems to me that he deserves to have a very great amount of money to show for it. Without government support, he might still have friends to fall back on... if not, then this is clearly a case where welfare does some good, but it doesn't come close to reversing the injustice here. I see the benefit of welfare in this case as mostly accidental, and would prefer that something more targeted be done to repay him, while recognizing his actual contribution.

I just took a brief look at current U.S. welfare law, and it looks like there are some provisions in there to exclude the most obvious cases of people who don't deserve support (able-bodied people who aren't even trying to be productive).

Replies from: CCC, gjm
comment by CCC · 2015-09-15T07:23:49.195Z · LW(p) · GW(p)

When someone is born who is a net-negative contributor to the world...

How do you define this? I can name a number of people throughout history who would have used heuristics here that I vehemently disagree with...

comment by gjm · 2015-09-15T14:57:38.729Z · LW(p) · GW(p)

it's a very appropriate punishment for their parents' misdeed to let the child die

Doesn't it strike you that that's not very fair to the child?

For that matter, it's not remotely fair to the parents either; productivity is not solely determined by parents' genes plus upbringing, still less by what the parents can know about their genes plus upbringing. Consider, for instance, the following scenarios. In all of them, by "net positive contribution" I mean "net positive economic contribution", which I'm pretty sure is what you have in mind by that phrase.

  • Two intelligent and hardworking people have a child. The child loses the genetic lottery and ends up much less intelligent than average, or suffers from some condition that greatly reduces her capacity for work. (Perhaps both parents had a very harmful recessive gene, but didn't know it.) She is not able to make a "net positive contribution" to the world.
  • Two highly productive people have a child. Between the child's conception and adulthood, society changes (e.g., because of technological innovation) in such a way that the sort of work that made the parents highly productive is no longer viable; maybe machines can do it so much better that no one will employ humans to do it. It turns out that the parents were decidedly sub-average in other ways, and the child is too. He is not able to make a "net positive contribution" to the world.

Would you say that these children deserve to die because of their parents' misdeeds in having them? This seems to me an absolutely untenable position; it requires you to hold

  • that having an economically unproductive child is a crime deserving terrible punishment
  • that this applies even if you had no good reason to think your child would be economically unproductive
  • that the fact that this punishment involves the death penalty for the child is not a problem

all of which seem absurd.

A world run the way you seem to prefer would not be one I would want to live in.

comment by Brilliand · 2015-09-15T21:50:10.638Z · LW(p) · GW(p)

I've just made the unpleasant discovery that being downvoted to -4 makes it impossible to reply to those who replied to me (or to edit my comment). I'll state for the record that I disagree with that policy... and proceed to shut up.

Replies from: Lumifer, CCC
comment by Lumifer · 2015-09-16T00:31:34.858Z · LW(p) · GW(p)

being downvoted to -4 makes it impossible to reply to those who replied to me

It's quite possible, only requiring payment in your own karma points. If you're karma-broke, well....

Replies from: Brilliand
comment by Brilliand · 2015-09-28T19:25:41.254Z · LW(p) · GW(p)

Seeing as how what I was saying was basically "let the poor starve", this ending seems strangely appropriate.

comment by CCC · 2015-09-16T09:47:02.414Z · LW(p) · GW(p)

It's not impossible, you'd just need to pay 5 karma per reply.

...you'd need to have 5 karma to pay, first. You should be able to pick that up by making positive, helpful contributions to discussion on this site.

comment by CCC · 2015-09-10T08:41:33.312Z · LW(p) · GW(p)

my opinion is that the people who have resources should not give those resources to people who won't make good use of them.

When widely applied, this principle tends to lead to trouble. It's a very small intuitive step from this to "people who aren't making good use of their own resources should have them taken away and given to someone who will make better use of them" and that is, in turn, a very small step away from "resources shouldn't be wasted on anyone too elderly to be employed".

Now, I'm not saying that's where you're going with this. It's just that that's close enough to what you said that it's probably something you'd want to specifically avoid.

Replies from: Lumifer
comment by Lumifer · 2015-09-10T14:22:49.765Z · LW(p) · GW(p)

It's a very small intuitive step from this to "people who aren't making good use of their own resources should have them taken away and given to someone who will make better use of them"

That step doesn't look small to me, specifically because it leaps over the rather large concept of property.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-09-10T15:38:13.577Z · LW(p) · GW(p)

We pretty much do this already (outside of a few nations like New Zealand), and it doesn't lead to trouble at all, though some people complain about it (and if they recognized exactly what was going on, the number of people complaining about it would probably rise dramatically).

Property taxes rise with land values, which are proportional to the value of resources. If you're not making good use of your resources, you can't cover property taxes, and you have to sell the property. The only people who will buy it are those who think they can make sufficient use of the resources to cover the sale price, in addition to property taxes going forward.

Replies from: Lumifer
comment by Lumifer · 2015-09-10T17:00:13.189Z · LW(p) · GW(p)

We pretty much do this already

Not quite. Imposing some cost to own certain things is not the same as "should have them taken away".

Yes, I understand that you can construct a continuous spectrum from a small fee to "it's cheaper for you to give it away rather than pay the tax", but I feel that in practice the distance is great.

comment by Brilliand · 2015-09-09T23:02:04.129Z · LW(p) · GW(p)

[I've written two different responses to your comment. This one is more true to my state of mind when I wrote the comment you replied to.]

Consider this: a man gets a woman pregnant, the man leaves. The woman carries the child to birth, hands it over to an adoption agency. Raising the child to maturity is now someone else's problem, but it has those parents' genes. I do not want this to be a viable strategy. If some people choose this strategy, that only makes it more important to stop letting them cheat.

comment by Paul_Gowder · 2008-07-03T06:33:53.000Z · LW(p) · GW(p)

What's the point?

You realize, incidentally, that there's a huge literature in political philosophy about what procedural fairness means. Right? Right?

comment by Tiiba2 · 2008-07-03T06:40:41.000Z · LW(p) · GW(p)

Eneasz: You say that Zaire is broken. What broke him, though, was the fact that he hasn't eaten a dew drop in a week. Hunger does weird things to people, cut him some slack.

comment by Nick_Tarleton · 2008-07-03T06:42:56.000Z · LW(p) · GW(p)

It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right.

Licenses relative to what authority? Himself, I presume. Of course the murderer would say the same.

Blind? What is being seen, what sees it?

Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it (at least if ve's human or similar; a paperclip maximizer won't care).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T07:18:25.000Z · LW(p) · GW(p)

Nanani, I think it has more to do with my having just finished Portal. Fixed.

comment by Eneasz · 2008-07-03T07:21:08.000Z · LW(p) · GW(p)

Tiiba - Sure, I got no problem with that. There are often extenuating circumstances which change how any particular interaction occurs. However, that was not the case presented in this hypothetical. :) As a baseline that everyone should start with (and work forward from), an equally sized portion for all is the ideal, as it will lead to the least conflict.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T07:25:38.000Z · LW(p) · GW(p)

Gowder, I'm talking to the people who say unto me, "Friendly to who?" and "Oh, so you get to say what 'Friendly' means." I find that the existing literature rarely serves my purposes. In this case I'm driving at a distinction between the object level and the meta level, and the notion of bedrock (the Buck Stops Immediately). Does the political philosophy go there? - for I am not wholly naive, but of course I have only read a tiny fraction of what's out there. I fear that much political philosophy is written for humans by humans.

Roland, I confess that I'd intended the original reading of the dialogue as simple greed on Zaire's part, but your reading is also interesting... I would still tend to say that 1/3 apiece is the fair division, but that either of the other two are welcome to donate portions of pie to Zaire; the resulting division might perhaps be termed utilitarian. The morally interesting situation is when Xannon thinks Zaire deserves more pie but Yancy disagrees.

Replies from: Academian
comment by Academian · 2012-12-30T21:28:59.732Z · LW(p) · GW(p)

I would still tend to say that 1/3 apiece is the fair division

I'm curious why you personally just chose to use the norm-connoting term "fair" in place of the less loaded term "equal division" ... what properties does equal division have that make you want to give it special normative consideration? I could think of some, but I'm particularly interested in what your thoughts are here!

comment by Lewis_Powell · 2008-07-03T07:47:34.000Z · LW(p) · GW(p)

Eliezer, what if they are all poisoned, and the only antidote is a full blueberry pie? Is the obvious fair division still 1/3 to each?

What if only one is poisoned? Is it fair for the other two to get some of the (delicious) antidote?

comment by Anonymous38 · 2008-07-03T07:50:29.000Z · LW(p) · GW(p)

A bit of unfairness is acceptable, if that is needed to get us all back to fairness. Example: Zaire should get a bigger piece of pie if they are on a lifeboat and if he is the only one who can row the boat back ashore, and needs some extra carbs to do that. Xannon and Yancy should agree that this is a useful distribution in this context.

comment by Leif · 2008-07-03T07:59:32.000Z · LW(p) · GW(p)

This is not a cultural argument per se.

Say x and y come from, respectively, a tribe of quasi-eugenicists that settles distributions based on "fitness" rankings (using something like IQ - probably largely arbitrary - but that doesn't matter), and a tribe of equal-sharers (who subscribe to y's conclusion in the dialogue). Within each culture the relevant version of "fairness" (or the 'core distributive principle') is intuitive, much like y's system is for us. In the x culture people with low rankings intuit that their superiors are 'entitled' to their larger share, and in fact this reinforces a strictly tiered society with little to no concept(s) of equality (sure there would be squabbles between the closely ranked - but the distinction between low and high would be clear). Their philosophers do speculate on other systems - but, barring the occasional sociopath, people typically retreat to the same intuition. Thus both societies largely avoid the recursion problem. So now what happens when x and y stumble upon the sylvan pastry?

Of course y might not signal the relevant information pertaining to an xish fitness ranking, especially if the ranking system doesn't have anything to do with appearance. So x might be momentarily confused. But, applying his intuitions, he will probably attempt to recreate whatever routines and evaluations the xers use to establish distribution (just as y applies his familiar calculus). The point is: there will be an argument. And as long as this isn't a survival situation, it's difficult to see any variety of bedrock within walking distance.

The Xers and Yers are radically different - but similar enough, I think, to be included within the space of possible human cultures (history is replete with every flavor of hierarchy). I think the reality is that we depend precariously on a very sloppy overlapping of billions of similar but clearly distinct conceptions of morality. The more you venture beyond your social bubble the less you overlap with those adjacent to you, and eventually you start getting into situations like x and y's (so you best mind your ps and qs). Of course the knitting gets progressively tighter as we move closer to the preferred world of Marginal Revolution and OB.

comment by TGGP4 · 2008-07-03T08:20:29.000Z · LW(p) · GW(p)

Why not divide the pie equally among cells, which make up the agglomerations we call "persons"? And if there is a distinction between voluntary and fair so that Xannon and Yancy honestly couldn't comfortably eat another bite and gave extra to Zaire, would that be unfair?

We've already got a society in which living things are treated like farm animals, by which of course I speak of farm animals themselves. They are of course privileged over a more defenseless living being that they live as parasites off of, which are plants. Some Swiss officials are working to remedy that situation, but it causes me to wonder why we should privilege the selfish replicating and entropy-producing patterns known as "life" over non-life? Fairness as symmetry is underdetermined and at least to me not particularly compelling.

In a hypothetical gender-chattel society, how does the notion that it is unfair pay rent?

I notice that Eliezer asks the question why it is we discuss fairness and find it compelling. He does not answer that question. My guess is that it signals a desire for the cooperation of others and establishes a Schelling point by which you are willing to cooperate with your allies against defectors. Upon violating the implied promise of future cooperation your reputation would take a significant hit. As I use a pseudonym only relevant in a restricted domain, I am free to ignore the damage to my reputation and the willingness of others to cooperate with me rather than punish.

comment by Paul_Gowder · 2008-07-03T08:25:31.000Z · LW(p) · GW(p)

Eliezer, to the extent I understand what you're referencing with those terms, the political philosophy does indeed go there (albeit in very different vocabulary). Certainly, the question about the extent to which ideas of fairness are accessible at what I guess you'd call the object level are constantly treated. Really, it's one of the most major issues out there -- the extent to which reasonable disagreement on object-level issues (disagreement that we think we're obligated to respect) can be resolved on the meta-level (see Waldron, Democracy and Disagreement, and, for an argument that this leads into just the infinite recursion you suggest, at least in the case of democratic procedures, see the review of the same by Christiano, which Google Scholar will turn up easily).

I think the important thing is to separate two questions: 1. what is the true object-level statement, and 2. to what extent do we have epistemic access to the answer to 1? There may be an objectively correct answer to 1, but we might not be able to get sufficient grip on it to legitimately coerce others to go along -- at which point Xannon starts to seem exactly right.

Oh, hell, go read Ch. 5. of Hobbes, Leviathan. And both of Rawls's major books.

I mean, Xannon has been around for hundreds of years. Here's Hobbes, from previous cite.

But no one mans Reason, nor the Reason of any one number of men, makes the certaintie; no more than an account is therefore well cast up, because a great many men have unanimously approved it. And therfore, as when there is a controversy in account, the parties must by their own accord, set up for right Reason, the Reason of some Arbitrator, or Judge, to whose sentence they will both stand, or their controversie must either come to blowes, or be undecided, for want of a right Reason constituted by Nature...

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T08:44:12.000Z · LW(p) · GW(p)

Okay, how does standard political philosophy say you should fairly / rightly construct an ultrapowerful superintelligence (not to be confused with a corruptible government) that can compute moral and metamoral questions only given a well-formed specification of what is to be computed?

After you've carried out these instructions, what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?

comment by Paul_Gowder · 2008-07-03T09:13:40.000Z · LW(p) · GW(p)

That's a really fascinating question. I don't know that there'd be a "standard" answer to this -- were the questions taken up, they'd be subject to hot debate.

Are we specifying that this ultrapowerful superintelligence has mind-reading power, or the closest non-magical equivalent in the form of access to every mental state that an arbitrary individual human has, even stuff that now gets lumped under the label "qualia"/ability to perfectly simulate the neurobiology of such an individual?

If so, then two approaches seem defensible to me. First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away, viz., that the questioners know how to tell the superintelligence where to look (or the superintelligence can figure it out itself).

We might not be able to produce a well-formed specification of what is to be computed when we're talking about moral questions (it's easy to think that any attempt to do so would rig the answer in advance -- for example, if you ask it for universal principles, you're going to get something different from what you'd get if you left the universality variable free...). But if the superintelligence could simulate our mental processes such that it could tell what it is that we want (for some appropriate values of we, like the person asking or the whole of humanity if there was any consensus -- which I doubt), then in principle it could simply answer that by declaring what the truth of the matter is with respect to that which it has determined that we desire.

That assumes the superintelligence has access to moral truth, but once we do that, I think the standard arguments against "guardianship" (e.g. the first few chapters of Robert Dahl, Democracy and its Critics) fail, in that if they're true -- if people are really better off deciding for themselves (the standard argument), and making people better off is what is morally correct, then we can expect the superintelligence to return "you figure it out." And then the answer to "friendly to who" or "so you get to decide what's friendly" is simply to point to the fact that the superintelligence has access to moral truth.

The more interesting question perhaps is what should happen if the superintelligence doesn't have access to moral truth (either because there is no such thing in the ordinary sense, or because it exists but is unobservable). I assume here that being responsive to reasons is an appropriate way to address moral questions (if not, all bets are off). Then the superintelligence loses one major advantage over ordinary human reasoning (access to the truth on the question), but not the other (while humans are responsive to reasons in a limited and inconsistent sense, the supercomputer is ideally responsive to reasons). For this situation, I think the second defensible outcome would be that the superintelligence should simulate ideal democracy. That is, it should simulate all the minds in the world, and put them into an unlimited discussion with one another, as if they were bayesians with infinite time. The answers it would come up with would be the equivalent to the most legitimate conceivable human decisional process, but better...

I'm pretty sure this is a situation that hasn't come under sustained discussion in the literature as such (in superintelligence terms -- though it has come up in discussions of benevolent dictators and the value of democracy), so I'm talking out my ass a little here, but drawing on familiar themes. Still, the argument defending these two notions -- especially the second -- isn't a blog comment, it's a series of long articles or more.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T09:25:58.000Z · LW(p) · GW(p)

First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away

Let's not. See, this is what I mean by saying that political philosophy is written for humans by humans.

Your other answer, "ideal democracy", bears a certain primitive resemblance to this, as you'd know if you were familiar with the Friendliness literature...

Okay, sorry about that, just emphasizing that it's not like I'm making all this up as I go along; and also, that there's a hell of a lot of literature out there on everything, but it isn't always easy to adapt to a sufficiently different purpose.

comment by Dynamically_Linked · 2008-07-03T09:50:55.000Z · LW(p) · GW(p)

Why doesn't Zaire just divide himself in half, let each half get 1/4 of the pie, then merge back together and be in possession of half of the pie?

Or, Zaire might say: Hey guys, my wife just called and told me that she made a blueberry pie this morning and put it in this forest for me to find. There's a label on the bottom of the plate if you don't believe me. Do you still think 'fair' = 'equal division'?

Or maybe Zaire came with his dog, and claims that the dog deserves an equal share.

I appreciate the distinction Eliezer is trying to draw between the object level and the meta level. But why the assumption that the object-level procedure will be simple?

comment by Zubon · 2008-07-03T12:25:11.000Z · LW(p) · GW(p)

I was expecting Xannon and Yancy to get into an exchange, only to find that Zaire had taken half the pie while they were talking. Xannon is motivated by consensus, Yancy is motivated by fairness, and Zaire is motivated by pie. I know who I bet on to end up with more pie.

(The cake was an honest mistake, not a lie.)

comment by Constant2 · 2008-07-03T14:33:51.000Z · LW(p) · GW(p)

And then they discover, in the center of the clearing, a delicious blueberry pie.

If the pie is edible then it was recently made and placed there. Whoever made it is probably close at hand. That person has a much better claim on the pie than these three and is therefore most likely rightly considered the owner. Let the owner of the pie decide. If the owner does not show up, leave the pie alone. Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.

comment by JamesAndrix · 2008-07-03T15:00:22.000Z · LW(p) · GW(p)

What if they had starved while they were arguing?

comment by Sebastian_Hagen2 · 2008-07-03T15:08:04.000Z · LW(p) · GW(p)

This post reminds me a lot of DialogueOnFriendliness.

There's at least one more trivial mistake in this post:

Is their nothing more to the universe than their conflict?
s/their/there/

Constant wrote:

Arguably the difficulty the three have in coming to a conclusion is related to the fact that none of the three has anything close to a legitimate claim on the pie.

If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

comment by Constant2 · 2008-07-03T15:26:20.000Z · LW(p) · GW(p)

If you modify the scenario by postulating that the pie is accompanied by a note reading "I hereby leave this pie as a gift to whomever finds it. Enjoy. -- Flying Pie-Baking Monster", how does that make the problem any easier?

If, indeed, it requires that we imagine a flying pie-baking monster in order to come up with a situation in which the concept of 'fairness' is actually relevant (e.g. not immediately trumped by an external factor), then it suggests that the concept of 'fairness' is in the real world virtually irrelevant. I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.

comment by JamesAndrix · 2008-07-03T15:48:48.000Z · LW(p) · GW(p)

I notice also that the three have arrived separately and exactly simultaneously, another rarity, but also important to make 'fairness' an issue.

Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.

comment by Peiter2 · 2008-07-03T15:52:10.000Z · LW(p) · GW(p)

It seems that 1/3 each is what the recursive buck ends with, anyhow. Upon learning that Zaire claims half for him/herself and Xannon insists on averaging fairness algorithms, Xannon and Yancy merely update their claims to mirror Zaire's, each demanding for themselves whatever extra share Zaire demands for himself. That way, the average of the three desires will always turn out to be 1/3 apiece. Perhaps an argument for why an equal share is most fair. If not, Zaire could just wait until the other two had stated their desires and claimed the whole pie for him/herself, thus always skewing the final average in his/her favor.
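
(A minimal sketch of the arithmetic behind this comment, in Python with exact fractions; the specific mirrored claims are an assumed illustration, not something stated above.)

    from fractions import Fraction as F

    def average(proposals):
        # Equal-weight average of proposed (X, Y, Z) splits.
        return [sum(col) / len(proposals) for col in zip(*proposals)]

    # If Xannon and Yancy mirror Zaire's claim (half for oneself, a quarter for
    # each of the others), the three claims become symmetric and the average
    # returns to an even split:
    symmetric = [
        (F(1, 2), F(1, 4), F(1, 4)),   # Xannon's mirrored claim
        (F(1, 4), F(1, 2), F(1, 4)),   # Yancy's mirrored claim
        (F(1, 4), F(1, 4), F(1, 2)),   # Zaire's original claim
    ]
    print(average(symmetric))          # [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]

    # But whoever claims last can skew the result: if Zaire waits for the others
    # to ask for thirds and then claims the whole pie, the average favors Zaire.
    skewed = [
        (F(1, 3), F(1, 3), F(1, 3)),   # Xannon
        (F(1, 3), F(1, 3), F(1, 3)),   # Yancy
        (F(0), F(0), F(1)),            # Zaire claims everything
    ]
    print(average(skewed))             # [Fraction(2, 9), Fraction(2, 9), Fraction(5, 9)]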

comment by tim3 · 2008-07-03T16:00:41.000Z · LW(p) · GW(p)

I don't have an argument here; rather, I just want to see if I understand each position taken in the dialogue. After all, it would be a dreadful waste of time to argue one way or the other against our three musketeers while completely misunderstanding some key point. As far as I can tell, these are the essential arguments being made:

Yancy's position: that fairness is a rational (mathematical) system. There is no moral factor; rather than "to each according to his need," it is "to each according to the equation." This presumes fairness is a universal, natural system which people must follow, uncomplaining; any corruption of the system would be unfair, any bending or breaking of the rules renders them useless.

Zaire's position, that it is fair for individuals to define the product of fairness, handily illustrates this; his conception of fairness breaks down as soon as another conception is introduced. Fairness is entirely relative.

Xannon's initial position is that the product of fairness can be rationally derived from individually relative definitions of fairness; that fairness itself is the sum of differing concepts of fair.

Xannon revises this position, in that fairness is derived from the moral rights of a group and has an intrinsic, understood value. Those who do not inherently comprehend this value are not moral and do not belong to the group, like the murderer. Of course, this assumed that passersby, like Xannon, would side with the victim. Not only could they side with the assailant, they could even refuse to become involved. If the victim is "licensed" to resist being murdered, is the victim likewise licensed to kill the murderer in self-defense, and is that fair? The question of "who started it?" begins a new problem. If the observer is joined by five others who think that the victim, who killed his attacker in self-defense, must be put to death, is that also fair? This position argues for the existence of absolute morality, but only achieves a weak implication of moral relativity.

This is at odds with Xannon's initial position; if Zaire wants more of the pie than Yancy, but Xannon sides with Yancy, Xannon thinks it is fair to average their desires. However, Xannon would not average the desire of the murderer, the victim, and the passerby. If the murderer is presumed in the wrong, then Zaire is also presumed in the wrong. Therefore, it is unfair to attempt to combine Zaire's desire with Yancy and Xannon's.

In essence, Xannon's position is ultimately not far removed from Zaire's; where Zaire believes in the individual's right to define fairness, Xannon believes in the group's right to the same. Both believe in a moral right to inflict their own definition of fairness on the other. Yancy, in believing a universal system of fairness can be applied, would attempt the same. Further, even if Xannon agreed Yancy's proposal was fair, it would not be for the same reason, as Xannon believes fairness is derived from moral right; therefore, arriving at a fair decision through the amoral application of a rational system may not be "morally fair" to Xannon. There is no resolution to be found here.

I would like to see the question of the purpose of fairness addressed more comprehensively. If fairness as a system is not effective, why does it exist? If it is an artificial social construction, it must have an agreed-upon definition; if it is an evolved, biological system, it must have a physical basis; if it is a universal rule, there must be evidence of it.

comment by Eneasz · 2008-07-03T16:07:30.000Z · LW(p) · GW(p)

As for the question "Friendly to who?"/"So you get to decide what's Friendly?", may I suggest Who Gets to Decide? as a reasonable answer? To summarize (while of course skipping a lot of the detail in the original post), no one gets to decide what's Friendly just like no one gets to decide the speed of light. There are simply facts that can be discovered (or that we can be wrong about). Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

comment by Anonymous37 · 2008-07-03T16:16:28.000Z · LW(p) · GW(p)

Does anyone think that this disagreement can be resolved without threat-signalling? I think valuing a particular model of 'fairness' over another (the Xers and Yers from Leif's post) ultimately boils down to the cost/benefit of being accepted/rejected by a particular social group.

So does this disagreement take place in a universe consisting only of the entities Xannon, Yancy, and Zaire, or do they all go back to the same village afterward and reminisce about what happened, or do they each go back to their separate villages?

comment by Constant2 · 2008-07-03T16:20:14.000Z · LW(p) · GW(p)

Yet most people in a situation of near simultaneity find it easier (or perhaps just safer?) to assume they had arrived simultaneously and come to agreement on dividing the pie 'fairly', rather than argue over who got there first.

You are claiming it is a common practice. But common practice is common practice - not necessarily "fairness". We often do things precisely because they are commonly done. One common practice which is not equal is, if two cars arrive at the same intersection at right angles, then the car on the right has the right of way. This is the common practice, and we do it because it is common practice, and it is common practice because we do it.

Even if it is not common practice, dividing it into thirds may well be apt to occur to most people. This makes it a likely Schelling point. Schelling points aren't about fairness either. They are about trying to predict what the other guy will predict that you predict, all without communicating with each other. You can use a Schelling point to try to find each other in a large city without a prior agreement on where to meet. Each of you tries to figure out what location the other will choose, keeping in mind that the other guy is trying to pick the location which you're most likely to predict he's going to pick (and you can probably keep recursing).

If all we're trying to do is come to an agreement there is no need to get deeply philosophical about fairness per se.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-06-22T14:18:29.975Z · LW(p) · GW(p)

One common practice which is not equal is, if two cars arrive at the same intersection at right angles, then the car on the right has the right of way. This is the common practice, and we do it because it is common practice, and it is common practice because we do it.

We do it that way because the delay the car on the left will experience if the car on the right goes first is shorter than the delay the car on the right would experience if the car on the left went first.

This rule is reversed in left-hand-of-the-road driving regions, because of the reversal of the asymmetry.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-22T14:35:10.861Z · LW(p) · GW(p)

It would surprise (and delight) me if minimizing delay were the reason we did it this way, though it's certainly a consequence. Do you have sources?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-07-27T13:33:38.173Z · LW(p) · GW(p)

The NJ driver's manual mentioned it back in 1996. May still do so.

comment by poke · 2008-07-03T16:50:22.000Z · LW(p) · GW(p)

This dialogue leads me to conclude that "fairness" is a form of social lubricant that ensures our pies don't get cold while we're busy arguing. The meta-rule for fairness rules would then be: (1) fast; (2) easy to apply; and (3) everybody gets a share.

Replies from: DSimon
comment by DSimon · 2010-12-16T05:46:29.136Z · LW(p) · GW(p)

I wish I could vote this up twice. The first time for making an excellent point, and the second time for a (perhaps inadvertent) call-out to Catch-22.

comment by Joseph_Knecht · 2008-07-03T17:23:32.000Z · LW(p) · GW(p)

Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it

The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn't care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is behavior that if universalized would destroy society, may realize that it lessens the sum total of happiness or whatever else, may even know that "ve should feel compelled not to murder" etc. But at the end of the day, ve still might say, "regardless of all that, I don't care, and this is what I want to do and what I will do".

There is a conflict of desire (and of values) here, not a difference of fact. Having all the facts is one thing. Caring about the facts is something altogether different.

--

On the question of the bedrock of fairness, at the end of the day it seems to me that one of the two scenarios will occur:

(1) all parties happen to agree on what the bedrock is, or they are able to come to an agreement.

(2) all parties cannot agree on what the bedrock is. The matter is resolved by force with some party or coalition of parties saying "this is our bedrock, and we will punish you if you do not obey it".

And the universe itself doesn't care one way or the other.

comment by Dmitriy_Kropivnitskiy · 2008-07-03T17:31:05.000Z · LW(p) · GW(p)

I tend to agree with Xannon that 'fairness' is defined by society. So the question is whether societal moral norms still affect the three opponents. If Xannon decides "we are still members of a society where equal shares for everyone are considered fair," he might side with Yancy, split the pie into thirds, and label Zaire a criminal. If he decides "we are out in the desert with no society around to push its moral values onto us," he might side with Zaire, divide the pie Zaire's way, and tell Yancy to shove his ideas of equality up his behind.

Yancy's whole "fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on" argument seems either to say 'fair' == 'equal division' or to bring in some sort of external source of morality: "The Howly Blooble says we shall divide equally, and so we shall."

Yancy's intuitive grasp of fairness seems to be derived from the ideas of modern Western society, but even in our world there is, for example, the medical practice of triage, where a doctor spends more time with patients who require more treatment. Nobody seems to call that unfair. As has already been mentioned, the same situation would be different if X and Y had eaten big dinners an hour ago and Z hadn't eaten in two days. I suppose in that case Y would be arguing that it is fair to give the whole pie to Z.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T18:30:59.000Z · LW(p) · GW(p)

Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

You simply passed the recursive buck to "help" and "hurt". I will let you take for granted the superintelligence's knowledge of, or well-calibrated probability distribution over, any empirical truth about consequences; but when it comes to the valuation of those consequences in terms of "helping" or "hurting" you must tell me how to compute it, or run a computation that computes how to compute it.

comment by Paul_Gowder · 2008-07-03T18:40:57.000Z · LW(p) · GW(p)

Eliezer,

The resemblance between my second suggestion and your thing didn't go unnoticed -- I had in fact read your coherent extrapolated volition thing before (there's probably an old e-mail from me to you about it, in fact). I think it's basically correct. But the method of justification is importantly different, because the idea is that we're trying to approximate something with epistemic content -- we're not just trying to do what you might call a Xannon thing -- we're not just trying to model what humans would do. Rather, we're trying to model and improve a specific feature of humanity that we see as morally relevant -- responsiveness to reasons.

That's really, really important.

In the context of your dialogue above, it's what reconciles Xannon and Yancy: even if Yancy can't convince Xannon that there's some kind of non-subjective moral truth, he ought to be able to convince Xannon that moral beliefs should be responsive to reasons -- and likewise, even if Xannon can't convince Yancy that what really matters, morally, is what people can agree on, he should be able to convince Yancy that the best way to get at it in the real world is by a collective process of reasoning.

So you see that this method of justification does provide a way to answer questions like "friendliness to whom." I know what I'm doing, Eliezer. :-)

comment by Unknown · 2008-07-03T18:43:08.000Z · LW(p) · GW(p)

Eliezer: as you are aware yourself, we don't know how to compute it, nor how to run a computation that computes how to compute it. If we leave it up to the superintelligence to decide how to interpret "helping" and "hurting," it will be in a position no worse than our own, and possibly better, seeing that we are not superintelligent.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T18:58:40.000Z · LW(p) · GW(p)

Paul: Responsiveness to which reasons? For every mind in mind design space that sees X as a reason to value Y, there are other possible minds that see X as a reason to value ~Y.

Replies from: rkyeun
comment by rkyeun · 2013-07-23T07:54:08.338Z · LW(p) · GW(p)

The answer to "Friendly to who?" had damn well better always be "Friendly to the author and, by proxy, to those things the author wants." Otherwise, leaving aside what it actually is friendly to, it was constructed by a madman.

comment by Paul_Gowder · 2008-07-03T19:34:53.000Z · LW(p) · GW(p)

Right, but those questions are responsive to reasons too. Here's where I embrace the recursion. Either we believe that ultimately the reasons stop -- that is, that after a sufficiently ideal process, all of the minds in the relevant mind design space agree on the values, or we don't. If we do, then the superintelligence should replicate that process. If we don't, then what basis do we have for asking a superintelligence to answer the question? We might as well flip a coin.

Of course, the content of the ideal process is tricky. I'm hiding the really hard questions in there, like what counts as rationality, what kinds of minds are in the relevant mind design space, etc. Those questions are extra-hard because we can't appeal to an ideal process to answer them on pain of circularity. (Again, political philosophy has been struggling with a version of this question for a very long time. And I do mean struggling -- it's one of the hardest questions there is.) And the best answer I can give is that there is no completely justifiable stopping point: at some point, we're going to have to declare "these are our axioms, and we're going with them," even though those axioms are not going to be justifiable within the system.

What this all comes down to is that it's all necessarily dependent on social context. The axioms of rationality and the decisions about what constitute relevant mind-space for any such superintelligence would be determined by the brute facts of what kind of reasoning is socially acceptable in the society that creates such a superintelligence. And that's the best we can do.

comment by Eneasz · 2008-07-03T19:42:54.000Z · LW(p) · GW(p)

The only reasons that exist for taking any actions at all are desires; specifically, the desires of the being taking the action. Under any given conditions the being will always take the action that best fulfills the most/strongest of its desires (given its beliefs). The question isn't which action is right/wrong based on some universal bedrock of fairness, but rather what desires we want the being to have. We can shape many desires in humans (and presumably all the desires of an AI), and thus we want to give it the desires that best help and least hurt humanity.

You say this is passing the recursive buck. Unknown says it's impossible for us to calculate what's helpful or hurtful. I disagree in both cases. The desires that we most want to encourage are those that tend to fulfill the desires of other beings ("helpful" desires). The desires we most want to discourage are those that tend to thwart the desires of other beings ("harmful" desires). It doesn't have to be some grand confusing thing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T19:51:22.000Z · LW(p) · GW(p)

Paul: Sounds like you're just describing the "thought faster" part of the CEV process, i.e., "What would you decide if you could search a larger argument space for reasons?" However, it seems to me that you're idealizing this process very highly, and overlooking such questions as "What if different orderings of the arguments would end up convincing us of different things?" which a CEV has to handle somehow, e.g. by weighting the possibilities by e.g. length, combining them into a common superposition, and acting only where strong coherence exists... but now we're heading into strictly Friendly AI territory.

If you say "reasons" or "reasons for reasons", that's philosophy written by humans for humans; if you want to put the weight of your theory on "reasons" you have to tell me how to compute a "reason", or how to make a superintelligence compute something that computes a reason.

comment by Paul_Gowder · 2008-07-03T20:19:43.000Z · LW(p) · GW(p)

Eliezer,

Things like the ordering of arguments are just additional questions about the rationality criteria, and my point above applies to them just as well -- either there's a justifiable answer ("this is how arguments are to be ordered,") or it's going to be fundamentally socially determined and there's nothing to be done about it. The political is really deeply prior to the workings of a superintelligence in such cases: if there's no determinate correct answer to these process questions, then humans will have to collectively muddle through to get something to feed the superintelligence. (Aristotle was right when he said politics was the ruling science...)

On the humans for humans point, I'll appeal back to the notion of modeling minds. If we take P to be a reason, then all we have to be able to tell the superintelligence is "simulate us and consider what we take to be reasons," and, after simulating us, the superintelligence ought to know what those things are, what we mean when we say "take to be reasons," etc. Philosophy written by humans for humans ought to be sufficient once we specify the process by which reasons that matter to humans are to be taken into account.

comment by Lake · 2008-07-03T20:23:06.000Z · LW(p) · GW(p)

Eliezer: Are you looking for a new definition of "fairness" which would reconcile the partisans of existing definitions? Or are you just pointing out that this is a sort of damned-if-you-do, damned-if-you-don't problem, and that any rule for establishing fairness will piss somebody or other off? If the latter, from the point of view of your larger project, why not just insert a dummy answer for this question - pick any definition that grabs you - and see how it fits with the rest of what you need to work out? Or work through several different obviously computable answers.

As far as it goes, it seems plausible-ish that fairness has to do with equality of something - resources, or opportunity, or utility, or whatever - but I doubt whether there's any general agreement over what should be equalised, and I don't see the value of descending to a meta level of discussion to sort the question out. Meta-discussions would have to be answerable to fairness anyway, if they were to be fair, and that looks circular. So why not cut the knot and pick whatever answer is nearest to hand?

comment by Lake · 2008-07-03T20:28:59.000Z · LW(p) · GW(p)

I suppose that's just to second Paul Gowder's point that the political problem is insurmountable. But I imagine few things would resolve a political problem faster than the backing of an all-powerful supermind.

@Paul: You seem to suggest that we all take the same things to be reasons, perhaps even the same reasons. Is this warranted?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-03T20:29:39.000Z · LW(p) · GW(p)

Things like the ordering of arguments are just additional questions about the rationality criteria

...which problem you can't hand off to the superintelligence until you've specified how it decides 'rationality criteria'. Bootstrapping is allowed, skyhooking isn't. Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

comment by Booklegger · 2008-07-03T20:33:02.000Z · LW(p) · GW(p)

Poke has it exactly right. Thinking further along the lines suggested by his "social lubricant" idea, I'd suggest that fairness is no more than efficiency. Or, at the very least, if two prevailing doctrines of fairness exist, the more efficient doctrine will—ceteris paribus—in the long run prevail.

This leaves open the question of how closely to efficiency our notions of fairness have actually evolved, but that's an empirical question.

comment by DonGeddis · 2008-07-03T22:06:23.000Z · LW(p) · GW(p)

This question, of what is fairness / morality, seems a lot easier (to me) than the posters here appear to feel.

Isn't the answer: You start with purely selfish desires. These sometimes cause conflict with limited resources. Then you take Rawls's Veil of Ignorance, and come up with social rules (like "don't murder") that result in a net positive outcome for society. It's not a zero-sum game. Cooperation can result in greater returns for everybody than constant conflict.

Individuals breaking agreed morality are shunned, in much the same way as someone betraying in a Prisoner's Dilemma, or a herder allowing extra sheep onto a field, resulting in the Tragedy of the Commons.

Yes, any of us could break common morality -- that's easy. The whole point is that if you didn't know which of the individuals you were going to be, then you wouldn't be so eager to propose some particularly non-moral solution.

Meanwhile, moral dilemmas that actually are zero sum, like two monkeys and a banana that can't be divided, don't have consensus solutions in society.

Finally, this formulation doesn't completely resolve all scenarios, because it matters a lot which group of people/things you consider in the class that "you" "might have been". In morality a few centuries ago, "you" were a white slaveowner, and it didn't occur to you that "you" "might have been" a black slave. So it was not immoral to own slaves, then. Just as, today, you might imagine yourself to be any citizen (of your country? of the world?), but not, say, a cow. So the conflict becomes one of which population the Veil of Ignorance is drawn from.

(Of course, all this imagination is beside the point that it is meaningless that "you" "might have been" someone else. But you can still do the computation even though the scenario is not physically plausible.)

But the basic structure seems pretty clear. It's not "right" for strong people to beat up weak people, because if you don't know whether you would have been born strong or weak, you'd much rather a society where nobody does it, than one where the strong dominate the weak. In other words, the gains from beating people up are far less than the losses from being beaten up.

(...we do what we must, because we can. For the good of all of us. Except the ones who are dead.)
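
A minimal Python sketch of the expected-value reading of that strong-versus-weak example. The payoff numbers (+1 and -10) are invented purely for illustration, not anything stated in the comment:

```python
# A toy expected-value calculation behind the veil of ignorance. Assumed
# payoffs: in a two-person society, "bullying allowed" gives the strong
# person +1 and the weak person -10; "bullying forbidden" gives both 0.

def expected_value_behind_veil(payoffs):
    """Average the payoff over every position you might turn out to occupy."""
    return sum(payoffs) / len(payoffs)

bullying_allowed = [+1, -10]    # [payoff if you're strong, payoff if you're weak]
bullying_forbidden = [0, 0]

print(expected_value_behind_veil(bullying_allowed))    # -4.5
print(expected_value_behind_veil(bullying_forbidden))  #  0.0
# Not knowing which person you'd be, you prefer the rule against bullying.
```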

comment by Doug_S. · 2008-07-03T22:20:53.000Z · LW(p) · GW(p)

Certain desires help the human race, other desires hurt the human race, and these can be discovered in the same way we discover any other facts about the universe.

You simply passed the recursive buck to "help" and "hurt". I will let you take for granted the superintelligence's knowledge of, or well-calibrated probability distribution over, any empirical truth about consequences; but when it comes to the valuation of those consequences in terms of "helping" or "hurting" you must tell me how to compute it, or run a computation that computes how to compute it.

To rephrase the statement without passing the recursive buck:

Certain desires help the human race fulfill desires, other desires prevent the human race from fulfilling desires, and these can be discovered in the same way we discover any other facts about the universe.

comment by George_Weinberg2 · 2008-07-03T22:39:18.000Z · LW(p) · GW(p)

I see no reason to believe there is such a thing as an objective definition of "fair" in this case. The idea that an equal division is "fair" is based on the assumption that none of the three has a good argument as to why he should receive more than either of the others. If one has a reasonable argument as to why he should receive more, the fairness argument breaks down. In fact, none of the three really has a good argument as to why he is entitled to any of it, and I can't see why it would be wrong for whoever grabs it first to claim the whole pie under "right of capture".

what's the standard reply to someone who says, "Friendly to who?" or "So you get to decide what's Friendly"?

This is an important question. I don't believe there is such a thing as an objective definition of friendliness, and I doubt that "reasonable" people can come to an agreement as to what friendliness means. But I'm eager to be proven wrong; keep writing.

comment by Lakshmi · 2008-07-03T22:42:46.000Z · LW(p) · GW(p)

Why not divide the pie according to who will ultimately put the pie to the best use? If X and Y intend to take a nap after eating the pie, but Z is willing to plant a tree, wouldn't the best outcome for the pie favor Z getting more?

Before you dismiss the analogy, consider this - what if the pie was $1800.00 that none of the three had earned? What if the $1800.00 had been BORROWED with a certain expectation of its utility? Should X, Y, and Z each get $600.00, even though there is no stipulation as to what each of them must DO with that money? If X intends to save his portion, and Y intends to pay down debt, but Z will spend the money though it may not be in HIS best interests to do so, should he still only get an equal portion, even though his actions with his share best accomplish the purpose of the money?

If we return to pie, you may now see that pie represents potential action (as one of the earlier commenters who mentioned carbs noted). Instead of arguing for division based on merit for PAST actions/attributes (as mentioned by another commenter), why not argue for division based on merit of INTENDED actions? Who provides the best return on the invested carbs? Why assume that 'fair' division should reflect mere existence? Why can't 'fairness' include an evaluation of potential return?

This may simply deflect the argument of 'fairness' to one wherein 'best return' must be determined with regard to each individual and the group as a whole. If Y gets no shade from the tree Z plants, then perhaps her 'best return' might be a contented nap.

The ratio of productive and beneficial action, as a function of the input (pie), calculated across time (a tree has longer-lasting benefits than an immediate nap), also seems to be a 'fair' way to divide the pie.

comment by Paul_Gowder · 2008-07-03T22:47:57.000Z · LW(p) · GW(p)

Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

Again, this is why it's irreducibly social. If there isn't a procedure that yields a justified determinate answer to the rationality of that order, then the best we can do is take what is socially accepted at the time and in the society in which such a superintelligence is created. There's nowhere else to look.

comment by Z._M._Davis · 2008-07-03T23:24:00.000Z · LW(p) · GW(p)

Eliezer (to Roland): "I confess that I'd intended the original reading of the dialogue as simple greed on Zaire's part [...]"

For the book's sake (if this series is for the book), I want to say that this was much clearer (and more entertaining) with Dennis.

comment by Joe_Mathes · 2008-07-04T02:51:00.000Z · LW(p) · GW(p)

Early in the story, Z is hungry, and X and Y are not. Z says that he thinks that because he is hungry, 'fair' is defined with him getting more pie, while X and Y disagree. This seems like a slightly strange story to me, but here's a much stranger one:

Z is hungry, and X and Y are not. X thinks that it would be fair to give Z 1/2 the pie, but Z and Y both think it would be fair to split the pie 1/3;1/3;1/3. In other words, the person who is arguing the fairness of the unequal distribution is not the person who would benefit from it. This feels much less likely to me than the above story. In fact, it informs the following, which I submit: when people have a disagreement about what is fair, each person's opinion usually favors a positive outcome for himself.

If I accept this intuition as true, I can see two reasons why it might be true: One, that in cases where each person thinks fairness means being more generous to another party, they can easily find a compromise, because people are always willing to accept gifts. Two, that 'fair' is really just another word for 'compromise,' in which competing entities agree to a division of a resource simply in order to resolve conflict without violence. The latter seems more likely to me, at least as an explanation for the origin of the idea of fairness in our minds and our vocabularies.

An objective definition of 'fair' seems like it would have to be identical to 'moral'; what is the 'moral' distribution of the pie?

comment by Eneasz · 2008-07-04T07:19:00.000Z · LW(p) · GW(p)

Joe Mathes: I thought it was fairly obvious that a fair distribution is in this case synonymous with a moral distribution (was I wrong?). In this context, the word fair doesn't have any meaning if one tries to remove the concept of morality.

However I don't think that arguing for fairness when one is not the beneficiary is that unusual. The civil rights movement was supported by a lot of white people, and the women's liberation movement was supported by a lot of males. In both cases these people were losing an advantage they previously held in order to preserve fairness, which they viewed as more valuable than their advantages. I think they're right, of course, but people are rarely motivated by abstract arguments about society as a whole being better off. They are motivated by a love of fairness and a desire to promote fairness, which has been inculcated into them by their programmers. A desire strong enough to override their desire for advantages over the oppressed group.

comment by Caledonian2 · 2008-07-04T14:49:00.000Z · LW(p) · GW(p)

They are motivated by a love of fairness and a desire to promote fairness, which has been inculcated into them by their programmers.

Unlikely. The basic principles of fairness are constant between human cultures and societies, and seem to be intuitively understood by humans. What changes is the status of categories of people - but humans agree on what behavior is fair towards an equal.

To deal with the question "what is moral", we need first to establish the purpose of "morality". How can you evaluate the effectiveness of a design unless you first understand what it is intended to do and not do?

comment by tim6 · 2008-07-04T15:02:00.000Z · LW(p) · GW(p)

Eneasz: you're ignoring "moral benefits". Let's say Joe is crossing a desert with enough food and water to live comfortably until he reaches his destination. Midway through, he comes across Bob, who is dying of thirst. If Joe gives Bob sufficient food and water to save his life, Joe can still make it across the desert, but not as comfortably. Giving Bob food and water represents a loss of benefits for Joe; withholding food and water represents a more significant loss, though. Most people would be wracked by guilt at leaving someone to die when they could have saved them; conversely, saving someone's life imparts an enormous feeling of goodwill and self-confidence. Surely the loss of a small amount of comfort is insignificant compared to the loss of moral respectability? Fairness in this scenario benefits both parties; Bob gets to live, and Joe gains an intangible but nevertheless real moral benefit.

Supporting the civil rights movement might have represented the loss of a certain kind of benefit for white people; say, the exercise of force over black people. However, opposing the movement would have represented a moral deficit. Not all benefits are material. In supporting the movement, white people gained moral benefits. They certainly have some advantage over white people who did not support the civil rights movement, do they not?

comment by Si · 2008-07-05T15:24:00.000Z · LW(p) · GW(p)

For every possible division of pie into three pieces (including pieces of 0 size), take each person and ask how fair they would think the division if they received each of the three slices. Average those together to get each person's overall fairness rating for a given pie distribution.

Average those per-person results into an "overall fairness" rating for each pie distribution.

This includes:
- You can have people involved who don't like pie and don't want any. It seems pointless to say that division into thirds is the only fair division, if one of the three people is equally happy with any division.
- There can be more than one fair division.
- The inputs are not as simple as "I want half the pie"; each person's rating is a function of fairness in proportion to slice size, and can distinguish between happiness that grows in direct proportion to the size of the slice, happiness that peaks at "half the pie" and stays flat for anything above that, and happiness that declines for anything above that.

Zaire says he wants half the pie and the other two want to divide it into thirds, but they may at the same time all have a linear link between happiness and amount of pie, leading to thirds being the fairest division.

- Situations such as the murderer and murderee in the alley: the murderer is happy to kill but unhappy to be killed, and the murderee is unhappy either way, which leans towards both of them walking away as the 'fairest' outcome.

The process may lead to a less-than-optimal outcome in a given situation, but when applied to many situations over your lifetime, such that you may be in a different position each time, it gives the most long-term happiness.


I've wandered into describing "happiness" rather than "fairness" in several places, and seem to be heading towards "fairness as position-independent happiness".

This seems to be similar to Don Geddis's answer, except that where he says it is meaningless to talk about "what position you could have been in", I suggest that it's a process you can agree to apply to situations you will be in in the future to get the 'fairest' outcome, even though you don't yet know which position you will be in the next time pie-slicing turns up in your life. So, fair is the process that gets you the best outcome across all the situations in the rest of your life, given that you don't yet know what the situations are or what position you will play in them.
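
A rough Python sketch of the position-independent rating just described. The three utility functions below are invented placeholders (the comment deliberately leaves them up to each person), and "fairness" is simply identified with the happiness each person reports for a given slice:

```python
def person_rating(fairness_of, division):
    """Average how fair this person finds the division, over every slice
    they might end up holding."""
    return sum(fairness_of(share) for share in division) / len(division)

def overall_rating(people, division):
    """Average the per-person ratings into one number for this division."""
    return sum(person_rating(f, division) for f in people) / len(people)

# Hypothetical preferences: Xannon and Yancy value pie linearly; Zaire's
# satisfaction tops out at half a pie and is flat beyond that.
xannon = yancy = lambda share: share
zaire = lambda share: min(share, 0.5)
people = [xannon, yancy, zaire]

for division in [(1/3, 1/3, 1/3), (1/4, 1/4, 1/2), (0.1, 0.1, 0.8)]:
    print(division, round(overall_rating(people, division), 3))
# The first two divisions tie; the lopsided one scores lower, because part of
# the pie is wasted whenever Zaire happens to hold the 0.8 slice.
```

On these assumed preferences the even split and the {1/4, 1/4, 1/2} split score the same, which illustrates the point above that there can be more than one fair division.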

comment by demiurge · 2008-07-05T21:10:00.000Z · LW(p) · GW(p)

A possible mathematical rule for fairness in this situation.
1. Select who gets to cut the pie into three pieces by a random process.
2. That individual can cut it into any size sections he chooses, as long as there are three sections.
3. The order of choice selection again is determined by a random process.
Result: on average everyone receives 1/3 share.

Fairness=underlying intuitive mathematical rules. QED
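
A quick Monte Carlo sketch of that rule in Python, under two simplifying assumptions of mine: the randomly chosen cutter cuts at uniformly random positions rather than strategically, and each chooser takes the largest remaining piece. Because the selection order is random, the long-run averages come out near 1/3 either way:

```python
import random

def run_once():
    # Cut at two uniformly random points, giving three pieces of any sizes.
    a, b = sorted(random.random() for _ in range(2))
    pieces = [a, b - a, 1 - b]
    order = random.sample(range(3), 3)      # random order of selection
    shares = [0.0, 0.0, 0.0]
    for person in order:
        best = max(pieces)                  # each chooser grabs the biggest piece left
        pieces.remove(best)
        shares[person] = best
    return shares

trials = [run_once() for _ in range(100_000)]
averages = [sum(t[i] for t in trials) / len(trials) for i in range(3)]
print([round(a, 3) for a in averages])      # each close to 1/3, as claimed
```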

comment by Zubon · 2008-07-07T13:40:00.000Z · LW(p) · GW(p)

A variant on demiurge: A standard way of dividing something into two parts is to have one person divide and the other choose. Alice cuts the slice of cake in half, and Bob takes whichever piece he likes. If Alice is unhappy with her piece, she should have cut the two more evenly. You can apply the same rule to three people by adding an extra step: glide the knife along the edge to create an increasingly large piece, and any of the three can call a stop and take that piece (then divide the rest as for two people). (For a pie, you might make an initial cut at 0 degrees and proceed clockwise, expecting someone to call for the first piece around 120 degrees.) We expect it to lead to a roughly even distribution.

Is this sort of thing (one cuts, the other chooses) a procedure that would inform "fairness" more generally or just a solution to the problem at hand?
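
A toy Python sketch of the two-person "one cuts, the other chooses" base case above, showing why the cutter's best move is to make the two pieces equal by her own measure. The valuation functions are invented for illustration:

```python
def cutter_best_cut(cutter_value):
    """Find the cut point where the two pieces look equal *to the cutter*.
    Cutting anywhere else just hands the chooser a piece the cutter thinks
    is worth more than half."""
    # cutter_value(x) = the value the cutter assigns to the interval [0, x].
    lo, hi = 0.0, 1.0
    for _ in range(50):                     # bisect on the cutter's own measure
        mid = (lo + hi) / 2
        if cutter_value(mid) < 0.5 * cutter_value(1.0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A cutter who values the cake uniformly cuts in the middle...
print(round(cutter_best_cut(lambda x: x), 3))                       # ~0.5
# ...while one who thinks all the value is in the right half cuts at 0.75.
print(round(cutter_best_cut(lambda x: max(0.0, x - 0.5) * 2), 3))   # ~0.75
```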

comment by Mass_Driver · 2011-02-26T11:12:26.756Z · LW(p) · GW(p)

Yancy: "If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present."

So the trick here is to realize that fairness is defined with respect to an expected or typical observer -- when you try to murder me, and I scream "Foul play!", the propositional content of my cry is that I expect any human who happens to pass by to agree with me and to help stop the murder. If nobody passes by this time, well, that's just my bad luck, and I can go to my grave with the small comfort that whatever behavior of mine led to my being murdered by you was at least marginally more adaptive than a behavior that would lead to our fellow tribespeople thinking that you were justified in murdering me, because then I would have had no chance of survival at all, as opposed to having my survival depend upon the good luck of being observed in a timely fashion.

On the other hand, if it were impossible for a disinterested party to pass by, because you and I were the only two intelligent beings in the known world, or because all known intelligent beings would have a political reason to pick one side or the other in our little tiff, then fairness would have no propositional content, and would be meaningless. That seems like a small bullet to bite -- it seems plausible to think that fairness norms really did evolve -- and that people continue to make a big deal about the concept -- because there were and often are disinterested third parties that observe two-party conflicts (or disinterested fourth parties who observe three-party conflicts, and so on). If there weren't any such thing as disinterested parties, it really wouldn't make any sense to talk about "fairness" as an arrangement that's distinct from "equal division".

comment by AnthonyC · 2011-03-28T17:20:54.894Z · LW(p) · GW(p)

My favorite answer to this problem comes from "How to Cut a Cake: And Other Mathematical Conundrums." The solution in the book was that "fair" means "no one has cause to complain." It doesn't work in the case here, since one party wants to divide the pie unevenly, but if you were trying to make even cuts, it works. The algorithm was:

  1. Make a cut from the center to the edge.
  2. Have one person hold the knife over that cut.
  3. Slowly rotate the knife (or the pie) at, say, a few degrees per second.
  4. At any time, any person (including the one holding the knife) can say "cut." A cut is made there, and the speaker gets the thus-cut piece.

At the end, anyone who thinks they got too little (meaning, someone else got too much) could have said "cut" before that other person's cut got too big.
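
A discrete Python sketch of that rotating-knife procedure. Each person's degree-by-degree valuation of the pie is an invented assumption, and each is assumed to call "cut" as soon as the growing wedge is worth a third of the whole pie by their own measure:

```python
def rotating_knife(value_per_degree, fair_fraction=1/3):
    """Return each person's final wedge size in degrees. value_per_degree[p][d]
    is how much person p cares about degree d of the pie."""
    totals = [sum(v) for v in value_per_degree]
    start, claimed, wedges = 0, set(), {}
    for deg in range(360):
        swept = range(start, deg + 1)
        for person, values in enumerate(value_per_degree):
            if person in claimed:
                continue
            wedge_value = sum(values[d] for d in swept)
            if wedge_value >= fair_fraction * totals[person]:
                wedges[person] = deg + 1 - start   # this person takes the wedge
                claimed.add(person)
                start = deg + 1                    # keep rotating for the others
                break
    for person in range(len(value_per_degree)):
        wedges.setdefault(person, 360 - start)     # anyone left gets the remaining arc
    return wedges

# With uniform valuations everyone ends up with roughly a third of the pie.
uniform = [[1.0] * 360 for _ in range(3)]
print(rotating_knife(uniform))                     # {0: 120, 1: 120, 2: 120}
```

Anyone who waits past their own threshold is gambling that no one else calls first, which is exactly the sense in which no one who keeps quiet has cause to complain.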

Replies from: wedrifid, cousin_it
comment by wedrifid · 2011-03-28T18:12:48.605Z · LW(p) · GW(p)

That's actually a really good idea. Like the 'cut the deck and the other person gets to pick half' idea but this one actually generalizes to multiple people. Elegant.

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-05-19T16:55:31.641Z · LW(p) · GW(p)

'cut the deck and the other person gets to pick half'

That's the simplest form. AnthonyC adapts it to work for multiple people, provided that everyone agrees that the utility should be divided up evenly. I think it's possible to adapt the principle further, so that it also applies to situations posed by others on this thread. (Insulin should be given preferentially to diabetics, and antidote should be distributed so as to maximize the number of lives saved.)

If no one knows whether they are one of the parties that will benefit from unfair distribution, then even selfish Bayesian agents will agree on a distribution. This might be accomplished if a group can decide in advance what to do in certain circumstances.

For example, say a group of N people thinks that some of them might be poisoned, but no one is exhibiting symptoms yet. The group might decide to administer 1 unit of antidote to the first person to show visible symptoms. If they continue to treat each person who shows symptoms, in order, they may well run out of their n units of antidote. Before anyone shows symptoms, even in a worst-case scenario where they are all poisoned, self-interested parties will find it fairly easy to agree to an n/N chance of survival. When they are down to their nth and last unit of antidote, however, all parties but the one showing symptoms have a strong incentive to withhold the antidote: if they are all poisoned, whoever doesn't get that last dose has a 0% chance of survival.
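
A small numerical illustration of that antidote scenario in Python, assuming the worst case where all N people are poisoned and symptoms appear in a random order (N = 10 and n = 4 below are arbitrary numbers):

```python
def survival_chance(doses_left, people_untreated):
    """Chance that an untreated, not-yet-symptomatic person survives if the
    group keeps giving doses strictly in order of visible symptoms."""
    return doses_left / people_untreated

N, n = 10, 4
print(round(survival_chance(n, N), 3))          # 0.4 -- easy to agree to ex ante

for used in range(1, n + 1):
    # After `used` doses have gone to the first `used` people to show symptoms:
    print(used, round(survival_chance(n - used, N - used), 3))
# By the time the last dose is spoken for, a still-untreated bystander's chance
# has fallen to 0.0 -- exactly when the incentive to break the agreement is strongest.
```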

This assumes that all parties get equal value out of the same utility, however. It's much more difficult when one party gets an amount of utility that can only be judged qualitatively. For example, if Xannon and Yancy don't really like pie all that much and aren't all that hungry, but Zaire hasn't eaten anything in the last day or two. Alternatively, if we want to compare how much a pig values its own life with the utility of a much more intelligent human's pleasure out of eating bacon.

If you can determine a conversion factor though, or agree on the relative benefits of each, then it becomes pretty obvious which option leads to the greatest total utility. You just choose the option with the highest expected total utility. All of the difficulty is contained in assessing utility between different parties, without making apples-to-oranges comparisons.

comment by cousin_it · 2011-03-29T18:46:54.444Z · LW(p) · GW(p)

Nice! Thanks a lot.

comment by xxd · 2012-01-26T22:42:57.267Z · LW(p) · GW(p)

Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.

If any is left over, they go through the process again for the remainder, ad infinitum, until an approximation of all of the pie has been eaten.
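
A Python sketch of what iterating this might look like, under the invented assumption that each allocator hands the recipient one third of whatever pie remains at that moment (the procedure itself doesn't say how generous each allocator is):

```python
def iterate_allocation(rounds=60, generosity=1/3):
    """Repeatedly run Xannon->Zaire, Zaire->Yancy, Yancy->Xannon allocations,
    each allocator giving the recipient `generosity` of the current remainder."""
    shares = {"Xannon": 0.0, "Yancy": 0.0, "Zaire": 0.0}
    order = [("Xannon", "Zaire"), ("Zaire", "Yancy"), ("Yancy", "Xannon")]
    remaining = 1.0
    for _ in range(rounds):
        for _allocator, recipient in order:
            gift = generosity * remaining
            shares[recipient] += gift
            remaining -= gift
    return shares, remaining

shares, leftover = iterate_allocation()
print({name: round(s, 3) for name, s in shares.items()}, round(leftover, 6))
# The leftover shrinks geometrically, but this particular strategy converges to
# roughly {Xannon: 0.211, Yancy: 0.316, Zaire: 0.474} -- far from an even split.
```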

comment by drnickbone · 2012-03-13T21:19:20.480Z · LW(p) · GW(p)

Xannon and Yancy offer Zaire 1/3 of the pie, if he'll accept that.

If he won't, they split the pie 50-50 between them, and leave Zaire with nothing.

Does that sound fair?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-13T23:02:53.698Z · LW(p) · GW(p)

To me? Sure. Not optimal, but fair enough.
To Zaire? No, not at all.

I infer from your question that you would prefer to ignore Zaire's preferences in the matter. As would I.

I further infer that you're content to rely on your intuitions about whose preferences to ignore.
I prefer not to do that, given a choice.

Replies from: drnickbone
comment by drnickbone · 2012-03-14T22:49:22.371Z · LW(p) · GW(p)

This was a pun by the way. I was playing on "fair" in the sense of retributive justice (Xannon and Yancy punishing Zaire for being antisocial) as opposed to distributive justice. Sorry if that wasn't clear.

On reflection, it is important that these senses are closely linked... societies probably can't get the distributive part of justice at all unless they are first firm on the retributive part. Close-knit, egalitarian communities do seem to get very nasty about members taking more than their fair share, and don't truck a lot of long-winded, self-serving debate on what constitutes a fair share. (On a wider scale, it is also interesting how much noisier and greedier the super-Zaires of this world have become in recent years, ever since the threat of socialist revolution went the way of the dodo. A few decades back, the rich really did fear reds under the beds lynching them any time soon, with compromises like Keynes and Social Democracy among the results. Not so much these days.)

Lastly, I don't think I (or you) need to ignore Zaire's preferences, any more than those of Xannon or Yancy. Each of them, presumably, has an individual utility function which increases with the proportion of pie that they personally get. The real difference is that Xannon and Yancy are at least attempting to construct a symmetric joint utility function (one which is invariant under permutations of the variables X, Y and Z), whereas Zaire is just trying it on.

comment by rkyeun · 2012-07-29T23:47:56.194Z · LW(p) · GW(p)

When people get this embroiled in philosophy, I usually start eating pie.

However as I don't like blueberries, we will split the pie into thirds fairly as Yancy wants, then I will give 1/6th of my pie to Zaire so he has the half he wants, and I'll leave the other 1/6th where I found it since A PIE WE FOUND IN THE FOREST AND KNOW NOTHING ABOUT ISN'T NECESSARILY MINE TO STEAL FROM.

comment by Will_Lugar · 2014-08-11T21:28:45.498Z · LW(p) · GW(p)

A great post. It captured a lot of intriguing questions I currently have about ethics. One question I have, which I am curious to see addressed in further posts in this sequence, is: Once we dissolve the question of "fairness" (or "morality" or any other such term) and taboo the term, is there a common referent that all parties are really discussing, or do the parties have fundamentally different and irreconcilable ideas of what fairness (or morality, etc.) is? Is Xannon's "fairness" merely a homonym for Yancy's "fairness" rather than something they could figure out and agree on?

If the different views of "fairness" are irreconcilable, then I am inclined to wonder if moral notions really do generally function (without this intention, oftentimes) as a means for each party to bamboozle the other into giving the speaker what she wants, by appealing to a multifaceted "moral" concept that creates the mere illusion of common ground (similar to how "sound" functions in the question of the tree falling). Perhaps Xannon wants agreement, Yancy wants equal division, and there is no common ground between them except for a shared delusion that there is common ground. (I certainly hope this isn't true.)

More generally, what about different ethical systems? Although we can easily rule out non-naturalist systems, if two different moral reductionist systems clash (yet neither contradicts known facts) which one is "best?" How can we answer this question without defining the word "best," and what if the two systems disagree on the definition? It would seem to result in an infinite recursion of criteria disagreements--even between two systems that agree on all the facts. (As I understand it, Luke's discussion on pluralistic moral reductionism is relevant to this, but I have not yet read it and am very distressed that he is apparently never going to finish it.)

I tentatively stand by my own theory of moral reductionism (similar to Fyfe's desirism, with traces of hedonistic utilitarianism and Carrier's goal theory) but it concerns me that different people might be using moral concepts in irreconcilably different ways, and some of those that contradict mine might be equally "legitimate" to mine... After reading the Human's Guide to Words sequence, I am more hesitant to use any kind of appeal to common usage, which is what I'd previously done. My views and arguments may continue to change as I read further, and I try always to be grateful to read things that do this to me.

Anyhow, I expect to enjoy reading the rest of the meta-ethics sequence. (I'll read Luke's perpetually-unfinished meta-ethics sequence afterwards.)

comment by lawrence-hg257 · 2015-02-25T17:17:16.960Z · LW(p) · GW(p)

Interesting. As far as I can tell, the moral is that most definitions in an argument are supplied such that the arguer gets their way, instead of being a solid fact that can be followed in a logical sequence in order to deduce the correct course of action.

But I think using the rationalists' Taboo would benefit the three, as the word "fair" is defined differently by each of them: Xannon defines fairness as a compromise between the involved parties. Yancy defines fairness as an objective equality wherein everyone receives the same treatment. Zaire either defines fairness as accounting for the needs of each of the involved parties, or as whatever gets him half the pie. Define "fairness" first before agreeing to divide the pie "fairly", or shut up and compromise.

In this situation, I think it would just be easier to split the pie while the other two are arguing and ask, "Do you guys want to eat the pie or continue arguing?" while holding a piece. Arguing about the definition of fairness while starving in a forest in the middle of who-knows-where is generally not a good idea. An argument that leads nowhere is a waste of time, and not useful.

comment by Epictetus · 2015-02-25T19:05:58.949Z · LW(p) · GW(p)

There's another compromise position. Namely, two can form a coalition against the third and treat the problem as dividing a pie between two individuals with different claims. For example, Xannon and Yancy have a combined claim of 2/3 to Zaire's 1/2. Proportional division according to those terms would give Zaire 3/7 to the duo's 4/7, which they can then split in half to get the distribution {2/7, 2/7, 3/7}. As it turns out, you get this same division no matter how the coalitions form. This sort of principle dates back to the Talmud.
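
A quick check of that coalition arithmetic in Python, taking the stated claims (1/3, 1/3, 1/2) and assuming the coalition's award is split among its members in proportion to their individual claims (which reduces to splitting in half when the claims are equal):

```python
from fractions import Fraction
from itertools import combinations

claims = {"Xannon": Fraction(1, 3), "Yancy": Fraction(1, 3), "Zaire": Fraction(1, 2)}

def divide_with_coalition(claims, coalition):
    """Treat the coalition as one claimant, divide the pie in proportion to
    claims, then split the coalition's award in proportion to its members'
    individual claims."""
    coalition_claim = sum(claims[p] for p in coalition)
    total = sum(claims.values())
    shares = {p: claims[p] / total for p in claims if p not in coalition}
    award = coalition_claim / total
    for p in coalition:
        shares[p] = award * claims[p] / coalition_claim
    return shares

for coalition in combinations(claims, 2):
    print(coalition, divide_with_coalition(claims, coalition))
# Every pairing gives Xannon 2/7, Yancy 2/7, and Zaire 3/7.
```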

Of course, this only works if Xannon agrees that Zaire can justly claim half the pie. If not, Xannon and Yancy could compel Zaire to accept one-third of the pie.

Rational actors would each claim the entire pie for themselves, and then any fair division scheme would result in each getting a third.

comment by EniScien · 2022-05-29T19:20:08.347Z · LW(p) · GW(p)

Wow. This creates a real moral conflict for me much better than the clash of three worlds did (where the problem is that I really agree more with the Superhappies than with the humans, and even more so with those who killed themselves).