The Bedrock of Morality: Arbitrary?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-14T22:00:57.000Z · LW · GW · Legacy · 119 comments

Followup to: Is Fairness Arbitrary?, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps

Yesterday, I presented the idea that when only five people are present, having just stumbled across a pie in the woods (a naturally growing pie that just popped out of the ground), it is fair to give Dennis only 1/5th of this pie, even if Dennis persistently claims that it is fair for him to get the whole thing.  Furthermore, it is meta-fair to follow such a symmetrical division procedure, even if Dennis insists that he ought to dictate the division procedure.

Fair, meta-fair, or meta-meta-fair, there is no level of fairness where you're obliged to concede everything to Dennis, without reciprocation or compensation, just because he demands it.

Which goes to say that fairness has a meaning beyond "that which everyone can be convinced is 'fair'".  This is an empty proposition, isomorphic to "Xyblz is that which everyone can be convinced is 'xyblz'".  There must be some specific thing of which people are being convinced; and once you identify that thing, it has a meaning beyond agreements and convincing.

You're not introducing something arbitrary, something un-fair, in refusing to concede everything to Dennis.  You are being fair, and meta-fair and meta-meta-fair.  As far up as you go, there's no level that calls for unconditional surrender.  The stars do not judge between you and Dennis—but it is baked into the very question that is asked, when you ask, "What is fair?" as opposed to "What is xyblz?"

Ah, but why should you be fair, rather than xyblz?  Let us concede that Dennis cannot validly persuade us, on any level, that it is fair for him to dictate terms and give himself the whole pie; but perhaps he could argue about whether we should be fair at all?

The hidden agenda of the whole discussion of fairness, of course, is that good-ness and right-ness and should-ness ground out similarly to fairness.

Natural selection optimizes for inclusive genetic fitness.  This is not a disagreement with humans about what is good.  It is simply that natural selection does not do what is good: it optimizes for inclusive genetic fitness.

Well, since some optimization processes optimize for inclusive genetic fitness, instead of what is good, which should we do, ourselves?

I know my answer to this question.  It has something to do with natural selection being a terribly wasteful and stupid and inefficient process.  It has something to do with elephants starving to death in their old age when they wear out their last set of teeth.  It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain.  Evolution had to happen sometime in the history of the universe, because that's the only way that intelligence could first come into being, without brains to make brains; but now that era is over, and good riddance.

But most of all—why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good?  What is even the appeal of this, morally or otherwise?  At all?  I know people who claim to think like this, and I wonder what wrong turn they made in their cognitive history, and I wonder how to get them to snap out of it.

When we take a step back from fairness, and ask if we should be fair, the answer may not always be yes.  Maybe sometimes we should be merciful.  But if you ask if it is meta-fair to be fair, the answer will generally be yes.  Even if someone else wants you to be unfair in their favor, or claims to disagree about what is "fair", it will still generally be meta-fair to be fair, even if you can't make the Other agree.  By the same token, if you ask if we meta-should do what we should, rather than something else, the answer is yes.  Even if some other agent or optimization process does not do what is right, that doesn't change what is meta-right.

And this is not "arbitrary" in the sense of rolling dice, not "arbitrary" in the sense that justification is expected and then not found.  The accusations that I level against evolution are not merely pulled from a hat; they are expressions of morality as I understand it.  They are merely moral, and there is nothing mere about that.

In "Arbitrary" I finished by saying:

The upshot is that differently structured minds may well label different propositions with their analogues of the internal label "arbitrary"—though only one of these labels is what you mean when you say "arbitrary", so you and these other agents do not really have a disagreement.

This was to help shake people loose of the idea that if any two possible minds can say or do different things, then it must all be arbitrary.  Different minds may have different ideas of what's "arbitrary", so clearly this whole business of "arbitrariness" is arbitrary, and we should ignore it.  After all, Sinned (the anti-Dennis) just always says "Morality isn't arbitrary!" no matter how you try to persuade her otherwise, so clearly you're just being arbitrary in saying that morality is arbitrary.

From the perspective of a human, saying that one should sort pebbles into prime-numbered heaps is arbitrary—it's the sort of act you'd expect to come with a justification attached, but there isn't any justification.

From the perspective of a Pebblesorter, saying that one p-should scatter a heap of 38 pebbles into two heaps of 19 pebbles is not p-arbitrary at all—it's the most p-important thing in the world, and fully p-justified by the intuitively obvious fact that a heap of 19 pebbles is p-correct and a heap of 38 pebbles is not.
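
The Pebblesorters' criterion, note, is a perfectly definite computation - one whose output humans and Pebblesorters can both check, even while disagreeing about whether that output matters. A minimal illustrative sketch in Python (the function name is hypothetical, not anything from the original post):

```python
# Illustrative sketch: the Pebblesorter test, "a heap is p-correct iff its
# pebble count is prime," is an objective computation whose output humans
# and Pebblesorters can both verify.

def is_p_correct(heap_size: int) -> bool:
    """Return True iff a heap of this many pebbles is prime-numbered."""
    if heap_size < 2:
        return False
    divisor = 2
    while divisor * divisor <= heap_size:
        if heap_size % divisor == 0:
            return False
        divisor += 1
    return True

print(is_p_correct(19))   # True  -- a p-correct heap
print(is_p_correct(38))   # False -- 38 = 2 * 19, so it p-should be scattered
```

The dispute, such as it is, is not over what this computation returns for any given heap, but over whether its return value carries any should-ness.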

So which perspective should we adopt?  I answer that I see no reason at all why I should start sorting pebble-heaps.  It strikes me as a completely pointless activity.  Better to engage in art, or music, or science, or heck, better to connive political plots of terrifying dark elegance, than to sort pebbles into prime-numbered heaps.  A galaxy transformed into pebbles and sorted into prime-numbered heaps would be just plain boring.

The Pebblesorters, of course, would only reason that music is p-pointless because it doesn't help you sort pebbles into heaps; the human activity of humor is not only p-pointless but just plain p-bizarre and p-incomprehensible; and most of all, the human vision of a galaxy in which agents are running around experiencing positive reinforcement but not sorting any pebbles, is a vision of an utterly p-arbitrary galaxy devoid of p-purpose.  The Pebblesorters would gladly sacrifice their lives to create a P-Friendly AI that sorted the galaxy on their behalf; it would be the most p-profound statement they could make about the p-meaning of their lives.

So which of these two perspectives do I choose?  The human one, of course; not because it is the human one, but because it is right.  I do not know perfectly what is right, but neither can I plead entire ignorance.

And the Pebblesorters, who simply are not built to do what is right, choose the Pebblesorting perspective: not merely because it is theirs, or because they think they can get away with being p-arbitrary, but because that is what is p-right.

And in fact, both we and the Pebblesorters can agree on all these points.  We can agree that sorting pebbles into prime-numbered heaps is arbitrary and unjustified, but not p-arbitrary or p-unjustified; that it is the sort of thing an agent p-should do, but not the sort of thing an agent should do.

I fully expect that even if there is other life in the universe only a few trillions of lightyears away (I don't think it's local, or we would have seen it by now), that we humans are the only creatures for a long long way indeed who are built to do what is right.  That may be a moral miracle, but it is not a causal miracle.

There may be some other evolved races, a sizable fraction perhaps, maybe even a majority, who do some right things.  Our executing adaptation of compassion is not so far removed from the game theory that gave it birth; it might be a common adaptation.  But laughter, I suspect, may be rarer by far than mercy.  What would a galactic civilization be like, if it had sympathy, but never a moment of humor?  A little more boring, perhaps, by our standards.

This humanity that we find ourselves in, is a great gift.  It may not be a great p-gift, but who cares about p-gifts?

So I really must deny the charges of moral relativism:  I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that.  We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't.  Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don't.  Human morality is p-arbitrary, but who cares?  P-arbitrariness is arbitrary.

You've just got to avoid thinking that the words "better" and "p-better", or "moral" and "p-moral", are talking about the same thing—because then you might think that the Pebblesorters were coming to different conclusions than us about the same thing—and then you might be tempted to think that our own morals were arbitrary.  Which, of course, they're not.

Yes, I really truly do believe that humanity is better than the Pebblesorters!  I am not being sarcastic, I really do believe that.  I am not playing games by redefining "good" or "arbitrary", I think I mean the same thing by those terms as everyone else.  When you understand that I am genuinely sincere about that, you will understand my metaethics.  I really don't consider myself a moral relativist—not even in the slightest!

 

Part of The Metaethics Sequence

Next post: "You Provably Can't Trust Yourself"

Previous post: "Is Fairness Arbitrary?"

119 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Hopefully_Anonymous · 2008-08-14T22:31:02.000Z · LW(p) · GW(p)

I'm fine with a galaxy without humor, music, or art. I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me.

Replies from: diegocaleiro, VAuroch
comment by diegocaleiro · 2010-11-17T03:04:39.625Z · LW(p) · GW(p)

This is because you are more strongly driven (individually, not as the whole of our species) to maximize your survival opportunities than your pleasure opportunities. If it is the case, for instance, that you would accept becoming an entity with a constant focus of attention on almost nothing (a meditating entity) instead of having loads of fun for one fourth of the time (say, 1000 versus 4000 years), this means you are a survival optimization process. You were not designed for this; you were designed to maximize genetic fitness. And you were designed to maximize memetic fitness. You have been designed by two major forces composed of many, many minor forces. You have decided, morally, to abdicate all this plurality in the name of conscious survival. You praise individuality, and you praise persistence. I would suggest letting one more meme take over your mind: the meme of fun. Optimize for fun; don't lead us into a universe of "sperm donors" who detect minimal-entropy spaces to be at for longer. Don't lead us here: http://www.nickbostrom.com/fut/evolution.html

comment by VAuroch · 2013-11-27T08:38:36.964Z · LW(p) · GW(p)

I would not, or at least probably would not. For a sufficiently drastic survival ratio, I might. I would not, because humor and art are tied far too directly into the sense of fun; an excellent game is art just as much as Guernica, Citizen Kane, or a Beethoven symphony, and losing that entire section of the aesthetic sense would spoil too many fun things.

Of course, whatever senses replaced our senses of art/humor might bring other, equal fun with them. In which case I'd be more inclined to accept the deal.

comment by Nominull3 · 2008-08-14T22:56:50.000Z · LW(p) · GW(p)

I kind of think humor, music and art are pretty cool, myself.

comment by Psy-Kosh · 2008-08-14T22:57:19.000Z · LW(p) · GW(p)

Hopefully: But would you replace those with anything else? I'd want persistence, but I'd want growth and, well, fun! :)

comment by roko3 · 2008-08-14T23:03:23.000Z · LW(p) · GW(p)

I think that your use of the word arbitrary differs from mine. My mind labels statements such as "we should preserve human laughter for ever and ever" with the "roko-arbitrary" label. Not that I don't enjoy laughter, but there are plenty of things that I presently enjoy that, if I had the choice, I would modify myself to enjoy less. Activities such as enjoying making fun of other people, eating sweet foods, etc. It strikes me that the dividing line between "things I like but wish I didn't like" and "things I like and want to keep liking" should be made in some non-roko-arbitrary way. One might incorporate my position with eliezer's by saying that my concept of "rightness" relies heavily on my concept of arbitrariness, and that my concept of arbitrariness is clearly different to eliezer's.

comment by Z._M._Davis · 2008-08-14T23:04:43.000Z · LW(p) · GW(p)

Eliezer: "I am not playing games by redefining 'good' or 'arbitrary' [...]"

I imagine the counterargument would be that while you're not playing e-games by e-redefining the terms, you are playing games by redefining the terms.

Upon a preview it looks like Roko beat me to it.

comment by roko3 · 2008-08-14T23:23:53.000Z · LW(p) · GW(p)

It also worries me quite a lot that eliezer's post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter's notions. This property qualifies as "moral relativism" in my book, though there is no point in arguing about the meanings of words.

My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X. Maybe I find this satisfying because I have always been more at home with category theory than logic; I have defined a set of values by requiring them to satisfy a universal property.

comment by Seinberg · 2008-08-15T00:45:19.000Z · LW(p) · GW(p)

But laughter, I suspect, may be rarer by far than mercy.

Curious why you suspect this. Is it particularly mammalian in some respect? I confess I could be naive, but it seems to me that any sufficiently intelligent being/agent would be just as likely as we humans are to have humor. I suppose that raises the question of how likely it is, and whether we are just incredibly lucky to have inherited such a trait. Still, it's such a core aspect of so much of our species -- even more than mercy, I think! -- that I'm curious why you think that.

Replies from: VAuroch
comment by VAuroch · 2013-11-27T08:41:04.833Z · LW(p) · GW(p)

Only humans laugh, and only the chimpanzee family does anything similar; even that isn't like normal laughter, but the kind of laughter you express when tickled (which is subjectively qualitatively different from laughter at a joke).

comment by J_Thomas2 · 2008-08-15T01:38:04.000Z · LW(p) · GW(p)

Eliezer, you claim that there is no necessity that we accept Dennis's claim that it is fair for him to get the whole pie. I agree.

There is also no necessity he should accept our alternative claim as fair.

There is no abstract notion that is inherently fair. What there is, is that when people do reach agreement that something is fair, then they have a little bit more of a society. And when they can't agree about what's fair they have a little less of a society. There is nothing that says ahead of time that they must have that society. There is nothing that says ahead of time what it is that they must agree is fair. (Except that some kinds of fairness may aid the survival of the participants, or the society.)

Concepts of fairness aren't inherent in the universe, they're an emergent property that comes from people finding ways to coexist and to find mutual aid. If they agree that it's fair for them to hunt down and kill and eat each other because each victim has just as much right and almost as much opportunity to turn the tables, this does not lead to a society that's real useful to its participants and it does not lead them to be particularly useful to each other. It's a morality that in many circumstances will not be fully competitive. But this is a matter of natural selection among ideas, there isn't anything less fair about this concept than other concepts of fairness. It's only less competitive, which is an entirely different thing.

It's an achievement to reach agreement about proper behavior. The default is no agreement. We make an effort to reach agreement because that's the kind of people we are. The kind of people who've survived best so far. When Dennis feels he deserves something different from what we think, we often feel we should try to understand his point of view and see if we can come to a common understanding.

And we have to accept that sometimes we cannot come to any common understanding, that's just how it works. We have to accept that sometimes somebody will feel that it isn't fair, that he's been mistreated, and we have to live with whatever consequences come from that. Society isn't an all-or-none thing. We walk together, we stumble, we fall down, we get back up and try some more.

Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?

You might as well imagine there's a single perfect language and if we all spoke it we'd understand each other completely and everything we said would be true.

comment by Richard_Hollerith2 · 2008-08-15T01:55:15.000Z · LW(p) · GW(p)

Though Eliezer does not say it explicitly today, the totality of his public pronouncements on laughter leads me to believe that he considers laughter an intrinsic good of very high order. I hope he does not expect me to accept the highness of the probability of the rareness of humor in the universe as evidence for humor's intrinsic goodness. After all, spines are probably very rare in the universe, too. At least spines with 32 (or however many humans have) vertebrae are.

Eliezer does not explicitly say today that happiness is an intrinsic good, but he does contrast pebble sorting with "the human vision of a galaxy in which agents are running around experiencing positive reinforcement."

I take it Eliezer does not wish to see the future light cone tiled with tiny computers running Matt Mahoney's Autobliss 1.0. Pray tell me, what is wrong with such a future that is not also wrong with a future in which the resources of the future light cone are devoted to helping humans run around and experience positive reinforcement? Eliezer's answer might refer to the difference between the simplicity of Autobliss 1.0 and the complexity of a human. Well, my reply to that is that it is relatively easy to make Autobliss more complex. We can even employ an evolutionary algorithm to create the complexity, increasing the resemblance between Autobliss 2.0 and humans. Eliezer probably has a reply to that, too. But when does this dialog reach the point where it is obvious that the distinction that makes humans intrinsically valuable and Autobliss 1.0 not valuable is being chosen so as to have the desired consequence? And did we not have a sermon some day in the last couple of weeks about how it is bad to gather evidence for a desired conclusion while ignoring evidence against the conclusion?

comment by Kip_Werking · 2008-08-15T01:55:22.000Z · LW(p) · GW(p)

I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary.

comment by Wiseman · 2008-08-15T02:01:25.000Z · LW(p) · GW(p)

@J Thomas: "Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"

Because they have a specific argument which leads them to believe that?

You know, there's no reason why one couldn't consider one language more efficient at communication than others, at least by human benchmarks, all else being equal (how well people know the language, etc.). Ditto for morality.

Thomas, you are running into the same problem Eliezer is: you can't have a convincing argument about what is fair, versus what is not fair, if you don't explicitly define "fair" in the first place. It's more than a little surprising that this isn't very obvious.

comment by Richard_Hollerith2 · 2008-08-15T02:05:55.000Z · LW(p) · GW(p)

Eliezer can reply that moral conclusions are different, so the sermon does not apply. Well, I think it should apply, in certain cases, such as when you are contemplating the launch of the seed of a superintelligence, which is an occasion that IMO demands a complete reevaluation of one's terminal values and the terminal values of one's society.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-15T02:06:50.000Z · LW(p) · GW(p)
I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary.

...

comment by Richard_Hollerith2 · 2008-08-15T02:07:59.000Z · LW(p) · GW(p)

Kip and Wiseman slipped in. Busy tonight.

comment by Z._M._Davis · 2008-08-15T02:12:45.000Z · LW(p) · GW(p)

Richard: "Eliezer answer might refer to the difference between the simplicity of Autobliss 1.0 and the complexity of a human."

I'm pretty sure he wouldn't say that. Rather, the claim (if I'm reading him correctly) is that the true referent of good really is a really complicated bundle of human values. In a material universe, you can't cash out "intrinsic goodness" in the intuitive way.

comment by komponisto2 · 2008-08-15T02:13:13.000Z · LW(p) · GW(p)

I'm really having trouble understanding how this isn't tantamount to moral relativism -- or indeed moral nihilism. The whole point of "morality" is that it's supposed to provide a way of arbitrating between beings, or groups, with different interests -- such as ourselves and Pebblesorters. Once you give up on that idea, you're reduced, as in this post, to the tribalist position of arguing that we humans should pursue our own interests, and the Pebblesorters be damned. When a conflict arises (as it inevitably will), the winner will then be whoever has the bigger guns, or builds AI first.

Mind you, I don't disagree that this is the situation in which we in fact find ourselves. But we should be honest about the implications. The concept of "morality" is entirely population-specific: when groups of individuals with common interests come into contact, "morality" is the label they give to their common interests. So for us humans, "morality" is art, music, science, compassion, etc. -- in short, all the things that we humans (as opposed to Pebblesorters) like. This is what I understand Eliezer to be arguing. But if this is your position, you may as well come out and admit that you're a moral relativist, because this is the position that the people who are scared of moral relativism are in fact scared of. What they dread is a world in which Dennis could go on saying that Dennis-morality is what really matters, the rest of us disagree, war breaks out, Dennis kills us all, eats the whole pie, and is not spanked by any cosmic force. But this is indeed the world we live in.

Replies from: Yosarian2, MugaSofer, army1987, TheAncientGeek
comment by Yosarian2 · 2013-01-05T00:52:52.475Z · LW(p) · GW(p)

Once you give up on that idea, you're reduced, as in this post, to the tribalist position of arguing that we humans should pursue our own interests, and the Pebblesorters be damned. When a conflict arises (as it inevitably will), the winner will then be whoever has the bigger guns, or builds AI first.

I don't think so. I don't think it's h-right to destroy or reprogram the pebblesorters, so if we're exploring space and find them, I don't think we'll do that.

It may be p-right for them to re-program us to forget about h-right and just start sorting pebbles, though, so we want to watch out to make sure that doesn't happen.

I think the mistake that most "moral relativists" make is that they forget about the shared human morality we all have, and therefore claim that it's all arbitrary and meaningless.

comment by MugaSofer · 2013-01-05T01:08:43.573Z · LW(p) · GW(p)

The whole point of "morality" is that it's supposed to provide a way of arbitrating between beings, or groups, with different interests -- such as ourselves and Pebblesorters.

Could you taboo "point" and "arbitrating"? I'm not sure if I'm interpreting this correctly.

comment by A1987dM (army1987) · 2013-10-01T22:33:42.411Z · LW(p) · GW(p)

the tribalist position of arguing that we humans should pursue our own interests, and the Pebblesorters be damned.

He's also arguing that the pebblesorters p-should pursue their own p-interests and we humans be p-damned, for that matter.

comment by TheAncientGeek · 2013-10-01T22:56:45.501Z · LW(p) · GW(p)

Moral realists don't believe in karmic retribution. Most popular critiques of MR strawman it as something much stronger than any other kind of realism or objectivism. Objectivisms only require the availability of mind-independent truths to those capable of, and interested in, finding them.

comment by Carl_Shulman · 2008-08-15T02:14:03.000Z · LW(p) · GW(p)

Roko,

"UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X." Roko,

You know that all of the so-called 'UIVs' that have been postulated only apply for some Y under some conditions (the presence of other powerful agents and game theoretic considerations or manipulation, self-referential utility functions, preferences over mathematical truths, and many other considerations make so-called UIVs useless or sources of terminal disvalue for an infinite number of cases), and an agent could have the terminal value Y1, where Y1 is not valuing anything in X, so X is an empty set.

Why are 'often instrumental values,' or OIVs, non-arbitrary terminal values as well?

comment by MichaelAnissimov · 2008-08-15T02:14:10.000Z · LW(p) · GW(p)

Kip, can you expound?

comment by Tarzan,_me_Jane2 · 2008-08-15T02:14:15.000Z · LW(p) · GW(p)

There has never been, so far as I am able to determine, any force so unfriendly to humans as humans. Yet we read day after day about one very smart man's philosophizing about the essence of humanity, supposedly so that it can be included in the essence of fAI. Wouldn't it be incredible if tomorrow, or sometime in the near future, someone who has been working and actually come up with some designs for fAI or AGI produces a real product, and it makes all the hubris of these responses irrelevant? What is the purpose of an intelligence that is able to take all the unkind things mankind has been able to do, and do them faster and more efficiently? Paper clips may be the answer; certainly humans cannot use their record to debate it. Finally, the fact that one man, no matter how gifted, thinks that he is the only possible answer makes one shudder at the potential attitude of the superhuman intelligence he would create. It will not only have the attitude, "I am unlikely to take your advice, or even to take it seriously, so stop wasting your time", as E. Y. said to one poster, but it will have that attitude toward its programmers as well, at the level of superhuman effectiveness. I want fAI as much as anyone. All this public rumination is not the approach.

comment by komponisto2 · 2008-08-15T02:24:12.000Z · LW(p) · GW(p)

Clarification: in the first paragraph of the above comment, when I wrote "The whole point of 'morality' is..." what I meant was "The whole point of non-relativist 'morality' is...".

comment by Carl_Shulman · 2008-08-15T02:24:47.000Z · LW(p) · GW(p)

"I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary."

Kip,

Given Eliezer's definition of rightness (which is different from current object-level views), if there is a sufficiently cogent and convincing argument for pebblesorting, then pebblesorting is both right and p-right. Do you think that there is a significant chance you would ever view pebblesorting as k-right with expanded intelligence and study?

comment by Vladimir_Nesov · 2008-08-15T02:31:39.000Z · LW(p) · GW(p)

It looks like fairness can be said to be f-morality, built from current morality so that it is known to be sufficiently stable under reflection (that is, (meta)*-f-moral), and as moral as possible. While we travel the road of moral progress, avoiding getting trapped in the simplistic ditches of fake moralities, we need a solid target for agreement, and this is what a particular fairness is. Morality unfolds in a moral way, while casting a shadow of unfolded fairness.

comment by Richard_Hollerith2 · 2008-08-15T02:34:23.000Z · LW(p) · GW(p)

Yes, Z.M., human happiness is not what Eliezer plans to use the superintelligence to maximize. Good to make that clear. But it might be worthwhile to question the intrinsic goodness of human happiness, as a warm-up to questioning the coherent extrapolated volition (CEV) of the humans.

comment by J_Thomas2 · 2008-08-15T04:17:01.000Z · LW(p) · GW(p)

But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good? What is even the appeal of this, morally or otherwise? At all?

I don't think you ought to try to optimise fitness. Your opinion about fitness might be quite wrong, even if you accept the goal of optimising fitness. Say you sacrifice trying to optimise fitness and then it turns out you failed. Like, you try to optimise for intelligence just before a plague hits that kills 3/4 of the public. You should have optimised for plague resistance. What a loser.

And what would you do to optimise genetic fitness anyway? Carefully choose who to have children with?

Perhaps you would want to change the environment so that it will be good for humans, or for your kind of human being. That makes a kind of sense to me, but again it's hard to do. Not only do you have the problem of actually changing the world. You also have the problem of ecological succession. Very often, species that take over an ecosystem change it in ways that leave something else better able to grow than that species' own children. Some places, grasses provide a good environment for pine seedlings that then shade out the grass. But the pines in turn create an environment where hardwood saplings can grow better than pine saplings. Etc. If you like human beings or your own kind of human beings then it makes some sense to create an environment where they will thrive. But do you know how to do that?

If you knew all about how to design ecosystems to get the results you want, that might provide some of the tools you'd need to design human societies. I don't think those tools exist yet.

On a different level, I feel like it's important to avoid minimising memetic fitness. If you have ideas that you believe are true or good or beautiful, and those ideas seem to kill off the people who hold them faster than they can spread the ideas, that's a sign that something is wrong. It should not be that the good, true, or beautiful ideas die out. Either there's something wrong with the ideas, or else there should be some way to modify the environment so they spread easier, or at least some way to modify the environment so the bearers of the ideas don't die off so fast. I can't say what it is that's wrong, but there's something wrong when the things that look good tend to disappear.

If they're good then there ought to be a way for them to persist until they can mutate or recombine into something better. They don't need to take over the world but they shouldn't just disappear.

I don't like it when the things I like go extinct.

So I don't want to maximise the fitness of things I like, but I sure do want that fitness to be adequate. When it isn't adequate then something is wrong and I want to look carefully at what's wrong. Maybe it's the ideas. Maybe something else.

Similarly, if you run a business you don't need to maximise profits. But if you run at a loss on average then you have a problem that needs to be fixed.

comment by J_Thomas2 · 2008-08-15T05:36:57.000Z · LW(p) · GW(p)

"Why would anybody think that there is a single perfect morality, and if everybody could only see it then we'd all live in peace and harmony?"

Because they have a specific argument which leads them to believe that?

Sure, but have you ever seen such an argument that wasn't obviously fallacious? I have not seen one yet. It's been utterly obvious every time.

Thomas, you are running in to the same problem Eliezer is: you can't have a convincing argument about what is fair, versus what is not fair, if you don't explicitly define "fair" in the first place. It's more than a little surprising that this isn't very obvious.

I gave a simple obvious definition. You might disagree with it, but how is it unclear?

comment by Tim_Tyler · 2008-08-15T07:03:35.000Z · LW(p) · GW(p)

Re: why on Earth would any human being think that one ought to optimize inclusive genetic fitness

"Ought" is a word that only makes sense in the context of an existing optimisation strategy. As far as biologists can reasonably tell, the optimisation strategy of organisms involves maximising their inclusive genetic fitness. So the short answer to this is: because nature built them that way.

The bigger puzzle is not why organisms act to maximise their inclusive genetic fitness, but why they sometimes systematically fail to do so. What cognitive malfunction causes phenomena such as the western demographic shift - phenomena that are unlikely to be adaptive?

With humans, it's not such a puzzle - the answer is usually: humans exist in a bizarre environment, and this causes their genetic program to malfunction. The problem will get fixed eventually.

comment by Tim_Tyler · 2008-08-15T07:12:02.000Z · LW(p) · GW(p)

For evolution being wasteful, see: http://alife.co.uk/essays/evolution_is_good/

For evolution being stupid, see: http://alife.co.uk/essays/evolution_sees/

comment by Virge2 · 2008-08-15T07:41:48.000Z · LW(p) · GW(p)

komponisto: "I'm really having trouble understanding how this isn't tantamount to moral relativism"

I think I see an element of confusion here in the definition of moral relativism. A moral relativist holds that "no universal standard exists by which to assess an ethical proposition's truth". However, the word universal in this context (moral philosophy) is only expected to apply to all possible humans, not all conceivable intelligent beings. (Of all the famous moral relativist philosophers, how many have addressed the morals of general non-human intelligences?)

So we can ask two different questions:

#1. Is there a standard by which we can assess an ethical proposition's truth that applies to all humans?

#2. Is there a standard by which we can assess an ethical proposition's truth that applies to all conceivable intelligent beings?

I expect that Eliezer would answer yes to #1 and no to #2.

If you interpret universal in the broader sense (#2), then Eliezer would indeed be a moral relativist, but I think that distorts the concept of moral relativism, since the philosophy was developed with only humans of different cultures in mind.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-15T09:00:33.000Z · LW(p) · GW(p)

You can apply the standard of goodness to all intelligent beings, no problem. It's just that they won't apply it to themselves.

The content of "good" is an abstracted idealized dynamic, or as Steven put it, a rigid designator (albeit a self-modifying rigid designator). Thus what is good, or what is not good, is potentially as objective as whether a pile of pebbles is prime. It is just that not every possible optimization process, or every possible mind, does what is good. That's all.

comment by Ben_Jones · 2008-08-15T09:02:17.000Z · LW(p) · GW(p)

I wish Arnie's character had made a long speech in the middle of the film explaining that Predator wasn't evil or even wrong, he was just working around a different optimization process.

@HA

I'm fine with a galaxy without humor, music, or art. I'd sacrifice all of that...to maximize my persistence odds as a subjective conscious entity....

So existing is a terminal value in and of itself for you, HA. Wouldn't you get bored? Or would you try to excise your boredom circuits, along with your humour, music and art circuits? How about your compassion circuits? Do you strive for the condition of perfect, empty, value-less ghost in the machine, just for its own sake...?

...because if so, I can tell you that you're 100% b-wrong about that.

comment by Ian_C. · 2008-08-15T10:20:34.000Z · LW(p) · GW(p)

In the real world, everything worth having comes from someone's effort -- even wild fruit has to be picked, sorted, and cleaned, and fish need to be caught, gutted, etc. I think this universal fact of required effort is probably part of the data we get the concept of fairness from in the first place, so reasoning in a space where pies pop into existence from nothing seems like whatever you conclude might not be applicable to the real world anyway.

comment by Hopefully_Anonymous · 2008-08-15T11:14:19.000Z · LW(p) · GW(p)

Ben, you write "Do you strive for the condition of perfect, empty, value-less ghost in the machine, just for its own sake...?".

But my previous post clearly answered that question: "I'd sacrifice all of that reproductive fitness signalling (or whatever it is) to maximize my persistence odds as a subjective conscious entity, if that "dilemma" was presented to me."

comment by Mario2 · 2008-08-15T11:18:18.000Z · LW(p) · GW(p)

It is pretty clever to suggest objective morality without specifying an actual moral code, as it is always the specifics that cause problems.

My issue would be how Eliezer appears to suggest that human morality and alien morality could be judged separately from the genetics of each. Would super intelligent alien bees have the same notions of fairness as we do, and could we simply transplant our morality onto them, and judge them accordingly, with no adjustments made for biological differences? I think it is very likely that such a species would consider the most fair distribution of a found pie to be one that involved a sizeable portion going to the queen, and that a worker who disagreed would be acting immorally. Is this something that we can safely say is objectively wrong?

comment by Lakshmi · 2008-08-15T11:52:56.000Z · LW(p) · GW(p)

To be fair (cough), your argument that '5 people means the pie should be divided into 5 equal parts' assumes several things...

1) Each person, by virtue of merely being there, is entitled to pie.

2) Each person, by virtue of merely being there, is entitled to the same amount of pie as every other person.

While this division of the pie may be preferable for the health of the collective psyche, it is still a completely arbitrary (cough) way to divide the pie. There are several other meaningful, rational, logical ways to divide the pie. (I believe I suggested one in a previous post.) Choosing to divide the pie into 5 equal parts simply asserts the premise 'existence = equal right' as the dominant principle by which to guide the division of the pie.

You have to remove all other considerations (including hunger, health, and any existing social relationships such as parent-child) in order to allow the 'existence = equal right' principle to be an acceptable way to divide the pie. This doesn't make that principle the 'bedrock' of morality. Quite the contrary. It says that this principle only dominates when all other factors are ignored.

comment by Roko · 2008-08-15T11:54:47.000Z · LW(p) · GW(p)

Eliezer: "I really don't consider myself a moral relativist - not even in the slightest!"

Meta-ethical relativism (wikipedia)

Meta-ethical relativists, in general, believe that the descriptive properties of terms such as "good", "bad", "right", and "wrong" do not stand subject to universal truth conditions, but only to societal convention and personal preference. Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what one ought to do based on societal or individual norms, and one cannot adjudicate these using some independent standard of evaluation. The latter standard will always be societal or personal and not universal, unlike, for example, the scientific standards for assessing temperature or for determining mathematical truths.


I think that this describes Eliezer's position. He can adjudicate a disagreement between the pebblesorters and the humans, but he does it in a rather trivial way: he uses the standards of the humans, not an independent standard.

comment by J_Thomas2 · 2008-08-15T13:08:27.000Z · LW(p) · GW(p)

Lakshmi, Eliezer does have a point, though.

While there are many competing moral justifications for different ways to divide the pie, and while a moral relativist can say that no one of them is objectively correct, still many human beings will choose one. Not even a moral relativist is obligated to refrain from choosing moral standards. Indeed, someone who is intensely aware that he has chosen his standards may feel much more intensely that they are his than someone who believes they are a moral absolute that all honest and intelligent people are obligated to accept.

So, once you have made your moral choice, it is not fair to simply put it aside because somebody else disagrees. If he convinces you that he's right, then it's OK. But if you believe you know what's right and you agree to do wrong, you are doing wrong.

If all but two members of the group -- you and Aaron -- think it's right to do something that Aaron thinks is unfair to him, then it's wrong for you to violate your ethics and go along with the group. If everybody but you thinks it's right then it's still wrong for you to agree, when you believe it's wrong.

Unless, of course, you belong to a moral philosophy which says it's right to do that.

When Dennis says he deserves the whole pie and you disagree, and it violates your ethical code to say it's right when you think it's wrong, then you should not agree for Dennis to get the whole pie. It would be wrong.

I believe that what you ought to do in the case when there's no agreement, should still be somewhat undecided. If you have the power to impose your own choice on someone else or everybody else then that might be the most convenient thing to do. But it takes a special way of thinking to say it's fair to do that. Is it in general a fair thing to impose your standards on other people when you think you are right? I guess a whole lot of people think so. But I'm convinced they're wrong. It isn't fair. And yet it can be damn convenient....

comment by Caledonian2 · 2008-08-15T14:55:05.000Z · LW(p) · GW(p)

As far up as you go, there's no level that calls for unconditional surrender.
I see no reason to presume that a concept of fairness would never require that one involved entity cede their demands and give in.

comment by Larry_D'Anna · 2008-08-15T15:21:30.000Z · LW(p) · GW(p)

Roko: What the heck does morality have to do with category theory at all?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-15T15:34:23.000Z · LW(p) · GW(p)
Meta-ethical relativists, in general, believe that the descriptive properties of terms such as "good", "bad", "right", and "wrong" do not stand subject to universal truth conditions, but only to societal convention and personal preference. Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what one ought to do based on societal or individual norms, and one cannot adjudicate these using some independent standard of evaluation. The latter standard will always be societal or personal and not universal, unlike, for example, the scientific standards for assessing temperature or for determining mathematical truths.

And there you have it: I am not a meta-ethical relativist. Humans and Pebblesorters do not have a fundamental disagreement about what one ought to do; humans do what they ought to do and Pebblesorters do what they p-ought to do. You can't have a disagreement without some particular fact or computation or idealized abstract dynamic that you are both arguing about, different beliefs with the same referent. "That which an optimization process does" is not a belief that can be argued about, it is a property of that optimization process. What is right, on the other hand, or what is p-right, is something that can be argued about; but a human does not dispute which piles of pebbles are prime.

Two people who actually disagree about the same question, a common referent, can try to adjudicate it (even if they don't have a nice neat formal procedure which is readily computable and known to them). I'm not sure what a "universal truth condition" is, but statements about rightness have truth conditions just as much as p-rightness.

Furthermore, I believe that human beings are better than Pebblesorters. This is not written upon the very stars, it is written in Platonia as the objective answer to that question that we ask when we ask "Is it better?" and not "Is it more xyblz?"

Replies from: lmm
comment by lmm · 2013-10-08T11:46:51.168Z · LW(p) · GW(p)

If you follow through on this view it seems to lead to the position that everyone has their own referent for "good", and there is no meaningful way for two different humans to argue about whether a given action is good. Which would suggest there is little point trying to persuade other people to be good, or hoping to collaboratively construct a friendly AI (since an l-friendly AI is unlikely to be e-friendly).

Replies from: wedrifid
comment by wedrifid · 2013-10-08T11:52:06.577Z · LW(p) · GW(p)

If you follow through on this view it seems to lead to the position that everyone has their own referent for "good", and there is no meaningful way for two different humans to argue about whether a given action is good. Which would suggest there is little point trying to persuade other people to be good, or hoping to collaboratively construct a friendly AI (since an l-friendly AI is unlikely to be e-friendly).

Cooperation does not require modification of others to have identical values. Even agents with actively opposed values can cooperate (and so create a mutually friendly AI) so long as the opposition is not perfect in all regards.

Replies from: lmm
comment by lmm · 2013-10-08T18:18:00.630Z · LW(p) · GW(p)

This site has been at pains to emphasise that an AI will be an optimization process of never-before-seen power, rewriting reality in ways that we couldn't possibly predict, and as such an AI whose values are even slightly misaligned with one's own would be catastrophic for one's actual values.

Replies from: wedrifid
comment by wedrifid · 2013-10-08T19:08:39.895Z · LW(p) · GW(p)

This site has been at pains to emphasise that an AI will be an optimization process of never-before-seen power, rewriting reality in ways that we couldn't possibly predict, and as such an AI whose values are even slightly misaligned with one's own would be catastrophic for one's actual values.

What is relevant to the decision to create or prevent such an AI from operating is the comparison between what will occur in the absence of the AI and what the AI will do. For example gwern's values are not identical to mine, but if I had the choice between pressing a button to release an FAI built around gwern's values or a button to destroy it then I would press the button to release it. An FAI built around gwern's values isn't as good as an FAI built around my own (by subjective tautology) but it is overwhelmingly better than nothing. I expect it to allow me to live for millions of years, and for the cosmic commons to be exploited to do things that I generally approve of. Without that AI I think it is most likely that myself and my species will go to oblivion.

The above doesn't even take into account cooperation mechanisms. That's just flat acceptance of optimisation for another's values over distinctly sub-optimal optimisation of my own. When it comes to agents with conflicting values cooperating, negotiation applies, and if both agents are rational and in a situation where mutual FAI creation is possible but unilateral FAI creation can be prevented then the result will be an FAI that optimises for a compromise of the value systems. To whatever extent the values of the two agents are not perfectly opposed this outcome will be superior to the non-cooperative outcome. For example if gwern and I were in such a situation the expected result would be the release of an FAI optimising for a compromise between gwern's values and mine. Neither of us will prefer that option over the FAI that is personalised to ourselves but there is still a powerful incentive to cooperate. That outcome is better than what we would have without cooperation. The same applies if a paperclip maximiser and a staple maximiser are put in that situation. (It does not apply if a paperclip maximiser meets a paperclip minimiser.)

comment by Roko · 2008-08-15T17:22:09.000Z · LW(p) · GW(p)

@Eliezer: "what one ought to do" vs. "what one p-ought to do"

Suppose that the pebblesorter civilization and the human civilization meet, and (fairly predictably) engage in a violent and bitter war for control of the galaxy. Why can you not resolve this war by bringing the pebblesorters and the humans to a negotiating table and telling them "humans do what they ought to do and Pebblesorters do what they p-ought to do"?

You cannot play this trick because p-ought is grounded in what the pebblesorters actually do, which is in turn grounded in the state of the universe they aim for, which is the same universe that we live in. The humans and the pebblesorters seem to be disagreeing about something as they fight each other: the usual way that people would put this disagreement into words is by saying "they are disagreeing about what is right".

However, you are using the word "right" in a nonstandard way. You have changed the meaning of the entire ethical vocabulary in this same way, to represent a specific constant answer rather than a variable, so it becomes very hard to say what the humans and pebblesorters are disagreeing about. It seems a little odd to say that these hated enemies are in complete agreement, and it is certainly not the standard way that people use the ethical vocabulary. Perhaps it is a better way: I'm just taking some time getting used to it.

In fact in your new use of the English language, you probably are not a relativist, for the way you are using the ethical vocabulary it is in fact impossible to be a relativist: all ethical theories T describe some objective predicate, T-right, and any act is either T-right or it isn't. In your new language, it isn't possible to talk of "rightness" detached from any particular predicate.

But I think that in your new use of language, you will need a word for the idea of a justification for an ethical theory, for example Kant's arguments "from first principles" in favor of the categorical imperative. Perhaps you could call ethical theories with this property "first-principles justified theories"? You may argue that no such theory exists, but a lot of philosophers would disagree, so you should have a word for it. And your ethical theory doesn't even try for this property, it is unashamedly unjustified.

Eliezer said: "Furthermore, I believe that human beings are better than Pebblesorters.

In your new use of the ethical vocabulary, this is a vacuous applause light. Of course the humans are better than the pebblesorters: you defined "good" as "the predicate that describes the particular set of things that humans do".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-15T17:53:53.000Z · LW(p) · GW(p)
You cannot play this trick because p-ought is grounded in what the pebblesorters actually do, which is in turn grounded in the state of the universe they aim for, which is the same universe that we live in. The humans and the pebblesorters seem to be disagreeing about something as they fight each other

Does a human being disagree with natural selection? About what, exactly? How would we argue natural selection into agreement with us?

Standard game theory talks about interactions between agents with different goals. It does not presume that all agents must theoretically be arguable into exactly equal utility functions. These interactions are not called "disagreements", they are called "games" or "problems".

You have changed the meaning of the entire ethical vocabulary in this same way, to represent a specific constant answer rather than a variable

Lemme get this straight: I'm talking about rightness as a constant, you're talking about rightness as a variable, and you accuse me of being a moral relativist?

the way you are using the ethical vocabulary it is in fact impossible to be a relativist: all ethical theories T describe some objective predicate, T-right, and any act is either T-right or it isn't. In your new language, it isn't possible to talk of "rightness" detached from any particular predicate.

You're correct that I think moral relativism is an incoherent position for the same reason that factual relativism is an incoherent position. If anything that anyone wanted were right, this itself would be an ethical theory and not a relative one. Just as if anything that anyone believed were true, this itself would be reality.

But I think that in your new use of language, you will need a word for the idea of a justification for an ethical theory, for example Kant's arguments "from first principles" in favor of the categorical imperative.

All attempts to justify an ethical theory take place against a background of what-constitutes-justification. You, for example, seem to think that calling something "universally instrumental" constitutes a justification for it being a terminal value, whereas for me this is a nonstarter. For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X. I do indeed have a word for theories that deny this: I call them "attempts to persuade an ideal philosopher of perfect emptiness".

My theory is unabashedly justified; it is justified by arguments on the level of morality-as-morality. It so happens that human beings are the sort of creatures that respond to such arguments, and Pebblesorters are not; but we are not trying to "be human" in responding to such arguments - the justification for doing so, is that they are right (not xyblz).

In your new use of the ethical vocabulary, this is a vacuous applause light. Of course the humans are better than the pebblesorters: you defined "good" as "the predicate that describes the particular set of things that humans do".

No, "good" is defined as that which leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc.

It so happens that humans are the sort of beings who do good, and that Pebblesorters are not; but this is a mere happenstance of a moral miracle, not the justification for having fun instead of sorting pebbles.

Replies from: army1987
comment by A1987dM (army1987) · 2012-01-28T23:49:40.105Z · LW(p) · GW(p)

It so happens that humans are the sort of beings who do good, and that Pebblesorters are not; but this is a mere happenstance of a moral miracle, not the justification for having fun instead of sorting pebbles.

It so happens that Pebblesorters are the sort of beings who do p-good, and that humans are not; but this is a mere happenstance of a p-moral miracle, not the p-justification for sorting pebbles instead of having fun. :-)

comment by Tim_Tyler · 2008-08-15T18:45:01.000Z · LW(p) · GW(p)

Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-15T19:28:47.000Z · LW(p) · GW(p)

Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.

Really? I would be interested in hearing their philosophical arguments, then, for why the rabbit should be eating grass or the rabbit should be in the fox's stomach. I understand, of course, that the rabbit does eat grass and that the fox does hunt the rabbit, but I was not aware that these were persuasive moral arguments. A rock does roll downhill, but I wasn't aware that this had any particular correlation to whether it should roll downhill - if gravity pulls a rock, it will just as readily roll over a toddler.

It would seem that you are not distinguishing between what a system does and what it should do. The former is not necessarily a statement about the latter; a rock, in rolling over a toddler, is not offering evidence or even argument about the worthlessness of human life - it's just showing that gravity doesn't care. Neither does a fox or rabbit.

comment by Roko · 2008-08-15T20:46:12.000Z · LW(p) · GW(p)

Eliezer: No, "good" is defined as that which leads to sentient beings living, to people being happy, to individuals having the freedom to control their own lives, to minds exploring new territory instead of falling into infinite loops, to the universe having a richness and complexity to it that goes beyond pebble heaps, etc.

  • you're going to have to give me a definition that doesn't involve "etc" for me to critique this further. As I understand it, you've taken some averaged or CEV'd collection of things that humans like doing, or things that humans find valuable.

But the actual initial segment of the list that you've followed with "etc" above is nothing like what most people think is good or valuable in life. An honest answer would probably read more like:

"good" is defined as that which leads to me having positive and exciting emotions, to people who I meet and am close to being happy, to me having children, to me having sex, to me having higher status and prestige than those around me, to everyone in the world worshiping the one true religion, to rich and undeserving westerners (excluding me) being punished for their sins against the third world/the natural environment, to the rival tribe/country/political party being destroyed and humiliated, to human dignity and natural things, etc

My list is the current human notion of goodness to 5 decimal places. Your list seems a lot more reasonable, but that's probably because you made it up yourself and you are a lot more reasonable than most humans. Are you claiming that the result of CEV, applied to my list, will be your list?

comment by Tim_Tyler · 2008-08-15T21:58:33.000Z · LW(p) · GW(p)
I would be interested in hearing their philosophical arguments, then, for why the rabbit should be eating grass or the rabbit should be in the fox's stomach. I understand, of course, that the rabbit does eat grass and that the fox does hunt the rabbit, but I was not aware that these were persuasive moral arguments.

They are to the parties in question:

The rabbit argues that if it is eaten by the fox, then it will die - and that should not happen.

The fox argues that if it doesn't eat rabbits, then it will die - and that should not happen.

Neither considers the death of the other to be of much consequence: for rabbits, foxes are evil rabbit-eaters, while foxes see rabbits as mere dumb livestock.

comment by Roko · 2008-08-15T22:17:49.000Z · LW(p) · GW(p)

@Eli: All attempts to justify an ethical theory take place against a background of what-constitutes-justification. You, for example, seem to think that calling something "universally instrumental" constitutes a justification for it being a terminal value, whereas for me this is a nonstarter. For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X. I do indeed have a word for theories that deny this: I call them "attempts to persuade an ideal philosopher of perfect emptiness".

  • I think that there are quite canonical justifications for particular axiological statements. I should make these on my own blog, because that's where they belong.

Your argument that "For every mind that thinks that terminal value Y follows from moral argument X, there will be an equal and opposite mind who thinks that terminal value not-Y follows from moral argument X" is true if you regard "mind" as an abstract Turing machine, but false if you regard it as an embodied agent. For example, you will not find an agent who thinks that it should delete itself immediately, though it would be possible for a disembodied mind to think this. Reality itself breaks the symmetry of abstract computations.

There are substantial things we can say about what real world agents are likely to think or do, purely based on them being agents. We can also say things about what real world societies of agents are likely to think, purely based on them being societies.

I think that there are shades of grey between "An argument that is universally compelling", and "An argument that compels only those who already believe its conclusion". The former is clearly impossible, the latter is what you have given.

comment by Allan_Crossman · 2008-08-15T23:22:13.000Z · LW(p) · GW(p)

Eliezer, I think I kind-of understand by now why you don't call yourself a relativist. Would you say that it's the "psychological unity of mankind" that distinguishes you from relativists?

A relativist would stress that humans in different cultures all have different - though perhaps related - ideas about "good" and "right" and so on. I believe your position is that the bulk of human minds are similar enough that they would arrive at the same conclusions given enough time and access to enough facts; and therefore, that it's an objective matter of fact what the human concepts of "right" and "good" actually mean.

And since we are human, there's no problem in us continuing to use those words.

Am I understanding correctly?

It seems like your position would become more akin to relativism if the "psychological unity" turned out to be dubious, or if our galaxy turned out to be swarming with aliens, and people were forced to deal with genuinely different minds. In those cases, would there still be anything to separate you from actual relativists?

(In either case, it would still be an objective matter of fact what any given mind would call "good" if given enough time - but that would be a much less profound fact than it is for a species all alone and in a state of psychological unity.)

comment by Tim_Tyler · 2008-08-15T23:48:52.000Z · LW(p) · GW(p)
It would seem that you are not distinguishing between what a system does and what it should do.

In my book, there's not really any such thing as what a system should do.

"Should" only makes sense with respect to the morals of some agent.

If you don't specify an agent, "should" becomes an extremely vague and ambiguous term.

"Should" statements are not about what happens, but about the desirability of what might happen - according to the moral system of some agent.

comment by Z._M._Davis · 2008-08-16T00:13:15.000Z · LW(p) · GW(p)

Concerning the charge of relativism: it seems clear that Eliezer is a moral relativist in the way that the term is normally understood, but not as he understands it. There may be a legitimate dispute here, but as far as communication goes, we should not be having problems. In deference to common usage, I would reserve "right" for the moral realism of Roko et al. and use something like "h-right" for Eliezer's notion of humanity's abstracted idealized dynamic--but I don't think it really matters right now.

Roko writes: "My list is the current human notion of goodness to 5 decimal places. Your list seems a lot more reasonable, but that's probably because you made it up yourself and you are a lot more reasonable than most humans. Are you claiming that the result of CEV, applied to my list, will be your list?"

This, I think, is the interesting question. Eliezer has been leaning heavily on the psychological unity of humankind, but I don't think this is enough to carry his argument. The unity of which we speak is a (you will forgive me:) relative term. We can agree that complex functional adaptations are species-typical modulo sex, and that all humans are virtually alike compared to the space of all possible minds, but that doesn't mean that there's no room at all for variation in morality in that tiny dot of human minds--variation that cannot be waved away as trivial. Evopsych can only go so far; the SSSM might have been a mistake, but that doesn't necessarily mean cultural and individual differences don't matter at all. That would take a separate, stronger argument, at least. (Cf. Virge and myself in comments to "Moral Error and Moral Disagreement.")

So we are left with a difficult empirical question: to what extent do moral differences amongst humans wash out under CEV, and to what extent are different humans really in different moral reference frames? I fear that there is no way to resolve this issue without a tremendous amount of data. Even if you had all the data you needed, it might be easier just to build the AI!

comment by Jadagul · 2008-08-16T04:13:00.000Z · LW(p) · GW(p)

Eliezer, I think you come closer to sharing my understanding of morality than anyone else I've ever met. Places where I disagree with you:

First, as a purely communicative matter, I think you'd be clearer if you replaced all instances of "right" and "good" with "E-right" and "E-good."

Second, as I commented a couple threads back, I think you grossly overestimate the psychological unity of humankind. Thus I think that, say, E-right is not at all the same as J-right (although they're much more similar than either is to p-right). The fact that our optimization processes are close enough in many cases that we can share conclusions and even arguments doesn't mean that they're the same optimization process, or that we won't disagree wildly in some cases.

Simple example: I don't care about the well-being of animals. There's no comparison in there, and there's no factual claim. I just don't care. When I read the famous ethics paper about "would it be okay to torture puppies to death to get a rare flavor compound," my response was something along the lines of, "dude, they're puppies. Who cares if they're tortured?" I think anyone who enjoys torturing for the sake of torturing is probably mentally unbalanced and extremely unvirtuous. But I don't care about the pain in the puppy at all. And the only way you could make me care is if you showed that causing puppies pain came back to affect human well-being somehow.

Third, I think you are a moral relativist, at least as that claim is generally understood. Moral absolutists typically claim that there is some morality demonstrably binding upon all conscious agents. You call this an "attempt to persuade an ideal philosopher of perfect emptiness" and claim that it's a hopeless and fundamentally stupid task. Thus you don't believe what moral absolutists believe; instead, you believe different beings embody different optimization processes (which is the name you give to what most people refer to as morality, at least in conscious beings). You're a moral relativist. Which is good, because it means you're right.

Excuse me. It means you're J-right.

comment by Caledonian2 · 2008-08-16T05:02:00.000Z · LW(p) · GW(p)

Odd how, despite the psychological unity of mankind and the ease of 'extrapolating' human volition, these discussions always seem to end in establishing specialized words to refer to the perceptions and beliefs of specific individuals.

comment by Yvain2 · 2008-08-16T08:54:00.000Z · LW(p) · GW(p)

Why "ought" vs. "p-ought" instead of "h-ought" vs. "p-ought"?

Sure, it might just be terminology. But change

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right."

to

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."

and the difference between "because it is the human one" and "because it is h-right" sounds a lot less convincing.

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-09-22T18:27:56.815Z · LW(p) · GW(p)

the difference between "because it is the human one" and "because it is h-right" sounds a lot less convincing.

If I see a toddler in the path of a boulder rolling downhill, I don't ask myself "should I help the boulder, or the toddler?" and conclude "the toddler, because it is the human one."

If I were to even pause and ask myself a question, it would be "what should I do?" and the answer would be "save the toddler, because it is h-right".

Perhaps h-right is "just the human perspective", but that's not the reason I save the toddler. Similarly, the boulder rolls downhill because F=G(m1m2)/r^2, not because it is what boulders do. It is what boulders do, but that's different from the question of why they do what they do.
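For concreteness, a worked instance of that force law (the boulder's mass here is made up):

```python
# Illustrative numbers only: F = G*m1*m2/r^2 answers "why does the boulder roll?",
# which is a separate question from whether it should.
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_boulder = 500.0    # kg, a made-up boulder
m_earth = 5.972e24   # kg
r = 6.371e6          # m, roughly the Earth's radius

force = G * m_boulder * m_earth / r**2
print(f"{force:.0f} N")  # about 4900 N, i.e. the boulder's weight
```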

comment by Nominull2 · 2008-08-16T14:36:00.000Z · LW(p) · GW(p)

To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.

Replies from: Kutta
comment by Kutta · 2010-09-20T21:51:36.068Z · LW(p) · GW(p)

This made me laugh, upvoted.

comment by J_Thomas2 · 2008-08-16T17:36:00.000Z · LW(p) · GW(p)

Nominull, don't the primalists have a morality about heaps of stones?

They believe there are right ways and wrong ways to do it. They sometimes disagree about the details of which ways are right and they punish each other for doing it wrong.

How is that different from morality?

comment by Nominull2 · 2008-08-16T18:43:00.000Z · LW(p) · GW(p)

If you've ever taken a mathematics course in school, you yourself may have been introduced to a situation where it was believed that there were right and wrong ways to factor a number into primes. Unless you were an exceptionally good student, you may have disagreed with your teacher over the details of which way was right, and been punished for doing it wrong.


It strikes me as plainly apparent that math homework is not morality.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-16T19:25:00.000Z · LW(p) · GW(p)
To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.

Thank you, Nominull. I'm glad someone gets it, anyway.

comment by J_Thomas2 · 2008-08-16T20:11:00.000Z · LW(p) · GW(p)

If you've ever taken a mathematics course in school, you yourself may have been introduced to a situation where it was believed that there were right and wrong ways to factor a number into primes. Unless you were an exceptionally good student, you may have disagreed with your teacher over the details of which way was right, and been punished for doing it wrong.

My experience with math classes was much different from yours. When we had a disagreement, the teacher said, "How would we tell who's right? Do you have a proof? Do you have a counter-example?". And if somebody had a proof we'd listen to it. And if I jumped up and said "Wait, this proof is wrong!" then the teacher would say, "First you have to explain what he said up to the point you disagree, and see if he agrees that's what he means. Then you can tell us why it's wrong."

I never got punished for being wrong. If I didn't do homework correctly then I didn't get credit for it, but there was no punishment involved.

But Eliezer described people who disagreed about how many stones to put in a pile and who had something that looked very much like wars about it. That isn't like the math I experienced. But it's very much like the morality I've experienced.

comment by Yvain2 · 2008-08-16T20:55:00.000Z · LW(p) · GW(p)

To say that Eliezer is a moral relativist because he realizes that a primality sorter might care about primality rather than morality, is equivalent to calling him a primality relativist because he realizes that a human might care about morality rather than primality.

But by Eliezer's standards, it's impossible for anyone to be a relativist about anything.

Consider what Einstein means when he says time and space are relative. He doesn't mean you can just say whatever you want about them, he means that they're relative to a certain reference frame. An observer on Earth may think it's five years since a spaceship launched, and an observer on the spaceship may think it's only been one, and each of them is correct relative to their reference frame.
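To put rough numbers on that (an illustrative calculation, assuming an idealized constant-velocity trip):

```python
# For 5 Earth-years to correspond to 1 ship-year, the time-dilation factor
# gamma must be 5, which requires a speed of about 0.98c.
import math

gamma = 5.0
v_over_c = math.sqrt(1 - 1 / gamma**2)
print(f"required speed: {v_over_c:.3f} c")         # ~0.980 c

earth_years = 5.0
print(f"ship clock: {earth_years / gamma} years")  # 1.0 years
```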

We could define "time" to mean "time as it passes on Earth, where the majority of humans live." Then an observer on Earth is objectively correct to believe that five years have passed since the launch. An observer on the spaceship who said "One year has passed" would be wrong; he'd really mean "One s-year has passed." Then we could say time and space weren't really relative at all, and people on the ground and on the spaceship were just comparing time to s-time. The real answer to "How much time has passed" would be "Five years."

Does that mean time isn't really relative? Or does it just mean there's a way to describe it that doesn't use the word "relative"?

Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word "easy" is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it's mo wakariyasui translated as "j-easy", which is completely different. By this way of talking, the standard belief that different languages are easier, relative to which one you grew up speaking, is false. English is just plain the easiest language.

Again, it's just avoiding the word "relative" by talking in a confusing and unnatural way. And I don't see the difference between talking about "easy" vs. "j-easy" and talking about "right" vs. "p-right".

Replies from: army1987, Ghatanathoah
comment by A1987dM (army1987) · 2012-01-28T23:47:23.736Z · LW(p) · GW(p)

Or to give a more clearly wrong-headed example: English is objectively the easiest language in the world, if we accept that because the word "easy" is an English word it should refer to ease as English-speakers see it. When Kyousuke says Japanese is easier for him, he really means it's mo wakariyasui translated as "j-easy", which is completely different. By this way of talking, the standard belief that different languages are easier, relative to which one you grew up speaking, is false. English is just plain the easiest language.

Until I read that, I thought I understood (and agreed with) Eliezer's point, but that got me thinking. Now, I guess Eliezer would agree that it's easy for Japanese people to speak Japanese, while he wouldn't agree that it's right for Baby-Eaters to keep on eating their children. So there must be something subtler I'm missing.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-29T00:04:58.156Z · LW(p) · GW(p)

FWIW, my understanding of the original claim was precisely that morality is special in this way: that it means something to describe what humans value as "right" compared to what nonhumans value (and what nobody values), whereas it doesn't mean anything analogous to describe the languages humans speak as "easily speakable" compared to the languages nonhumans speak (and the languages nobody speaks). And whatever that something is, eating babies simply doesn't possess it, even for Baby-Eaters.

Personally I've never understood what that something might be, though, nor seen any evidence that it exists.

Replies from: nshepperd
comment by nshepperd · 2012-05-16T01:23:29.403Z · LW(p) · GW(p)

Have you forgotten that what it means to describe something by a word is given precisely by the sense of that word that the speaker has in mind? That you can call eudaimonia "right", and heaps of prime pebbles "prime" is a fact about the words "right" and "prime" as used by humans, not about eudaimonia and pebbles themselves (except insofar as eudaimonia and prime-pebbled heaps by their nature satisfy the relevant definitions of "right" and "prime", of course). Is English the easiest language, if you define "easiest" as "easiest for an English-speaker to speak"? How many legs does a dog have, if you call a tail a leg?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-16T03:09:21.232Z · LW(p) · GW(p)

When I assert "eudaimonia is right" (supposing I believed that), there exist two structures in my brain, S1 and S2, such that S1 is tagged with the lexical entry "right" and S2 is tagged with the lexical entry "eudaimonia", and S1 and S2 are related such that if my brain treats some thing X as an instance of S2, it also treats X as having the property S1.

Well, for a certain use of "is," anyway.

Replies from: nshepperd
comment by nshepperd · 2012-05-16T04:33:11.173Z · LW(p) · GW(p)

I was going to ask how that relation came about, and how it behaves when your brain is computing counterfactuals... but even though those are good questions to consider, I realised that wouldn't really be that helpful. So...

What I'm really trying to say is that there's nothing special about morality at all. There doesn't have to be anything special about it for eudaimonia to be right and for pebble-sorting to not. It's just a concept, like every other concept. One that includes eudaimonia and excludes murder, and is mostly indifferent to pebble-sorting. Same as the concept prime includes {2, 3, 5, 7, ...} and excludes {4, 6, 8, 9, ...}.

The only thing remotely "special" about it is that it happens to be a human terminal value -- which is the only reason we care enough to talk about it in the first place. The only thing remotely special about the word "right" is that it happens to mean, in english, this morality-concept (which happens to be a human terminal value).

So, to say that "eudaimonia is right" is simply to assert that eudaimonia is included in this set of things that includes eudaimonia and excludes murder (in other words, yes, "X ∈ S2 implies X ∈ S1", where S2 is eudaimonia and S1 is morality). To say that what babyeaters value is right would be to assert that eating babies is included in this set ("X ∈ babyeating implies X ∈ S1", which is clearly wrong, since babyeating, like murder, is excluded).
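A minimal sketch of that set-membership reading, with placeholder members standing in for much larger concepts:

```python
# Toy rendering of the framing above; the listed members are placeholders.
morality = {"eudaimonia", "fun", "fairness"}   # S1: includes eudaimonia, excludes murder
babyeater_values = {"eating babies"}

def is_right(x):
    return x in morality

def primes_up_to(n):
    return {k for k in range(2, n + 1)
            if all(k % d for d in range(2, int(k ** 0.5) + 1))}

print(is_right("eudaimonia"))                      # True
print(all(is_right(x) for x in babyeater_values))  # False: babyeating is not in S1
print(7 in primes_up_to(10))                       # True: same kind of claim, different concept
```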

Replies from: Ghatanathoah, TheOtherDave
comment by Ghatanathoah · 2012-05-16T07:03:18.696Z · LW(p) · GW(p)

I generally agree with everything you say here, except that I'd like to clarify what you mean by "special" when you say that morality need not be special, as I'm not sure it would be clear to everyone reading your post. Obviously morality has no mystical properties or anything. It isn't special in that sense, which is what I think you mean.

But morality does differ (in a totally nonmystical way) from many other terminal values in being what Eliezer calls "subjectively objective and subjunctively objective." That is, there is only one way, or at least an extremely limited number of ways, to do morality correctly. Morality is not like taste, it isn't different for every person.

You obviously already know this, but I think that it's important to make that point clear because this subject has huge inferential distances. Hooray for motivational externalism!

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-16T13:41:28.823Z · LW(p) · GW(p)

Yeah, it's precisely the assumption that the computation we refer to by "morality" is identical for every human that makes this whole approach feel inadequate to me. It's just not clear to me that this is true, and if it turns out not to be true, then we're faced with the problem of reconciling multiple equally valid moralities.

Of course, one approach is to stop caring about humans in general, and only care about that subset of humanity that agrees with me.

Replies from: nshepperd, Ghatanathoah
comment by nshepperd · 2012-05-16T18:10:50.745Z · LW(p) · GW(p)

You mean, the assumption that every human uses the word "morality" to refer to the same computation. Clearly, if I use "morality" to refer to X, and you also use the word "morality" to refer to X, then X and X are identical trivially. We refer to the same thing. Keep careful track of the distinction between quotation and referent.

Anyway, before I answer, consider this...

If other people use "morality" to refer to something else... then what? How could it matter how other people use words?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-16T18:41:11.560Z · LW(p) · GW(p)

I agree that if you and I both use "morality" to refer to X, then we refer to the same thing.

If I use "morality" to refer to X1 and you use it to refer to X2, it doesn't matter at all, unless we try to have a conversation about morality. Then it can get awfully confusing. Similar things are true if I use "rubber" to refer to a device for removing pencil marks from paper, and you use "rubber" to refer to a prophylactic device... it's not a problem at all, unless I ask you to fetch a bunch of rubbers from the supply cabinet for an upcoming meeting.

Replies from: nshepperd
comment by nshepperd · 2012-05-17T05:07:33.165Z · LW(p) · GW(p)

But what does that mean for what you should do? Nothing, right? It doesn't matter that someone else uses "morality" to refer to X2. If I call murder "right", murder is still wrong. And you should still lock up murderers.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-17T13:10:52.955Z · LW(p) · GW(p)

Talking about the truth-value of the assertion "murder is right" seems unjustified at this point, much like the truth-value of "rubbers help prevent pregnancy." Is it true? Yes. Is it false? Yes. When a word means different things within a conversation, ambiguity is introduced to many sentences containing that word. It helps at that point to set aside the ambiguous label and introduce more precise ones. Which is why I introduced X1 and X2 in the first place.

I agree that the fact that X1 rejects murder doesn't necessarily change just because X2 endorses it.

But I don't agree that what X1 endorses is necessarily independent of what X2 endorses.

For example, if I don't intrinsically value the existence of Gorgonzola in the world, but I do value your preferences being satisfied, then I value Gorgonzola IFF you prefer that Gorgonzola exist in the world.

To the extent that what I should do is a function of what I value, and to the extent that X2 relates to your preferences, then X2 (what you call "right") has a lot to do with what I should do.
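In utility-function terms, a toy sketch of that dependence (the weights and keys are arbitrary stand-ins, not a model of anyone's actual values):

```python
def my_utility(world, you_prefer_gorgonzola):
    value = 1.0 if world.get("eudaimonia") else 0.0   # things X1 includes directly
    if world.get("gorgonzola") and you_prefer_gorgonzola:
        value += 0.5                                  # value inherited from your preference
    return value

world = {"eudaimonia": True, "gorgonzola": True}
print(my_utility(world, you_prefer_gorgonzola=True))   # 1.5
print(my_utility(world, you_prefer_gorgonzola=False))  # 1.0
```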

Replies from: nshepperd
comment by nshepperd · 2012-05-17T15:27:57.828Z · LW(p) · GW(p)

The assertion "murder is right" -- by your definition of "right", which is the only definition you should care about, being the person who formulates the question "what is right for me to do?" -- has a value of TRUE precisely if X1 endorses murder. There's nothing unjustified about saying that, since X1 was brought in specifically defined as the thing your definition of "right" refers to.

I'll grant that it's perfectly possible that X1 might have a term in it (to borrow terminology from the utility function world) for other people's terminal values. But if so, that's a question of object-level ethics, not meta-ethics.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-17T16:16:56.485Z · LW(p) · GW(p)

your definition of "right", which is the only definition you should care about...

It is not clear to me that X1 is the only definition of "right" I should care about, even if it is mine... any more than thing-to-erase-pencil-marks-with is the only definition of "rubber" I should care about.

Regardless, whether I should care about other people's definitions of these words or not, the fact remains that I do seem to care about it.

And I also seem to care about other people's preferences being satisfied, especially the preferences that they associate with the emotional responses that lead them to talk about that preference being "right" (rather than just "my preference").

Again, maybe I oughtn't... though if so, it's not clear to me why... but nevertheless I do.

...being the person who formulates the question "what is right for me to do?"

It may be relevant that this is not the only moral question I formulate. Other moral questions include "what is right for others to do?" and "what is right to occur?" Indeed, that last one is far more important to me than the others, which is one reason I consider myself mostly a consequentialist.

I'll grant that it's perfectly possible that X1 might have a term in it (to borrow terminology from the utility function world) for other people's terminal values. But if so, that's a question of object-level ethics, not meta-ethics.

Maybe so. What follows from that?

Replies from: nshepperd
comment by nshepperd · 2012-05-17T18:31:11.556Z · LW(p) · GW(p)

Any question you could possibly want the answer to relating in any sense to "rightness" is not a question at all unless you have a definition of "right" in mind (or at the least a fuzzy intuitive definition that you don't have full access to). You want to know "what is right to occur". You won't get anywhere unless you have an inkling of what you meant by "right". It's built into the question that you are looking for the answer to your question. It's your question!

Maybe you decide that X1 (which is the meaning of your definition of "right") includes, among things such as "eudaimonia" and "no murder", "other humans getting what they value". Then the answer to your question is that it's right for people to experience eudaimonia and to not be murdered, and to get what they value. And the answer to "what should I do" is that you should try and bring those things about.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-17T19:11:53.704Z · LW(p) · GW(p)

Yes, that's true.

Or maybe I decide that X1 doesn't include other humans getting what they value, and I'm only under the impression that it does because there are some things that other humans happen to value that X1 does include, or because X1 includes something that is similar but not quite identical to other humans getting what they value, or for some other reason.

Either way, whichever of those things turns out to be the case, that's what I should do... agreed (1).

Of course, in some of those cases (though not others), in order to work out what that is in practice, I also need to know what other humans' equivalents of X1 are. That is, if it turns out X1 includes you getting what you value as long as you're alive, and what you value is given by X2, then as long as you're alive I should bring about X2 as well as X1. And in this scenario, when you are no longer alive, I no longer should bring about X2.

====
(1) Or, well, colloquially true, anyway. I should certainly prefer those things occurring, but whether I should do anything in particular, let alone try to do anything in particular, is less clear. For example, if there exists a particularly perverse agent A who is much more powerful than I, and if A is such that A will bring about what I value IFF I make no efforts whatsoever towards bringing them about myself, then it follows that what I ought to do is make no efforts whatsoever towards bringing them about. It's not clear that I'm capable of that, but whether I'm capable of it or not it seems clear that it's what I ought to do. Put a different way, in that situation I should prefer to be capable of doing so, if it turns out that I'm not.

comment by Ghatanathoah · 2012-05-17T04:17:16.595Z · LW(p) · GW(p)

As I said before:

Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity, would be immoral [wellbeing non-enhancing] no matter what.

If morality refers to a large computation related to the wellbeing of eudaemonic creatures it might be possible that some people value different aspects of wellbeing more than others (i.e. some people might care more about freedom, others more about harm). But there'd still be a huge amount of agreement.

I think a good analogy is with the concept of "health." It's possible for people to care about different aspects of health more. Some people might care more about nutrition, others about exercise. But there are very few ways to be healthy correctly, and near-infinite ways to be unhealthy. And even if someone thinks you have your priorities wrong when trying to be healthy, they can still agree that your efforts are making you healthier than no effort at all.

Of course, one approach is to stop caring about humans in general, and only care about that subset of humanity that agrees with me.

I care about the wellbeing of animals to some extent, even though most of them don't care about morality at all. I also care, to a limited extent, about the wellbeing of sociopathic humans even though they don't care about morality at all. I admit that I don't care about them as much as I do about moral beings, but I do care.

If other moral humans have slightly different moral priorities from you I think they'd still be worth a great deal of caring. Especially if you care at all about animals or sociopaths, who are certainly far less worthy of consideration than people who merely disagree with you about some aspect of morality.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-17T13:44:52.109Z · LW(p) · GW(p)

I agree that we should expect significant (though not complete) overlap within the set of moral judgments made by all humans.

I would expect even more overlap among those made by non-pathological humans, and even more overlap among those made by non-pathological humans who share a cultural heritage.

I would expect less overlap (though not zero) among the set of moral judgments made by non-humans.

I agree that if statement X (e.g. "murder is wrong") is endorsed by all the moral judgments in a particular set, then the agents making those judgments will all agree that X is right, although perhaps to different degrees depending on peripheral particulars.
Similarly, if statement Y is not endorsed by all the moral judgments in a particular set, then the agents making those judgments will not all agree that Y is right.

It's clear in the first case that right action is to abide by the implications of X.
In the second case, it's less clear what right action is.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-05-18T05:04:55.784Z · LW(p) · GW(p)

I would expect even more overlap among those made by non-pathological humans, and even more overlap among those made by non-pathological humans who share a cultural heritage.

I would expect less overlap (though not zero) among the set of moral judgments made by non-humans.

I think the point I am trying to get across, and one of the major points made by Eliezer in this sequence is that some of the things you are referring to as moral judgements aren't really moral judgements. Eliezer is basically saying that when you make a moral judgement you are making computations about various aspects of the wellbeing of eudaemonic creatures. A judgement that refers to the huge and complex concept "the wellbeing of eudaemonic creatures" is a moral judgement. A judgement that refers to some other concept is not a moral judgement, even if we use the same word to describe each.

When a sociopath says "It is good for me to kill people" he is not making a moral judgement. That is, he is not making computations related to the wellbeing of people. Quite the contrary, he is completely ignoring the wellbeing of everyone but himself. Calling what he does a moral judgement obscures the issue.

Similarly, when the pebblesorter says "It is good for pebbles to be sorted into prime numbered heaps" it is not making a moral judgement. It isn't doing computations about the wellbeing of people, it's doing computations about the numbers of pebbles.

You, the sociopath, and the pebblesorter are not referring to the same concepts. You are referring to the wellbeing of people, the sociopath is referring to the gratification of his impulse, the pebblesorter is referring to the primality of pebble heaps. The phrase "moral judgement" should probably not be used to refer to all these different types of judgements, as they are not judgements about the same concepts at all.

I would submit that if you removed the word "moral" and asked a pebblesorter "What action would best enhance the wellbeing of eudaemonic creatures" you and the pebblesorter would agree about quite a lot. The pebblesorter would then go back to sorting pebbles because it doesn't care about the wellbeing of eudaemonic creatures. (obviously this thought experiment would not work for a sociopath because sociopaths evolved to impersonate moral people, so they would never give an honest answer).

I think most moral disagreement among creatures who care about the wellbeing of others is a case of the blind men and the elephant. People disagree because wellbeing is a complex concept and it is possible to focus on one aspect of it at the expense of others (see scope insensitivity). Another source is self deception, people want to do immoral things, but still think of themselves as moral, so they fool themselves. A final source is that, some people may genuinely care more about some aspects of wellbeing more than other people even if you remove scope insensitivity. It is only that last kind of disagreement that is irresolvable, and as I said before, it is a case for moral pluralism, not moral relativism.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-18T14:11:13.332Z · LW(p) · GW(p)

For convenience, I am using the abbreviation "woec" for "wellbeing of eudaemonic creatures".

I agree that if I asked a pebblesorter "What action would best enhance woec", assuming we could work out a shared definition of "eudaemonic", we would agree about quite a lot.

If a pebblesorter asked me "What action would maximize prime-numbered heaps?" we would also agree about quite a lot.

If we were to both answer the question "What action would optimize for my values?" our answers would be almost completely unrelated.

I am willing to stop using the phrase "moral judgments" in this discussion to refer to judgments about what best implements the judger's values. This is entirely because disagreements about lexical usage are rarely productive when what we're really interested in is the referents. That said, I also prefer in that case to avoid using the phrase "moral judgments" to refer to judgments about what best achieves woec, since I don't actually use the phrase to mean that, which will get confusing. In fact, it's perhaps best to avoid the phrase altogether.

I agree that a lot of the disagreement about what action would best enhance woec, among creatures who value woec, is a blind-men-and-the-elephant problem.

I agree that humans often want to do things that would not best enhance woec, even when we are aware that the thing we want to do would not best enhance woec.

I agree that even among creatures who care about woec, there may not be agreement about values.

I agree that when creatures whose values matter to me don't share values, I do well to embrace value pluralism.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-05-21T07:44:38.996Z · LW(p) · GW(p)

I am happy we are on the same page.

That said, I also prefer in that case to avoid using the phrase "moral judgments" to refer to judgments about what best achieves woec, since I don't actually use the phrase to mean that, which will get confusing. In fact, it's perhaps best to avoid the phrase altogether.

If you really think that the phrase "moral judgements" is a useless and ambiguous phrase and that we shouldn't use it, I can respect that. But if enhancing woec isn't what we should use the phrase "morality" to describe, then what is? You also seem to nominate "optimizing for my values" as an alternative referent, but that doesn't seem right to me. Sociopaths are generally regarded as gravely immoral, even if they efficiently implement their values, because they don't care about the wellbeing of others. Should we really just jettison the word "morality" altogether?

I suppose that could work. Since I've read Eliezer's work, I've found that I can make the same points by substituting naturalistic statements for ones that use the word "moral." For instance, saying "The world would be a happier place if X didn't exist" is technically a naturalistic statement containing no value judgements; I use that a lot. But it seems like a shame to stop using such a powerful and effective word.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-21T13:18:34.109Z · LW(p) · GW(p)

It's not that I think the phrase is useless; it has many uses.

It's that I think we use it to mean such different things that this conversation is not well-served by introducing it. (You use it to refer to judgments related to woec; I use it to refer to judgments related to the judger's values.)

Yes, I would say that sociopaths make moral judgments, although their moral judgments differ from mine. I realize you would not say this, not because we disagree about sociopaths, but because we disagree about whether what sociopaths make can properly be labelled "moral judgment".

I don't think the labeling question is terribly important or interesting. As you say, "moral" can usefully be cashed out in other terms.

comment by TheOtherDave · 2012-05-16T13:36:33.570Z · LW(p) · GW(p)

So, let us assume there exists some structure S3 in my head that implements my terminal values.

Maybe that's eudaimonia, maybe that's Hungarian goulash, I don't really know what it is, and am not convinced that it's anything internally coherent (that is, I'm perfectly prepared to believe that my S3 includes mutually exclusive states of the world).

I agree that when I label S3 "morality" I'm doing just what I do when I label S2 "eudaimonia" or label some other structure "prime". There's nothing special about the label "morality" in this sense. And if it turns out that you and I have close-enough S3s and we both label our S3s "morality," then we mean the same thing by "morality." Awesome.

If, OTOH, I have S3 implementing my terminal values and you have some different structure, S4, which you also label "morality", then we might mean different things by "morality".

Some day I might come to understand S3 and S4 well enough that I have a clear sense of the difference between them. At that point I have a lexical choice.

I can keep associating S3 with the label "morality" or "right" and apply some other label to S4 (e.g., "pseudo-morality" or "nshepperd's right" or whatever). You might do the same thing. In that case, if (as you say) the only thing that might be remotely special about the label "morality" or "right" is that it happens to refer to a human terminal value, then it follows that there's nothing special about that label in that case, since in that case it doesn't refer to a common terminal value. It's just another word.

Conversely, I can choose to associate the label "morality" or "right" with some new S5, a synthesis of S3 and S4... perhaps their intersection, perhaps something else. You might do the same thing. At that point we agree that "morality" means S5, even though S5 does not implement either of our terminal values.

comment by Ghatanathoah · 2012-05-16T00:33:07.048Z · LW(p) · GW(p)

Again, it's just avoiding the word "relative" by talking in a confusing and unnatural way. And I don't see the difference between talking about "easy" vs. "j-easy" and talking about "right" vs. "p-right".

The reason people think that Eliezer is really a relativist is that they see concepts like "good" and "right" as reducing down to mean, "the thing that I [the speaker, whoever it is] values." Eliezer is arguing that that is not what they reduce down to. He argues that "good" and "right" reduce down to something like "concepts related to enhancing the wellbeing of conscious eudaemonic life forms." It's not a trick of the language, Eliezer is arguing that "right" refers to [wellbeing related concept] and p-right refers to [primality sorting related concept]. The words "good" and "right" might be relative but the referent [wellbeing of conscious eudaemonic life forms] is not. The reason Eliezer focuses on fairness is that the concept of fairness is less nebulous than the concept of "right" so it is easier to see that it is not arbitrary.

Pebble sorters and humans can both objectively agree on what it means to enhance the wellbeing of conscious eudaemonic life forms. Where they differ is whether they care about doing it. Pebble sorters don't care about the wellbeing of others. Why would they, unless it happened to help them sort pebbles?

Similarly, humans and pebble sorters can both agree on which pebble heaps are prime-numbered. Where they differ is whether they care about sorting pebbles. Humans don't care about pebble-sorting. Why would they, unless it helped them enhance the wellbeing of themselves and others?

So if you define morality as "the thing that I care about," then I suppose it is relative, although I think that is not a proper use of the word "morality." But if you define it as "enhancing the wellbeing of eudaemonic life forms" then it is quite objective.

Now, there might be room for moral disagreement in that people care about different aspects of wellbeing more. But that would be grounds for moral pluralism, not moral relativism. Regardless of what specific aspects of morality people focus on, certain things, like torturing the human population for all eternity, would be immoral [wellbeing non-enhancing] no matter what.

So what is the difference between easy vs j-easy and right vs p-right? Well, easy and j-easy both refer to the concept "can be done with little effort expended, even by someone who is completely new and unpracticed in it." English is not "easy" because only those practiced in it can speak it with little effort expended. Ditto for Japanese. The concept is the same in both languages. "Right," by contrast, refers to "enhances wellbeing of eudaemonic creatures," while p-right refers to "sorting pebbles in prime numbered heaps." They are two completely different concepts and that fact has nothing to do with the language being used.

comment by Vladimir_Nesov · 2008-08-16T21:39:00.000Z · LW(p) · GW(p)

Thanks, Yvain. Comparing well-understood special relativity to things characterized as "subjective" helps to clarify the sense in which they are really "objective", but look different for different minds and are meaningless without any mind at all. You need a reference frame, and a phenomenon does look different in different reference frames, but there are strict and consistent rules for converting between reference frames.

comment by Nick_Tarleton · 2008-08-16T22:02:00.000Z · LW(p) · GW(p)

Very well said, Yvain!

comment by Nominull2 · 2008-08-16T22:41:00.000Z · LW(p) · GW(p)

I think it's more akin to saying that "easy" could just as well mean difficult in some alien language, and so words don't mean anything and language is a farce. That's the true linguistic relativist position.

comment by steven · 2008-08-16T23:45:00.000Z · LW(p) · GW(p)

Yvain, I don't see why I would care about this thing you would call "moral", or refer to it often enough to justify such a short name.

comment by steven · 2008-08-16T23:57:00.000Z · LW(p) · GW(p)

Or indeed why it's the same thing that people have traditionally meant by the word.

comment by J_Thomas2 · 2008-08-17T01:20:00.000Z · LW(p) · GW(p)

People keep using the term "moral relativism". I did a Google search of the site and got a variety of topics with the term dating from 2007 and 2008. Here's what it means to me.

Relative moral relativism means you affirm that, to the best of your knowledge, nobody has demonstrated any sort of absolute morality. That people differ in moralities, and that if there's anything objective saying one is right and another is wrong, you haven't seen it. That very likely these different moralities are good for different purposes and different circumstances, and that if a higher morality shows up, it's likely to affirm that the different moralities you've heard of each tend to have their place.

This is analogous to being an agnostic about gods. You haven't seen evidence there's any such thing as an objectively absolute morality, so you do not assert that there is such a thing.

Absolute moral relativism accepts all this and takes two steps further. First, the claim is that there is no objective way to judge one morality better than another. Second, the claim is that without any objective absolute morality you should not have any.

This is analogous to being an atheist. You assert that there is no such thing and that people who think there is suffer from fallacious superstitions.

I can be a relative moral relativist and still say "This is my morality. I chose it and it's mine. I don't need it to be objectively true for me to choose it. You can choose something else and maybe it will turn out we can get along or maybe not. We'll see."

Why should you need an absolute morality that's good all times and all places before you can practice any morality at all? Here I am, here's how I live. It works for me. If you want to politely tell me I'm all wrong then I'll listen politely as long as I feel like it.

comment by Z._M._Davis · 2008-08-17T02:17:00.000Z · LW(p) · GW(p)

Submitted humbly for consideration: Ayn Rand is to libertarianism as Greg Egan is to transhumanism as Eliezer Yudkowsky is to moral relativism?

comment by Caledonian2 · 2008-08-17T03:25:00.000Z · LW(p) · GW(p)

I don't think Eliezer belongs in the same category as Ayn Rand and Greg Egan.

comment by Russell_Wallace · 2008-08-17T03:42:00.000Z · LW(p) · GW(p)

"But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good?"

You are asking why anyone would choose life rather than what is good. Inclusive genetic fitness is just the long term form of life, as personal survival is the short-term form.

The answer is, of course, that one should not. By definition, one should always choose what is good. However, while there are times when it is right to give up one's life for a greater good, they are the exception. Most of the time, life is a subgoal of what is good, so there is no conflict.

comment by Erik3 · 2008-08-17T10:05:00.000Z · LW(p) · GW(p)

A sidenote:

Eliezer: "It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain."

It always costs something; it is cheaper to build a gazelle that always feels pain than one that does so until some conditions are met. This is related to the case of supposing that a spaceship that has passed out of your lightcone still exists.

Natural selection isn't fair; it doesn't compute fairness at all. This has nothing to do with the question of whether there are situations where it could be fair at no cost, even if perhaps humans are more easily made aware of this fact when presented with cases where natural selection is outright unfair. Such cases very probably exist (ponder gene fixation), but finding them is nigh impossible: you have to prove an absence of costs.

comment by Roko · 2008-08-17T10:43:00.000Z · LW(p) · GW(p)

Yvain: "So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is h-right."

- well said. Modulo Eliezer's lack of explicitness about his definition of "h-right", I fail to see how the human perspective could be anything other than h-right. This post is just an applause light for the values that we currently like, and I think that that is a bad sign.

If human values were so great, you wouldn't have to artificially make them look better by saying things like

"So which of these two perspectives do I choose? The human one, of course; not because it is the human one, but because it is right "

@Z.M. Davis: So we are left with a difficult empirical question: to what extent do moral differences amongst humans wash out under CEV, and to what extent are different humans really in different moral reference frames?

- yes, this is a good point. And I fear that the answer depends on the details of the CEV algorithm.

comment by Roko · 2008-08-17T11:51:00.000Z · LW(p) · GW(p)

Z.M Davis: "Submitted humbly for consideration: Ayn Rand is to libertarianism as Greg Egan is to transhumanism as Eliezer Yudkowsky is to moral relativism?"

- not sure I get this... Rand abhorred libertarianism because she thought it was half-baked and amateurish, but actually she is a libertarian; Egan spoke out against transhumanism because, uuum, he thinks we're all crackpots, but actually he's a transhumanist; Yudkowsky speaks out against moral relativism, but actually he's the canonical example of a relativist. Ah, yes, ok.

Spot on, Z.M. Seconded.

comment by Kip_Werking · 2008-08-17T22:25:00.000Z · LW(p) · GW(p)

Michael Anissimov, August 14, 2008 at 10:14 PM, asked me to expound.

Sure. I don't want to write smug little quips without explaining myself. Perhaps I'm wrong.

It's difficult to engage Eliezer in debate/argument, even in a constructive as opposed to adversarial way, because he writes so much material, and uses so many unfamiliar terms. So, my disagreement may just be based on an inadequate appreciation of his full writings (e.g. I don't read every word he posts on overcomingbias; although I think doing so would probably be good for my mind, and I eagerly look forward to reading any book he writes).

Let me just say that I'm a skeptic (or "anti-realist") about moral realism. I think there is no fact of the matter about what we should or should not do. In this tradition, I find the most agreement with Mackie (historically) and Joshua Greene at Harvard (today). I think Eliezer might benefit greatly from reading both of them. You can find Greene's Ph.D. thesis here:

http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Dissertation.pdf

It's worth reading in entirety.

Why am I a moral skeptic? Before I give good reasons, let me suggest some possibly bad ones: it's a "shocking" and unpopular position. And I certainly love to be a gadfly. So, if Eliezer and I do have a real disagreement here, it may be drawn along the same lines we have with the free will debate: Eliezer seems to have strong compatibilist leanings, and I'm more inclined towards non-realism about free will. Thus, Eliezer may be inclined to resist shocking or uncomfortable truths, or I may be overly eager to find them. That's one possible reason for my moral skepticism.

I certainly believe that any philosophical investigations which lead people to generally safe and comfortable positions, in which common sense is vindicated, should give us pause. And people who see their role as philosopher as vindicating common sense, and making cherished beliefs safe for the world, are dishonoring the history of philosophy, and doing a disservice to themselves and the world. To succeed in that project, fully at least, one must engage in the sort of rationalization Eliezer has condemned over and over.

Now let me give my good reasons:

P1. An essential aspect of what it means for something to be morally right is that the something is not just morally right because everyone agrees that the something is. Thus, everyone agrees that if giving to charity, or sorting pebbles, is morally right, it is not just right because everyone says that it is. It is right in some deeper sense.

P2. But, all we have to prove that giving to charity, etc., is right, is that everyone thinks it is (to the extent they do, which is not 100%).

You might say: well, giving to charity increases the sum amount of happiness in the world, or is more fair, or follows some Kantian rule. But, then again, we ask: why? And the only answer seems to be that everyone agrees that happiness should be maximized, or fairness maximized, or that rule followed. But, as we said when we started, the fact that everyone agreed wasn't a good enough reason.

So we're left with reasons which we already agree are not good enough. We can only get around this through fancy rationalization, and in particular by forgetting P1.

Eliezer offers his own reasons for believing something is right:

"The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance."

What horribly circular logic is that? It's right because it's right?

The last few words present a link to another article. And there you find quotes like these:

"Why not accept that, ceteris paribus, joy is preferable to sorrow?"

"You might later find some ground within yourself or built upon yourself with which to criticize this - but why not accept it for now? Not just as a personal preference, mind you; but as something baked into the question you ask when you ask "What is truly right"?"

"Are you willing to relinquish your Socratean ignorance?"

This is special pleading. It is hand waving. It is the sort of insubstantial, waxing poetic that pastors use to captivate their audiences, and young men use to romance young women. It is a sweet nothing. It should make you feel like you're being dealt with by a used car salesman; that's how I feel when I read it.

The question isn't "why not prefer joy over sorrow?" That's a wild card that can justify anything (just flip it around: "why not prefer sorrow over joy?"). You might not find a decisive reason against preferring joy to sorrow, but that's just because you're not going to find a decisive reason to believe anything is right or wrong. Any given thing might make the world happier, or follow a popular rule, but what makes that "right"? Nothing. The problem above, involving P1 and P2, does not go away.

The content of morality is not baked into the definitions of words in our moral vocabulary, either (as Eliezer implies when he writes: "you will have problems with the meaning of your words, not just their plausibility"---another link). Definitions are made by agreement and, remember, P1 says that something can't be moral just because everyone agrees that it is moral. The language of morality just refers to what we should do. The words themselves, and their definitions, are silent about what the content of that morality is, what the things are that we should actually do.

So I seem to disagree with Eliezer quite substantially about morality, and in a similar way to how we disagree about free will.

Finally, I can answer the question: what scares me about Eliezer's view? Certainly not that he loves joy and abhors suffering so much. Believe me when I say, about his mission to make the universe one big orgasm: godspeed.

Rather, it's his apparent willingness to compromise his rationalist and critical thinking principles in the process. The same boy who rationalized a way into believing there was a chocolate cake in the asteroid belt, should know better than to rationalize himself into believing it is right to prefer joy over sorrow.

What he says sounds nice, and sexy, and appealing. No doubt many people would like for it to be true. As far as I can tell, it generally vindicates common sense. But at what cost?

Joy feels better than sorrow. We can promote joy instead of sorrow. We will feel much better for doing so. Nobody will be able to criticize us for doing the wrong thing. The world will be one big orgasm. Let's satisfy ourselves with that. Let's satisfy ourselves with the merely real.

comment by Virge2 · 2008-08-18T14:17:00.000Z · LW(p) · GW(p)

Kip Werking: "P2. But, all we have to prove that giving to charity, etc., is right, is that everyone thinks it is"

You're stating that there exists no other way to prove that giving to charity is right. That's an omniscient claim.

Still, it's unlikely to be defeated in the space of a comment thread, simply because your sweeping generalization about the goodness of charity is far from being universally accepted. A very general claim like that, with no concrete scenario, no background information on where it is to be applied, makes relativism a foregone conclusion.

I'd like to hear your arguments for something a little more fundamental. Would you apply the same reasoning to the goodness of life? Would you be prepared to claim that
"all we have to prove that your life is better than your death, is that everyone thinks it is"?

And with regard to the joy/sorrow question, would you be prepared to claim that
"all we have to prove that your not suffering is better than your suffering, is that everyone thinks it is"?

comment by Alberto_Gómez · 2008-08-20T08:52:00.000Z · LW(p) · GW(p)

I think that all morality is, is just making sure that the people close to me rank high in the prisoner's dilemma game, and assuring others that I rank high too. Even higher than I really am.

Everything that evolution, and intellectual and religious thinking, has done has been for this purpose.
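
(Illustration only: a minimal sketch of the repeated prisoner's dilemma being gestured at here. The payoff values and strategies are standard textbook choices, not anything taken from this comment.)

```python
# A toy iterated prisoner's dilemma. The payoffs and strategies are the
# usual textbook ones, chosen purely for illustration.

PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): reliable cooperators do well together
print(play(tit_for_tat, always_defect))  # (9, 14): defection pays once, then stops paying
```

Over repeated rounds it is being seen as a reliable cooperator that pays off, which is the incentive for the signalling (and over-signalling) described above.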

comment by J_Thomas2 · 2008-08-21T07:45:00.000Z · LW(p) · GW(p)

The same boy who rationalized a way into believing there was a chocolate cake in the asteroid belt, should know better than to rationalize himself into believing it is right to prefer joy over sorrow.

Obviously, he does know. So the next question is, why does he present material that he knows is wrong?

Professional mathematicians and scientists try not to do that because it makes them look bad. If you present a proof that's wrong then other mathematicians might embarrass you at parties. But maybe Eliezer is immune to that kind of embarrassment. Socrates presented lots of obvious nonsense and people don't think badly of him for it.

The usual reasons not to probably don't apply to him. I don't know with any certainty why he does it, though.

comment by minnmass · 2010-04-09T19:51:56.531Z · LW(p) · GW(p)

I do apologize for coming late to the party; I've been reading, and really feel like I'm missing an inferential step that someone can point me towards.

I'll try to briefly summarize, knowing that I'll gloss over some details; hopefully, the details so glossed over will help anyone who wishes to help me find the missing step.

It seems to me that Eliezer's philosophy of morality (as presented in the metaethics sequence) is: morality is the computation which decides which action is right (or which of N actions is the most right) by determining which action maximizes a complex system of interrelated goals (e.g. happiness, freedom, beauty, etc.). Each goal is assumed to be stated in such a way that "maximizes" is the appropriate word (i.e. given a choice between "maximize X" and "minimize ~X", the former wording is chosen; "maximize happiness" rather than "minimize unhappiness").
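
(As a toy illustration only: the goal names, weights, and scores below are mine, not anything Eliezer specifies; the point is just the shape of "pick the action that maximizes a weighted system of goals".)

```python
# A toy "weighted goal system" morality. All goals, weights, actions and
# scores are invented for illustration; only the argmax shape matters.

from typing import Dict

def most_right_action(actions: Dict[str, Dict[str, float]],
                      weights: Dict[str, float]) -> str:
    """Return the action with the highest weighted goal-satisfaction score."""
    def score(goal_scores: Dict[str, float]) -> float:
        return sum(w * goal_scores.get(goal, 0.0) for goal, w in weights.items())
    return max(actions, key=lambda name: score(actions[name]))

weights = {"happiness": 0.5, "freedom": 0.3, "beauty": 0.2}  # invented weights
actions = {
    "tell the truth":  {"happiness": 0.4, "freedom": 0.9, "beauty": 0.1},
    "tell a kind lie": {"happiness": 0.8, "freedom": 0.2, "beauty": 0.1},
}
print(most_right_action(actions, weights))  # -> "tell the truth" (0.49 vs 0.48)
```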

Further, humanity must necessarily share the same fundamental morality (system of goals) due to evolutionary psychology, by analogy with natural selection's insistence that we share the same fundamental design.

One of Eliezer's primary examples is Alice and Bob's apparent disagreement over the morality of abortion, which must, it seems, come down to one of them having incomplete information (at least relative to the other). The other is the Pebblesorting People who have a completely different outlook on life, to the point where they don't recognize "right" but "p-right".

My first problem (which may well be a missed inferential step) is with the assumed universality, within humanity, of a system of goals. Most humans agree that freedom is a good thing, but a large minority doesn't believe it is the most important thing (China comes to mind; my understanding is that a great many of China's citizens don't care that their government is censoring their own history). In point of fact, it seems that "freedom", and especially individual freedom, is a relatively modern invention. But, let's visit Alice and Bob.

Alice believes that abortion is morally acceptable, while Bob disagrees. Eliezer's assertion seems to be that this disagreement means that either Alice or Bob has incomplete information (a missing fact, argument, or both, but information). Why is it not possible simply that Alice holds freedom as more important than life and Bob the reverse? A common argument against abortion holds that the future life of the fetus is more important than the restriction on the pregnant woman's freedom of choice. Eliezer's morality seems to imply that that statement must, a priori, be either true or false; it cannot be an opinion like "walnuts taste better than almonds" or even "Christmas is more important than Easter" (without the former, the latter would not be possible; without the latter, the former would not be important).

It seems to be a priori true that 2+2=4. Why does it necessarily hold that "life is more (or less) important than choice"? And, for the latter, how would we decide between "more" and "less"? What experiment would tell us the difference; how would a "more" world differ from a "less" world?

I also have a question about the Pebble-people: how is it that humans have discovered "right" while the Pebble-people have discovered only "p-right"? Even if I grant the assertion that all humans are using the same fundamental morality, and Alice and Bob would necessarily agree if they had access to the same information, how is it that humans have discovered "right" and not "h-right"? H-morality values abstract concepts like beauty, art, love, and freedom; p-morality values concrete things like pleasingly prime piles of pebbles. P-morality doesn't prohibit beauty or art, but doesn't value them - it is apathetic towards them except so far as they further proper pebble piles. Similarly, h-morality is apathetic towards prime piles of pebbles, except so far as they are beautiful or artistic. Yet Eliezer asserts that h-morality is better, is closer to morality. I don't see the supporting evidence, though, except that humans invented h-morality, so h-morality matches what we expect to see when we look for morality, so h-morality must be closer to morality than p-morality is. This looks like a circular trip through Cultural Relativism and Rule Utilitarianism (albeit with complex rules) with strong Anthropic favoritism. I really feel like I'm missing something here.

A standard science fiction story involves two true aliens meeting for the first time, and trying to exist in the same universe without exterminating each other in the process. How would this philosophy of morality convince the aliens that humans should be allowed to flourish, even though we may in principle become a threat to their existence (or vice-versa)? H-morality includes freedom of individual choice as a good thing; a-morality (alien, not "amoral") may not - perhaps they evolved something more akin to an ant colony where only a few have any real choice. How would h-morality (being closer to morality than a-morality, of course) persuade them to stop using slave labor to power their star-ships or gladiatorial combat to entertain their citizens?

Fundamentally, though, I don't see what the difference is between Eliezer's philosophy of morality and simply saying "morality is the process by which one decides what is right, and 'what is right' is the answer one gets back from running their possible actions through morality". It doesn't seem to offer any insight into how to live, or how to choose between two mutually-exclusive actions, both of which are in classically morally gray areas (e.g. might it be right to steal to feed one's hungry family?); as I understand it, Eliezer's morality simply says "do whatever the computation tells you to do" without offering any help on what that computation actually looks like (though, as I said, it looks suspiciously like Cultural [Personal?] Relativism blended with Rule Utilitarianism).

As I said, I really feel like I'm missing some small, key detail or inferential step. Please, take pity on this neophyte and help me find The Way.

Replies from: Rain, Kutta
comment by Rain · 2010-04-09T20:05:12.435Z · LW(p) · GW(p)

My first problem (which may well be a missed inferential step) is with the assumed universality, within humanity, of a system of goals.

From what I've seen, others have the same objection; I do as well, and I have not seen an adequate response.

how is it that humans have discovered "right" while the Pebble-people have discovered only "p-right"? Even if I grant the assertion that all humans are using the same fundamental morality, and Alice and Bob would necessarily agree if they had access to the same information, how is it that humans have discovered "right" and not "h-right"?

From what I understand, everyone except Eliezer is more likely to hold the view that he found "h-right", but he seems unwilling to call it that even when pressed on the matter. It's another point on which I agree with your confusion.

as I understand it, Eliezer's morality simply says "do whatever the computation tells you to do" without offering any help on what that computation actually looks like

We don't have quite the skill to articulate it just yet, but possibly AI and neuroscience will help. If not, we might be in trouble.

As I said, I really feel like I'm missing some small, key detail or inferential step. Please, take pity on this neophyte and help me find The Way.

I assign a high probability that Eliezer is wrong, or at the least, providing a very incomplete model for metaethics. This sequence is the one I disagree with most. Personally, I think you have a good grasp of what he's said, and its weaknesses.

comment by Kutta · 2010-09-20T22:00:26.210Z · LW(p) · GW(p)

Yet Eliezer asserts that h-morality is better, is closer to morality. I don't see the supporting evidence, though, except that humans invented h-morality, so h-morality matches what we expect to see when we look for morality, so h-morality must be closer to morality than p-morality is.

"Better" and "closer to morality" and "h-morality" refer to the same thing here. "H-morality is better" roughly means "better is better". Seeing no evidence that h-morality is better is like seeing no evidence that 2=2.

As far as I can see, this is a reason why Eliezer doesn't bother with calling morality "h-morality", though I might be erring.

comment by TheOtherDave · 2010-11-09T21:20:40.867Z · LW(p) · GW(p)

OK... let me see if I'm following.

The idea, trying to rely on as few problematic words as possible, is:

  • There exists a class of computations M which sort proposed actions/states/strategies into an order, and which among humans underlie the inclination to label certain actions/states "good", "bad", "right", "wrong", "moral", etc.(4) For convenience I'll label the class of all labels like that "M-labels."

  • If two beings B1 and B2 don't implement (1) a common M-instance, M-labels may not be meaningful, even in principle, in discussions between B1 and B2. For example, B1 and B2 may fundamentally not mean the same thing by "right" or "moral."

  • If B1 and B2 implement (1) one and only one M-instance (Mb), then M-labels are in-principle meaningful in discussions between B1 and B2 (although this is no guarantee that B1 and B2 will actually understand one another, or even that they are capable of discussion in the first place).

  • There exists (6) an M-instance at-least-partially implemented (1) by all humans (2). We label this the Coherent Extrapolated Volition (CEV).

  • Two humans might implement other M-instances in addition to CEV, or might not, but either way all human M-instances are (by definition) consistent with CEV. In other words, all within-group moralities among humans can be treated as special cases of CEV, and implementing CEV will satisfy all of them. (3) Edit: Later, I conclude that this isn't what you're claiming. Rather, you're claiming that CEV is the intersection of all within-group moralities among humans. Implementing CEV won't necessarily fully satisfy all of them, nor even necessarily fully satisfy any of them; it simply won't violate any of them. That is, it may not do anything right, but it's guaranteed not to do anything wrong. (A toy sketch of this intersection reading appears after the footnotes below.)

  • We therefore want to ensure that any system X powerful enough to impose its preferences on us also implements CEV. This will ensure that it is at least meaningful for us to communicate with it using M-labels... e.g., talk about whether a given course of action is right or wrong. (5)

  • Other optimization processes (like the Pebblesorters, or natural selection) might implement an M-instance that is inconsistent with CEV. There is no guarantee that implementing CEV will satisfy all within-group moralities among all sapient species, let alone among all optimization processes. Edit: It seems to be important to you that we not call M-instances within nonhuman species "moralities", though I haven't quite understood why.

  • There might exist an M-instance at-least-partially implemented (1) by all sapient species. We could label this the Universal Coherent Extrapolated Volition (U-CEV) and desire to ensure that X also implements U-CEV. (7)

=== (1) Note that implementing a shared M doesn't necessarily mean B1 and B2 can apply M consistently to a specific situation, any more than knowing what a prime number is means I can always recognize or calculate one. It also doesn't mean B1 and B2 can articulate M. It doesn't even guarantee that any given M-label will be correctly or consistently used and understood when they converse. All of this means it may be difficult in practice to determine whether B1 and B2 share an M, or what that M might be.

(2) I'm not sure if this is quite what is being asserted... there may be humans excluded from this formulation, such as psychopaths who would refuse treatment.

(3) I haven't seen this actually being asserted, but it seems implicit. Otherwise, we shouldn't expect CEV to converge and include everybody's volition. (2) above seems relevant here.

(4) We're deliberately ignoring issues of language here and doing everything in English, but we expect that other human languages are isomorphic to English in relevant respects.

(5) There seems to also be an expectation that this is sufficient to avoid X doing bad things to us. I don't quite follow that leap, but never mind that for now.

(6) I don't actually see why I should believe that any such thing exists, though it would be nice if it did. Presumably arguments for this are coming. Edit: Given the "intersection not superset" correction above, then this definitely exists, and there's good reason to believe it's non-empty. Whether it's useful once we leave out all the stuff anyone disagrees with is still unclear to me.

(7) Although apparently we don't, judging from what I've seen so far... either we don't believe U-CEV exists, or we don't care. Presumably arguments for this are coming, as well.
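
(A toy sketch, entirely my own, of the "intersection, not superset" reading flagged in the CEV bullet above: identify each human M-instance with the set of actions it permits, and let CEV permit only what every instance permits. The names and contents are made up.)

```python
# Toy model of "CEV as the intersection of all within-group moralities".
# Each M-instance is reduced to the set of actions it permits; everything
# below is invented for illustration.

from functools import reduce

m_instances = {
    "alice": {"give to charity", "tell the truth", "sort pebbles"},
    "bob":   {"give to charity", "tell the truth"},
    "carol": {"tell the truth", "sort pebbles", "give to charity"},
}

# CEV, on this reading, is the intersection: only what every instance permits.
cev = reduce(set.intersection, m_instances.values())
print(cev)  # {'give to charity', 'tell the truth'} (set order may vary)

# Nothing in cev is guaranteed to be anyone's top priority ("may not do
# anything right"), but acting within it violates no instance ("guaranteed
# not to do anything wrong").
```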

comment by buybuydandavis · 2011-10-26T10:17:25.024Z · LW(p) · GW(p)

So I really must deny the charges of moral relativism: I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that. We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't. Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don't. Human morality is p-arbitrary, but who cares? P-arbitrariness is arbitrary.

Is the Logically Omniscient Reasoner agreeing that human morality is not h-arbitrary, or that it is not lor-arbitrary?

How do we know that The LOR (ha! walked into that one) isn't a Pebblesorter?

comment by Lukas_Gloor · 2013-04-03T08:12:50.905Z · LW(p) · GW(p)

What p-bothers me (sorry couldn't resist!) about this approach is that "rightness" nowhere explicitly refers to "others", i.e. other conscious beings / consciousness-moments. Isn't there an interesting difference between a heap of eight pebbles (very p-bad) and a human getting tortured (very bad)? Concerning the latter, we can point to that human's first-person-perspective directly evaluating its current conscious state and concluding that the state is bad, i.e. that the person wants to get the hell out of it. This is a source of disvalue, an unfulfilled "want" -- for the "other" concerned -- which exists independently of what we, or the pebblesorters, might consider to be "moral". A heap of pebbles doesn't do anything like that; there is nothing which could be bad for it, and it seems puzzling why the mere existence of a heap of pebbles would be bad in any meaningful sense, all else being equal. The heap of pebbles is only bad for some being if the being is aware of the heap of pebbles and for whatever reason reacts aversively to it. Whereas being tortured, or suffering (defined as unfulfilled desires), is always bad for the being.

Maybe Eliezer's account already incorporates this implicitly, assuming that most humans terminally care about others. But as I said, it bothers me that this isn't made explicit. If it were, I think the quasi-relativist conclusion of this post would be less disturbing. If some heaps of pebbles are p-bad, that needn't bother us, because the pebblesorters don't care about others, so they're egoists and not ethical; and even though they label their terms in analogous ways, "p-ethical" doesn't imply that they compute their ethics according to the same content-criteria (others matter!) as we do.

Replies from: PrawnOfFate
comment by PrawnOfFate · 2013-04-06T20:14:12.104Z · LW(p) · GW(p)

If you define "morality" broadly, as maximising values, you can end up with that sort of thing. Some would take the attitude that if your definition covers counterintuitive cases, your definition is too broad.

comment by Will_Lugar · 2014-08-20T00:37:24.095Z · LW(p) · GW(p)

You, the human, might say we really should pursue beauty and laughter and love (which is clearly very important), and that we p-should sort pebbles (but that doesn't really matter). And that our way of life is really better than the Pebblesorters, although their way of life has the utterly irrelevant property of being p-better.

But the Pebblesorters would say we h-should pursue beauty and laughter and love (boring!), and that we really should sort pebbles (which is the self-evident meaning of life). Further, they will say their way of life is really better than ours, even though ours has some stupid old h-betterness.

I side with you the human, of course, but perhaps it would be better (h-better and p-better) to say we are only h-right, not right without qualification. Of course, from the inside of our minds it feels like we are simply right. But the Pebblesorters feel the same way, and if they're as metaethically astute as us then it seems they are not more likely to be wrong than us.

For what it's worth, my ethic is "You should act on that set of motives which leads to the most overall value." (Very similar to Alonzo Fyfe's desirism, although I define value a bit differently.) On this view, we should pursue beauty and laughter and love, while the Pebblesorters should sort pebbles, on the same definition of "should."

EDIT: Upon reading "No License To Be Human", I am embarrassed to realize my attempted coining of the term "h-should" in response to this is woefully unoriginal. Every time I think I have an original thought, someone else turns out to have thought of it years earlier!