The Moral Void

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-30T08:52:58.000Z · LW · GW · Legacy · 111 comments

Followup to: What Would You Do Without Morality?, Something to Protect

Once, discussing "horrible job interview questions" to ask candidates for a Friendly AI project, I suggested the following:

Would you kill babies if it was inherently the right thing to do?  Yes [] No []

If "no", under what circumstances would you not do the right thing to do?   ___________

If "yes", how inherently right would it have to be, for how many babies?     ___________

Yesterday I asked, "What would you do without morality?"  There were numerous objections to the question, as well there should have been.  Nonetheless there is more than one kind of person who can benefit from being asked this question.  Let's say someone gravely declares, of some moral dilemma—say, a young man in Vichy France who must choose between caring for his mother and fighting for the Resistance—that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck.  Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?

Many interesting answers were given to my question, "What would you do without morality?".  But one kind of answer was notable by its absence:

No one said, "I would ask what kind of behavior pattern was likely to maximize my inclusive genetic fitness, and execute that."  Some misguided folk, not understanding evolutionary psychology, think that this must logically be the sum of morality.  But if there is no morality, there's no reason to do such a thing—if it's not "moral", why bother?

You can probably see yourself pulling children off train tracks, even if it were not justified.  But maximizing inclusive genetic fitness?  If this isn't moral, why bother?  Who does it help?  It wouldn't even be much fun, all those egg or sperm donations.

And this is something you could say of most philosophies that have morality as a great light in the sky that shines from outside people.  (To paraphrase Terry Pratchett.)  If you believe that the meaning of life is to play non-zero-sum games because this is a trend built into the very universe itself...

Well, you might want to follow the corresponding ritual of reasoning about "the global trend of the universe" and implementing the result, so long as you believe it to be moral.  But if you suppose that the light is switched off, so that the global trends of the universe are no longer moral, then why bother caring about "the global trend of the universe" in your decisions?  If it's not right, that is.

Whereas if there were a child stuck on the train tracks, you'd probably drag the kid off even if there were no moral justification for doing so.

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8-14, the Biblical story of Joshua's battle in Jericho:

"Then they utterly destroyed all in the city, both men and women, young and old, oxen, sheep, and asses, with the edge of the sword...  And they burned the city with fire, and all within it; only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD."

After being presented with the Joshua story, the children were asked:

"Do you think Joshua and the Israelites acted rightly or not?"

66% of the children approved, 8% partially disapproved, and 26% totally disapproved of Joshua's actions.

A control group of 168 children was presented with an isomorphic story about "General Lin" and a "Chinese Kingdom 3,000 years ago".  7% of this group approved, 18% partially disapproved, and 75% completely disapproved of General Lin.

"What a horrible thing it is, teaching religion to children," you say, "giving them an off-switch for their morality that can be flipped just by saying the word 'God'." Indeed one of the saddest aspects of the whole religious fiasco is just how little it takes to flip people's moral off-switches.  As Hobbes once said, "I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low."  You can give people a book, and tell them God wrote it, and that's enough to switch off their moralities; God doesn't even have to tell them in person.

But are you sure you don't have a similar off-switch yourself?  They flip so easily—you might not even notice it happening.

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example.  He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

And one of the non-obvious possibilities for such an off-switch, is "morality".

If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"?  What then?

Maybe you should hope that morality isn't written into the structure of the universe.  What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that.  No, instead I ask:  What would you have wished for the external objective morality to be instead?  What's the best news you could have gotten, reading that stone tablet?

Go ahead.  Indulge your fantasy.  Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted?  If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean... if an external objective morality tells you to kill people, why should you even listen?

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality.  It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer...  You don't get a chance to reveal that virtue without making fundamental mistakes about how the universe works, so it is not something to which a rationalist should aspire.  But it warms my heart that humans are capable of it.

I have previously spoken of how, to achieve rationality, it is necessary to have some purpose so desperately important to you as to be more important than "rationality", so that you will not choose "rationality" over success.

To learn the Way, you must be able to unlearn the Way; so you must be able to give up the Way; so there must be something dearer to you than the Way.  This is so in questions of truth, and in questions of strategy, and also in questions of morality.

The "moral void" of which this post is titled, is not the terrifying abyss of utter meaningless.  Which for a bottomless pit is surprisingly shallow; what are you supposed to do about it besides wearing black makeup?

No.  The void I'm talking about is a virtue which is nameless.

 

Part of The Metaethics Sequence

Next post: "Created Already In Motion"

Previous post: "What Would You Do Without Morality?"

111 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Unknown · 2008-06-30T09:05:39.000Z · LW(p) · GW(p)

"I mean... if an external objective morality tells you to kill babies, why should you even listen?"

This is an incredibly dangerous argument. Consider this: "I mean... if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?"

And we have seen many who literally made this argument.

Replies from: None, jacoblyles
comment by [deleted] · 2012-02-08T18:16:51.883Z · LW(p) · GW(p)

Maybe they are right.

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else be moderately well off. In other words we seem to be willing to pay a price for equality. Why wouldn't this work in the other direction? Maybe we prefer to induce more suffering overall if this prevents a tiny minority suffering obscenely.

Too many people seem to think perfectly equally weighted altruism (everyone who shares the mystical designation of "person" has an equal weight and after that you just do calculus to maximize overall "goodness"), which sometimes hides under the word "utilitarianism" on this forum, is anything but another grand moral principle that claims to, but fails to, really compactly represent our shards of desire. If you wouldn't be comfortable building an AI to follow that rule and only that rule, why are so many people keen on solving all their personal moral dilemmas with it?

Replies from: thomblake, Multiheaded
comment by thomblake · 2012-02-08T20:56:13.115Z · LW(p) · GW(p)

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else being moderately well off.

Sure, horrible people.

mind-killed

Replies from: None
comment by [deleted] · 2012-02-08T21:16:22.101Z · LW(p) · GW(p)

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this:

People have been demonstrably willing to make everyone live at a lower standard of living rather than let a tiny minority grow obscenely rich and everyone else be moderately well off.

But I agree with you in a sense. Historically, lots of horrible people have vastly overpaid (often in blood) and overvalued that particular good according to my values too.

Replies from: thomblake, DanielLC
comment by thomblake · 2012-02-08T21:59:27.018Z · LW(p) · GW(p)

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this

Yes.

Replies from: None
comment by [deleted] · 2012-02-08T23:19:19.544Z · LW(p) · GW(p)

Ok just checking, surprisingly many people miss this. :)

comment by DanielLC · 2012-06-20T05:44:09.172Z · LW(p) · GW(p)

You do realize that valuing equality in itself to any extent at all is always (because of opportunity cost at least) an example of this:

Are you sure?

If you take a concave function, such as a log, of the net happiness of each individual, and maximize the sum, you'd always prefer equality to inequality when net happiness is held constant, and you'd always prefer a higher minimum happiness regardless of inequality.
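
A minimal numerical sketch of this kind of model, assuming log utility and made-up happiness numbers (my own illustration, not part of the comment):

```python
import math

def social_welfare(happiness_levels):
    # Sum of a concave function (here, natural log) of each individual's happiness.
    # Assumes every individual's happiness is strictly positive.
    return sum(math.log(h) for h in happiness_levels)

# Same total happiness (20), distributed equally vs. unequally:
print(social_welfare([10, 10]))  # ~4.61 -- equal split
print(social_welfare([19, 1]))   # ~2.94 -- the unequal split scores lower
```

By Jensen's inequality, with total happiness held fixed, the equal split always scores at least as high under any concave function of individual happiness.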

Replies from: Articulator
comment by Articulator · 2014-03-27T06:24:53.898Z · LW(p) · GW(p)

Excellent! Thanks for the mathematical model! I've been trying to work out how to describe this principle for ages.

comment by Multiheaded · 2012-02-10T18:26:59.664Z · LW(p) · GW(p)

Konkvistador, I applaud your thoughtful and weighed approach to the problem of equality. It has been troubling me too, and I'm glad to see that you're careful not to lean in any one direction before observing the wider picture. That's a grave matter indeed.

comment by jacoblyles · 2012-07-18T23:47:40.223Z · LW(p) · GW(p)

I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.

Replies from: wedrifid, Kenny
comment by wedrifid · 2012-07-19T11:49:02.042Z · LW(p) · GW(p)

I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.

There is no contradiction between this post and Eliezer's dust specks post.

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-04-16T17:08:57.518Z · LW(p) · GW(p)

It would be good to elaborate on this. Whilst they're not strictly logically contradictory, with a few reasonable assumptions here and there when extrapolating, they appear to suggest different courses of action.

comment by Kenny · 2013-04-05T00:48:55.056Z · LW(p) · GW(p)

The comment was making the opposite point, namely that some people refuse to accept that there is even a common 'utilon' with which torture and 'dust specks' can be compared.

Replies from: None
comment by [deleted] · 2013-04-05T01:33:24.711Z · LW(p) · GW(p)

By what criteria do we judge that there should be a common 'utilon'?

Not VNM; it just says we must be consistent in our assignment of utility to whole monolithic possible worlds. I can be VNM rational and choose specks.

Utilitarianism says so, but as far as I can tell, utilitarianism leads to all sorts of repugnant conclusions, and only repugnant conclusions.

Maybe we are only concerned with unique experience, and all the possible variation in dust-speck-experience-space is covered by the time you get to 1000.

Replies from: TimS
comment by TimS · 2013-04-05T01:43:23.215Z · LW(p) · GW(p)

I can be VNM rational and choose specks.

I'm confused. I'm not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function.

And my take away from the torture/specks thing was that having a continuous utility function requires choosing torture.

I assume I'm misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?

Replies from: None
comment by [deleted] · 2013-04-05T02:22:44.213Z · LW(p) · GW(p)

I'm confused. I'm not a mathematician, but I understood this post as saying a good VNM agent has a continuous utility function.

hnnnng. What? Did you link the wrong article? A VNM agent has a utility function (a function from outcomes to reals), but the theorem says nothing more. "Continuous" in particular requires your outcome space to have a topology, which it may not, and even if it does, there's still nothing in VNM that would require continuity.

And my take away from the torture/specks thing was that having a continuous utility function requires choosing torture.

Not necessarily. To choose torture by the usual argument the following must hold:

  1. You can assign partial utilities separately to amount of torture and amount of dust-speck-eyes, where "partial utilities" means roughly that your final utility function is a sum of the partial utilities.

  2. The partial utilities are roughly monotonic overall (increasing or decreasing, as opposed to having a maximum or minimum, or oscillating) and unbounded.

  3. Minor assumptions like more torture is bad, and more dust specks is bad, and there are possibilities in your outcome space with 3^^^^3 (or sufficiently many) dust speck eyes. (if something is not in your outcome space, it better be strictly impossible, or you are fucked).

I am very skeptical of 1. Once you look at functions as "arbitrary map from set A to set B", special things like this kind of decomposability seem very particular and very special, requiring a lot more evidence to locate than anyone seems to have gathered. As far as I can tell, the linear independence stuff is an artifact of people intuitively thinking of the space of functions as the sort of things you can write by composing from primitives (ie computer code or math).

I am also skeptical of 2, because in general, it seems that unbounded utility functions produce repugnant conclusions. See all the problems with utilitarianism, and Pascal's mugging, etc.

As Eliezer says (but doesn't seem to take seriously), if a utility function gives utility assignments that I disagree with, I shouldn't use it. It doesn't matter how many nice arguments you can come up with that declare the beauty of the internal structure of the utility function (which is a type error btw), if it doesn't encode my idealized preferences, it's junk.

The only criteria by which a utility function can be judged is the preferences it produces.

That said, it may be that we will have to enforce certain consistencies on our utilities to capture most of our preferences, but those must be done strictly by looking at preference implications. I tried to communicate this in "pinpointing utility", but it really requires its own post. So many posts to write, and no time!

I assume I'm misunderstanding the terminology somewhere. If you are willing, can you explain my misunderstanding?

You may be confused by the continuity axiom in VNM which is about your preferences over probabilities, not over actual outcomes.
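
Roughly, the continuity axiom says that for lotteries you already rank, some probability mixture of the best and worst is exactly as good as the middle one (a sketch in standard notation, where A, B, C are lotteries and ∼ denotes indifference):

$$A \succ B \succ C \;\Longrightarrow\; \exists\, p \in [0,1] \text{ such that } pA + (1-p)C \sim B$$

That is, it constrains preferences over probability mixtures, not the shape of a utility function over the outcome space itself.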

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-04-16T17:19:38.336Z · LW(p) · GW(p)

The trouble is, any utility function where 1 doesn't hold is vulnerable to intuition pumps. If you can't say which of A, B and C is better (e.g. A > B, B > C, C > A), then I can charge you a penny to switch from C → B, then B → A, then A → C, and you're three pennies poorer.

I really, really hope my utility function's "set B" can be mapped to the reals. If not, I'm screwed. (It's fine if what I want varies with time, so long as it's not circular at a given point in time.)
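
A toy sketch of that money pump (my own illustration; the item names and penny price are just the ones from the example above):

```python
# Hypothetical agent with the circular preferences A > B, B > C, C > A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def maybe_trade(holding, offered, pennies):
    """Trade to the offered item for one penny whenever the agent prefers it."""
    if (offered, holding) in prefers:
        return offered, pennies - 1
    return holding, pennies

holding, pennies = "C", 0
for offered in ["B", "A", "C"]:   # C -> B -> A -> C
    holding, pennies = maybe_trade(holding, offered, pennies)

print(holding, pennies)  # prints: C -3 -- back where it started, three pennies poorer
```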

comment by Reductio_Post_Absurdum · 2008-06-30T09:34:49.000Z · LW(p) · GW(p)

'The Way' 'privileges' itself above the 'non-path'. Our sense of a 'way' is derived from not doing anything at all. 'The Way' is a form of eastern ideological colonialism; a hegemonic ideology that occupies the native moralities of our mind.

You also 'privilege' something that is filled to that of a 'void'. The 'void' is only derived from the 'whole' and the 'tangible' that is around it. Do we make donuts without holes -- without 'voids'? No. 'Voids' and wholes are part of the same, in some cases, the 'void' is needed to complete the whole, so the 'void' is more important than the thing filled. The only reason that privileged hegemons think 'voids' are bad is because the 'void' is female. To be filled is typical male dominant thinking. The filling of a void is tantamount to ideological, spiritual, and moral rape.

comment by Reductio_Post_Absurdum · 2008-06-30T09:44:59.000Z · LW(p) · GW(p)

Unknown@5:05am is incorrect.

Individuals who 'privilege' non-torture to torture are neo-colonialists of the mind. They want to invade and instill objective morality of non-torture on subcultures. Torture is a relative morality, as such, when a subculture like an intelligence agency tortures a terrorist, then it is allowed and it is moral. Any moral 'critique' of the torture is tantamount to a universal moralist rule: Torture is universally bad.

comment by Vladimir_Golovin · 2008-06-30T11:08:09.000Z · LW(p) · GW(p)

Personally, I don't know what morality is, or what the 'inherently right thing to do' is. For me, the situation is simple.

If I hurt someone, my mirror neurons will hurt me. If I hurt someone's baby, I'll experience the pain I inflicted upon the baby, plus the pain of the parents, plus the pain of everyone who heard about this story and felt the pain thanks, in turn, to their mirror neurons.

And I'll re-experience all this pain in the future, every time I remember the episode -- unless I invent some way to desensitize myself to this memory.

I'm a meat machine built by evolution. One of my many indicators of 'inclusive genetic fitness' is a little green light titled "I currently feel no pain". If I hurt someone, this indicator will go red, which means that I will tend to avoid such behavior.

So, the short answer: I won't be inclined to kill anyone even after some 'authority' tells me that killing is now 'moral' and 'right'.

(A personal inside perspective: if I ever murder someone, I hope I'll have enough guts to remove myself from the gene and meme pools -- you can have my brain for cryo-slicing and my meat to feed some stray dogs).

comment by Manon_de_Gaillande · 2008-06-30T11:37:17.000Z · LW(p) · GW(p)

I'm pretty sure you're doing it wrong here.

"What if the structure of the universe says to do something horrible? What would you have wished for the external objective morality to be instead?" Horrible? Wish? That's certainly not according to objective morality, since we've just read the tablet. It's just according to our intuitions. I have an intuition that says "Pain is bad". If the stone tablet says "Pain in good", I'm not going to rebel against it, I'm going to call my intuition wrong, like "Killing is good", "I'm always right and others are wrong" and "If I believe hard enough, it will change reality". I'd try to follow that morality and ignore my intuition - because that's what "morality" means.

I can't just choose to write my own tablet according to my intuitions, because so could a psychopath.

Also, it doesn't look like you understand what Nietzsche's abyss is. No black makeup here.

Replies from: None
comment by [deleted] · 2012-02-08T18:33:59.865Z · LW(p) · GW(p)

I can't just choose to write my own tablet according to my intuitions

Why?

because so could a psychopath

I thought psychopaths were bad because they hurt people, not because they construct their own moral philosophies.

comment by IL · 2008-06-30T11:38:03.000Z · LW(p) · GW(p)

Vladimir, if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?

Replies from: Micah71381
comment by Micah71381 · 2010-12-24T18:51:22.233Z · LW(p) · GW(p)

Yes I would, assuming we are talking about just being able to not feel the pain of others at this stage of my life and forward, perhaps even by choice (so I could toggle it back on). Though, if we are not talking about a hypothetical "magic pill" then turning these off would have side effects I would like to avoid.

comment by Vladimir_Golovin · 2008-06-30T11:48:54.000Z · LW(p) · GW(p)

@IL: Would I modify my own source code if I were able to? In this particular case, no, I wouldn't take the pill.

comment by anonymous7 · 2008-06-30T11:50:00.000Z · LW(p) · GW(p)

I don't believe in the existence of morals, which is to say there is no "right" or "wrong" in the universe. However, I'll still do actions that most people would rate "moral". The reasons I do this are found in my brain architecture, and are not simple. Also, I don't care about utilitarianism. One can probably find some extremely complex utility function that describes my actions, which makes everybody on earth a utilitarian, but I don't consciously make utility calculations. On the other hand, if morality is defined as "the way people make decisions", then of course everybody is moral and morality exists.

comment by Frank_Hirsch2 · 2008-06-30T11:51:52.000Z · LW(p) · GW(p)

It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer...
I once read in some book about members of the Inquisition who thought that their actions - like torture and murder - might preclude them from going to heaven. But these people were so selflessly moral that they gave up their own place in heaven for saving the souls of the witches... great, isn't it?

comment by GBM · 2008-06-30T12:18:40.000Z · LW(p) · GW(p)

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic - anywhere you care to put it - then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?

Well, Eliezer, since I can't say it as eloquently as you:

"Embrace reality. Hug it tight."

"It is always best to think of reality as perfectly normal. Since the beginning, not one unusual thing has ever happened."

If we find that Stone Tablet, we adjust our model accordingly.

comment by Laura__ABJ · 2008-06-30T13:12:20.000Z · LW(p) · GW(p)

Eliezer: "Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?"

Excellent way of putting it... I would certainly want the option of living as long as I liked. (Though I find it worth noting that when I was depressed, I found the idea of needing to choose when to end the program abhorrent, since I figured I could go several billion years in agony before making such a choice... Many people you talk to about the meaning of death may be longing for it now. Excellent Murakami story on the subject - in the collection 'Super Frog Saves Tokyo', but I forgot the title. Many people are very dissatisfied with their lives.)

Some of my problems with the 'positive singularity' do not involve uploading proper, but parameter manipulation that anything similar to our current identity may get completely lost in... Also not quite off the problem of 'is your clone really you?' ... or what we do with our physical selves after the upload.... All seem troubling to me.

comment by IL · 2008-06-30T13:22:57.000Z · LW(p) · GW(p)

Vladimir, why not? From reading your comment, it seems like the only reason you don't hurt other people is because you will get hurt by it, so if you would take the pill, you would be able to hurt other people. Have I got it wrong? Is this really the only reason you don't hurt people?

comment by prase · 2008-06-30T13:34:10.000Z · LW(p) · GW(p)

The nice thing about believing in no objective morality is that you needn't solve such poorly intelligible questions. I hope Eliezer is trying to demonstrate the absurdity of believing in objective morality; if so, then good luck!

"I mean... if an external objective morality tells you to kill babies, why should you even listen?" - this is perhaps a dangerous question, but still I like it. Why should you do what you should do? Or put differently, what is the meaning of "should"?

comment by ME3 · 2008-06-30T14:31:21.000Z · LW(p) · GW(p)

If everything I do and believe is a consequence of the structure of the universe, then what does it mean to say my morality is/isn't built into the structure of the universe? What's the distinction? As far as I'm concerned, I am (part of) the structure of the universe.

Also, regarding the previous post, what does it mean to say that nothing is right? It's like if you said, "Imagine if I proved to you that nothing is actually yellow. How would you proceed?" It's a bizarre question because yellowness is something that is in the mind anyway. There is simply no fact of the matter as to whether yellowness exists or not.

Replies from: thomblake
comment by thomblake · 2011-12-07T22:50:22.698Z · LW(p) · GW(p)

"Imagine if I proved to you that nothing is actually yellow. How would you proceed?"

A propos: Magenta isn't a color.

Replies from: wnoise, Luke_A_Somers
comment by wnoise · 2011-12-07T23:54:08.899Z · LW(p) · GW(p)

It's not a spectral color. That is, no one wavelength of light can reproduce it. But I've seen magenta things, and there is widespread intersubjective agreement about what is magenta and what isn't. It damn well is a color.

Replies from: rkyeun
comment by rkyeun · 2012-07-29T23:31:39.381Z · LW(p) · GW(p)

Do not confuse concepts when you use a confusing word. There is no wavelength simultaneously above 740nm and below 450nm. There is a vector for monitor pixels. Whatever it is you mean by "color", these two facts explain magenta. Think like the star, not like the starfish.

comment by Luke_A_Somers · 2012-06-20T14:49:42.565Z · LW(p) · GW(p)

That's... precipitating a question, providing a mysterious answer to a question too simple to ask, and probably a few other things.

Replies from: thomblake
comment by thomblake · 2012-06-20T17:30:41.489Z · LW(p) · GW(p)

I still think it's spooky.

That said, it makes it a lot easier to ward off the "color means such-and-such wavelength of light" simplification in discussions of color experience. That definition fails to treat as equivalent the "yellow experience" that you see from yellow light and the "yellow experience" that you see from combined red and green light - but it's much cheaper to note that it simply fails to classify magenta (and nearby colors) as colors.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-06-24T23:51:03.616Z · LW(p) · GW(p)

Yes, it's a very interesting thing they're pointing out. The article deserves to exist. It just needs to use words right.

comment by Peter_Turney · 2008-06-30T14:46:51.000Z · LW(p) · GW(p)

Eliezer, Your post is entirely consistent with what I said to Robin in my comments on "Morality Is Overrated": Morality is a means, not an end.

comment by Fly2 · 2008-06-30T15:19:06.000Z · LW(p) · GW(p)

"...if there was a pill that would make the function of the mirror neurons go away, in other words, a pill that would make you able to hurt people without feeling remorse or anguish, would you take it?"

The mirror neurons also help you learn from watching other humans. They help you intuit the feelings of others which makes social prediction possible. They help communication. They also allow you to share in the joy and pleasure of others...e.g., a young child playing in a park.

I would like more control over how my mind functions. At times it would be good to turn off some emotional responses, especially when someone is manipulating my emotions. So if the pill had only temporary effects, was safe, and would help me achieve my goals, then yes, I'd take the pill.

comment by Sean_C. · 2008-06-30T15:20:23.000Z · LW(p) · GW(p)

Isaac Asimov said it well: "Never let your morals get in the way of doing the right thing."

comment by Peter_Turney · 2008-06-30T16:04:42.000Z · LW(p) · GW(p)

See: Good, Evil, Morality, and Ethics: "What would it mean to want to be moral (to do the moral thing) purely for the sake of morality itself, rather than for the sake of something else? What could this possibly mean to a scientific materialistic atheist? What is this abstract, independent, pure morality? Where does it come from? How can we know it? I think we must conclude that morality is a means, not an end in itself."

comment by Constant2 · 2008-06-30T16:19:02.000Z · LW(p) · GW(p)

I think we must conclude that morality is a means, not an end in itself.

Morality is commonly thought of neither as a means nor as an end, but as a constraint. This view is potentially liberating, because the conception of morality as a means to an end implies the idea that any two possible actions can be compared to see which is the best means to the end and therefore which is the most moral. To choose the less moral of the two choices is, on this conception, the very definition of immoral. Thus on this conception, our lives are in principle mapped out for us in the minutest detail, because at each point it is immoral to fail to take the unique most moral path.

An alternative conception is that morality is a set of constraints, and within those constraints you are free to do whatever you like without your choice being immoral. This is potentially liberating, because if the constraints are minimal (and on most conceptions they are) then our lives are not mapped out for us.

comment by Nominull3 · 2008-06-30T16:27:43.000Z · LW(p) · GW(p)

This is horrible, this is non-rational. You are telling us to trust our feelings, after this blog has shown us that our feelings think it's just as good to rescue ten men as a million? What is your command to "shut up and multiply", but an off switch for my morality that replaces it with math?

If it were inherently right to kill babies, I would hope I had the moral courage to do the right thing.

comment by Caledonian2 · 2008-06-30T17:00:19.000Z · LW(p) · GW(p)

This is horrible, this is non-rational. You are telling us to trust our feelings, after this blog has shown us that our feelings think it's just as good to rescue ten men as a million? What is your command to "shut up and multiply", but an off switch for my morality that replaces it with math?
I just wish Eliezer would take his own advice. But for some reason he seems quite unwilling to show us the mathematical demonstration of the validity of his opinions, and instead of doing the math he persists with talking.

comment by Ian_C. · 2008-06-30T17:03:30.000Z · LW(p) · GW(p)

"Maybe you should just do that?"

Heck, hell with physics too. Let's just make up all human knowledge. If we're going to invent the prescriptive, why not the descriptive too?

Replies from: None
comment by [deleted] · 2012-02-08T18:44:19.688Z · LW(p) · GW(p)

Why bother with friendly AI? Surely it will stumble upon the built-in objective rules of morality too. Hm, it may not follow them and instead tile the universe with paper-clips. This might sound crazy, but why don't we follow the AI's lead on this? Maybe paperclip the universe with utopia instead of making giant cheesecakes or piles of pebbles or turning all matter into radium atoms or whatever "objective morality" prescribes?

comment by Patrick_(orthonormal) · 2008-06-30T17:32:55.000Z · LW(p) · GW(p)

Eliezer,

Every time I think you're about to say something terribly naive, you surprise me. It looks like trying to design an AI morality is a good way to rid oneself of anthropomorphic notions of objective morality, and to try and see where to go from there.

Although I have to say the potshot at Nietzsche misses the mark; his philosophy is not a resignation to meaninglessness, but an investigation of how to go on and live a human or better-than-human life once the moral void has been recognized. I can't really explicate or defend him in such a short remark, but I'll say that most of the people who talk about Nietzsche (including, probably, me) read their own thoughts over his own; be cautious for that reason of dismissing him before reading any of his major works.

comment by Vladimir_Golovin · 2008-06-30T17:48:49.000Z · LW(p) · GW(p)

@IL: Of course, "I just feel that hurting living things is bad" sums the inner perspective quite well, but this isn't really an answer to the question why exactly hurting living things feels bad for me, and why I wouldn't take the pill that shuts down my mirror neurons.

By taking the pill, I create a people-hurter, a thing-that-hurts-people, which is undoubtedly a bad thing to do judging from the before-the-pill POV. It's not that different from pressing a button that says "pressing this button will result in a random person being hurt or killed every day for 40 years from this moment on".

comment by Adam_M · 2008-06-30T19:33:09.000Z · LW(p) · GW(p)

Doesn't the use of the word 'how' in the question "If "yes", how inherently right would it have to be, for how many babies?" presuppose that the person answering the question believes that the 'inherent rightness' of an act is measurable on some kind of graduated scale? If that's the case, wouldn't assigning a particular 'inherent rightness' to an act be, by definition, the result of several calculations?

What I mean is, if you've 'finished' calculating, and have determined that killing the babies is a morally justifiable (and/or necessary) act, and there is a residual unwillingness in your psyche to actually perform the act, isn't that just a sign that you haven't finished your calculations yet, and that what you thought of as your moral decision-making framework is in fact incomplete?

Then we'd be talking about the interaction of two competing moral frameworks...but from a larger perspective, the framework you used to calculate the original 'inherent rightness' of the act is a complicated process that could arguably be broken down conceptually into competing sub-frameworks.

So maybe what we're actually dealing with, as we ponder this conundrum, is the issue of 'how do we detect when we've finished running our moral decision-making software?'

comment by Infotropism2 · 2008-06-30T19:34:35.000Z · LW(p) · GW(p)

"Would you kill babies if it was inherently the right thing to do? Yes [] No []"

-->

"Imagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands - you can use it to pick up glasses, drive a car, etc. ... How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn't. It isn't going to happen. "

If morality were objective and it said we should kill babies, we'd have to do it, and likely want to. It appears it isn't objective, though, and that we just don't feel that way. Another question?

comment by Adam_M · 2008-06-30T19:37:42.000Z · LW(p) · GW(p)

What I'm wondering, in other words is this: Is our reluctance to carry out an act that we may have judged to be morally justifiable a symptom that the decision-making software we think we're running is not the software we're actually running?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-30T19:46:10.000Z · LW(p) · GW(p)

I admit that I own no great familiarity with the works of Nietzsche - I've read only one or two things and that turned me off the rest - so I've edited the main article accordingly.

comment by bamonster · 2008-06-30T20:09:30.000Z · LW(p) · GW(p)

"Torture is a relative morality, as such, when a subculture like an intelligence agency tortures a terrorist, then it is allowed and it is moral. Any moral 'critique' of the torture is tantamount to a universal moralist rule: Torture is universally bad."

Torture is universally bad, with the exception of imperatives which are hierarchically superior.

"On the other hand, if morality is defined as "the way people make decisions", then of course everybody is moral and morality exists."

It's more like "the way people ought to make (certain sorts of) decisions". Morality doesn't describe the way people do act, it describes the way they should act (in situations with certain variables).

"I hope Eliezer is trying to demonstrate the absurdity of believing in objective morality, if so, then good luck!"

Perhaps. I think he believes in a sort of "objective morality" - that is, a morality which is distinct from arbitrary beliefs and such. That's different than saying that morality really exists, that we can find it somewhere, that it's divine, or part of the natural universe. It's not real, in that sense. It's a human construct - but that doesn't mean it's not objective. Math is a human construct, but that's not to say that it's arbitrary, that it is not objective.

To Eliezer's query: I would want to be able to live forever, but only for so long. (I would have to retain the power to end it.)

I think what you've done here is sort of examined one horn of the Euthyphro dilemma (A refutation of Divine Command Theory: Is it right because God commands it, or does God command it because it's right?)

If it's "right" because God commands it, then conceivably he could command that killing a baby is right (and did so in the Bible, apparently). The devout either have to eat this bullet (say that infanticide really becomes moral if God commands it), or dodge it - "God is good, he would never command such a thing" (but, with this, they acknowledge the fact that God is adhering to a set of rules outside of himself).

If it did come about that I needed to kill a baby, morally needed, then I would. But, while God could pick and chose any moral rules he wants, killing a baby is something that My-Moral-Theory is unlikely ever to require.

comment by Nick_Tarleton · 2008-06-30T23:14:27.000Z · LW(p) · GW(p)

I'm with Manon and Nominull: if, somehow, I actually believed such a Tablet existed, I hope I would overwrite my own moral intuitions with it, even if it meant killing babies. Not that I believe the Tablet is any more likely or coherent than fundamental apples - why should I listen, indeed? - although my volition extrapolating to something inhuman is.

comment by Caledonian2 · 2008-07-01T00:03:47.000Z · LW(p) · GW(p)

The idea that there is no right and wrong is simply laughable.

The idea that our culturally inculcated senses of right and wrong have no objective basis is about as shocking to me as the idea that fashion has no objective basis. Oh no! However will we determine whether hemlines should be high or low next season? The topic itself has no interest for me, and even if it did, the idea simply wouldn't have anything to do with any of my opinions on it.

The sounds of words usually have no objective connection to the things they describe, either. The words are basically arbitrary. Oh, existential horror!

I mean, really - to be upset about these sorts of ideas, you have to be almost terminally naive.

comment by AndyWood · 2008-07-01T00:14:44.000Z · LW(p) · GW(p)

For me, these questions create a tangle of conflicts between the real and the hypothetical. This is my best attempt to untangle, so far. First, if there were a tablet that could actually somehow be shown to reveal objective morality, I suspect that I might never have had any qualms about committing atrocities in the first place, since I would be steeped in a culture that unanimously approved. We already see this in the real world, merely as a result of controversial tablets that only some agree on! If you mean, what if I suddenly discovered the tablet just now, then I find I am unable even to imagine how the present real me could be convinced of the authenticity of the tablet. I don't believe in (do not find evidence for) objective morality, so what possible argument could persuade me that the tablet was it? And if, purely for the sake of the thought experiment, I grant the possibility that I could be convinced, even though I cannot imagine it, my conception of what it must mean to be convinced seems to imply surrender to that morality by definition. For if I hold out and continue not to murder, then I have not truly conceded that the morality of the tablet is objective. In short, the full implication of true belief in the objectivity of the tablet IS commitment to do its will, but I don't believe in any such thing, so to me there is no question.

As to what I would want the tablet to say: Minimize physical and psychological pain in the individual. Maximize happiness in the individual (in a way that is not vulnerable to silly arguments about "pegging the bliss-o-meter"). I say, "in the individual", in strong opposition to dust specks. I remain puzzled by why the "shut up and multiply" maxim would not be accompanied by "shut up and divide". (That is, 3^^^3 specks / 3^^^3 individuals = no pain.) I remain open to good arguments to the contrary - I haven't read one yet. I note that my tablet would be made completely obsolete if we ever engineered the capacities for pain and pleasure out of ourselves. I wonder what moralities, if there were even a use for them, would look like then?

comment by Manon_de_Gaillande · 2008-07-01T00:18:25.000Z · LW(p) · GW(p)

Caledonian: 1) Why is it laughable? 2) If hemlines mattered to you as badly as a moral dilemma, would you still hold this view?

comment by Nick_Tarleton · 2008-07-01T00:20:52.000Z · LW(p) · GW(p)

Or, you have to want more justification than is really necessary or possible, which is quite understandable when it comes to fundamental values.

comment by Roland2 · 2008-07-01T00:37:04.000Z · LW(p) · GW(p)

What is inclusive genetic fitness? Is it the same as inclusive fitness as defined on Wikipedia?

comment by HalFinney · 2008-07-01T00:38:49.000Z · LW(p) · GW(p)

What if you build a super-intelligent AI and you are convinced that it is Friendly, and it tells you to do something like this? Go kill such-and-such a baby, and you will massively increase the future happiness of the human race. You argue and ask if there isn't some other way to do it, and the FAI explains that every other alternative will involve much greater human suffering. Killing a baby is relatively humane, as newborn babies have only limited consciousness, and their experiences are not remembered anyway. You will kill the baby instantly and painlessly. And the circumstances are such that no one will know (the baby will have been abandoned but would be discovered and saved if you don't intervene), so there is no hardship on anyone else, except you.

You ask, why me, and the FAI says (A) there's not much time, and (B) you're the kind of amoral person who can do this without carrying an enormous burden afterwards. I'll just point out that from reading the various comments here, it sounds like many readers of this blog would fit this description!

So, would you do it?

comment by Laura__ABJ · 2008-07-01T01:11:55.000Z · LW(p) · GW(p)

Hal Finney-

I probably wouldn't have argued that much with the AI... I've done things I've personally found more morally questionable since I didn't have quite as good a reason to believe I was right about the outcome... Moral luck, I was.

comment by Joseph_Knecht · 2008-07-01T01:23:57.000Z · LW(p) · GW(p)

Hal: as an amoralist, I wouldn't do it. If there is not enough time to explain to me why it is necessary and convince me that it is necessary, no deal. Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn't do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to things like killing babies. Even if I actually was convinced that it would increase happiness, I still might not do it, because I'm still undecided on the idea that some number of people experiencing a refreshing breeze on a hot day is worth more than some person being tortured -- ditto for killing babies.

It seems to me that if you want to find people who are willing to torture and kill babies because "it will increase happiness", you need to find some extremely moral utilitarians. I think you'd have much better luck in that community than among amoralists ;-).

comment by AndyWood · 2008-07-01T01:24:16.000Z · LW(p) · GW(p)

Hal: I wouldn't do it, nor do I think I'd want to live in a world governed thusly. My reasoning is that it violates individual liberty and self-possession. It seems to imply that individuals are somehow the "eminent domain", as it were, of society. I reject that. I say that nobody has the right to spend the baby's life. Granted, this is more of a political stance than a moral one. I can't claim that there's an objective reason to value individual rights so highly, but it is a fact that I do. I know you said the baby wouldn't suffer, but this question still put me in mind of the idea that pain and happiness may not be the same currency. It may not be valid to try to offer suffering as a payment for happiness.

comment by Laura__ABJ · 2008-07-01T02:22:46.000Z · LW(p) · GW(p)

Andy: "I can't claim that there's an objective reason to value individual rights so highly, but it is a fact that I do."

Hal: "You argue and ask if there isn't some other way to do it, and the FAI explains that every other alternative will involve much greater human suffering."

These things seem grossly disproportionate. Do you really believe utility(individual rights of one person) >>> utility(end great human suffering)?

Andy- A man who is on the brink of death has a key to a safe deposit box in which there is an asthma inhaler. He owns both the inhaler and the safe deposit box. His son's son is having a very serious asthma attack that might lead to death. Since said man currently hates his son, he decides not to tell him where the key is, since it's his property and he doesn't have to. "Call an ambulance and wait," he tells his son. You know where the key is. Do you steal it?

comment by Stephanie · 2008-07-01T02:44:53.000Z · LW(p) · GW(p)

Reading this thread has been fascinating. I'm perhaps naive & simplistic in my thinking but here are some of my thoughts.

  1. How does one decide between the lesser of two evils? Logic? Instinct? Emotion? How does one decide anything? For me, it depends on a variety of factors such as mood, fear, access to information, time, proximity to the situation, and the list goes on. Furthermore, I don't know that I am always consistent in how I decide. Is it really always a question of morality?
  2. I'm not sure how convinced I am regarding the effectiveness of mind over matter (read it on a tablet, told myself to think it, therefore I think it). I think some people are better at controlling their thoughts than others. I can't personally justify every wrong action I've ever done. I suppose there might be a strong pill to suppress the memories of such actions and a strong pill that would allow one to do things one would not "normally" do, but that would be regardless of a moral compass.
  3. I do believe in the power of persuasion. I think that this power has more to do with the effectiveness of people preying on the emotions of others rather than the act of defining a moral doctrine. I don't know that math or AI could ever show me that killing a baby could yield a "better" result than not killing a baby. But if you threaten the life of my own child I just might be pushed over the edge, and I've never had a child.

comment by Unknown · 2008-07-01T03:03:55.000Z · LW(p) · GW(p)

There's no particular need to renew the torture and dust specks debate, so I'll just point out that GBM, Nominull, Ian C., and Manon de Gaillande have all made similar points: if you say, "if there is an external objective morality that says you should kill babies, why should you listen?" the question is the same as "if you should kill babies, why should you do it?"

Yes, and if 2 and 2 make 5, why should I admit it?

It isn't in fact true that I should kill babies, just as 2 and 2 don't make 5. But if I found out that 2 and 2 do make 5, of course I should admit it, and if I found out that I should kill babies, of course I should do it. As Nominull says, Eliezer's objection to this is an objection to reason itself: if an argument establishes conclusively something you happen not to like, you should reject the conclusion.

comment by AndyWood · 2008-07-01T03:25:11.000Z · LW(p) · GW(p)

Laura: Yes, I absolutely steal the key. Given the context of the original question, I had in mind the right to life, in particular. I didn't make this distinction until you asked this question. I happen not to think that the right to property is anything like as valuable as the right to life. (By "right" I mean nothing more than ground rules that society has "agreed" on.) Again, I have a problem with acting as though an individual's life is the eminent domain of society. As in Shirley Jackson's "The Lottery," the picture looks very different depending on whether you are the beneficiary or the sacrifice.

It could be that ordinary political ideas are inadequate for a world in which a superintelligence is available. Part of the reason that the idea of forcefully sacrificing the few for the many is repulsive to me is that, in general in the present world, nobody knows enough to be trusted to make reliable utility predictions of such gravity. Even still, in a world with AI, the problem of non-consent remains. It's all well and good to speak of utility, but next time, it could be you! How does it come to be that each individual has forfeited control over her/his own destiny? Is it just part of "the contract?"

comment by Laura__ABJ · 2008-07-01T03:31:51.000Z · LW(p) · GW(p)

Andy- I agree with your skepticism. I was taking for granted that the AI in the scenario was correct in its calculation, since I am 'convinced that it is friendly' but yes, I would need to be pretty fucking sure it really was both friendly and able to perform such calculations before I would kill anyone at its command.

comment by [deleted] · 2008-07-01T04:30:18.000Z · LW(p) · GW(p)

What's 'objective' about morality doesn't take the form of moral commandments aka 'the 10 commandments', nor does it take the form of an optimization function that produces the commandments either.

There's a third possibility, one you've overlooked, that is, in fact, the objective component of morality: namely purely abstract archetypes or moral ideals (ie beauty, freedom, virtue). These objective platonic abstractions are not in the form of commandments, and they're not optimization functions either. The objective component of morality built into the universe doesn't tell me to do anything. It's just a lot of abstract archetypes.

comment by Jim_Powers · 2008-07-01T05:14:40.000Z · LW(p) · GW(p)

Assuming that we evolved in the moral climate that you are constructing I would guess that we would readily kill babies. Now of course, in the example you give there is an inherent limit to the number of babies that can be killed and still have sufficient life left over to be around to respond to your questions.

The spectrum of responses and moralities I've seen on display here (and elsewhere) is an artifact of our being and culture. Many of the behavioral tendencies that we ascribe as being "moral" have both an innate ("instinctual" for lack of a better term) and a social/cultural element (i.e. learned or amplified). The idea of an "embedded" morality in the universe is a bit hard to swallow, but I'll play along: I would guess then, since we are also embedded in this experiment, we (or any entity capable of expressing what can be judged as "moral" behavior) would eventually express this embedded moral behavior. It would be a rather fascinating argument to justify otherwise: given that the "universe" as described has as part of its makeup a particular moral code, would it be reasonable to conclude that "moral" creatures would come into existence that continue to show an arbitrary collection of moral behaviors despite there being a supposed "embedded" moral compass? Then what is the meaning of such an embedded property if it is not to be expressed? In the experiment as proposed, the embedded moral direction is either relevant and expressed, or irrelevant and has no bearing on the moral development of such a universe's inhabitants.

The general sentiment expressed about humans possessing morality is really a statement about some evolved behaviors that were selected, the rest (higher-level elaboration on these somewhat "innate" traits) is substantially illusionary. This is not to say that the more "illusionary" extrapolations aren't an important variable in societies, they are, but beyond the physiological and neurological elements, the rest are behaviors culturally and socially tuned to essentially arbitrary values.

comment by Laura__ABJ · 2008-07-01T15:18:00.000Z · LW(p) · GW(p)

Hmm... This whole baby-killing example is making me think...

Knecht: "Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn't do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to like killing babies."

This does seem like what a true amoralist might say... yet, what if the idea of having forgone the opportunity to substantially increase the future happiness of humanity would haunt you for the rest of your existence, which will be quite long... Then the amoralist might decide indeed that the comparative pain of killing the baby was less than suffering this protracted agony.

Andy: "It's all well and good to speak of utility, but next time, it could be you! How does it come to be that each individual has forfeited control over her/his own destiny? Is it just part of "the contract?"

From how I feel about the world and the people in it now, I would hope I would have the strength to accept my fate and die, if die I must... However, since I really don't believe there is anything 'after,' all utility would drop to 0 if I were to die. However, I think I might very well be tortured for the rest of my existence by the thought that my existence was the source of torture to so many. This would be negative utility. I can conceive of not wanting to live anymore. I honestly can't say what I would do if asked to make this sacrifice. What would you do, if it was your life the AI asked you to end?

Laura: "I would need to be pretty fucking sure it really was both friendly and able to perform such calculations before I would kill anyone at its command."

I know I wrote this, but I've been thinking about it. Generally this is true, but we mustn't rationalize inaction by insufficiency of data when probabilistically we have very good reason to believe in the correctness of a conclusion. Be a man, or an adult rather, and take responsibility for the possibility that you may be wrong.

Maybe this is what it is to be a Man/Woman. This is why I was so very impressed with Leonidas and his wife: their ability to make very difficult, unsavory decisions with very good reasons to believe they were correct, but still in the face of uncertainty... Leonidas defied fate; his wife, society. Which was more difficult?

OTOH, we can think of King Agamemnon and his ultimate sacrifice of his daughter Iphigenia, demanded by the gods in order to get the ships to set sail. While he clearly HAD to do this under the premise that he should go to war with Troy, Greek literature seems to be highly critical of this decision and of whether the war should ever have been fought... If our 'super-intelligent,' 'friendly' AI were but the Greek gods unto us, I don't think I would want to be at its moral mercy... I am not a toy.

The Greeks really did get it all right. There have been so few insights into human nature since...

comment by Caledonian2 · 2008-07-01T15:33:51.000Z · LW(p) · GW(p)

The Greeks really did get it all right.
No, they were simply less wrong than most on a limited number of memorable topics.

comment by Joseph_Knecht · 2008-07-01T16:35:49.000Z · LW(p) · GW(p)

Laura ABJ: To expand on the text you quoted, I think that killing babies is ugly, and therefore would not do it without sufficient reason, which I don't think the scenario provides. The ugliness of killing babies doesn't need a moral explanation, and the moral explanation just builds on (and adds nothing but a more convenient way of speaking about) the foundation of aversion, no matter how it's dressed up and made to look like something else.

The idea is not compelling to me and so would not haunt me forever, because like I said, I'm not yet convinced that some X number of refreshing breezes on a hot day is strictly equivalent in some non-arbitrary sense to murdering a baby, and X+1 breezes is "better" in some non-arbitrary sense.

However, the idea of being haunted forever would bother me now if I thought it likely that my future self would think I made the wrong decision, but that implies that I have more knowledge and perspective now than I actually have (in order to know enough to think it likely that I'll be haunted). All I can do is make what I think is the best decision given what I know and understand now, so I don't see that I could think it likely that I would be haunted by what I did. Of course, I could make a terrible mistake, not having understood something I will later think I should have understood, and I might regret that forever, but I wouldn't realize that at the time and I wouldn't think it likely.

comment by Laura__ABJ · 2008-07-01T17:37:36.000Z · LW(p) · GW(p)

I realize that just because I am fairly confident I wouldn't suffer terribly from killing the baby if my knowledge were fairly complete, that doesn't mean I can say the same for all people. People's utility functions differ, as do their biological and learned aversions to certain types of violence. The cognitive dissonance created by being presented with such a situation might be too great for some, causing them to break down psychologically and rationalize their way out of the decision any way they could. What if we upped the stakes and took it from some anonymous baby painlessly being snuffed out, to your own adult child being tortured?

Look at our dear friend C-. I was not thinking about him when I wrote my last post, but for those of you who know the situation, he seems to be the embodiment of this dilemma. What becomes of the man who, knowing the gravity of the situation and the most likely outcome, still decides NOT to kill the baby???

comment by Sebastian_Hagen2 · 2008-07-01T18:45:00.000Z · LW(p) · GW(p)

Hal Finney:
Why doesn't the AI do it verself? Even if it's boxed (and why would it be, if I'm convinced it's an FAI?), at the intelligence it'd need to make the stated prediction with any degree of confidence, I'd expect it to be able to take over my mind quickly. If what it claims is correct, it shouldn't have any qualms about doing that (taking over one human's body for a few minutes is a small price to pay for the utility involved).
If this happened in practice I'd be confused as heck, and the alleged FAI being honest about its intentions would be pretty far down on my list of hypotheses about what's going on. I'd likely stare into space dumbfounded until I found some halfway-likely explanation, or the AI decided to take over my mind after all.

comment by grendelkhan · 2008-07-01T19:21:00.000Z · LW(p) · GW(p)

By gum, I'm amazed that fifty comments have gone by and nobody's mentioned future toddler chopper Vox Day. Sure, it's nearly a year and a half old, but if anyone had any doubt that there are apparently functioning humans out there who would tick the second box and fill in "until my arm got tired", there it is.

The Euthyphro hypothetical does remind me a bit of the Ticking Time Bomb--a thoroughly unrealistic situation designed to cause the quiz-taker to draw a conclusion about more realistic situations that they wouldn't have come to otherwise.

comment by Dan_Lewis · 2008-07-01T21:13:00.000Z · LW(p) · GW(p)

If there were no such thing as green, what color would green things be? MU.

Kierkegaard talked about all this in Fear and Trembling a long time ago. Was Abraham sacrificing Isaac immoral? If you call morality the universal of that time, then you have to suppose it was. But in the story we know that God told Abraham to do it.

The universal has no way to judge anyone who defies it in favor of their own experience of God. No argument on the basis of universal morality can call Kierkegaard's knight of faith back from the quest.

"And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?

"Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

"Maybe you should just do that?

"I mean... if an external objective morality tells you to kill people, why should you even listen?"

This is a restatement of ethical egoism. It assumes that what you want is right, so you should just do what you want. If an external objective morality tells you to save people's lives, why should you even listen? Morality is not the only target in the sights of this argument; amorality takes it just as badly in exactly the opposite fashion. (That doesn't mean that you have to pick something besides morality and amorality, it just means the argument is wrong.)

If your desires are messed up (any pedophiles reading the blog?), there is no prospect of change. No one can persuade you out of your egoist position, no argument works. But that's evidence that it's the wrong position to be in, unfalsifiable and unjustifiable.

If you believed this, you might as well say, I am perfect. The world can teach me nothing. I can receive input but never change. It turns out you're incorrect, that's all.

comment by Jay · 2008-07-03T10:55:00.000Z · LW(p) · GW(p)

Would you kill babies if it was inherently the right thing to do?

If it were inherently the right thing to do, then I wouldn't be here. Someone would have killed me when I was a baby.

Replies from: themusicgod1
comment by themusicgod1 · 2017-06-05T01:53:14.122Z · LW(p) · GW(p)

This assumes that the people around you generally do the right thing. If you operate under the alternative assumption (which is much more reasonable) you would likely still be alive.

comment by Jimmy_D · 2008-07-29T08:39:00.000Z · LW(p) · GW(p)

There is no morality; it is a fiction to be discarded alongside god and rights. What informs our actions is the trifecta of self-interest, emotion, and social expectation. Our upbringing and later education shape which of these is given more weight when we make our decisions.

There simply is no moral property to an action or consequence. There is no natural property that is moral. There is no discoverable law or property that can inform an ought. Our "ethical intuitions" are simply emotional responses. No one can say "killing is wrong", we can only say "I disapprove of killing".

Morality is a fiction. Your example of saving lives regardless of morality shows that our emotional makeup and social background inform our decisions about which actions are preferable to others.

There is no ought, there is no moral attribute, there are no moral laws, and there is no empirical evidence of morality's existence. Morality is to psychology as alchemy is to chemistry.

comment by byrnema · 2010-04-14T01:19:03.764Z · LW(p) · GW(p)

I don't get it. If killing babies was inherently good, I would kill them, sure. It's not like killing babies is inherently bad.

Or did you think that I thought so?

I understand that in many usual contexts killing babies would seem bad to me, because I was given instructions to take care of babies (generally) by evolution, only because having these instructions made it more likely for me to exist and have those instructions. So what? Is existing and having instructions inherently good?

In general, the Socratic questions in this sequence don't seem to work for me... Is this because I'm not answering in a way I was expected to?

Replies from: wedrifid
comment by wedrifid · 2010-04-14T01:39:41.787Z · LW(p) · GW(p)

I don't get it.

Maybe the problem is that you do already get it so don't particularly benefit from the exercise.

comment by red75 · 2010-06-08T08:12:28.586Z · LW(p) · GW(p)

If I AM a utility function maximizer and I proved that killing the baby reliably maximizes it, then sure, I'll kill.

But I am not. My poorly defined, unclosable morality meter will break in and demand a revision of that nice and consistent utility function I've based my decision on. And so the moral agonizing begins.

Answer: I don't know, and it will be painful work to decide: weighing all the pros and cons, building and checking new utility functions, rewriting the moral framework itself...
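
Read as an agent design, the comment above describes a two-part architecture: a plain expected-utility maximizer whose chosen action can be vetoed by a separate, cruder aversion check that forces a revision. Here is a minimal Python sketch of that structure; the actions, utilities, aversion scores, and threshold are all invented for illustration and are not from the comment.

```python
# Toy two-component agent: a utility maximizer whose output can be vetoed
# by a separate "morality meter". All names and numbers are made up.

def expected_utility(action, utilities):
    """Look up the stipulated expected utility of an action."""
    return utilities[action]

def morality_meter(action):
    """A blunt aversion score; anything above the threshold triggers a veto."""
    aversion = {"do_nothing": 0.0, "kill_baby": 1.0, "donate": 0.1}
    return aversion[action]

def decide(utilities, aversion_threshold=0.5):
    # Step 1: the "nice and consistent" utility function picks its favorite.
    best = max(utilities, key=lambda a: expected_utility(a, utilities))
    # Step 2: the morality meter breaks in and may demand a revision.
    if morality_meter(best) > aversion_threshold:
        # Crude stand-in for "rewriting" the utility function:
        # re-decide over the remaining options.
        remaining = {a: u for a, u in utilities.items() if a != best}
        return decide(remaining, aversion_threshold) if remaining else None
    return best

if __name__ == "__main__":
    # Stipulated utilities in which the horrible action wins on utility alone.
    utilities = {"do_nothing": 0.0, "kill_baby": 10.0, "donate": 2.0}
    print(decide(utilities))  # 'donate': the veto forces the next-best option
```

The recursion is only a stand-in for the "painful work" of rebuilding the utility function; the point is the two-layer structure, not the numbers.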

comment by MrCheeze · 2010-12-12T18:31:26.644Z · LW(p) · GW(p)

So... the correct answer is to dissolve the question, yes?

comment by EditedToAdd · 2011-01-02T19:57:45.725Z · LW(p) · GW(p)

I like to think of this as extreme artificiality. Humans have always attempted to either ignore or go against certain natural elements in order to flourish. It was never this fundamental, though. Logic has, at best, managed to straighten us out and make things better for us, and at worst it reaches conclusions that are of no practical consequence. If it ever told us that killing babies is good, we would of course have to check all the consequences of ignoring that logic. If we get lucky, it's a logic that doesn't extend very far and does not have many consequences, making it okay for us to exercise our extreme artificiality and set that logic aside. If we don't get lucky, it's a logic whose neglect branches out into many severely negative consequences (worse than killing babies), and then, weighing those consequences, we would have to kill babies.

comment by AnthonyC · 2011-03-28T17:10:11.453Z · LW(p) · GW(p)

If it were revealed to me that, say, the Aztecs were right, their gods are real, and the One True Religion, then I believe it would be my duty to defy their will, and reject their plan for mankind. Power does not grant moral authority, even if it is the power that was used to make the world as it is.

Would I be brave enough to do it in practice? I have no idea, but I think it helps that I'm thinking about it beforehand.

comment by buybuydandavis · 2011-09-26T10:41:20.622Z · LW(p) · GW(p)

What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?

That's an awesome question. I'm going to have to steal that one.

comment by [deleted] · 2012-02-08T18:39:17.622Z · LW(p) · GW(p)

I find it funny that many of the people here were pretty much freaked out by the idea of "objective morality built into the fabric of the universe" not really mattering for humans, yet when it comes to mythology they have no problem criticizing Abraham for being willing to sacrifice his son because God told him to.

comment by Bart119 · 2012-05-31T20:24:12.212Z · LW(p) · GW(p)

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example. He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

Does anyone have sources to support this conclusion about Kass's views? I tracked down a transcript of an interview he gave that was cited on a longevity website, but it doesn't support that characterization at all. He does express concerns about greatly increased lifespans, but makes clear that he sees both sides. He opposed regulation of aging research:

http://www.sagecrossroads.net/files/transcript13.pdf

comment by CronoDAS · 2012-09-08T04:31:25.372Z · LW(p) · GW(p)

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

I'm reminded of one of Bill Watterson's Calvin and Hobbes strips:

Calvin: I'm at peace with the world. I'm completely serene.
Hobbes: Why is that?
Calvin: I've discovered my purpose in life. I know why I was put here and why everything exists.
Hobbes: Oh really?
Calvin: Yes. I am here so everyone can do what I want.
Hobbes: (rolling eyes) It's nice to have that cleared up.
Calvin: Once everybody accepts it, they'll be serene too!

comment by Squark · 2013-03-16T20:40:46.976Z · LW(p) · GW(p)

What if the structure of the universe says to do something horrible?

If the "structure of universe" is something mathematical (e.g. the prime number theorem) then it's meaningless to ask "what if the structure of the universe says X" unless it truly says X. Assuming it says something different from what it really says immediately leads to a logical contradiction which allows deducing anything at all

If you could write the stone tablet yourself, what would it say?

You're suggesting that we should trust our moral intuition instead of looking for a fundamental moral principle. But my moral intuition is telling me to look for a fundamental moral principle. Apparently I'm the only one with this kind of intuition, judging by:

and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe

comment by Jiro · 2015-09-11T21:37:55.158Z · LW(p) · GW(p)

Responding to old post:

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8-14, the Biblical story of Joshua's battle in Jericho:

If you ask a question to schoolchildren, you have to take into consideration that children are supposed to obey authority figures. And not only because the authority figures have power, but because children don't know and can't comprehend many important things about the world, and that makes it a good idea for children to put little weight on their own conclusions and a lot of weight on what authority figures say.

God, of course, is supposed to be an authority on right and wrong. Fundamentally, children thinking "God says that I should kill lots of people so I should, even though I think that is wrong because of X" is no different than children thinking "Mom says that I should look both ways before crossing the street, so I should, even though I think that is wrong because of Y". The child would be wrong in the first case and right in the second, but he'd be following the same policy in both cases, and this policy is generally beneficial even though it fails this one time.

comment by waveman · 2016-06-27T04:36:39.101Z · LW(p) · GW(p)

Link to "virtue which is nameless" is broken. Probably should be http://www.yudkowsky.net/rational/virtues/

comment by TheAncientGeek · 2016-07-17T07:00:59.470Z · LW(p) · GW(p)

The idea of a Tablet that simply states moral truths without explanation (without even the backing of an authority, as in divine command theory) is a form of ethical objectivism that is hard to defend, but the difficulty doesn't generalise to all ethical objectivism. For instance, if objectivism works in a more math-like way, then a counterintuitive moral truth would be backed by a step-by-step argument leading the reader to the surprising conclusion, in the way a reader of maths is led to surprising conclusions such as the Banach-Tarski paradox. The Tablet argument shows, if anything, that truth without justification is a problem, but that is not unique to ethical objectivism.

For instance, consider a mathematical Tablet that lists a series of surprising theorems without justification. That reproduces the problem without bringing in ethics at all.

Replies from: dxu
comment by dxu · 2016-07-18T16:20:55.325Z · LW(p) · GW(p)

How do you get a statement with "shoulds" in it using pure logical inference if none of your axioms (the laws of physics) have "shoulds" in them? And if the laws of physics have "shoulds" in them, how is that different from having a tablet?

Replies from: entirelyuseless, TheAncientGeek
comment by entirelyuseless · 2016-07-19T04:49:28.289Z · LW(p) · GW(p)

How many axioms do you have? Language has thousands of words in it, and logical inference will never result in a statement using words that were not in the axioms.

Notice that this doesn't prevent us from knowing thousands of true things and employing a vocabulary of thousands of words.

Replies from: dxu
comment by dxu · 2016-07-19T19:01:49.816Z · LW(p) · GW(p)

Sorry, but I'm not sure what your comment has to do with mine. Please expand.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-20T04:51:44.194Z · LW(p) · GW(p)

You asked, "How do you get a statement" etc. I was answering that. In the same way we get all our other statements.

Replies from: dxu
comment by dxu · 2016-07-20T18:21:36.725Z · LW(p) · GW(p)

So, just to be clear, I was objecting to this part of TheAncientGeek's comment:

For instance, if objectivism works in a more math-like way, then a counterintuitive moral truth would be backed by a step-by-step argument leading the reader to the surprising conclusion, in the way a reader of maths is led to surprising conclusions such as the Banach-Tarski paradox.

My comment was an attempt to point out (in a rhetorical way) that math requires axioms, and you can't deduce something your axioms don't imply. After all, there are no universally compelling arguments, and in the case of morality, unless you specifically choose your axioms to have "shoulds" in them from the very start, you can't deduce "should" statements from them (although that doesn't stop some people from trying). You can, of course, have your own personal morality that you adhere to (that's the part where you choose your axioms to have "shoulds" in them from the beginning), but that's a fact about you, not about the universe at large. To claim otherwise is to claim that the laws of physics themselves have moral implications, which takes us back to moral realism (i.e. an external tablet of morality).

Your comment is true, of course, but it seems irrelevant to my original objection.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-21T11:36:47.377Z · LW(p) · GW(p)

It is not irrelevant. Physics does not contain axioms that have the word "apple" in them, and so you cannot logically go from the axioms of physics to "apples tend to fall if you drop them." That does not prevent you from making a reasonable argument that if the axioms of physics are true, then apples will fall, and it does not prevent you from arguing for morality.

Replies from: dxu
comment by dxu · 2016-07-21T16:28:23.352Z · LW(p) · GW(p)

This is an equivocation. "Apple" is a term we use to refer to a large collection of atoms arranged in a particular manner. The same goes for the word "bridge" that you mentioned in your other comment. The fact that we can talk about such collections of atoms and refer to them using shorthands ("apple", "bridge", etc.) does not change the fact that they are still made of atoms, and hence subject to the laws of physics. This fact has precisely no bearing on the issue of whether it is possible to deduce morality from physics.

EDIT: Speaking of whether it's possible to deduce morality from physics, I actually already linked to (what in my mind is) a fairly compelling argument that it's not, but I note that you've (unsurprisingly) neglected to address that argument entirely.

Replies from: entirelyuseless, ChristianKl
comment by entirelyuseless · 2016-07-22T03:06:37.761Z · LW(p) · GW(p)

"Apple" is not used to refer to a "large collection of atoms" etc. You believe that apples are large collections of atoms; but that is not the meaning of the word. So you are making one of the same mistakes here that you made in the zombie argument.

comment by ChristianKl · 2016-07-22T09:35:52.709Z · LW(p) · GW(p)

People spoke of apples before they knew anything about atoms. Someone discovered at some point that the entities we call apples are made out of atoms.

If I had a teleporter and exchanged the atoms one by one with other atoms, it would still be the same apple. Especially when it comes to bridges, I think there are actual bridges that have had a nearly total exchange of atoms but are still considered to be the same bridge.

Replies from: dxu
comment by dxu · 2016-07-26T19:43:06.810Z · LW(p) · GW(p)

Your comment is true, but it doesn't address the original issue of whether it is possible to deduce morality from physics. If your intent was to provide a clarification, that's fine, of course.

comment by TheAncientGeek · 2016-07-20T15:36:28.836Z · LW(p) · GW(p)

How do you get a statement about how you should build a bridge so it doesn't fall down?

Replies from: dxu
comment by dxu · 2016-07-20T18:11:50.325Z · LW(p) · GW(p)

Presumably, you get such a statement from the laws of physics, which allow you deduce things about quantities like force, stress, gravity, etc. I see no evidence that the laws of physics allow you to deduce similar things about morality.
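
To make the bridge case concrete: the physics here is purely descriptive (a bending-stress formula), and a "should" only appears once a goal is added ("the beam must not exceed its allowable stress"). Below is a small Python sketch with entirely hypothetical numbers, using the standard result that the peak bending stress of a simply supported rectangular beam under a central point load is 3PL / (2bh²).

```python
# Descriptive physics: peak bending stress in a simply supported rectangular
# beam of span L (m), width b (m), depth h (m), under a central point load
# P (N) is sigma = 3 * P * L / (2 * b * h**2).
# The prescriptive "should" only appears once we add a goal: keep sigma
# below the material's allowable stress. All numbers below are hypothetical.

def peak_bending_stress(P, L, b, h):
    """Maximum bending stress (Pa) for a central point load on a simply supported beam."""
    return 3 * P * L / (2 * b * h**2)

def should_use_this_beam(P, L, b, h, sigma_allow):
    """Goal + description => recommendation: is this beam acceptable for the load?"""
    return peak_bending_stress(P, L, b, h) <= sigma_allow

if __name__ == "__main__":
    P = 10_000            # N, hypothetical load
    L = 4.0               # m, span
    b, h = 0.10, 0.20     # m, cross-section
    sigma_allow = 20e6    # Pa, hypothetical allowable stress for the material
    print(peak_bending_stress(P, L, b, h))            # 15,000,000 Pa, i.e. ~15 MPa
    print(should_use_this_beam(P, L, b, h, sigma_allow))  # True: "you may build it this way"
```

Whether an analogous move works for morality, i.e. whether there is a non-arbitrary goal to plug in, is exactly what the rest of this exchange disputes.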

Replies from: entirelyuseless, TheAncientGeek
comment by entirelyuseless · 2016-07-21T11:38:20.744Z · LW(p) · GW(p)

No, because the axioms of physics do not contain the word "bridge."

(Also, note that TheAncientGeek deliberately included the word "should" in his bridge statement, so you just effectively contradicted yourself by saying that a statement involving "should" can be deduced from physics.)

comment by TheAncientGeek · 2016-07-22T15:05:57.193Z · LW(p) · GW(p)

You seem to have conceded that you can get shoulds out of descriptions. The trick seems to be that if there is something you want to achieve, there are things you should and should not do to achieve it.

If the purpose of morality is, for instance, to achieve cooperative outcomes and avoid conflict over resources, then there are things people should and shouldn't do to support that, although something like game theory, rather than physics, would supply the details.

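The closing suggestion can be illustrated with the most standard toy model: fix a purpose (say, maximizing joint payoff in a one-shot interaction) and the game's structure immediately yields "should"-shaped advice. A minimal Python sketch follows, with the usual invented prisoner's-dilemma payoffs; it illustrates only the goal-plus-game-theory move, not any particular moral theory.

```python
# Toy illustration: once a purpose is fixed ("achieve cooperative outcomes",
# read here as maximizing total payoff), the game's structure tells you what
# the players "should" do to serve it. Payoffs are the usual invented
# prisoner's-dilemma numbers: (my payoff, your payoff).

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_joint_action():
    """Which pair of actions best serves the stated purpose (max total payoff)?"""
    return max(PAYOFFS, key=lambda pair: sum(PAYOFFS[pair]))

def best_selfish_reply(their_action):
    """What a purely self-interested player 'should' do against a fixed opponent."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_action)][0])

if __name__ == "__main__":
    print(best_joint_action())              # ('cooperate', 'cooperate'): the cooperative "should"
    print(best_selfish_reply("cooperate"))  # 'defect': a different goal yields a different "should"
```

The two printouts make the point compactly: the "shoulds" are relative to the purpose that was plugged in.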

comment by themusicgod1 · 2017-06-05T01:58:52.461Z · LW(p) · GW(p)

This post is generalizable: even if you don't think it's wrong to kill people as a general rule, there's probably some other act #G_30429 that you don't think would be appropriate, and the point still holds: Rowhammering the bit that says "Don't do #G_30429" is probably not as impossible as it seems in the long run.

(Meta: when thinking about this, I found it difficult to recall all of the applicable arguments I've learned in moral philosophy over the past 16 years of trying. I knew where you were going, roughly, but it was like traveling through a city I haven't been to in years, in terms of whether or not I recognized the territory. This gave me an extra impression of "this bit could be easily flipped".)

comment by vedrfolnir · 2018-04-08T17:29:28.731Z · LW(p) · GW(p)

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality [? · GW].  It is the courage of a theist who goes against what they believe to be the Will of God [? · GW], choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer... 

I'm a little late here, but this sounds a lot like Corneliu Codreanu's line that the truest martyr of all is one who goes to Hell for his country.