Jews and Nazis: a version of dust specks vs torture

post by shminux · 2012-09-07T20:15:26.518Z · LW · GW · Legacy · 151 comments

This is based on a discussion in #lesswrong a few months back, and I am not sure how to resolve it.

Setup: suppose the world is populated by two groups of people: one just wants to be left alone (labeled Jews); the other hates the first group with a passion and wants them dead (labeled Nazis). The second group is otherwise just as "good" as the first one (loves their relatives and their country, and is known to be in general quite rational). They just can't help but hate the other guys (this condition is to forestall objections like "Nazis ought to change their terminal values"). Maybe the shape of Jewish noses just creeps the hell out of them, or something. Let's just assume, for the sake of argument, that there is no changing that hatred.

Is it rational to exterminate the Jews to improve the Nazis' quality of life? Well, this seems like a silly question. Of course not! Now, what if there are many more Nazis than Jews? Is there a number large enough that exterminating the Jews would be a net positive utility for the world? Umm... Not sure... I'd like to think probably not; human life is sacred! What if some day their society invents immortality? Then every death carries an extremely large (infinite?) negative utility!

Fine then, no extermination. Just send them all to concentration camps, where they will suffer in misery and probably have a shorter lifespan than they would otherwise. This is not an ideal solution from the Nazi point of view, but it makes them feel a little bit better. And now the utilities are unquestionably comparable, so if there are billions of Nazis and only a handful of Jews, the overall suffering decreases when the Jews are sent to the camps.
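Here is a minimal sketch of the aggregation this argument relies on; every number is invented purely for illustration:

```python
def net_utility_change(num_haters, relief_per_hater, num_victims, harm_per_victim):
    """Total utility change from harming the minority, under naive summation."""
    return num_haters * relief_per_hater - num_victims * harm_per_victim

# A billion haters each gain a tiny bit of relief; a thousand victims each suffer a lot.
print(net_utility_change(num_haters=10**9, relief_per_hater=0.001,
                         num_victims=1000, harm_per_victim=100.0))
# 900000.0 -- the sum flips positive once the majority is large enough.
```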

This logic is completely analogous to that in the dust specks vs torture discussions; only my "little XML labels", to quote Eliezer, make it more emotionally charged. Thus, if you are a utilitarian anti-specker, you ought to decide that, barring changing the Nazis' terminal value of hating Jews, the rational behavior is to herd the Jews into concentration camps, or possibly even exterminate them, provided there are enough Nazis in the world who benefit from it.

This is quite a repugnant conclusion, and I don't see a way of fixing it the way the original one is fixed (to paraphrase Eliezer, "only lives worth celebrating are worth creating").

EDIT: Thanks to CronoDAS for pointing out that this is known as the 1000 Sadists problem. Once I had this term, I found that lukeprog has mentioned it on his old blog. 

 

151 comments

Comments sorted by top scores.

comment by CronoDAS · 2012-09-08T01:46:17.866Z · LW(p) · GW(p)

What is sometimes called "the 1000 Sadists problem" is a classic "problem" in utilitarianism; this post is another version of it.

Here's another version, which apparently comes from this guy's homework:

Suppose that the International Society of Sadists is holding its convention in Philadelphia and in order to keep things from getting boring the entertainment committee is considering staging the event it knows would make the group the happiest, randomly selecting someone off the street and then torturing that person before the whole convention. One member of the group, however, is taking Phil. 203 this term and in order to make sure that such an act would be morally okay insists that the committee consult what a moral philosopher would say about it. In Smart's essay on utilitarianism they read that "the only reason for performing an action A rather than an alternative action B is that doing A will make mankind (or, perhaps, all sentient beings) happier than will doing B." (Smart, p. 30) This reassures them since they reason that the unhappiness which will be felt by the victim (and perhaps his or her friends and relatives) will be far outweighed by the happiness felt by the large crowd of sadists, especially since the whole thing will be kept strictly secret (as of course the whole convention has to be every year anyway). So they conclude that the best, indeed morally right, thing to do is go ahead and torture this person, and set off to do it.

Write a short paper in which you explain and critically evaluate what a defender of utilitarianism, for instance Smart’s version of act utilitarianism, could say about this example. Have the sadists misunderstood utilitarianism? Or will a defender of this view just have to accept the sadists' conclusion (and if so, what, if anything, does that say about the theory itself)?

Replies from: V_V
comment by V_V · 2012-09-08T08:34:26.925Z · LW(p) · GW(p)

That's a reverse version of the utility monster scenario.

Act utilitarianism always leads to these kinds of paradoxes. I don't think it can be salvaged.

comment by TheOtherDave · 2012-09-07T22:05:29.244Z · LW(p) · GW(p)

(shrug) Sure, I'll bite this bullet.

Yes, if enough people are made to suffer sufficiently by virtue of my existence, and there's no way to alleviate that suffering other than my extermination, then I endorse my extermination.
To do otherwise would be unjustifiably selfish.

Which is not to say I would necessarily exterminate myself, if I had sufficiently high confidence that this was the case... I don't always do what I endorse.

And if it's not me but some other individual or group X that has that property in that hypothetical scenario, I endorse X's extermination as well.

And, sure, if you label the group in an emotionally charged way (e.g., "Nazis exterminating Jews" as you do here), I'll feel a strong emotional aversion to that conclusion (as I do here).

Replies from: J_Taylor, Dolores1984, Viliam_Bur
comment by J_Taylor · 2012-09-07T22:54:18.807Z · LW(p) · GW(p)

Yes, if enough people are made to suffer sufficiently by virtue of my existence, and there's no way to alleviate that suffering other than my extermination, then I endorse my extermination. To do otherwise would be unjustifiably selfish.

Be careful, TheOtherDave! Utility Monsters are wily beasts.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-08T00:07:39.251Z · LW(p) · GW(p)

(nods) Yup.

A lot of the difficulty here, of course, as in many such scenarios, is that I'm being asked to consider the sufferers in this scenario people, even though they don't behave like any people I've ever known.

That said, I can imagine something that suffers the way they do and that I still care about alleviating the suffering of.

The threshold between what I care about and what I don't is, as always, pretty friggin arbitrary.

comment by Dolores1984 · 2012-09-07T23:10:15.993Z · LW(p) · GW(p)

Really? Screw that. If my existence makes other people unhappy, I'm entirely fine with that. It's not any of their business anyway. We can resolve the ethical question the old-fashioned way. They can try to kill me, and I can try to kill them right back.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-08T00:57:12.755Z · LW(p) · GW(p)

It's not any of their business anyway.

If things that make me unhappy aren't my business, what is my business?

But whether your existence makes me unhappy or not, you are, of course, free not to care.
And even if you do care, you're not obligated to alleviate my unhappiness. You might care, and decide to make me more unhappy, for whatever reasons.

And, sure, we can try to kill each other as a consequence of all that.
It's not clear to me what ethical question this resolves, though.

comment by Viliam_Bur · 2012-09-08T19:38:36.451Z · LW(p) · GW(p)

Here is a more difficult scenario:

I am a mind uploaded to a computer and I hate everyone except me. Seeing people dead would make me happy; knowing they are alive makes me suffer. (The suffering is not big enough to make my life worse than death.)

I also have another strong wish -- to have a trillion identical copies of myself. I enjoy my own company, and a trillion seems like a nice number.

What is the Friendly AI, the ruler of this universe, supposed to do?

My life is not worse than death, so there is nothing inherently unethical in me wanting to have a trillion copies of myself, if that is economically available. All those copies will be predictably happy to exist, and even happier to see their identical copies around them.

However, the moment my trillion identical copies exist, their total desire to see everyone else dead will become greater than the total desire of all others to live. So it would be utility-maximizing to kill the others.
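A toy version of that sum, with invented weights -- the per-copy suffering is deliberately kept much smaller than anyone's desire to live, yet the trillion-fold total still wins:

```python
copies = 10**12                 # a trillion identical copies
suffering_per_copy = 1.0        # mild disutility each copy gets from others existing
others = 7 * 10**9              # everyone else
value_of_a_life = 100.0         # each other person's desire to keep living

print(copies * suffering_per_copy)   # 1e12: total suffering of the copies
print(others * value_of_a_life)      # 7e11: total desire of everyone else to live
# The first number is larger, so a naive sum favors killing everyone else.
```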

Should the Friendly AI allow it or disallow it... and what exactly would be its true rejection?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-08T21:06:22.802Z · LW(p) · GW(p)

There are lots of hippo-fighting things I could say here, but handwaving a bit to accept the thrust of your hypothetical... a strictly utilitarian FAI of course agrees to kill everyone else (2) and replace them with copies of you (1). As J_Taylor said, utility monsters are wily beasts.

I find this conclusion intuitively appalling. Repugnant, even. Which is no surprise; my ethical intuitions are not strictly utilitarian. (3)

So one question becomes, are the non-utilitarian aspects of my ethical intuitions something that can be applied on these sorts of scales, and what does that look like, and is it somehow better than a world with a trillion hateful Viliam_Burs (1) and nobody else?

I think it isn't. That is, given the conditions you've suggested, I think I endorse the end result of a trillion hateful Viliam_Burs (1) living their happy lives and the appalling reasoning that leads to it, and therefore the FAI should allow it. Indeed, should enforce it, even if no human is asking for it.

But I'm not incredibly confident of that, because I'm not really sure I'm doing a good enough job of imagining that hypothetical world for the things I intuitively take into consideration to fully enter into those intuitive calculations.

For example, one thing that clearly informs my intuitions is the idea that Viliam_Bur in that scenario is responsible (albeit indirectly) for countless deaths, and ought to be punished for that, and certainly ought not be rewarded for it by getting to inherit the universe. (4) But of course that intuition depends on all kinds of hardwired presumptions about moral hazard and your future likelihood to commit genocide if rewarded for your last genocide and so forth, and it's not clear that any such considerations actually apply in your hypothetical scenario... although it's not clear that they don't, either.

There are a thousand other factors like that.

Does that answer your question?

===
(1) Or, well, a trillion something. I really don't know what I want to say about the difference between one identical copy and a trillion identical copies when it comes to their contribution to some kind of total. This is a major gap in my ethical thinking; I do not know how to evaluate the value of copies; it seems to me that distinctness should matter, somehow. But that's irrelevant here; your scenario retains its power if, instead of a trillion identical copies of you, the FAI is invited to create a group of a trillion distinct individuals who hate everyone outside that group.

(2) Assuming that nobody else also wants a trillion copies of them made and it can't just move us all to Canarsie and not tell you and etc. and etc. All of which is actually pretty critical in practice, and handwaving it away creates a universe fairly importantly different from the one we actually live in, but I accept it for the sake of peace with hippos.

(3) In particular, the identity issue raises its head again. Killing everyone and replacing them with a trillion distinct people who are in some way superior doesn't feel the same to me as replacing them with a trillion copies of one superior person. I don't know whether I endorse that feeling or not. For our purposes here, I can dodge that question as above, by positing a trillion not-quite-identical copies.

(4) I know this, because I'm far less appalled by a similar thought experiment in which you don't want everyone else dead, you plead for their continued survival despite knowing it makes you less happy, and the FAI ignores all of that and kills them all anyway, knowing that you provide greater utility, and your trillion copies cry, curse the FAI's name, and then go on about your lives. All of which changes the important parts of the scenario not at all, but sure does make me feel better about it.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-09T10:25:55.170Z · LW(p) · GW(p)

(1) and (3) -- Actually my original thought was "a trillion in-group individuals (not existing yet) who like each other and hate the out-groups", but then I replaced it with a trillion copies to avoid possible answers like: "if they succeed in killing all out-groups, they will probably split into subgroups and start hating out-subgroups". Let's suppose that the trillion copies, after exterminating the rest of the universe, will be happy. The original mind may even wish to have those individuals created hard-wired to feel like this.

(2) -- What if someone else wants a trillion copies too, but expresses their wish later? Let's assume there are two such hateful entities, call them A and B. Their copies do not exist yet -- so it makes sense to create a trillion copies of A, and kill everyone else including (the single copy of) B; just as it makes sense to create a trillion copies of B and kill everyone else including (the single copy of) A. Maybe the first one who expresses their wish wins. Or it may be decided by considering that a trillion As would be twice as happy as a trillion Bs, so A wins. Which could be fixed by B wishing for ten trillion copies instead.

But generally the idea was that calculations about "happiness for most people" can be manipulated if some group of people desires massive reproduction (assuming their children will mostly inherit their preferences), which gradually increases the weight of that group's wishes.

Even a world ruled by a utilitarian Friendly AI would allow fights between groups, where the winning strategy is to "wish for a situation where it is utilitarian to help us and to destroy our enemies". In such a world, the outside-hating, inside-loving, hugely reproducing groups with preserved preferences would have an "evolutionary advantage", so they would gradually destroy everyone else.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-09T16:11:27.817Z · LW(p) · GW(p)

(nods) I'm happy to posit that the trillion Viliam_Bur-clones, identical or not, genuinely are better off; otherwise of course the entire thing falls apart. (This isn't just "happy," and it's hard to say exactly what it is, but whatever it is I see no reason to believe it's logically incompatible with some people just being better at it than others. In LW parlance, we're positing that Viliam_Bur is much better at having Fun than everybody else. In traditional philosophical terms, we're positing that Viliam_Bur is a Utility Monster.)

Their copies do not exist yet -- so it makes sense to create a trillion copies of A, and kill everyone else including (the single copy of) B

No.

That the copies do not exist yet is irrelevant.
The fact that you happened to express the wish is irrelevant, let alone when you did so.
What matters is the expected results of various courses of action.

In your original scenario, what was important was that the expected result of bulk-replicating you was that the residents of the universe are subsequently better off. (As I say, I reluctantly endorse the FAI doing this even against your stated wishes.) In the modified scenario where B is even more of a Utility Monster than you are, it bulk-replicates B instead. If the expected results of bulk-replicating A and B are equipotential, it picks one (possibly based on other unstated relevant factors, or at random if you really are equipotential).

Incidentally, one of the things I had to ignore in order to accept your initial scenario was the FAI's estimated probability that, if it doesn't wipe everyone else out, sooner or later someone even more utility-monsterish than you (or B) will be born. Depending on that probability, it might not bulk-replicate either of you, but instead wait until a suitable candidate is born. (Indeed, a utilitarian FAI that values Fun presumably immediately gets busy constructing a species more capable of Fun than humans, with the intention of populating the universe with them instead of us.)

But generally the idea was that calculations about "happiness for most people" can be manipulated if some group of people desires massive reproduction (assuming their children will mostly inherit their preferences), which gradually increases the weight of that group's wishes.

Again, calculations about utility (which, again, isn't the same as happiness, though it's hard to say exactly what it is) have absolutely nothing to do with wishes in the sense you're using the term here (that is, events that occur at a particular time). It may have something to do with preferences, to the extent that the FAI is a preference utilitarian... that is, if its calculations of utility are strongly contingent on preference-having entities having their preferences satisfied, then it will choose to satisfy preferences.

Even a world ruled by a utilitarian Friendly AI would allow fights between groups, where the winning strategy is to "wish for a situation where it is utilitarian to help us and to destroy our enemies".

Again, no. Wishing for a situation as a strategic act is completely irrelevant. Preferring a situation might be, but it is very odd indeed to refer to an agent having a strategic preference... strategy is what I implement to achieve whatever my preferences happen to be. For example, if I don't prefer to populate the universe with clones of myself, I won't choose to adopt that preference just because adopting that preference will make me more successful at implementing it.

That said, yes, the world ruled by utilitarian FAI will result in some groups being successful instead of others, where the winning groups are the ones whose existence maximizes whatever the FAI's utility definition is.

In such a world, the outside-hating, inside-loving, hugely reproducing groups with preserved preferences would have an "evolutionary advantage", so they would gradually destroy everyone else.

If they don't have corresponding utility-inhibiting factors, which I see no reason to believe they necessarily would, yes, that's true. Well, not necessarily gradually... they might do so immediately.

Is this important?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-09T17:42:28.301Z · LW(p) · GW(p)

Indeed, a utilitarian FAI that values Fun presumably immediately gets busy constructing a species more capable of Fun than humans, with the intention of populating the universe with them instead of us.

Oh.

I would hope that the FAI would instead turn us into the species most capable of fun. But considering the remaining time of the universe and all the fun the new species will have there, the difference between (a) transforming us and (b) killing us and creating the other species de novo is negligible. The FAI would probably choose the faster solution, because it would allow more total fun-time for the superhappies. If there are multiple possible superhappy designs, equivalent in their fun-capacity, the FAI would choose the one that cares about us the least, to reduce their possible regret over our extinction. Probably something very dissimilar to us (as much as the definition of "fun" allows). They would care about us less than we care about the dinosaurs.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-09T18:45:28.095Z · LW(p) · GW(p)

Faster would presumably be an issue, yes. Minimizing expected energy input per unit Fun output would presumably also be an issue.

Of course, all of this presumes that the FAI's definition of Fun doesn't definitionally restrict the experience of Fun to 21st-century humans (either as a species, or as a culture, or as individuals).

Unrelatedly, I'm not sure I agree about regret. I can imagine definitions of Fun such that maximizing Fun requires the capacity for regret, for example.

comment by prase · 2012-09-07T21:22:38.305Z · LW(p) · GW(p)

Well, this can easily become a costly signalling issue when the obvious (from the torture-over-specks supporter's perspective) comment would read "it is rational for the Nazis to exterminate the Jews". I would certainly not like to explain having written such a comment to most people. Claiming that torture is preferable to dust specks in some settings is comparatively harmless.

Given this, you probably shouldn't expect honest responses from a lot of commenters.

if you are a specker, you ought to decide that, barring changing Nazi's terminal value of hating Jews, the rational behavior is to [harm Jews]

The use of "specker" to denote people who prefer torture to specks can be confusing.

Replies from: Kindly, Luke_A_Somers, shminux
comment by Kindly · 2012-09-07T22:38:07.693Z · LW(p) · GW(p)

The use of "specker" to denote people who prefer torture to specks can be confusing.

Let's call them "torturers" instead.

Edit: or "Nazis".

Replies from: Emile
comment by Emile · 2012-09-08T07:12:36.953Z · LW(p) · GW(p)

Wait, are you calling me a Nazi?

comment by Luke_A_Somers · 2012-09-10T12:49:16.073Z · LW(p) · GW(p)

Speck-free?

comment by shminux · 2012-09-08T20:03:16.991Z · LW(p) · GW(p)

Edited, thanks.

comment by Alicorn · 2012-09-07T20:27:08.157Z · LW(p) · GW(p)

This probably would have been better if you'd made it Venusians and Neptunians or something.

Replies from: mrglwrf, shminux, None
comment by mrglwrf · 2012-09-07T20:58:49.183Z · LW(p) · GW(p)

But wouldn't that defeat the purpose, or am I missing something? I understood the offensiveness of the specific example to be the point.

Replies from: palladias, fubarobfusco
comment by palladias · 2012-09-07T21:19:40.462Z · LW(p) · GW(p)

Right, I thought the point was to show that people are viscerally uncomfortable with the result of this line of reasoning and to make them decide whether they reject (a) the reasoning, (b) the discomfort, or (c) the membership of this example in the torture vs. specks class.

comment by fubarobfusco · 2012-09-07T21:10:57.473Z · LW(p) · GW(p)

That's called "trolling", yes?

Replies from: prase
comment by prase · 2012-09-07T21:34:58.022Z · LW(p) · GW(p)

Trolling usually means disrupting the flow of discussion by deliberate offensive behaviour towards other participants. It usually doesn't denote proposing a thought experiment with a possible solution that is likely to be rejected for its offensiveness. But this could perhaps be called "trolleying".

Replies from: shminux
comment by shminux · 2012-09-08T20:09:36.215Z · LW(p) · GW(p)

One of the best ever puns I recall on this forum!

comment by shminux · 2012-09-07T20:37:21.797Z · LW(p) · GW(p)

I've considered using neutral terms, but then it is just too easy to say "well, it just sucks to be you, Neptunian, my rational anti-dust-specker approach requires you to suffer!"

Replies from: orthonormal, Raemon
comment by orthonormal · 2012-09-07T21:01:30.209Z · LW(p) · GW(p)

It's a bad sign if you feel your argument requires violating Godwin's Law in order to be effective, no?

Replies from: Dolores1984, DanArmak, shminux
comment by Dolores1984 · 2012-09-07T21:28:47.778Z · LW(p) · GW(p)

Not strictly. It's still explicitly genocide with Venusians and Neptunians -- it's just easier to ignore that fact in the abstract. Connecting it to an actual genocide causes people to reference their existing thinking on the subject. Whether or not that existing thinking is applicable is open for debate, but the tactic's not invalid out of hand.

Replies from: prase
comment by prase · 2012-09-07T21:59:26.655Z · LW(p) · GW(p)

The supposed positive (making the genocide easier to imagine) is however outweighed by a big negative of the connotations brought by the choice of terminology. It was certainly not true about the Nazis that their hatred towards the Jews was an immutable terminal value and the "known to be in general quite rational" part is also problematic. Of course we shouldn't fight the hippo, but it is hard to separate the label "Nazi" from its real meaning.

As a result, the replies to this post are going to be affected by three considerations: 1) the commenters' stance towards the speck/torture problem, 2) their ability to accept the terms of a hypothetical while ignoring most connotations of the terminology used, and 3) their courage to say something which may be interpreted as support for Nazism by casual readers. Which makes the post pretty bad as a thought experiment intended to probe only the first question.

Replies from: Dolores1984
comment by Dolores1984 · 2012-09-07T22:11:09.635Z · LW(p) · GW(p)

I suppose that's fair. I do think that trying to abstract away the horror of genocide is probably not conducive to a good analysis, either, but there may be an approach better suited to this that does not invoke as much baggage.

comment by DanArmak · 2012-09-08T13:32:56.570Z · LW(p) · GW(p)

It's a bad sign if you feel your ethics don't work (or shouldn't be talked about) in an important, and real, case like the Nazis vs. Jews.

Replies from: orthonormal
comment by orthonormal · 2012-09-08T14:55:25.902Z · LW(p) · GW(p)

You need to be able to argue against genocide without saying "Hitler wanted to exterminate the Jews." If Hitler hadn't advocated genocide, would it thereby become okay?

Replies from: DanArmak
comment by DanArmak · 2012-09-08T15:56:09.315Z · LW(p) · GW(p)

I'm not saying genocide is bad because Hitler did it. I'm saying it's bad for other reasons, regardless of who does it, and Hitler should not be a special case either way.

In your previous comment you seemed to be saying that a good argument should be able to work without invoking Hitler. I'm saying that a good argument should also be able to apply to Hitler just as well as to anyone else. Using Hitler as an example has downsides, but if someone claims the argument actually doesn't work for Hitler as well as for other cases, then by all means we should discuss Hitler.

comment by shminux · 2012-09-07T21:16:13.989Z · LW(p) · GW(p)

It is also a bad sign if you invoke TWAITW (the Worst Argument In The World). If you check the law, as stated on Wikipedia, it does not cover my post:

"As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1."

The Reductio ad Hitlerum attempts to refute a view because it has been held by Hitler.

You can sort of make your case that it is covered by one of the Corollaries:

Godwin's law applies especially to inappropriate, inordinate, or hyperbolic comparisons of other situations (or one's opponent) with Nazis.

except for the proposed amendment:

Adam Gopnik has proposed an amendment to Godwin's Law. Called 'Gopnik's Amendment', he argues that comparisons to the Nazis is justified if the individual or association embrace a world-view that contains the three characteristics of nationalism, militarism, and hatred of otherness (that includes but is not limited to anti-Semitism).

Which is exactly what I was doing (well, one out of three, so not exactly).

Replies from: orthonormal
comment by orthonormal · 2012-09-07T21:24:50.038Z · LW(p) · GW(p)

As discussed there, pointing out that it has this feature isn't always the worst argument in the world. If you have a coherent reason why this argument is different from other moral arguments that require Godwin's Law violations for their persuasiveness, then the conversation can go forward.

EDIT: (Parent was edited while I was replying.) If "using Jews and Nazis as your example because replacing them with Venusians and Neptunians would fail to be persuasive" isn't technically "Godwin's Law", then fine, but it's still a feature that correlates with really bad moral arguments, unless there's a relevant difference here.

comment by Raemon · 2012-09-07T22:11:40.599Z · LW(p) · GW(p)

This is a bit of a fair point. I guess I'd have written the hypothetical in a few stages to address the underlying issue, which presumably is either:

1) what happens if it turns out humans don't have compatible values?

2) How does our morality handle aliens or transhumans with unique moralities? What if they are almost identical to our own?

I don't think the babyeater story provided an answer (and I don't have one now) but I felt like it addressed the issue in an emotionally salient way that wasn't deceptive.

comment by [deleted] · 2012-09-08T06:44:55.506Z · LW(p) · GW(p)

But then we all know what people's answers would be.

I think his point is that if you took a Martian or Neptunian who happens to really hate Venusians and like Neptunians in his native universe, and presented him with a universe similar to the OP's, he would most likely not behave like the utilitarian he claims to be or wants to be. That's not really much of a problem.

The problem is that he is likely to come up with all sorts of silly rationalizations to cover up his true rejection.

comment by Thomas · 2012-09-07T22:11:36.571Z · LW(p) · GW(p)

It is just a logical conclusion from "dust specks". You can/must do horrible things to a small minority, if the members of a large majority each benefit a little from it.

Another part of the Sequence I reject.

Replies from: SilasBarta
comment by SilasBarta · 2012-09-08T00:17:17.819Z · LW(p) · GW(p)

Wait, what was the conclusion of dust specks? I'm guessing "torture", but then, why is this conclusion so strong and obvious (after the fact)? I had always been on the dust specks side, for a few reasons, but I'd like to know why this position is so ridiculous, and I don't know, even despite having participated in those threads.

Replies from: ShardPhoenix, Thomas, ArisKatsaris
comment by ShardPhoenix · 2012-09-08T02:06:50.213Z · LW(p) · GW(p)

The problem attempts to define the situation so that "torture" is utility maximizing. Therefore if you are a utility maximizer, "torture" is the implied choice. The problem is meant to illustrate that in extreme cases utility maximization can (rightly or wrongly) lead to decisions that are counter-intuitive to our limited human imaginations.

comment by Thomas · 2012-09-08T06:56:00.493Z · LW(p) · GW(p)

For me, the sum of all the pains isn't a good measure for the dreadfulness of a situation. The maximal pain is a better one.

But I don't think it is more than a preference. It is my preference only. Like preferring a strawberry to a blueberry.

For my taste, the dust specks for everybody is better than a horrible torture for just one.

Ask yourself, in which world would you want to be in all the roles.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-10T10:53:53.293Z · LW(p) · GW(p)

For me, the sum of all the pains isn't a good measure for the dreadfulness of a situation. The maximal pain is a better one.

It's worse to break the two legs of a single man than to break one leg each of seven billion people?

If a genie forced you to choose between the two options, would you really prefer the latter scenario?

Ask yourself, in which world would you want to be in all the roles.

I'm sorry, but I really can't imagine the size of 3^^^3. So I really can't answer this question by trying to imagine myself filling all those roles. My imagination just fails at that point. And if anyone here thinks they can imagine it, I think they're deluding themselves.

But if anyone wants to try, I'd like to remind them that in a random sample there'd probably be innumerable quintillions of people that would already be getting tortured for life one way or another. You're not removing all that torture if you vote against torturing a single person more.

Replies from: Thomas
comment by Thomas · 2012-09-10T21:18:45.572Z · LW(p) · GW(p)

It's worse to break the two legs of a single man than to break one leg each of seven billion people?

First, I would eliminate the two-leg breaking. Second, the one-leg breaking.

Of course, an epidemic of one-leg breaking would have other severe effects, like starvation and the like, which should come even before two broken legs.

In a clean abstract world of just a broken leg or two per person, with no further implications, the maximal pain is still the first to be eliminated, if you ask me.

Replies from: CarlShulman, TheOtherDave
comment by CarlShulman · 2012-09-10T21:41:39.031Z · LW(p) · GW(p)

From behind the veil of ignorance, would you rather have a 100% chance of one broken leg, or a 1/7,000,000,000 chance of two broken legs and 6,999,999,999/7,000,000,000 chance of being unharmed?
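A quick expected-harm comparison behind that veil; the disutility weights are assumed, since none are given in the thread:

```python
harm_one_leg = 1.0
harm_two_legs = 2.5          # assume two broken legs is somewhat worse than twice one

certain_one_leg = 1.0 * harm_one_leg                # option A: guaranteed broken leg
lottery_two_legs = harm_two_legs / 7_000_000_000    # option B: 1-in-7-billion chance

print(certain_one_leg, lottery_two_legs)   # 1.0 vs ~3.6e-10
```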

Replies from: Thomas
comment by Thomas · 2012-09-11T07:58:40.379Z · LW(p) · GW(p)

I would opt for two broken legs with a small probability, of course. In your scenario.

But I would choose one broken leg if that meant the total number of two-broken-leg cases would go to zero.

In other words, I would vaccinate everybody (the vaccination causes discomfort) to eliminate a deadly disease like Ebola, which kills few.

What would you do?

Replies from: CarlShulman, TheOtherDave, ArisKatsaris
comment by CarlShulman · 2012-09-11T13:09:04.548Z · LW(p) · GW(p)

But I would choose one broken leg if that meant the total number of two-broken-leg cases would go to zero.

Creatures somewhere in existence are going to face death and severe harm for the foreseeable future. This view then seems inert.

In other words, I would vaccinate everybody (the vaccination causes discomfort) to eliminate a deadly disease like Ebola, which kills few.

What would you do?

There are enough minor threats with expensive countermeasures (more expensive as higher reliability is demanded) that this approach would devour all available wealth. It would bar us from, e.g. traveling for entertainment (risk of death exists whether we walk, drive, or fly). I wouldn't want that tradeoff for society or for myself.

comment by TheOtherDave · 2012-09-11T14:47:34.860Z · LW(p) · GW(p)

I would endorse choosing a broken leg for one person if that guaranteed that nobody in the world had two broken legs, certainly. This seems to have drifted rather far from the original problem statement.

I would also vaccinate a few billion people to avoid a few hundred deaths/year, if the vaccination caused no negative consequences beyond mild discomfort (e.g., no chance of a fatal allergic reaction to the vaccine, no chance of someone starving to death for lack of the resources that went towards vaccination, etc).

I'm not sure I would vaccinate a few billion people to avoid a dozen deaths though... maybe, maybe not. I suspect it depends on how much I value the people involved.

I probably wouldn't vaccinate a few billion people to avoid a .000001 chance of someone dying. Though if I assume that people normally live a few million years instead of a few dozen, I might change my mind. I'm not sure though... it's hard to estimate with real numbers in such an implausible scenario; my intuitions about real scenarios (with opportunity costs, knock-on effects, etc.) keep interfering.

Which doesn't change my belief that scale matters. Breaking one person's leg is preferable to breaking two people's legs. Breaking both of one person's legs is preferable to breaking one of a million people's legs.

comment by ArisKatsaris · 2012-09-11T10:04:37.824Z · LW(p) · GW(p)

In other words, I would vaccinate everybody (the vaccination causes discomfort) to eliminate a deadly disease like Ebola, which kills few.

What would you do?

I don't think you understand the logic behind the anti-speckers' choice. It isn't that we always oppose the greater number of minor disutilities. It's that we believe that there's an actual judgment to be made given the specific disutilities and numbers involved -- you on the other hand just ignore the numbers involved altogether.

I would vaccinate everyone to eradicate Ebola which kills few. But I would not vaccinate everyone to eradicate a different disease that mildly discomforts few only slightly more so than the vaccination process itself.

Replies from: Thomas
comment by Thomas · 2012-09-11T10:16:59.118Z · LW(p) · GW(p)

I don't think you understand the logic behind the anti-speckers' choice.

The logic is: Integrate two evils through time and eliminate that which has a bigger integral!

I just don't agree with it.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-11T10:20:06.944Z · LW(p) · GW(p)

May I ask if you consider yourself a deontologist, a consequentialist, or something else?

comment by TheOtherDave · 2012-09-10T23:46:25.150Z · LW(p) · GW(p)

Agreed that introducing knock-on effects (starvation and so forth) is significantly changing the scenario. I endorse ignoring that.

Given seven billion one-legged people and one zero-legged person, and the ability to wave a magic wand and cure either the zero-legged person or the 6,999,999,999 one-legged people, I heal the one-legged people.

That's true even if I have the two broken legs.
That's true even if I will get to heal the other set later (as is implied by your use of the word "first").

If I've understood you correctly, you commit to using the wand to heal my legs instead of healing everyone else.

If that's true, I will do my best to keep that wand out of your hands.

Replies from: Thomas
comment by Thomas · 2012-09-11T08:22:33.332Z · LW(p) · GW(p)

If I've understood you correctly, you commit to using the wand to heal my legs instead of healing everyone else.

If that's true, I will do my best to keep that wand out of your hands.

So, you would do everything you can to prevent a small-probability but very bad scenario? Wouldn't you just neglect it?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-11T12:55:17.214Z · LW(p) · GW(p)

I would devote an amount of energy to avoiding that scenario that seemed commensurate with its expected value. Indeed, I'm doing so right now (EDIT: actually, on consideration, I'm devoting far more energy to it than it merits). If my estimate of the likelihood of you obtaining such a wand (and, presumably, finding the one person in the world who is suffering incrementally more than anyone else and alleviating his or her suffering with it) increases, the amount of energy I devote to avoiding it might also increase.

comment by ArisKatsaris · 2012-09-08T01:24:39.075Z · LW(p) · GW(p)

Different people had different answers. Eliezer was in favor of torture. I am likewise. Others were in favor of the dust specks.

but I'd like to know why this position is so ridiculous

If you want to know why some particular person called your position ridiculous, perhaps you should ask whatever particular person so called it.

My own argument/illustration is that for something to be called the ethically right choice, things should work out okay if more people chose it, the more the better. But in this case, if a billion people chose dust-specks or the equivalent thereof, then whole vast universes would be effectively tortured. A billion tortures would be tragic, but it pales in comparison to a whole universe getting tortured.

Therefore dust-specks is not a universalizable choice, therefore it's not the ethically right choice.
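A rough sketch of that arithmetic, with stand-in numbers, since 3^^^3 can't be represented and the per-speck and per-torture harms are assumed:

```python
choosers = 10**9            # people independently facing the same kind of choice
speck_harm = 1e-9           # assumed disutility of a single dust speck
torture_harm = 1e6          # assumed disutility of one fifty-year torture

# If every chooser picks "specks", each of the 3^^^3 victims collects one
# speck per chooser, and that per-victim harm is then multiplied by 3^^^3:
per_victim_harm = choosers * speck_harm
print(per_victim_harm)             # 1.0 per victim, times 3^^^3 victims in total

# If every chooser picks "torture", the total cost is a billion tortures:
print(choosers * torture_harm)     # 1e15 -- tragic, but nowhere near 3^^^3-sized
```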

Replies from: SilasBarta
comment by SilasBarta · 2012-09-08T04:32:26.818Z · LW(p) · GW(p)

If you want to know why some particular person called your position ridiculous,

Nobody did; I was replying to the insinuation that it must be ridiculous, regardless of the reasoning.

My own argument/illustration is that for something to be called the ethically right choice, things should work out okay if more people chose it, the more the better. ...

That doesn't work if this is a one-off event, and equating "distributed" with "concentrated" torture requires resolution of the multiperson utility aggregation problem, so it would be hard to consider either route ridiculous (as implied by the comment where I entered the thread).

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-08T09:06:20.923Z · LW(p) · GW(p)

That doesn't work if this is a one-off event,

The event doesn't need to be repeated, the type of event needs to be repeated (whether you'll choose a minor disutility spread to many, or a large disutility to one). And these types of choices do happen repeatedly, all the time, even though most of them aren't about absurdly large numbers like 3^^^3 or absurdly small disutilities like a dust speck. Things that our mind isn't made to handle.

If someone asked you whether it'd be preferable to save a single person from a year's torture, but in return a billion people would have to get their legs broken -- I bet you'd choose to leave the person tortured; because the numbers are a bit more reasonable, and so the actual proper choice is returned by your brain's intuition...

Replies from: SilasBarta
comment by SilasBarta · 2012-09-08T20:16:39.472Z · LW(p) · GW(p)

The event doesn't need to be repeated, the type of event needs to be repeated (whether you'll choose a minor disutility spread to many, or a large disutility to one). And these types of choices do happen repeatedly, all the time, even though most of them aren't about absurdly large numbers like 3^^^3 or absurdly small disutilities like a dust speck.

But that's assuming they are indeed the same type (that the difference in magnitude does not become a difference in type); and if not, it would make a difference whether or not this choice would in fact generalize.

If someone asked you whether it'd be preferable to save a single person from a year's torture, but in return a billion people would have to get their legs broken -- I bet you'd choose to leave the person tortured;

No, I wouldn't, and for the same reason I wouldn't in the dust specks case: the 3^^^3 can collectively buy off the torturee (i.e. provide compensation enough to make the torture preferable given it) if that setup is Pareto-suboptimal, while the reverse is not true.

[EDIT to clarify the above paragraph: if we go with the torture, and it turns out to be Pareto-suboptimal, there's no way the torturee can buy off the 3^^^3 people -- it's a case where willingness to pay collides with the ability to pay (or perhaps, accept). If the torturee, in other words, were offered enough money to buy off the others (not part of the problem), he or she would use the money for such a payment.

In contrast, if we went with the dust specks, and it turned out to be Pareto-suboptimal, then the 3^^^3 could -- perhaps by lottery -- come up with a way to buy off the torturee and make a Pareto-improvement. Since I would prefer we be in situations that we can Pareto-improve away from vs those that can't, I prefer the dust specks.

Moreover, increasing the severity of the disutility that the 3^^^3 get -- say, to broken legs, random murder, etc -- does not change this conclusion; it just increases the consumer surplus (or decreases the consumer "deficit") from buying off the torturee. /end EDIT]
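A toy illustration of that asymmetry, with invented dollar figures and a manageable stand-in for 3^^^3:

```python
population = 10**15          # stand-in for 3^^^3, which no number type can hold
tiny_contribution = 0.01     # each person chips in a cent to spare themselves a speck

pool_for_torturee = population * tiny_contribution
print(pool_for_torturee)     # 1e13 -- an enormous pot to compensate the one torturee

torturee_wealth = 10**6      # whatever one person could plausibly offer in reverse
print(torturee_wealth / population)   # 1e-9 each -- no meaningful compensation per person
```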

Whatever error I've made here does not appear to stem from "poor handling of large numbers", the ostensible point of the example.

comment by Unnamed · 2012-09-08T02:17:04.238Z · LW(p) · GW(p)

Imagine if humanity survives for the next billion years, expands to populate the entire galaxy, has a magnificent (peaceful, complex) civilization, and is almost uniformly miserable because it consists of multiple fundamentally incompatible subgroups. Nearly everyone is essentially undergoing constant torture, because of a strange, unfixable psychological quirk that creates a powerful aversion to certain other types of people (who are all around them).

If the only alternative to that dystopian future (besides human extinction) is to exterminate some subgroup of humanity, then that creates a dilemma: torture vs. genocide. My inclination is that near-universal misery is worse than extinction, and extinction is worse than genocide.

And that seems to be where this hypothetical is headed, if you keep applying "least convenient possible world" and ruling out all of the preferable potential alternatives (like separating the groups, or manipulating either group's genes/brains/noses to stop the aversive feelings). If you keep tailoring a hypothetical so that the only options are mass suffering, genocide, and human extinction, then the conclusion is bound to be pretty repugnant. None of those bullets are particularly appetizing but you'll have to chew on one of them. Which bullet to bite depends on the specifics; as the degree of misery among the aversion-sufferers gets reduced from torture-levels towards insignificance at some point my preference ordering will flip.

Replies from: Pentashagon, Bruno_Coelho
comment by Pentashagon · 2012-09-11T05:03:52.125Z · LW(p) · GW(p)

I noticed something similar in another comment. CEV must compare the opportunity cost of pursuing a particular terminal value at the expense of all other terminal values, at least in a universe with constrained resources. This leads me to believe that CEV will suggest that the most costly terminal value (in terms of utility opportunity lost by choosing to spend time fulfilling that value instead of another) be abandoned, until only one is left and we become X maximizers. This might be just fine if X is still humane, but it seems like any X will be expressible as a conjunction of disjunctions, and some particular disjunctive clause will have the highest opportunity cost and could be removed to increase overall utility, again leading to maximizing the smallest expressible (or easiest to fulfill) goal.

comment by Bruno_Coelho · 2012-09-10T23:34:05.201Z · LW(p) · GW(p)

Classic failure scenarios. Great morphological/structural changes need legal constraints so that they don't become very common, or require risk-aversion, to prevent the creation of innumerable subgroups with alien values. But against this, subgroups could stray far enough not to be caught, and make whatever changes they want, even creating new subgroups to torture or kill. In this case specifically, I assume we have to deal with this problem before structural changes become common.

comment by Unnamed · 2012-09-07T22:26:46.846Z · LW(p) · GW(p)

This looks like an extension of Yvain's post on offense vs. harm-minimization, with Jews replacing salmon and unchangeable Nazis replacing electrode-implanted Brits.

The consequentialist argument, in both cases, is that if a large group of people are suffering, even if that suffering is based on some weird and unreasonable-seeming aversion, then indefinitely maintaining the status quo in which that large group of people continues to suffer is not a good option. Depending how you construct your hypothetical scenario, and how eager your audience is to play along, you can rule out all of the alternative courses of action except for ones that seem wrong.

comment by Raemon · 2012-09-07T20:22:51.251Z · LW(p) · GW(p)

The assumption "their terminal values are fixed to hate group X" is something akin to "this group is not human, but aliens with an arbitrary set of values that happen to mostly coincide with traditional human values, but with one exception." Which is not terribly different from "This alien race enjoys creativity and cleverness and love and other human values... but also eats babies."

Discussion of human morality only makes sense when you're talking about humans. Yes, arbitrary groups X and Y may, left to their own devices, find it rational to do all kinds of things we find heinous, but then you're moving away from morality and into straight up game theory.

Replies from: TimS, shminux, DanielLC
comment by TimS · 2012-09-07T20:30:19.535Z · LW(p) · GW(p)

Descriptively true, but some argument needs to be made to show that our terminal values never require us to consider any alien's preferences.

Preferably, this argument would also address whether animal cruelty laws are justified by terminal values or instrumental values.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T05:15:58.453Z · LW(p) · GW(p)

Descriptively true, but some argument needs to be made to show that our terminal values never require us to consider any alien's preferences.

I don't think the argument is that. It's more like our terminal values never require us to consider a preference an alien has that is radically opposed to important human values. If we came across an alien race that, due to parallel evolution, has values that coincide with human values in all important ways, we would be just as obligated to respect their preferences as we would those of a human. If we ran across an alien race whose values were similar in most respects, but occasionally differed in a few important ways, we would be required to respect their preferences most of the time, but not when they were expressing one of those totally inhuman values.

In regard to animal cruelty, "not being in pain" is a value both humans and animals have in common, so it seems like it would be a terminal value to respect it.

Replies from: TimS
comment by TimS · 2012-09-17T16:34:44.590Z · LW(p) · GW(p)

It's more like our terminal values never require us to consider a preference an alien has that is radically opposed to important human values.

That's certainly how we behave. But is it true? Why?

Edit: If your answer is "Terminal value conflicts are intractable," I agree. But that answer suggests certain consequences in how society should be organized, and yet modern society does not really address actual value conflicts with "Purge it with fire."

Also, the word values in the phrases "human values" and "animal values" does not mean the same thing in common usage. Conventional wisdom holds that terminal values are not something that non-human animals have - connotatively if not denotatively.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T20:56:08.881Z · LW(p) · GW(p)

Edit: If your answer is "Terminal value conflicts are intractable," I agree. But that answer suggests certain consequences in how society should be organized, and yet modern society does not really address actual value conflicts with "Purge it with fire."

I think I might believe that such conflicts are intractable. The reason that society generally doesn't flat-out kill people with totally alien values is that such people are rare-to-nonexistent. Humans who are incurably sociopathic could be regarded as creatures with alien values, provided their sociopathy is egosyntonic. We do often permanently lock up or execute such people.

Also, the word values in the phrases "human values" and "animal values" does not mean the same thing in common usage

You might be right, if you define "value" as "a terminal goal that a consequentialist creature has" and believe most animals do not have enough brainpower to be consequentialists. If this is the case, I think that animal cruelty laws are probably an expression of the human value that creatures not be in pain.

comment by shminux · 2012-09-07T20:41:55.727Z · LW(p) · GW(p)

Are you saying that having immutable terminal values is a non-human trait?

Replies from: novalis
comment by novalis · 2012-09-07T21:47:36.472Z · LW(p) · GW(p)

With respect to group-based hatred, it seems that there have been changes in both directions over the course of human history (and change not entirely caused by the folks with the old views dying off). So, yeah, I think your Nazis aren't entirely human.

comment by DanielLC · 2012-09-08T00:35:19.732Z · LW(p) · GW(p)

Those baby-eating aliens produce large net disutility, because the babies hate it. In that case, even without human involvement, it's a good idea to kill the aliens. To make it comparable, the aliens have to do something that wouldn't be bad if it didn't disgust the humans. For example, if they genetically modified themselves so that the babies they eat aren't sentient, but have the instincts necessary to scream for help.

Replies from: andrew-sauer
comment by andrew sauer (andrew-sauer) · 2021-08-02T00:16:14.476Z · LW(p) · GW(p)

This situation is more like "they eat babies, but they don't eat that many, to the extent that it produces net utility given their preferences for continuing to do it."

comment by blogospheroid · 2012-09-08T05:15:41.987Z · LW(p) · GW(p)

Isn't it ODD that in a world of Nazis and Jews, I, who am neither, am being asked to make this decision? If I were a Nazi, I'm sure what my decision would be. If I were a Jew, I'm sure what my decision would be.

Actually, now that I think about it, this will be a huge problem if and when humanity, in need of new persons to speak to, decides to uplift animals. It is an important question to ask.

Replies from: komponisto, Bruno_Coelho
comment by komponisto · 2012-09-08T11:25:28.457Z · LW(p) · GW(p)

Inspired by this comment, here's a question: what would the CEV of the inhabitants of shminux's hypothetical world look like?

Replies from: ArisKatsaris, shminux, Pentashagon, None
comment by ArisKatsaris · 2012-09-08T14:52:41.490Z · LW(p) · GW(p)

There's obviously no coherence if the terminal values of space-Jews include their continuing existence, and the terminal values of space-Nazis include the space-Jews' eradication.

Replies from: komponisto
comment by komponisto · 2012-09-08T23:42:57.614Z · LW(p) · GW(p)

So what does the algorithm do when you run it?

Replies from: ArisKatsaris, zerker2000
comment by ArisKatsaris · 2012-09-09T02:35:24.896Z · LW(p) · GW(p)

Prints out "these species' values do not cohere"? Or perhaps "both species coherent-extrapolatedly appreciate pretty sunsets, therefore maximize prettiness of sunsets, but don't do anything that impacts the space-Jews' survival one way or another, or the space-Nazis' survival either if that connects negatively to the former?"

comment by zerker2000 · 2012-09-09T00:27:37.981Z · LW(p) · GW(p)

Return a "divide by zero"-type error, or send your Turing machine up in smoke trying.

comment by shminux · 2012-09-08T15:29:45.287Z · LW(p) · GW(p)

Note that the CEV must necessarily address contradicting terminal values. Thus an FAI is assumed to be powerful enough to affect people's terminal values, at least over time.

For example, (some of the) Nazis might be OK with not wanting Jews dead; they are just unable to change their innate Jewphobia. An analogy would be people who are afraid of snakes but would not mind living in a world where snakes are non-poisonous (and not dangerous in any other way) and they are not afraid of them.

comment by Pentashagon · 2012-09-11T04:56:50.808Z · LW(p) · GW(p)

It would probably least-destructively turn the Jews into Nazis or vice versa; e.g. alter one or the other's terminal values such that they were fully compatible. After all, if the only difference between Jews and Nazis is the nose, why not ask the Jews to change the nose and gain an anti-former-nose preference (theoretically the Jews would gain utility because they'd have a new terminal value they could satisfy). Of course this is a fine example of how meaningless terminal values can survive despite their innate meaninglessness; the Nazis should realize the irrationality of their terminal value and simply drop it. But will CEV force them to drop it? Probably not. The practical effect is the dissolution of practical utility: utility earned from satisfying an anti-Jew preference necessarily reduces the amount of utility attainable from other possible terminal values. That gives CEV a strong argument to convince any group that one of their terminal values can be dropped, by comparing the opportunity cost of satisfying it to the benefit of satisfying other terminal values. This is even more of a digression from the original question, but I think this implies that CEV may eventually settle on a single, maximally effective terminal value.

comment by [deleted] · 2012-09-08T13:23:24.480Z · LW(p) · GW(p)

I think CEV is supposed to execute a controlled shutdown in that kind of situation and helpfully inform the operators that they live in a horrible, horrible world.

comment by Bruno_Coelho · 2012-09-08T20:48:40.655Z · LW(p) · GW(p)

I suspect the names of the groups make the framing of the problem a bit misleading. Framing it in terms of groups A and B would probably make the evaluation clearer.

Replies from: blogospheroid
comment by blogospheroid · 2012-09-09T02:53:48.694Z · LW(p) · GW(p)

I just followed the naming convention of the post. There is already a thread where the naming is being disputed, starting with Alicorn's comment on venusians and neptunians. As I understand it, the naming is meant to bring near-mode thinking right into the decision process and disrupt what would otherwise have been a straightforward utilitarian answer: if there are very few jews and billions of nazis, exterminate the jews.

comment by Manfred · 2012-09-07T21:52:29.389Z · LW(p) · GW(p)

It is always rational for the quasi-Nazis to kill the quasi-Jews, from the Nazi perspective. It's just not always rational for me to kill the Jews: just because someone else wants something doesn't mean I care.

But if I care about other people in any concrete way, you could modify the problem only slightly in order to have the Nazis suffer in some way I care about because of their hatred of the Jews. In which case, unless my utility is bounded, there is indeed some very large number that corresponds to when it's higher-utility to kill the Jews than to do nothing.

Of course, there are third options that are better, and most of them are even easier than murder, meaning that any agent like me isn't actually going to kill any Jews; they'll have, e.g., lied about doing so long before.

comment by buybuydandavis · 2012-09-07T23:28:41.995Z · LW(p) · GW(p)

One of many utilitarian conundrums that are simply not my problem, not being a utilitarian.

comment by Incorrect · 2012-09-07T23:22:59.368Z · LW(p) · GW(p)

If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say "Pain Is Good"? What then?

Maybe you should hope that morality isn't written into the structure of the universe. What if the structure of the universe says to do something horrible?

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that. No, instead I ask: What would you have wished for the external objective morality to be instead? What's the best news you could have gotten, reading that stone tablet?

Go ahead. Indulge your fantasy. Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted? If you could write the stone tablet yourself, what would it say?

Maybe you should just do that?

I mean... if an external objective morality tells you to kill people, why should you even listen?

comment by TGM · 2012-09-08T14:32:45.146Z · LW(p) · GW(p)

I suspect what you mean by desire utilitarianism is what wikipedia calls preference utilitarianism, which I believe is the standard term.

Replies from: shminux
comment by shminux · 2012-09-08T15:19:39.671Z · LW(p) · GW(p)

Possibly. I was using the term I found online in relation to the 1000 Sadists problem, and I did not find this or similar problem analyzed on Wikipedia. Maybe SEP has it?

Replies from: CronoDAS, CronoDAS
comment by CronoDAS · 2012-09-10T00:22:33.807Z · LW(p) · GW(p)

I don't know if the "1000 Sadists problem" is the common term for this scenario; it's just one I've seen used in a couple of places.

comment by CronoDAS · 2012-09-10T00:20:28.197Z · LW(p) · GW(p)

"Desire utilitarianism" is a term invented by one Alonzo Fyfe and it isn't preference utilitarianism. It's much closer to "motive utilitarianism".

Replies from: shminux
comment by shminux · 2012-09-10T01:33:33.563Z · LW(p) · GW(p)

Someone ought to add a few words about it to Wikipedia.

Replies from: Eneasz
comment by Eneasz · 2012-09-10T21:11:07.865Z · LW(p) · GW(p)

It was tried a couple years back, Wikipedia shut down the attempt.

comment by Kindly · 2012-09-07T21:56:06.159Z · LW(p) · GW(p)

Of course I wouldn't exterminate the Jews! I'm a good human being, and good human beings would never endorse a heinous action like that. Those filthy Nazis can just suck it up, nobody cares about their suffering anyway.

comment by jimrandomh · 2012-09-08T01:19:12.223Z · LW(p) · GW(p)

The mistake here is in saying that satisfying the preferences of other agents is always good in proportion to the number of agents whose preference is satisfied. While there have been serious attempts to build moral theories with that as a premise, I consider them failures, and reject this premise. Satisfying the preferences of others is only usually good, with exceptions for preferences that I strongly disendorse, independent of the tradeoffs between the preferences of different people. Also, the value of satisfying the same preference in many people grows sub-linearly with the number of people.
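
A minimal sketch of the kind of aggregation rule this describes (the log-shaped sub-linear weighting and the "disendorsed" label are illustrative assumptions, not the parent's actual formula):

```python
import math

DISENDORSED = {"see_outgroup_suffer"}  # hypothetical label for a strongly disendorsed preference

def aggregate(preferences):
    """preferences: list of (label, headcount) pairs."""
    total = 0.0
    for label, count in preferences:
        if label in DISENDORSED:
            continue                 # disendorsed preferences get zero weight, whatever the headcount
        total += math.log1p(count)   # value grows sub-linearly with the number of people
    return total

# A billion people sharing the disendorsed preference contribute nothing, and a
# preference shared by 10**4 people counts as ~9 "units" rather than 10**4.
print(aggregate([("see_outgroup_suffer", 10**9), ("not_be_killed", 10**4)]))
```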

comment by TheOtherDave · 2012-09-07T22:32:48.941Z · LW(p) · GW(p)

Hm.

I suppose, if LW is to be consistent, comments on negatively voted posts should incur the same karma penalty that comments on negatively voted comments do.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-09-07T23:00:36.824Z · LW(p) · GW(p)

shminux claims (in an edit to the post) that they do. Do they or not?

Replies from: shminux
comment by shminux · 2012-09-07T23:47:53.193Z · LW(p) · GW(p)

I don't actually know, I simply assumed that this would be the case for posts as well as comments.

comment by blogospheroid · 2012-09-08T05:10:06.639Z · LW(p) · GW(p)

How important is the shape of the noses to the jewish people?

Suppose a jew is injured in an accident and the best reconstruction available restores the nose to a nazi shape rather than a jewish one. How would his family react? How different would his ability to achieve his life's goals, and his sense of himself, be?

How would a nazi react to such a jew?

If the aspect of the Jews that the Nazis have to change is something integral to their worldview, then a repugnant conclusion becomes sort of inevitable.

Till then, pull on the rope sideways. Try to save as many people as possible.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-07-22T15:21:10.572Z · LW(p) · GW(p)

In the real world, Nazis believed that Jews were inimical to Aryans, and treacherous as well. Jews that didn't look like Jews were still considered to be threats.

comment by Irgy · 2012-09-08T01:43:08.644Z · LW(p) · GW(p)

First, I'm going to call them 'N' and 'J', because I just don't like the idea of this comment being taken out of context and appearing to refer to the real things.

Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it's so big that you run into a number of practical problems first. I'm going to run through as many places where this falls down in practice as I can, even if others have mentioned some.

  • The assumption that if you leave J fixed and increase N, the level of annoyance per person in N stays constant. Exactly how annoyed can you be by someone you've never even met? Once N becomes large enough, you haven't met one, your friends haven't met one, no one you know has met one, and how do you really know whether they actually exist or not? As the size of N increases, in practice the average cost of J decreases, and it could well hit a bound. You could probably construct things in a way where that wouldn't happen, but it's at least not a straightforward matter of just blindly increasing N.
  • It's a false dichotomy. Even given the assumptions you state, there's all manner of other solutions to the problem besides extermination. The existence and likely superiority of these solutions are part of our dislike of the proposal.
  • The assumption that they're unable to change their opinion is unrealistic.
  • The assumption that they hate one particular group but don't then just go on to hate another group when the first one is gone is unrealistic.
  • The whole analogy is horribly misleading because of all the associations that it brings in. Pretty much all of the assumptions required to make the theoretical situation you're constructing actually work do not hold for the example you give.

With this much disparity between the theoretical situation and reality, it's no surprise there's an emotional conflict.

Replies from: DanArmak, V_V
comment by DanArmak · 2012-09-08T13:25:30.258Z · LW(p) · GW(p)

Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it's so big that you run into a number of practical problems first

How do you actually define the correct proportion, and measure the relevant parameters?

Replies from: Irgy
comment by Irgy · 2012-09-08T22:03:03.139Z · LW(p) · GW(p)

The funny thing is that the point of my post was the long explanation of practical problems, yet both replies have asked about the "in theory yes" part. The point of those three words was that the statements which followed hold despite my own position on the torture/dust-specks issue.

As far as your questions go, I, along with (I expect) the rest of the population of planet Earth, have close to absolutely no idea. Logically deriving the theoretical existence of something does not automatically imbue you with the skills to calculate its precise location.

My only opinion is that the number is significantly more than the "billions of N and handful of J" mentioned in the post, indeed more than will ever occur in practice, and substantially less than 3^^^^^3.

Replies from: DanArmak
comment by DanArmak · 2012-09-09T17:37:40.393Z · LW(p) · GW(p)

How do you determine your likelihood that the number is significantly more than billions vs. a handful - say, today's population of Earth against one person? If you have "close to absolutely no idea" of the precise value, there must be something you do know to make you think it's more than a billion to one and less than 3^^^^^3 to one.

This is a leading question: your position (that you don't know what the value is, but you believe there is a value) is dangerously close to moral realism...

Replies from: Irgy
comment by Irgy · 2012-09-09T21:52:24.782Z · LW(p) · GW(p)

So, I went and checked the definition of "moral realism" to understand why the term "dangerously" would be applied to the idea of being close to supporting it, and failed to find enlightenment. It seems to just mean that there's a correct answer to moral questions, and I can't understand why you would be here arguing about a moral question in the first place if you thought there was no answer. The sequence post The Meaning of Right seems to say "capable of being true" is a desirable and actual property of metaethics. So I'm no closer to understanding where you're going with this than before.

As to how I determined that opinion, I imagined the overall negative effects of being exterminated or sent to a concentration camp, imagined the fleeting sense of happiness in knowing someone I hate is suffering pain, and then did the moral equivalent of estimating how many grains of rice one could pile up on a football field (i.e. made a guess). This is just my current best algorithm though, I make no claims of it being the ultimate moral test process.

I hope you can understand that I don't claim to have no idea about morality in general, just about the exact number of grains of rice on a football field. Especially since I don't know the size of the grains of rice or the code of football either.

Replies from: DanArmak
comment by DanArmak · 2012-09-11T09:01:28.758Z · LW(p) · GW(p)

Moral realism claims that:

[True ethical propositions] are made true by objective features of the world, independent of subjective opinion.

Moral realists have spilled oceans of ink justifying that claim. One common argument invents new meanings for the word "true" ("it's not true the way physical facts, inductive physical laws, or mathematical theorems are true, but it's still true! How do you know there aren't more kinds of truth-ness in the world?"). They commit, in my experience, a multitude of sins - of epistemology, rationality, and discourse.

I asked myself: why do some people even talk about moral realism? What brings this idea to their minds in the first place? As far as I can see, this is due to introspection (the way their moral intuitions feel to them), rather than inspection of the external world (in which the objective morals are alleged to exist). Materialistically, this approach is suspect. An alien philosopher with different, or no, moral intuitions would not come up with the idea of an objective ethics no matter how much they investigated physics or logic. (This is, of course, not conclusive evidence on its own that moral realism is wrong. The conclusive evidence is that there is no good argument for it. This merely explains why people spend time talking about it.)

Apart from being wrong, I called moral realism dangerous because - in my personal experience - it is correlated with motivated, irrational arguments. And also because it is associated with multiple ways of using words contrary to their normal meaning, sometimes without making this clear to all participants in a conversation.

As for Eliezer, his metaethics certainly doesn't support moral realism (under the above definition). A major point of that sequence is exactly that there is no purely objective ethics that is independent of the ethical actor. In his words, there is no universal argument that would convince "even a ghost of perfect emptiness".

However, he apparently wishes to reclaim the word "right" or "true" and be able to say that his ethics are "right". So he presents an argument that these words, as already used, naturally apply to his ethics, even though they are not better than a paperclipper's ethics in an "objective" sense. The argument is not wrong on its own terms, but I think the goal is wrong: being able to say our ethics are "right" or "true" or "correct" only serves to confuse the debate. (On this point many disagree with me.)

I write all this to make sure there is no misunderstanding over the terms used - as there had been in some previous discussions I took part in.

I can't understand why you would be here arguing about a moral question in the first place if you thought there was no answer.

Certainly there are answers to moral questions. However, they are the answers we give. An alien might give different answers. We don't care morally that it would, because these are our morals, even if others disagree.

Debate about moral questions relies on the facts that 1) humans share many (most?) moral intuitions and conclusions, and some moral heuristics appear almost universal regardless of culture; and 2) within that framework, humans can sometimes convince one another to change their moral positions, especially when the new moral stand is that of a whole movement or society.

Those are not facts about some objective, independently existing morals. They are facts about human behavior.

As to how I determined that opinion, I imagined the overall negative effects of being exterminated or sent to a concentration camp, imagined the fleeting sense of happiness in knowing someone I hate is suffering pain, and then did the moral equivalent of estimating how many grains of rice one could pile up on a football field (i.e. made a guess). This is just my current best algorithm though, I make no claims of it being the ultimate moral test process.

You start the way we all do - by relying on personal moral intuition. But then you say there exists, or may exist, an "ultimate moral test process". Is that supposed to be something independent of yourself? Or does it just represent the way your moral intuitions may/will evolve in the future?

Replies from: Irgy
comment by Irgy · 2012-09-13T04:04:31.455Z · LW(p) · GW(p)

Well, this seems to be a bigger debate than I thought I was getting into. It's tangential to any point I was actually trying to make, but it's interesting enough that I'll bite.

I'll try and give you a description of my point of view so that you can target it directly, as nothing you've given me so far has really put much of a dent in it. So far I just feel like I'm suffering from guilt by association - there's people out there saying "morality is defined as God's will", and as soon as I suggest it's anything other than some correlated preferences I fall in their camp.

Consider first the moral views that you have. Now imagine you had more information, and had heard some good arguments. In general your moral views would "improve" (give or take the chance of specifically misrepresentative information or persuasive false arguments, which in the long run should eventually be cancelled out by more information and arguments). Imagine also that you're smarter, again in general your moral views should improve. You should prefer moral views that a smarter, better informed version of yourself would have to your current views.

Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and also your intelligence approaches the perfect rational Bayesian. I contend that this limit exists, and this is what I would refer to as the ideal morality. This "existence" is not the same as being somehow "woven into the fabric of the universe". Aliens could not discover it by studying physics. It "exists", but only in the sense that Aleph 1 exists or "the largest number ever to be uniquely described by a non-potentially-self-referential statement" exists. If I don't like what it says, that's by definition either because I am misinformed or stupid, so I would not wish to ignore it and stick with my own views (I'm referring here to one of Eliezer's criticisms of moral realism).

So, if I bravely assume you accept that this limit exists, I can imagine you might claim that it's still subjective, in that it's the limit of an individual person's views as their information and intelligence approach perfection. However, I also think that the limit is the same for every person, for a combination of two reasons. First, as Eliezer has said, two perfect Bayesians given the same information must reach the same conclusion. As such, the only thing left to break the symmetry between two different perfectly intelligent and completely informed beings is the simple fact of them being different people. This is where I bring in the difference between morality and preference. I basically define morality as being about what's best for everyone in general, as opposed to preference, which is what's best for yourself. Which person in the universe happens to be you should simply not be an input to morality. So, this limit is the same rational process, the same information, and not a function of which person you are; therefore it must be the same for everyone.

Now at least you have a concrete argument to shoot at rather than some statements suggesting I fall into a particular bucket.

Replies from: DanArmak
comment by DanArmak · 2012-09-13T14:47:41.363Z · LW(p) · GW(p)

I'll ignore several other things I disagree with, or that are wrong, and concentrate on what I view as the big issue, because it's really big.

Now, imagine the limit of your moral views as the amount of information you have approaches perfect information, and also your intelligence approaches the perfect rational Bayesian. I contend that this limit exists, and this is what I would refer to as the ideal morality.

Note: this is the limit of my personal morals. My limit would not be the same as your limit, let alone a nonhuman's limit.

Aliens could not discover it by studying physics. It "exists", but only in the sense that Aleph 1 exists

So aliens could discover it by studying mathematics, like a logical truth? Would they have any reason to treat it as a moral imperative? How does a logical fact or mathematical theorem become a moral imperative?

If I don't like what it says, that's by definition either because I am misinformed or stupid, so I would not wish to ignore it and stick with my own views

You gave that definition yourself. Then you assume without proof that those ideal morals exist and have the properties you describe. Then you claim, again without proof or even argument (beyond your definition), that they really are the best or idealized morals, for all humans at least, and describe universal moral obligations.

You can't just give an arbitrary definition and transform it into a moral claim without any actual argument. How is that different from me saying: I define X-Morals as "the morals achieved by all sufficiently well-informed and smart humans, which require that they must greet each person they meet by hugging". If you don't like this requirement, it's by definition because you're misinformed or stupid.

I also think that the limit is the same for every person, for a combination of two reasons. First, as Eliezer has said, two perfect Bayesians given the same information must reach the same conclusion.

The same conclusion about facts they have information about: like physical facts, or logical theorems. But nobody has "information about morals". Morals are just a kind of preferences. You can only have information about some particular person's morals, not morals in themselves. So perfect Bayesians will agree about what my morals are and about what your morals are, but that doesn't mean your and my morals are the same. Your argument is circular.

This is where I bring in the difference between morality and preference. I basically define morality as being about what's best for everyone in general, as opposed to preference, which is what's best for yourself.

Well, first of all, that's not how everyone else uses the word morals. Normally we would say that your morals are to do what's best for everyone, while my morals are something else. Calling your personal morals "simply morals" is equivalent to saying that my (different) morals shouldn't be called by the name morals, or even "Daniel's morals", which is simply wrong.

As for your definition of (your) morals: you describe, roughly, utilitarianism. But people argue forever over brands of utilitarianism: average utilitarianism vs. total utilitarianism, different handling of utility monsters, different handling of "zero utility", different necessarily arbitrary weighing of whose preferences are considered (do we satisfy paperclippers?), and so on. Experimentally, people are uncomfortable with any single concrete version (they have "repugnant conclusions"). And even if you have a version that you personally are satisfied with, that is not yet an argument for others to accept it in place of other versions (and of non-utilitarian approaches).

Replies from: Irgy
comment by Irgy · 2012-10-23T12:36:40.868Z · LW(p) · GW(p)

We obviously have a different view on the subjectivity of morals, no doubt an argument that's been had many times before. The sequences claim to have resolved it or something, but in such a way that we both still seem to see our views as consistent with them.

To me, subjective morals like you talk about clearly exist, but I don't see them as interesting in their own right. They're just preferences people have about other people's business. Interesting for the reasons any preference is interesting but no different.

The fundamental requirement for objective morals is simply that one (potential future) state of the world can be objectively better or worse than another. What constitutes "better" and "worse" is of course an important and difficult question, but still an objective one. I would call the negation, the idea that every possible state of the world is equally as good as any other, moral nihilism.

I accept that it's used for the subjective type as well, but personally I save the use of the word "moral" for the objective type: the actual pursuit of a better state of the world irrespective of our own personal preferences. I see objectivity as what separates morals from preferences in the first place - the core of taking a moral action is that your purpose is the good of others, or more generally the world around you, rather than yourself. I don't agree that people having moral debates are simply comparing their subjective views (which sounds to me like "Gosh, you like fish? I like fish too!"); they're arguing because they think there is actually an objective answer to which of them is right and they want to find out who it is (well, actually usually they just want to show everyone that it's them, but you know what I mean).

This whole argument is actually off topic though. I think the point where things went wrong is where I answered the wrong question (though in my defence it was the one you asked). You asked how I determine what the number N is, but I never really even claimed to be able to do that in the first place. What I think you really wanted to know is how I define it. So I'll give you that. This isn't quite the perfect definition but it's a start.

Imagine you're outside space and time, and can see two worlds: one in which J is left alone, the other in which they're eradicated. Now, imagine you're going to choose one of these worlds in which you'll live the life of a then randomly-chosen person. Once you make the decision, your current preferences, personality, and so on will cease to exist and you'll just become that new random person. So, the question then becomes "Which world would you choose?". Or, more to the point, "For what value of N would you decide it's worth the risk of being eradicated as a J for the much higher chance of being a slightly happier N?".

The one that's "better" is the one that you would choose. Actually, more specifically it's the one that's the correct choice to make. I'd argue this correctness is objective, since the consequences of your choice are completely independent of anything about you. Note that although the connection to my view on morality is probably pretty clear, this definition doesn't use the word "moral" anywhere. The main post posits an objective question of which is better, and this is simply my attempt to give a reasonable definition of what they're asking.

comment by V_V · 2012-09-08T08:54:24.521Z · LW(p) · GW(p)

Does there exist a relative proportion of N to J where extermination is superior to the status quo, under your assumptions? In theory yes. In reality, it's so big that you run into a number of practical problems first.

Real Ns would disagree.

They did realize that killing Js wasn't exactly a nice thing to do. At first they considered relocating Js to some remote land (Madagascar, etc.). When it became apparent that relocating millions while fighting a world war wasn't feasible and they resolved to killing them instead, they had to invent death camps rather than just shooting them, because even the SS had problems doing that.

Nevertheless, they had to free the Lebensraum to build the Empire that would Last for a Thousand Years, and if these Js were in the way, well, too bad for them.

Ends before the means: utilitarianism at work.

Replies from: Irgy, prase
comment by Irgy · 2012-09-08T10:40:24.411Z · LW(p) · GW(p)

I don't see why utilitarianism should be held accountable for the actions of people who didn't even particularly subscribe to it.

Also, why are you using N and J to talk about actual Nazis and Jews? That partly defeats the purpose of my making the distinction.

Replies from: V_V
comment by V_V · 2012-09-08T18:32:56.285Z · LW(p) · GW(p)

I don't see why utilitarianism should be held accountable for the actions of people who didn't even particulalry subscribe to it.

They may not have framed the issue explicitly in terms of maximizing an aggregate utility function, but their behavior seems consistent with consequentialist moral reasoning.

Replies from: Irgy
comment by Irgy · 2012-09-08T21:48:24.542Z · LW(p) · GW(p)

Reversed stupidity is not intelligence. That utilitarianism is dangerous in the hands of someone with a poor value function is old news. The reasons why utilitarianism may be correct or not exist in an entirely unrelated argument space.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2012-09-08T21:59:57.950Z · LW(p) · GW(p)

I can't even find the "help" section in this place

click the "Show help" button below the comment box

Replies from: Irgy
comment by Irgy · 2012-09-08T22:21:57.931Z · LW(p) · GW(p)

Ugh so obvious, except I only looked for the help in between making edits, looking for a global thing rather than the (more useful most of the time) local thing.

Thanks!

comment by prase · 2012-09-08T10:20:55.472Z · LW(p) · GW(p)

Real Ns would disagree.

Why is that relevant? Real Ns weren't good rationalists after all. If the existence of Js really made them suffer (which it most probably didn't under any reasonable definition of "suffer") but they realised that killing Js has negative utility, there were still plenty of superior solutions, e.g.: (1) relocating the Js after the war (they really didn't stand in the way), (2) giving all or most Js a new identity (you don't recognise a J without digging into birth certificates or something; destroying these records and creating strong incentives for the Js to be silent about their origin would work fine), (3) simply stopping the anti-J propaganda which was the leading cause of hatred while often being pursued for reasons unrelated to Js, mostly to foster citizens' loyalty to the party by creating an image of an evil enemy.

Of course Ns could have beliefs, and probably a lot of them had beliefs, which somehow excluded these solutions from consideration and therefore justified what they actually did on utilitarian grounds. (Although probably only a minority of Ns were utilitarians). But the original post wasn't pointing out that utilitarianism could fail horribly when combined with false beliefs and biases. It was rather about the repugnant consequences of scope sensitivity and unbounded utility, even when no false beliefs are involved.

Replies from: DanArmak
comment by DanArmak · 2012-09-08T13:24:07.802Z · LW(p) · GW(p)

which it most probably didn't under any reasonable definition of "suffer"

What definition is that?

Replies from: prase
comment by prase · 2012-09-08T15:04:45.232Z · LW(p) · GW(p)

That clause was meant to exclude the possibility of claiming suffering whenever one's preferences aren't satisfied. As I have written 'any reasonable', I didn't have one specific definition in mind.

comment by Dallas · 2012-09-07T21:07:27.889Z · LW(p) · GW(p)

If the Nazis have some built-in value that determines that they hate something utterly arbitrary, then why don't we exterminate them?

Replies from: shminux, None
comment by shminux · 2012-09-07T21:13:28.364Z · LW(p) · GW(p)

It is certainly an option, but if there are enough Nazis, this is a low-utility "final solution" compared to the alternatives.

Replies from: Dallas
comment by Dallas · 2012-09-08T01:29:00.436Z · LW(p) · GW(p)

In a void where there are just these particular Nazis and Jews, sure, but in most contexts, you'll have a variety of intelligences with varying utility functions, and those with pro-arbitrary-genocide values are dangerous to have around.

Of course, there is the simple alternative of putting the Nazis in an enclosed environment where they believe that Jews don't exist. Hypotheticals have to be really strongly defined in order to avoid lateral thinking solutions.

Replies from: None
comment by [deleted] · 2012-09-08T06:43:14.983Z · LW(p) · GW(p)

In a void where there are just these particular Nazis and Jews, sure, but in most contexts, you'll have a variety of intelligences with varying utility functions, and those with pro-arbitrary-genocide values are dangerous to have around.

I am pretty sure that certain kinds of societies and minds are possible that, while utterly benign and quite happy, would cause 21st-century humans to want to exterminate them and to suffer greatly as long as it was known they existed.

comment by [deleted] · 2012-09-08T06:38:15.216Z · LW(p) · GW(p)

This may come as news, but all kinds of hating or loving something are utterly arbitrary.

Replies from: DanArmak
comment by DanArmak · 2012-09-08T13:30:42.561Z · LW(p) · GW(p)

Some kinds are a lot less arbitrary than others: for instance, being strongly influenced by evolution, rather than by complex contingent history.

Replies from: None
comment by [deleted] · 2012-09-09T07:48:45.532Z · LW(p) · GW(p)

"strongly influenced by evolution"

You do realize that modern Western societies' rejection of plenty of kinds of hating or loving that are strongly influenced by evolution is due to their complex contingent history, no?

Replies from: DanArmak
comment by DanArmak · 2012-09-09T08:06:52.441Z · LW(p) · GW(p)

Yes. And other kinds of hating or loving or hating-of-loving are influenced more by evolution, e.g. the appearance of covert liaisons and jealousy in societies where such covertness is possible. Or the unsurprising fact that humans generally love their children and are protective of them.

I never said no kinds of loving or hating are arbitrary (or at least determined by complex contingent history). I do say that many kinds are not arbitrary.

(My previous comment seems to be incomplete. Some example is missing after "for instance", I probably intended to add one and forgot. This comment provides the example.)

comment by sixes_and_sevens · 2012-09-09T23:37:51.276Z · LW(p) · GW(p)

That can be interpreted a couple of ways.

comment by Mestroyer · 2012-09-08T08:48:21.682Z · LW(p) · GW(p)

What if I place zero value (or negative value, which is probably what I really do, though what I wish I did was place zero value) on the kind of satisfaction or peace of mind the Nazis get from knowing the Jews are suffering?

Replies from: shminux
comment by shminux · 2012-09-08T15:32:01.817Z · LW(p) · GW(p)

Interesting. I am not sure if one can have a consistent version of utilitarianism where one unpacks the reasons for one's satisfaction and weighs them separately.

comment by [deleted] · 2016-02-01T10:38:43.048Z · LW(p) · GW(p)

Relevant: could Nazi Germany's seeding of the first modern anti-tobacco movement have resulted in an overall net gain in public utility to date?

Replies from: gjm
comment by gjm · 2016-02-01T15:23:38.492Z · LW(p) · GW(p)

Is there any reason to think that the Nazis' anti-smoking campaign actually influenced later ones in Germany or elsewhere very much?

(I think there are much stronger candidates for ways in which the Nazis produced good as well as harm -- e.g., scientific progress motivated by WW2. But there's a lot of harm to weigh against.)

comment by Wrongnesslessness · 2012-09-08T16:35:18.688Z · LW(p) · GW(p)

I'm a bit confused with this torture vs. dust specks problem. Is there an additive function for qualia, so that they can be added up and compared? It would be interesting to look at the definition of such a function.

Edit: removed a bad example of qualia comparison.

Replies from: Incorrect
comment by Incorrect · 2012-09-08T16:38:17.782Z · LW(p) · GW(p)

They aren't adding qualia, they are adding the utility they associate with qualia.

Replies from: Wrongnesslessness
comment by Wrongnesslessness · 2012-09-08T17:13:01.234Z · LW(p) · GW(p)

It is not a trivial task to define a utility function that could compare such incomparable qualia.

Wikipedia:

However, it is possible for preferences not to be representable by a utility function. An example is lexicographic preferences which are not continuous and cannot be represented by a continuous utility function.

Has it been shown that this is not the case for dust specks and torture?
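
For what it's worth, there is a standard textbook argument (added here only as a reference sketch, not something from this thread) that lexicographic preferences on pairs admit no real-valued utility representation at all, continuous or otherwise; whether specks-vs-torture preferences are actually lexicographic is of course a separate question:

```latex
Sketch: suppose $u:\mathbb{R}^2\to\mathbb{R}$ represents the lexicographic order
$\succsim_{\mathrm{lex}}$. For each $x$, $(x,1)\succ_{\mathrm{lex}}(x,0)$, so
$u(x,1)>u(x,0)$ and we may pick a rational $q(x)$ with $u(x,0)<q(x)<u(x,1)$.
If $x<x'$, then $(x',0)\succ_{\mathrm{lex}}(x,1)$, hence
$q(x) < u(x,1) < u(x',0) < q(x')$, so $x\mapsto q(x)$ is an injection of the
uncountable set $\mathbb{R}$ into the countable set $\mathbb{Q}$, a contradiction.
Hence no such $u$ exists.
```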

Replies from: benelliott, TheOtherDave
comment by benelliott · 2012-09-08T19:05:31.559Z · LW(p) · GW(p)

In the real world, if you had lexicographic preferences you effectively wouldn't care about the bottom level at all. You would always reject a chance to optimise for it, instead chasing the tiniest epsilon chance of affecting the top level. Lexicographic preferences are sometimes useful in abstract mathematical contexts where they can clean up technicalities, but would be meaningless in the fuzzy, messy actual world where there's always a chance of affecting something.
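
A minimal sketch of that claim (the option names, probabilities, and payoffs are made up; Python's tuple comparison stands in for the lexicographic ordering):

```python
# Each option is scored as (expected top-level value, expected bottom-level value).
# A lexicographic chooser compares the top level first, so any positive chance of
# moving the top level beats any finite gain on the bottom level.
options = {
    "tiny_chance_at_top_level": (1e-12, 0.0),  # epsilon chance of the thing that matters most
    "huge_bottom_level_gain":   (0.0,   1e9),  # enormous payoff on the level that comes second
}

best = max(options, key=options.get)  # tuples compare lexicographically
print(best)  # -> tiny_chance_at_top_level
```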

Replies from: Wrongnesslessness
comment by Wrongnesslessness · 2012-09-09T05:24:21.702Z · LW(p) · GW(p)

I've always thought the problem with real world is that we cannot really optimize for anything in it, exactly because it is so messy and entangled.

I seem to have lexicographic preferences for quite a lot of things that cannot be sold, bought, or exchanged. For example, I would always prefer having one true friend to any number of moderately intelligent ardent followers. And I would always prefer an FAI to any number of human-level friends. It is not a difference in some abstract "quantity of happiness" that produces such preferences; those are qualitatively different life experiences.

Since I do not really know how to optimize for any of this, I'm not willing to reject human-level friends and even moderately intelligent ardent followers that come my way. But if I'm given a choice, it's quite clear what my choice will be.

Replies from: benelliott
comment by benelliott · 2012-09-09T15:44:48.502Z · LW(p) · GW(p)

I don't want to be rude, but your first example in particular looks like a case where it's beneficial to signal lexicographic preferences.

Since I do not really know how to optimize for any of this

What do you mean you don't know how to optimise for this! If you want an FAI, then donating to SIAI almost certainly does more good than nothing (even if they aren't as effective as they could be, they almost certainly don't have zero effectiveness; if you think they have negative effectiveness, then you should be persuading others not to donate). Any time spent acquiring/spending time with true friends would be better spent on earning money to donate (or encouraging others not to) if your preferences are truly lexicographic. This is what I mean when I say that in the real world, lexicographic preferences just cash out as not caring about the bottom at all.

You've also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not "do I prefer an FAI to any number of true friends" but "do I prefer a single true friend to any chance of an FAI, however small", in which case the answer, for me at least, seems to be no.

comment by TheOtherDave · 2012-09-08T17:48:55.807Z · LW(p) · GW(p)

I'm not sure how one could show such a thing in a way that can plausibly be applied to the Vast scale differences posited in the DSvT thought experiment.

When I try to come up with real-world examples of lexicographic preferences, it's pretty clear to me that I'm rounding... that is, X is so much more important than Y that I can in effect neglect Y in any decision that involves a difference in X, no matter how much Y there is relative to X, for any values of X and Y worth considering.

But if someone seriously invites me to consider ludicrous values of Y (e.g., 3^^^3 dust specks), that strategy is no longer useful.

Replies from: Wrongnesslessness
comment by Wrongnesslessness · 2012-09-09T05:36:45.499Z · LW(p) · GW(p)

I'm quite sure I'm not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.

It seems lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any differences in quantity, however vast, are just irrelevant. An experience of long unbearable torture cannot be quantified in terms of minor discomforts.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-09T05:44:06.389Z · LW(p) · GW(p)

It seems our introspective accounts of our mental processes are qualitatively different, then.

I'm willing to take your word for it that your experience of long unbearable torture cannot be "quantified" in terms of minor discomforts. If you wish to argue that mine can't either, I'm willing to listen.

comment by asparisi · 2012-09-08T00:21:03.766Z · LW(p) · GW(p)

If the Nazis are unable to change their terminal values, then Good|Nazi differs substantially from what we mean when we say Good. Nazis might use the same word, or it might translate as "the same." It might even be similar along many dimensions. Good|Jew might be the same as Good (they don't seem substantially different than humans) although this isn't required by the problem, but Good|Nazi ends up being something that I just don't care about in the case where we are talking about exterminating Jews.

There might be other conditions where Good and Good|Nazi overlap, and in those cases I probably would agree that the Nazi should do the Good thing, which would also be to do the Good|Nazi thing. But I don't have any reason to favor Good|Nazi over Good, and so where they differ (the extermination of Jews) I am unmotivated to defend or even to allow the Good|Nazi point of view to have a say in what is going on.

Replies from: prase
comment by prase · 2012-09-08T10:28:53.317Z · LW(p) · GW(p)

You indeed needn't care about "good|Nazi", but the important question in this hypothetical is whether you care about "happy|Nazi" or "suffer|Nazi". I don't care much whether the outcome is considered good by someone else, the less so if that person is evil, but still it could bother me if the outcome causes that person to suffer.

Replies from: asparisi
comment by asparisi · 2012-09-08T11:51:07.084Z · LW(p) · GW(p)

I don't particularly want "suffer|Nazi" at least in and of itself.

But it works out the same way. A mosquito might suffer from not drinking my blood. That doesn't mean I will just let it. A paperclip maximizer might be said to suffer from not getting to turn the planet into paperclips, if it were restrained.

If the only way to end suffer|Nazi is to violate what's Good, then I am actually pretty okay with suffer|Nazi as an outcome. I'd still prefer ((happy|Nazi) & Good) to ((suffer|Nazi) & Good), but I see no problem with ((suffer|Nazi) & Good) winning out over ((happy|Nazi) & Bad). My preference for things with differing value systems not to suffer does not override my value system in and of itself.

comment by brilee · 2012-09-08T03:26:45.656Z · LW(p) · GW(p)

You know... purposely violating Godwin's Law seems to have become an applause light around here, as if we want to demonstrate how super rational we are that we don't succumb to obvious fallacies like Nazi analogies.

Replies from: drethelin
comment by drethelin · 2012-09-08T04:41:50.991Z · LW(p) · GW(p)

Godwin's law: Not an actual law

Replies from: anonymous259
comment by anonymous259 · 2012-09-08T06:58:12.011Z · LW(p) · GW(p)

Or actually: a "law" in the sense of "predictable regularity", not "rule that one will be punished for violating".

In which case the post exemplifies it, rather than violating it.

comment by Ghatanathoah · 2012-09-17T06:11:58.110Z · LW(p) · GW(p)

One idea that I have been toying with since I read Eliezer's various posts on the complexity of value is that the best moral system might not turn out to be about maximizing satisfaction of any and all preferences, regardless of what those preferences are. Rather, it would be about increasing the satisfaction of various complex, positive human values, such as "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc." If this is the case then it may well be that horribly malevolent preferences, such as those the Nazis in this thought experiment exhibit, are simply not the sort of preferences that it is morally good to satisfy. Obviously judging which values are "worthy" to be increased is a difficult problem that creates huge moral hazards for whoever is doing the judging, but that is an implementation problem, not a problem with the general principle.

If this line of reasoning is correct, then the reason that preference utilitarianism seems so intuitively persuasive is that since most of our moral reasoning deals with humans, most of the time maximizing whatever a human being prefers is pretty much guaranteed to achieve a great many of those human values. For this reason, "human values utilitarianism" and preference utilitarianism generate the same answers in most real-world scenarios. However, there might be a few values that human beings are theoretically capable of having, but aren't morally good to maximize. I think one of these morally bad preferences is sheer total malevolence, where you hate someone and want to hurt them as an end in itself.

This theory would also explain why people feel that it would be a bad thing if an AI were to kill/sterilize all the human beings in existence and replace them with creatures whose preferences are easier to satisfy. Such action would result in increased preference satisfaction, but they'd be the wrong kind of preferences; they wouldn't be positive human values. (Please note that though I refer to these as "human values," I am not advocating speciesism. A nonhuman creature who had similar values would be just as morally significant as a human.)

This gets a little more complicated if we change the thought experiment slightly and assume that the Nazi's Jew-hatred is ego-dystonic rather than ego-syntonic. That is, the conscious rational, "approving" part of their brain doesn't want to hurt Jews, but the subconscious "liking" parts of their brains feel incredible pain and psychological distress from knowing that Jews exist. We assume that this part of their brain cannot be changed.

If this is the case then the Nazis are not attempting to satisfy some immoral, malevolent preference. They are simply trying to satisfy the common human preference to not feel psychological pain. Killing the Jews to save the Nazis from such agony would be equivalent to killing a small group of people to save a larger group from being horribly tortured. I don't think the fact that the agent doing the torturing is the Nazis' own subconscious mind, instead of an outside agent, is important.

However, since the Nazis' preference is "I don't want my subconscious to hurt me because it perceives Jews," rather than "I want to make the statement 'all Jews are dead' true" there is an obvious solution to this dilemma. Trick the Nazis into thinking the Jews are dead without actually killing them. That would remove their psychological torment while preserving the lives of the Jews. It would not create the usual moral dilemmas associated with deception because in this variant of the thought experiment the Nazi's preference isn't to kill the Jews, it's to not feel pain from believing Jews exist.

Replies from: shminux
comment by shminux · 2012-09-17T07:02:31.773Z · LW(p) · GW(p)

Of course, if you have the option of lying, the problem becomes trivial and uninteresting, regardless of your model of the Nazi psyche. It's when your choice requires improving the life of one group at the expense of another's suffering that you tend to face a repugnant conclusion.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T07:51:59.416Z · LW(p) · GW(p)

Of course, if you have the option of lying, the problem becomes trivial and uninteresting, regardless of your model of the Nazi psyche.

In the original framing of the thought experiment the reason lying wasn't an option was because the Nazis didn't want to believe that all the Jews were dead, they wanted the Jews to really be dead. So if you lied to them you wouldn't really be improving their lives because they wouldn't really be getting what they wanted.

By contrast, if the Nazis simply feel intense emotional pain at the knowledge that Jews exist, and killing Jews is an instrumental goal towards preventing that pain, then lying is the best option.

You're right that that makes the problem trivial. The reason I addressed it at all was that my original thesis was "satisfying malicious preferences is not moral." I was afraid someone might challenge this by emphasizing the psychological pain and distress the Nazis might feel. However, if that is the case then the problem changes from "Is it good to kill people to satisfy a malicious preference?" to "Is it good to kill people to prevent psychological pain and distress?"

I still think that "malicious preferences are morally worthless" is a good possible solution to this problem, providing one has a sufficiently rigorous definition of "malicious."

Replies from: shminux
comment by shminux · 2012-09-17T16:35:11.598Z · LW(p) · GW(p)

In the original framing of the thought experiment the reason lying wasn't an option was because the Nazis didn't want to believe that all the Jews were dead, they wanted the Jews to really be dead. So if you lied to them you wouldn't really be improving their lives because they wouldn't really be getting what they wanted.

Maybe you misunderstand the concept of lying. They would really believe that all Jews are dead if successfully lied to, so their stress would decrease just as much as if they all were indeed dead.

I still think that "malicious preferences are morally worthless" is a good possible solution to this problem, providing one has a sufficiently rigorous definition of "malicious."

This is more interesting. Here we go, the definitions:

Assumption: we assume that it is possible to separate overall personal happiness level into components (factors), which could be additive, multiplicative (or separable in some other way). This does not seem overly restrictive.

Definition 1: A component of personal happiness resulting from others being unhappy is called "malicious".

Definition 2: A component of personal happiness resulting from others being happy is called "virtuous".

Definition 3: A component of personal happiness that is neither malicious nor virtuous is called "neutral".

Now your suggestion is that malicious components do not count toward global decision making at all. (Virtuous components possibly count more than neutral ones, though this could already be accounted for.) Thus we ignore any suffering inflicted on Nazis due to Jews existing/prospering.
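
A rough formalization of the above in the additive case (the component labels, the zero weight on malicious components, and the optional bonus on virtuous ones are exactly the assumptions being discussed, not settled choices):

```python
WEIGHTS = {"malicious": 0.0, "virtuous": 1.0, "neutral": 1.0}  # virtuous could be weighted > 1.0

def global_value(people):
    """people: list of dicts mapping component type -> happiness contribution."""
    return sum(WEIGHTS[kind] * amount
               for components in people
               for kind, amount in components.items())

# The malicious component of the first person's happiness contributes nothing to
# the global decision; only their neutral component counts.
print(global_value([{"malicious": 5.0, "neutral": 2.0},
                    {"virtuous": 1.0, "neutral": 3.0}]))  # -> 6.0
```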

Does this sound right?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T20:45:34.204Z · LW(p) · GW(p)

They would really believe that all Jews are dead if successfully lied to, so their stress would decrease just as much as as if they all were indeed dead.

If this is the case then the Nazis do not really want to kill the Jews. What they really want to do is decrease their stress, killing Jews is just an instrumental goal to achieve that end. My understanding of the original thought experiment was that killing Jews was a terminal value for the Nazis, something they valued for its own sake regardless of whether it helped them achieve any other goals. In other words, even if you were able to modify the Nazi brains so they didn't feel stress at the knowledge that Jews existed, they would still desire to kill them.

Does this sound right?

Yes, that's exactly the point I was trying to make, although I prefer the term "personal satisfaction" rather than "personal happiness" to reflect the possibility that there are other values then happiness.

comment by Incorrect · 2012-09-07T23:34:25.343Z · LW(p) · GW(p)

What's more important to you, your desire to prevent genocide or your desire for a simple consistent utility function?

Replies from: shminux
comment by shminux · 2012-09-07T23:42:35.494Z · LW(p) · GW(p)

I thought it was clear in my post that I have no position on the issue. I was simply illustrating that a "consistent utility function" leads to a repugnant conclusion.

Replies from: Incorrect
comment by Incorrect · 2012-09-07T23:49:36.937Z · LW(p) · GW(p)

Sorry, generic you.

comment by sixes_and_sevens · 2012-09-08T15:39:56.985Z · LW(p) · GW(p)

It is taking some effort to not make a sarcastic retort to this. Please refrain from using such absurdly politically-loaded examples in future. It damages the discussion.