post by [deleted]


comment by DanielLC · 2014-08-12T16:43:15.955Z · LW(p) · GW(p)

There aren't any. It's an inherently deontological idea.

Suppose Alice or Bob can exist, but not both. Under deontology, you could talk about which one of them will exist if you do nothing, and ask whether it's a good idea to change that. You might decide that you can't play god and have to leave things as they are, that you should make sure the one with the better life comes into existence, or that since neither of them has been born yet, neither has any rights and you can decide however you like.

Under consequentialism, it's a meaningless question. There is one universe with Alice. There is one with Bob. You must choose which you value more. Choosing not to act is a choice.

If Alice and Bob have the same utility, then you should be indifferent. If you consider preventing the birth of Alice with X utility and causing the birth of Bob with Y utility, that's the same as preventing the birth of Alice with X utility and causing the birth of Bob with X utility (which cancel out, for a net of zero), plus increasing the utility of Bob from X to Y (a gain of Y - X). This has a total utility of 0 + (Y - X) = Y - X.
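(A minimal sketch of that arithmetic in Python, for readers who prefer code; the function names and example numbers are illustrative, not from the comment itself.)

```python
# Sketch of the utility bookkeeping described above.
# Function names and example numbers are illustrative assumptions.

def net_utility_direct(u_alice: float, u_bob: float) -> float:
    """Swap Alice (utility X) for Bob (utility Y) in one step: lose X, gain Y."""
    return u_bob - u_alice

def net_utility_decomposed(u_alice: float, u_bob: float) -> float:
    """Same swap, split into an even trade plus an improvement."""
    even_trade = -u_alice + u_alice   # prevent Alice at X, create a Bob at X: net 0
    improvement = u_bob - u_alice     # raise that Bob from X to Y: net Y - X
    return even_trade + improvement   # 0 + (Y - X) = Y - X

if __name__ == "__main__":
    for x, y in [(10.0, 10.0), (10.0, 12.0)]:
        assert net_utility_direct(x, y) == net_utility_decomposed(x, y) == y - x
    # Equal utilities give 0 (indifference); otherwise the difference Y - X decides.
```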

Replies from: Kyre
comment by Kyre · 2014-08-13T05:08:03.235Z · LW(p) · GW(p)

Is there a separate name for "consequentialism over world histories" as opposed to "consequentialism over world states"?

What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say "don't do it, killing people is bad". Consequentialism over world states would say "do it, utility will increase" (maybe with provisos that no one notices or remembers the killing). Consequentialism over world histories would say "the utility contribution of the final state is higher with the happier person in it, but the killing event subtracts utility and makes the total a net negative, so don't do it".
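(A toy Python contrast of the two rules; the numbers, and the size of the killing penalty, are made-up assumptions purely for illustration.)

```python
# Toy contrast between consequentialism over world states and over world histories.
# All numbers, including the killing penalty, are illustrative assumptions.

UTILITY_KEEP_A = 5.0        # final state: person A lives out their life
UTILITY_SWAP_TO_B = 8.0     # final state: happier person B exists instead
KILLING_EVENT = -10.0       # disutility assigned to the killing event itself

def over_world_states(swap: bool) -> float:
    """Only the final state counts."""
    return UTILITY_SWAP_TO_B if swap else UTILITY_KEEP_A

def over_world_histories(swap: bool) -> float:
    """Every event in the timeline counts, including the killing."""
    history = ([KILLING_EVENT, UTILITY_SWAP_TO_B]
               if swap else [UTILITY_KEEP_A])
    return sum(history)

# over_world_states:    swap = 8.0 > keep = 5.0  -> "do it"
# over_world_histories: swap = -2.0 < keep = 5.0 -> "don't do it"
```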

Replies from: DanielLC
comment by DanielLC · 2014-08-13T06:04:24.911Z · LW(p) · GW(p)

I don't know if there's a name for it. In general, consequentialism is over the entire timeline. You could value events that have a specific order, or value events that happen earlier, etc. I don't like the idea of judging based on things like that, but it's just part of my general dislike of judging based on things that cannot be subjectively experienced. (You can subjectively experience the memory of things happening in a certain order, but each instant of you remembering it is instantaneous, and you'd have no way of knowing if the instants happened in a different order, or even if some of them didn't happen.)

It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob. I was talking about preventing the existence of Alice to make way for Bob. Alice is not dying. I am removing the potential for her to exist. But potential is just an abstraction. There is not some platonic potential of Alice floating out in space that I just killed.

Due to loss aversion, losing the potential for Alice may seem worse than gaining the potential for Bob, but this isn't something that can be justified on consequentialist grounds.

Replies from: Kyre
comment by Kyre · 2014-08-14T05:01:54.928Z · LW(p) · GW(p)

> I don't know if there's a name for it. In general, consequentialism is over the entire timeline.

Yes, that makes the most sense.

> It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob.

No no, I understand that you're not talking about killing people off and replacing them; I was just trying (unsuccessfully) to give the clearest example I could.

And I agree with your consequentialist analysis of indifference between the creation of Alice and Bob if they have the same utility ... unless "playing god" events have negative utility.

comment by Slider · 2014-08-12T17:45:51.874Z · LW(p) · GW(p)

If you already have a way to compare the utilities of different moral agents, you should apply that method whether differences arise or not. You could of course identify the moral identity of a person by how they impact the global utility function. However, the change in utility relative to the person's own values need not map one-to-one onto the global function. Not handling the conversion would be like assuming that $ and £ contribute to wealth equally at their face values. If I have the choice to make Clippy the paperclip maximiser or Roger the rubberband maximiser, there is probably some amount of paperclips in utility to Clippy that would correspond to some amount of rubberbands in utility to Roger. But I have a hard time imagining how I would come to know that amount.
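(A small Python sketch of that exchange-rate problem; the conversion rate below is a pure assumption, which is exactly the quantity the comment says we have no way of knowing.)

```python
# Sketch of combining two agents' utilities via an exchange rate.
# The rate is an arbitrary assumption; the comment's point is that we
# don't know how to determine it.

PAPERCLIP_UTILS_PER_RUBBERBAND_UTIL = 2.0  # hypothetical conversion rate

def global_utility(clippy_utils: float, roger_utils: float) -> float:
    """Combine Clippy's and Roger's utilities in a common unit (paperclip-utils)."""
    return clippy_utils + roger_utils * PAPERCLIP_UTILS_PER_RUBBERBAND_UTIL

# Taking the raw numbers at face value (rate = 1) is the "$ equals £" mistake:
naive_total = 3.0 + 4.0                      # 7.0
converted_total = global_utility(3.0, 4.0)   # 11.0
```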