Morality open thread

post by Will_Newsome · 2012-07-08T14:30:08.864Z · LW · GW · Legacy · 89 comments

I figure morality as a topic is popular enough and important enough and related-to-rationality enough to deserve its own thread.

Questions, comments, rants, links, whatever are all welcome. If you're like me you've probably been aching to share your ten paragraph take on meta-ethics or whatever for about three uncountable eons now. Here's your chance.

I recommend reading Wikipedia's article on meta-ethics before jumping into the fray, if only to get familiar with the standard terminology. The standard terminology is often abused. This makes some people sad. Please don't make those people sad.


comment by Andreas_Giger · 2012-07-08T14:55:44.463Z · LW(p) · GW(p)

So where can I find your ten paragraph take on meta-ethics?

comment by Jayson_Virissimo · 2012-07-09T03:53:13.501Z · LW(p) · GW(p)

A common response (although one I cannot find an example of via the search feature, blah) I have observed from Less Wrongers to the challenge of interpersonal utility comparison is the claim that "we do it all the time". I take this to mean that when we make decisions we often consider the preferences of our friends and family (and sometimes strangers or enemies) and that whatever is going on in our minds when we do this approximates interpersonal utility calculations (in some objective sense). This, to me, seems like legerdemain for basically this reason:

One stand restoring to utilitarianism its role of judging policy, is that interpersonal comparisons are obviously possible since we are making them all the time. Only if we denied "other minds" could we rule out comparisons between them. Everyday linguistic usage proves the logical legitimacy of such statements as "A is happier than B" (level-comparison) and, at a pinch, presumably also "A is happier than B but by less than B is happier than C" (difference-comparison). A degree of freedom is, however, left to interpretation, which vitiates this approach. For these everyday statements can, for all their form tells us, just as well be about facts (A is taller than B) as about opinions, tastes or both (A is more handsome than B). If the latter, it is no use linguistic usage telling us that interpersonal comparisons are "possible" (they do not grate on the ear), because they are not the comparisons utilitarianism needs to provide "scientific" support for policies. An equally crucial ambiguity surrounds the piece of linguistic testimony that tends to be invoked in direct support of redistributive policies: "a dollar makes more difference to B than to A." If the statement means that the incremental utility of a dollar to B is greater than it is to A, well and good. We have successfully compared amounts of utilities of two persons. If it means that a dollar affects B's utility more than A's, we have merely compared the relative change in B's utility ("it has been vastly augmented") and in A's ("it has not changed all that much"), without having said anything about B's utility-change being absolutely greater or smaller than A's (i.e. without demonstrating that the utilities of two persons are commensurate, capable of being expressed in terms of some common homogeneous "social" utility).

-Anthony de Jasay, The State

Replies from: bryjnar
comment by bryjnar · 2012-07-09T12:01:38.047Z · LW(p) · GW(p)

I think the point is to suggest that there may be a precise concept hiding in there somewhere.

Compare with "niceness". We say "Jim is nicer than Joe, but not as nice as James", and yet there's currently no prospect of a canonical unit of niceness. There are then two things we can say:

  • maybe if we really focus in on what people mean by "nice", and do lots of studies into what makes them think that people are nice, and think really hard, then we can come up with a precise concept of niceness that we can stick a unit on.
  • even if we can't do that, even if "niceness" is just irredeemably fuzzy, then perhaps it's still appropriate to treat our judgements of niceness as though they approximated some precise concept.

So we can either say: the science of niceness is coming; or the science of niceness is impossible, but we can pretend we're approximating to it for all intents and purposes.

I think something along these lines might be able to help out utilitarianism.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-07-09T12:36:38.205Z · LW(p) · GW(p)
  • maybe if we really focus in on what people mean by "nice", and do lots of studies into what makes them think that people are nice, and think really hard, then we can come up with a precise concept of niceness that we can stick a unit on.

Jasay addresses this very counterargument a few paragraphs later:

On the other hand, if they are to be understood as verifiable, refutable matters of fact, interpersonal comparability must mean that any difficulties we may have with adding up are technical and not conceptual; they are due to the inaccessibility, paucity or vagueness of the required information. The problem is how to get at and measure what goes on inside people's heads and not that the heads belong to different persons. Minimal, widely accessible information about Nero, Rome and fiddling, for example, is sufficient for concluding that, for a fact, there was no net gain of utility from the burning of Rome while Nero played the fiddle. Progressively richer, more precise information allows progressively more refined interpersonal findings. Thus we move forward from the non-addibility resulting from sheer lack of specific data to an at least quasi-cardinal utility and its at least partial interpersonal comparison. At least ostensibly, the contrast with proposals to ignore specificity and strip individuals of their differences, could not be more complete. The proposal here seems to be to start from admitted heterogeneity and approach homogeneity of individuals by capturing as many of their differences as possible in pairwise comparisons, as if we were comparing an apple and a pear first in terms of size, sugar content, acidity, colour, specific weight and so on through n separate comparisons of homogeneous attributes, leaving uncompared only residual ones which defy all common measure. Once we have found the n common attributes and performed the comparisons, we have n separate results. These must then be consolidated into a single result, the Comparison, by deciding their relative weights.

Would, however, the admission that this procedure for adding up utilities is intellectually coherent, suffice to make it acceptable for choosing policies? If the procedure were to be operated, a host of debatable issues would first have to be somehow (unanimously?) agreed by everybody whose utility gain or loss was liable to be compared in the operation. What distinguishing traits of each individual (income, education, health, job satisfaction, character, spouse's good or bad disposition, etc.) shall be pairwise compared to infer utility levels or utility differences? If some traits can only be subjectively assessed, rather than read off from Census Bureau statistics, who shall assess them? What weight shall be given to each characteristic in inferring utility, and will the same weight do for people of possibly quite different sensibilities? Whose values shall condition these judgements? If some "equitable" way were unanimously agreed for delegating powers for taking comparative readings and setting the weights, the delegate would either go insane, or would just produce whatever result looked right to his intuition.

The long and short of it is that objective and procedurally defined interpersonal comparisons of utility, even if they are modestly partial, are merely a roundabout route all the way back to irreducible arbitrariness, to be exercised by authority. At the end of the day, it is the intuition of the person making the comparison which decides, or there is no comparison.

I apologize for the excessive quotation length, but I couldn't think of a good chunk to cut.
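To make the "n pairwise comparisons, then consolidate" procedure in the quote concrete, here is a minimal sketch (my own illustration, not from Jasay; the attributes and weights are arbitrary placeholders). The arbitrariness Jasay objects to lives entirely in the choice of weights:

```python
# Compare two people attribute by attribute, then consolidate into a single
# "Comparison" score. Attributes and weights are invented for illustration.
person_a = {"income": 40_000, "health": 0.9, "job_satisfaction": 0.4}
person_b = {"income": 25_000, "health": 0.7, "job_satisfaction": 0.8}

weights = {"income": 1e-5, "health": 2.0, "job_satisfaction": 1.0}  # who picks these, and how?

def consolidated_comparison(a, b, weights):
    """Positive result: A is judged better off than B under the chosen weights."""
    return sum(weights[attr] * (a[attr] - b[attr]) for attr in weights)

print(consolidated_comparison(person_a, person_b, weights))
```

A different, equally defensible set of weights can flip the sign of the result, which is Jasay's point about the residual arbitrariness.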

Replies from: torekp, bryjnar
comment by torekp · 2012-07-13T01:17:10.370Z · LW(p) · GW(p)

The exact same arguments could be leveled against intrapersonal utility comparisons. After all, a person's desires and tastes change over time, or even oscillate.

The answer to both "dilemmas" is the same: one can only get there from here. That is, each must use one's present weightings of the various dimensions of utility. In a democracy or an anarchy, these can then be discussed and bargained over to reach some reasonable trade-off between (e.g.) those who especially want to see their fellow citizens experience more pleasure and those who especially wish to see them exercise more autonomy.

Of course, this makes utilitarian arguments secondary to (e.g.) democratic process. But that's the way I like it.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-07-13T09:52:09.834Z · LW(p) · GW(p)

The exact same arguments could be leveled against intrapersonal utility comparisons. After all, a person's desires and tastes change over time, or even oscillate.

Not exactly, but I see what you mean. I agree that (at least seemingly) analogous arguments can be leveled against intrapersonal utility comparisons (with a similar level of inductive strength).

Of course, this makes utilitarian arguments secondary to (e.g.) democratic process. But that's the way I like it.

I would wager that you wouldn't be so pleased if your preferences differed significantly from the median voter's.

comment by bryjnar · 2012-07-09T13:12:37.407Z · LW(p) · GW(p)

That all sounds pretty fair! I don't think I made it clear but I'm fairly sceptical of that particular route myself: I just don't think our conception of "utility" is that coherent. Or to put it in Jasay's terms: I'm not sure we have coherent answers (as a species) to the question of how to weight stuff etc.

comment by [deleted] · 2012-07-09T10:49:07.429Z · LW(p) · GW(p)

The other day, I forgot my eyeglasses at home and while walking I got a good-sized piece of dust or dirt lodged in my eye. My eye was incapacitated for the better part of a minute until tears washed it out. I had a bit of an epiphany: 3^^^3 dust specks suddenly seem a lot scarier, something you obviously need to aggregate and assign a monstrous pile of disutility to. So basically I have updated my position on torture vs specks.

Replies from: Viliam_Bur, Emile, gjm
comment by Viliam_Bur · 2012-07-11T12:20:34.817Z · LW(p) · GW(p)

An alternative explanation is that you now have "dust speck" in near mode, and "torture" in far mode.

I am curious whether 50 years of torture would make you update in the other direction... :P

comment by Emile · 2012-07-09T16:12:44.817Z · LW(p) · GW(p)

The way I phrase it is: do I prefer a one-in-3^^^3 chance of 50 years of torture, or a sure chance of a dust speck in the eye? If a fairy came by and offered you magical protection against dust specks, but each dust speck warded (provided it would have been large enough to be noticeable) had a 1-in-3^^^3 chance of sending you to the torture chamber - would you accept the fairy's offer?

Note that a one-in-3^^^3 scenario is less likely than Barack Obama kicking down my door and declaring his undying love for me, so probably deserves as much attention.

Replies from: Larks
comment by Larks · 2012-09-15T00:27:49.253Z · LW(p) · GW(p)

Note that a one-in-3^^^3 scenario is less likely than Barack Obama kicking down my door and declaring his undying love for me, so probably deserves as much attention.

It deserves much less attention!

3^^^3 times less attention.
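As a toy illustration of why the lottery deserves so much less attention (my own numbers, purely illustrative; 3^^^3 itself is far too large to write down, so a smaller stand-in N carries the point), the expected disutility of the 1-in-N torture lottery scales down linearly with the probability:

```python
# Toy expected-disutility comparison. The disutility figures are arbitrary
# placeholders; only the ratio to N matters.
from fractions import Fraction

speck_disutility = Fraction(1)            # a barely-noticed dust speck
torture_disutility = Fraction(10**12)     # 50 years of torture: huge, but finite
N = 10**100                               # stand-in for 3^^^3 (which is vastly larger)

expected_lottery = Fraction(1, N) * torture_disutility
print(expected_lottery < speck_disutility)   # True: the lottery costs far less in expectation
```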

comment by gjm · 2012-07-09T14:44:33.343Z · LW(p) · GW(p)

Then you may have misunderstood or misremembered the problem statement. From the original article:

What's the least bad, bad thing that can happen? Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.

So we're not talking here about something that incapacitates the victim's eye for the better part of a minute.

(I don't know that you mis{understood,remembered} the problem statement. Maybe you had a Nasty Dust Speck experience and this made you update your position on Minimal Dust Speck experiences.)

Replies from: None
comment by [deleted] · 2012-07-09T20:58:59.865Z · LW(p) · GW(p)

Yeah, I forgot about that detail. My position update stands: as long as it still represents some amount of disutility, the size can't matter in the final count. My right brain still doesn't like it, though.

comment by FiftyTwo · 2012-07-08T22:44:03.140Z · LW(p) · GW(p)

I have a pill that will make you a psychopath. You will retain all your intellectual abilities and all understanding of moral theory, but your emotional reactions to others' suffering will cease. You will still have the empathy to understand that others are suffering, but you won't feel automatic sympathy for it.

Do you want to take it?

Replies from: sixes_and_sevens, Nornagest, buybuydandavis, Username, wedrifid, shokwave, kajro, prase, Dorikka
comment by sixes_and_sevens · 2012-07-09T00:27:56.747Z · LW(p) · GW(p)

No. This would ruin most art and make my sex life exceedingly boring.

comment by Nornagest · 2012-07-08T23:37:57.046Z · LW(p) · GW(p)

Probably not, although I imagine most of the common negative outcomes of sociopathy would have been screened off by the fact that I'm an adult with established habits and therefore am unlikely to develop (e.g.) a pattern of casual theft. Sympathy's there for a reason; if I didn't have the instinct I'd still be able to solve social coordination problems, but I'd be missing a heuristic that'd allow me to do it much faster in the 80% case. My impression is that that would cause more problems than it's likely to solve, given that I'm not in a field like law or business where sociopathy would give me a direct comparative advantage.

I'd probably take a pill that reduced my sympathetic instincts rather than eliminating them entirely, though, or allowed me to selectively disable them. I've got the feeling that they're more than optimally active in my particular case.

comment by buybuydandavis · 2012-07-08T23:46:29.146Z · LW(p) · GW(p)

Do psychopaths only fail to sympathize with painful emotions, or do they fail to sympathize with all emotions?

comment by Username · 2012-07-09T10:46:02.318Z · LW(p) · GW(p)

I'm inclined to say no, but only because serotonin's a hell of a drug.

comment by wedrifid · 2012-07-09T00:01:38.697Z · LW(p) · GW(p)

I have a pill that will make you a psychopath. You will retain all your intellectual abilities and all understanding of moral theory, but your emotional reactions to others' suffering will cease. You will still have the empathy to understand that others are suffering, but you won't feel automatic sympathy for it.

Do you want to take it?

If the pill also removes the ability to feel shame I'll take it.

I wouldn't necessarily recommend it to most people and wouldn't want myself to have taken it while young. But at 30 my ethical principles are rather firmly entrenched in abstract concepts and powered by ego and stubbornness, rather than the sympathy that originally caused the ideals to be formed. These days the emotions just get in the way---they are far too sensitive to be useful to me and tend to be a significant liability.

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2012-07-10T15:36:07.914Z · LW(p) · GW(p)

I wouldn't take an empathy-reducing pill, but a shame-reducing one? WANT!

comment by shokwave · 2012-07-10T14:54:34.910Z · LW(p) · GW(p)

Do you want to take it?

As an experiment, yes. If somebody else was more willing to take it than I was, and I could observe them, that would also satisfy my want.

comment by kajro · 2012-07-08T23:43:53.543Z · LW(p) · GW(p)

I guess this would depend on (1) the extent to which unnecessary sympathy affects my daily life and (2) how the consideration of hypothetical events would affect the evolution of my moral system with respect to this new constraint.

The former is negligible to me, but the latter seems potentially dangerous. I don't know exactly how not being a psychopath affects my reasoning, so I don't think I would be comfortable taking the pill. Maybe if I could back up my mind...

comment by prase · 2012-07-09T18:02:09.922Z · LW(p) · GW(p)

No, what possible reason could I have to take it?

comment by Dorikka · 2012-07-09T00:02:52.125Z · LW(p) · GW(p)

No. I think that would cause value drift, and I'd rather my values not drift in that fashion, because it would cause me to be less likely to steer reality towards world-states which maximize my current values.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-07-09T17:49:05.434Z · LW(p) · GW(p)

Would values that aren't stable without constant emotional feedback be worth preserving? You might evolve to better ones. E.g. psychopaths are shown to make more utilitarian judgements and not be swayed by emotive descriptions.

Replies from: Dorikka
comment by Dorikka · 2012-07-10T00:20:56.904Z · LW(p) · GW(p)

E.g. psychopaths are shown to make more utilitarian judgements and not be swayed by emotive descriptions.

I'm interested in this. Do you have a link or cite?

comment by Username · 2012-07-09T10:42:01.817Z · LW(p) · GW(p)

I am having a discussion on reddit (I am TheMeiguoren), and I have a moral quandary that I want to run by the community.

I'll highlight the main point (the context is a discussion about immortality):

imbecile: For someone to have several lifetimes to be considered a good thing, it must be conclusively shown that this person improves the life of others more and faster than several other people could achieve in their lifetime together with the resources he has at his disposal.

me: If my existence really was harming the human race by not being as efficient as possible, I believe I would fight, well, to the death to preserve my existence. However, I would not do this at the expense of harming humanity. I notice that my goals are contradictory, I'm going to have to reflect on this.

My current moral heuristic is utilitarian tuned by degrees from self (i.e., self > family > friends > other humans > sentient animals > other life > inanimate matter; other intelligent life fits somewhere in there). imbecile considers this animalistic and archaic; I see it as the value system that best fits my ontology, and choosing a moral system is largely arbitrary anyway. In the current world, I believe I would sacrifice myself for the lives of my (hypothetical) children. But if I am able to live forever, I am torn as to whether this is the right action, assuming my existence used up valuable resources that harmed humanity's offspring.

So two questions:

  • Is my degrees-from-self moral heuristic a valid one? I at least find it to be internally consistent. Or to put it another way, just how arbitrary is one's moral system?

  • Within the frame of this moral system, in a post-humanity situation where my very existence hurts the rest of humanity by using resources less efficiently than possible, is sacrifice the best course of action?
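A minimal sketch of the "degrees from self" weighting described above (the categories and weights are arbitrary placeholders of my own, not a claim about the right numbers):

```python
# Toy "utilitarian tuned by degrees from self": each group's welfare change is
# discounted by social distance from the agent. Weights are illustrative only.
distance_weights = {
    "self": 1.0,
    "family": 0.8,
    "friends": 0.5,
    "other_humans": 0.2,
    "sentient_animals": 0.05,
    "other_life": 0.01,
}

def weighted_utility(welfare_by_group):
    """welfare_by_group maps a category to the welfare change for that group."""
    return sum(distance_weights[g] * w for g, w in welfare_by_group.items())

# Example: an act that costs the agent a little but helps strangers a lot.
print(weighted_utility({"self": -1.0, "other_humans": 10.0}))  # 1.0 -> net positive
```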

comment by [deleted] · 2012-07-08T20:16:23.186Z · LW(p) · GW(p)

I have a question: what is akrasia exactly?

Say I have to finish a paper, but I also enjoy wasting time on the internet. All things considered, I decide it would be better for me to finish the paper than for me to waste time on the internet. And yet I waste time on the internet. What's going on there? It can't just be a reflex or a tic: my reflexes aren't that sophisticated. Given how complicated wasting time on the internet is, and that I decidedly enjoy it, it looks like an intentional action, something which is the result of my reasoning. Yet I reasoned that I shouldn't go on the internet, so it can't really be an intentional action. My intention was exactly not to go on the internet.

Maybe I'm just being hypocritical, and I actually value the internet more than finishing a paper?

Replies from: wmorgan, Viliam_Bur, CronoDAS, kajro, AeroRails
comment by wmorgan · 2012-07-08T21:35:43.310Z · LW(p) · GW(p)

Don't sell your reflexes short. Our brains were executing complicated plans for millions of generations before acquiring explicit reasoning, i.e. language. Lately I've been leaning towards the Elephant and Rider model of decision-making, or drawing from this pithy tweet by Stephen Kaas. In your case, I think, your elephant wants to surf the web, and it has a lot more brainpower than your goal-setting rider who wants to finish the paper.

In a practical sense, I think this means you want to put yourself in situations where success is the default, expected result. Use your conscious mind to set up the system, once, then the full power of your brain will work towards your goal, rather than have your "seek cheap entertainment" drive fighting your "finish my paper" drive. (Easier said than done!)

Paul Graham has two computers, one online and the other disconnected from the Internet, and his rule is "you can waste as much time as you want, as long as it's on the other computer". That works for him. Scott Adams' rule is "go to the gym five times a week" even if that means walking through the doors and then walking out immediately. He says, "losers have goals and winners have systems."

Replies from: stcredzero
comment by stcredzero · 2012-07-08T23:16:19.512Z · LW(p) · GW(p)

In a practical sense, I think this means you want to put yourself in situations where success is the default, expected result.

This is a little like "burning the boats."

http://techcrunch.com/2010/03/06/andreessen-media-burn-boats/

comment by Viliam_Bur · 2012-07-09T11:29:29.175Z · LW(p) · GW(p)

what is akrasia exactly?

In most cases, a euphemism for internet addiction.

Replies from: Incorrect
comment by Incorrect · 2012-07-09T11:43:58.125Z · LW(p) · GW(p)

Nah, if I don't waste time on the internet I very easily find other ways to waste time instead.

comment by CronoDAS · 2012-07-08T21:24:54.991Z · LW(p) · GW(p)

Akrasia is getting stuck in a local optimum. It's more pleasant to spend the next instant surfing the internet than writing your paper, so that's what many simple optimizing algorithms (such as a hill-climbing algorithm) will end up doing, rather than "discover" that having finished the paper will be even better.
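A minimal sketch of the analogy (the "pleasantness" landscape is invented for illustration): a greedy hill-climber that only compares adjacent states settles on the locally pleasant option and never crosses the unpleasant valley to the globally better one:

```python
# Greedy hill-climbing over a made-up pleasantness landscape. The climber stops
# at "surfing the internet" because the only adjacent state ("writing the paper")
# is locally worse, even though "paper finished" is far better overall.
landscape = {
    "surfing the internet": 5,
    "writing the paper": 2,
    "paper finished": 100,
}
neighbours = {
    "surfing the internet": ["writing the paper"],
    "writing the paper": ["surfing the internet", "paper finished"],
    "paper finished": ["writing the paper"],
}

def hill_climb(state):
    while True:
        best = max(neighbours[state], key=landscape.get)
        if landscape[best] <= landscape[state]:
            return state   # no adjacent improvement: stuck at a local optimum
        state = best

print(hill_climb("surfing the internet"))   # -> "surfing the internet"
```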

comment by kajro · 2012-07-08T21:01:57.917Z · LW(p) · GW(p)

Couldn't it be a primitive reflex that starts a chain of locally intentional actions leading to "browsing the internet"? For example, you don't know what to write next so you alt-tab to the web browser. In itself that isn't a complicated reflex - sometimes I find myself alt-tabbing and not remembering what I was alt-tabbing for. Once you get to your web browser, you start making these locally intentional actions - i.e within the scope of a web browser's functionality - and when you finally realize what you've done it feels like one big intentional action.

Replies from: None
comment by [deleted] · 2012-07-08T21:10:59.572Z · LW(p) · GW(p)

That's a good thought, thanks.

comment by AeroRails · 2014-04-23T13:59:54.280Z · LW(p) · GW(p)

A bit late but I just want to chime in that the consensus is that akratic action is intentional. You CAN act intentionally against your better judgment, and your example of wasting time on the internet is almost certainly an intentional rather than reflex action.

comment by Trevor_Caverly · 2012-07-09T04:27:01.927Z · LW(p) · GW(p)

Summary: I'm wondering whether anyone (especially moral anti-realists) would disagree with the statement, "The utility of an agent can only depend on the mental state of that agent".

I have had little success in my attempts to devise a coherent moral realist theory of meta-ethics, and am no longer very sure that moral realism is true, but there is one statement about morality that seems clearly true to me: "The utility of an agent can only depend on the mental state of that agent". Call this statement S. By utility I roughly mean how good or bad things are, from the perspective of the agent. The following thought experiment gives a concrete example of what I mean by S.

Imagine a universe with only one sentient thing, a person named P. P desires that there exist a 1 meter cube of gold somewhere within P's lightcone. P has a (non-sentient) oracle that ey trusts completely to provide either an accurate answer or no information for whatever question ey asks. P asks it whether a 1 meter gold cube exists within eir lightcone, and the oracle says yes.

It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P, and therefore the utility of the universe. P is free to claim that eir utility depends upon the existence of the cube, but I believe P would be mistaken. P certainly desires the cube to exist, but I believe that it cannot be part of P's utility function. (I suppose it could be argued that in this case P is also mistaken about eir desire, and that desires can only really be about one's own mental state, but that's not important to my argument). Similarly, P would be mistaken to claim that anything not part of eir mind was part of eir utility function.

I'm not sure whether S in itself implies a weak form of moral realism, since it implies that statements of the form "x is not part of P's utility function" can be true. Would these statements count as ethical statements in the necessary way? It does not seem to imply that there is any objective way to compare different possible worlds though, so it doesn't hurt the anti-realist position much. Still, it does seem to provide a way to create a sort of moral partition of the world, by breaking it into individual morally relevant agents (no, I don't have a good definition for "morally relevant agent") which can be examined separately, since their utility can only depend on their map of the world and not the world itself. The objective utility of the universe can only depend on the separate utilities in each of the partitions. This leaves the question of whether it makes any sense to talk about an objective utility of the universe.

So, does anyone disagree with S? If you agree with S, are you an anti-realist?

Replies from: None, mwengler, Jack, bryjnar, Eugine_Nier
comment by [deleted] · 2012-07-09T11:59:41.342Z · LW(p) · GW(p)

I'm not 100% sure what your S means, but I don't think it's true.

If Omega comes along and says "If you want, I'll make a 1m cube of gold somewhere you'll never observe it, and then make you forget all about this offer", then P will accept.

On the other hand, P wouldn't necessarily accept an offer to make him delusionally believe that a cube of gold exists.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T15:21:29.491Z · LW(p) · GW(p)

That is true, but not relevant to the point I am trying to make. If P took the first offer, they would end up exactly as well off as if they hadn't received the offer, and if P took the second offer, they would end up better off. The fact that P's beliefs don't correspond with reality does not change this. The reason that P would accept the first offer but not the second is that P believes the universe would be "better" with the cube. P does not think ey will actually be happier (or whatever) accepting offer 1, and if P does think ey will be happier, I think that is an error in moral judgment. The error is in thinking that the cube is morally relevant, when it cannot be, since P is the only morally relevant thing in this universe.

comment by mwengler · 2012-07-09T21:54:10.935Z · LW(p) · GW(p)

So, does anyone disagree with S? If you agree with S, are you an anti-realist?

I disagree with S and I think you might also. It depends on how you define utility.

Consider two sentiences, P & Q. They are in identical states of mind. However, they are not in identical states of universe. P is in a room which is about to have its exits sealed and will then be slowly filled with an acid solution which will eat the flesh from P's bones, killing him after about 45 minutes of excruciating pain. Q is in a room in which a screening of the movie "Cabaret", starring Liza Minnelli, Michael York, and Joel Grey, is about to begin.

But at this moment, neither acid nor movie has started, and P & Q are in the same state of mind. By your definition of utility do they have the same utility?

I disagree with S. I have no idea if agreeing with S makes you an anti-realist, but it does seem to indicate you are underestimating the power of reality to make you unhappy.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T22:17:54.020Z · LW(p) · GW(p)

I guess the realism aspect isn't as relevant as I thought it would be. I expected that any realists would believe S, and that anti-realists might or might not. I also think that not believing S would imply anti-realism, but I'm not super confident that that's true.

I would say that P and Q have equal utility until the point where their circumstances diverge, after which of course they would have different utilities. There is no reason to consider future utility when talking about current utility. So it just depends on what section of time you are looking at. If you're only looking at a segment where P and Q have identical brain states, then yes I would say they have the same utility.

comment by Jack · 2012-07-09T13:17:25.669Z · LW(p) · GW(p)

I'm a moral anti-realist. I don't see a justification for S. If there are facts about "how good or bad things are, from the perspective of the agent" it seems like those facts, for humans, are often facts about the 'real world'. I also don't much see what this has to do with moral realism.

Regarding objective utility: are you just talking about adding up utilities of all agent-like things? I suppose you could call such a figure "objective utility" but that doesn't mean such a figure is of any moral importance. I doubt I would care much about it.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T15:32:30.707Z · LW(p) · GW(p)

This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true. But it's mainly just something I was thinking about while thinking about moral realism.

I don't really know what I'm talking about when I say "objective utility"; I am just claiming that if such a thing exists / makes sense to talk about, it can only depend on the states of individual minds, since each mind's utility can only depend on the state of that mind and nothing outside of the utilities of minds can be ethically relevant.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-10T05:57:10.777Z · LW(p) · GW(p)

This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true.

I'm a moral realist and I find your claim nearly as absurd as asserting that 2+2=3, and I suspect nearly all moral realists would share my sentiment (even if they wouldn't express it quite as strongly).

comment by bryjnar · 2012-07-09T11:52:55.149Z · LW(p) · GW(p)

Your example seems to provide an instance where S is false. You just assert that it isn't like that:

It seems clear that whether the cube actually exists cannot possibly be relevant to the utility of P

Why?

P certainly desires the cube to exist, but I believe that it cannot be part of P's utility function.

Again, why? You haven't really said anything about why you'd think that...

Also, it seems pretty clear that things outside of your head can matter. Suppose an evil demon offers you a choice: either

  • your family will be tortured, but you will think that they're fine
  • your family will be fine, but you will think that they're being tortured.

And of course, all memory of the encounter with the demon will be erased.

I think most people would take the second option, and gladly! That seems pretty strong prima facie evidence that stuff outside people's heads matters to them. So I guess I'd disagree with S. Oh, and I'm (sort of) an anti-realist.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T15:15:06.626Z · LW(p) · GW(p)

In your example, I agree that almost everyone would choose the second choice, but my point is that they will be worse off because they make that choice. It is an act of altruism, not an act which will increase their own utility. (Possibly the horror they would experience in making choice 1 would outweigh their future suffering, but after the choice is made they are definitely worse off having made the second choice.)

I say that the cube cannot be part of P's utility function, because whether the cube exists in this example is completely decoupled from whether P believes the cube exists, since P trusts the oracle completely, and the oracle is free to give false data about this particular fact. P's belief about the cube is part of the utility function, but not the actual fact of whether the cube exists.

Replies from: bryjnar, mwengler
comment by bryjnar · 2012-07-09T17:07:32.961Z · LW(p) · GW(p)

Well, again, you're kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states. Looking after one's family isn't often thought of as especially altruistic, because it's something that usually matters very deeply to the person, even bracketing morality.

Your second paragraph is genuinely circular: the whole argument was about whether it showed that S was false, but you appeal to the fact that

whether the cube exists in this example is completely decoupled from whether P believes the cube exists

This is only relevant if we already think S is true. You can't use it to support that very argument!

Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*. Or just perhaps they actually do know what they want? Utility* is a perfectly fine concept; it's just not one that is actually much use in relation to human decision-making.

Edit: remember to escape * s!

Edit2: quoting fail.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T21:06:51.920Z · LW(p) · GW(p)

Look, if it helps, you can define utility*, which is utility that doesn't depend on anything outside the mental state of the agent, as opposed to utility**, which does. Then you can get frustrated at all these silly people who seem to mistakenly think they want to maximize their utility** instead of their utility*.

Someone can want to maximize utility**, and this is not necessarily irrational, but if they do this they are choosing to maximize something other than their own well-being.

Perhaps they are being altruistic and trying to improve someone else's well-being at the expense of their own, like in your torture example. In this example, I don't believe that most people who choose to save their family believe that they are maximizing their own well-being; I think they realize they are sacrificing their well-being (by maximizing utility** instead of utility*) in order to increase the well-being of their family members. I think that anyone who does believe they are maximizing their own well-being when saving their family is mistaken.

Perhaps they do not have any legitimate reason for wanting something other than their own well-being. Going back to the gold cube example, think of why P wants the cube to exist. P could want it to exist because knowing that gold cubes exist makes them happy. If this is the only reason, then P would probably be perfectly happy to accept a deal where their mind is altered so that they believe the cube exists, even though it does not. If, however, P thinks there is something "good" about the cube existing, independent of their mind, they would (probably) not take this deal. Both of these actions are perfectly rational, given P's beliefs about morality, but in the second case, P is mistaken in thinking that the existence of the cube is good by itself. This is because in either case, after accepting the deal, P's mental state is exactly the same, so P's well-being must be exactly the same. Further, nothing else in this universe is morally relevant, and P was simply mistaken in thinking that the existence of the gold block was a fundamentally good thing. (There might be other reasons for P to want the cube. Perhaps P just has an inexplicable urge for there to be a cube. In this case it is unclear whether they would take the deal, but taking it would surely still increase their well-being.)

Well, again, you're kind of just asserting your claim. Prima facie, it seems pretty plausible that whatever function evaluates how well off a person is could take into account things outside of their mental states.

It seems implausible to me that this function could exist independent of a mind or outside of a mind. You seem to be claiming that two people with identical mental states could have different levels of well-being. This seems absurd to me. I realize I am not providing much of an argument for this claim, but the idea that someone's well-being could depend upon something that has no connection with their mental states whatsoever strongly violates my moral intuitions. I expected that other people would share this intuition, but so far no one has said that they do, so perhaps this intuition is unusual. (One could argue that P is correct in believing that the cube has moral value/utility independent of any sentient being, but this seems even more absurd.)

In any case, I think S is basically equivalent to saying that utility (or moral value, however you want to define it) reduces to mental states.

P.S. I think you quoted more than you meant to above.

Replies from: bryjnar
comment by bryjnar · 2012-07-09T21:30:07.550Z · LW(p) · GW(p)

Okay, I just think you seem to have some pretty radically different intuitions about what counts for someone's well-being.

One other thing: you seem to be assuming that the only reasons someone can have to act are either

  • it promotes their well-being
  • some moral reason.

I think this isn't true, and it's especially not true if you're defining well-being as you are. So you present the options for P as

  • they want to have the happy-making belief that the cube exists
  • they think there is something "good" about the cube existing

but these aren't exhaustive: P could just want the cube to exist, not to produce mental states in themself or for a moral reason. If you're now claiming that actually no one desires anything other than that they come to have certain mental states, that's even more controversial, and I would say even more obviously false ;)

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T21:52:55.612Z · LW(p) · GW(p)

I said that there could be other reasons for P to want the cube to exist. If someone has a desire whose fulfillment will not be good for them in any way, or good for any other sentient being, that's fine, but I do not think that a desire of this type is morally relevant in any way. Further, if someone claimed to have such a desire, knowing that fulfilling it served no purpose other than simply fulfilling it, I would believe them to be confused about what desire is. Surely the desire would have to be causing them at least some discomfort, or some sort of urge to fulfill it. Without that, what does desire even mean?

But that doesn't really have much to do with whether S is true. Like I said, it seems clearly true to me that identical mental states imply identical well-being. If you don't agree, I don't really have any way to convince you other than what I've already written.

comment by mwengler · 2012-07-09T21:05:26.789Z · LW(p) · GW(p)

It may not matter whether there is gold in them thar hills, but it does matter what the oracle says. So I think you have misstated P's utility function. P wants the oracle to tell him the gold exists; that is his utility function. And realizing that, you cannot say that it doesn't matter what the oracle really tells him, because it does.

I don't think P's hypothesized stupid reliance on a lying oracle binds us to ignore what P really wants and thus call it only a state of mind. He needs that physical communication from something other than his mind, the oracle.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T21:28:21.780Z · LW(p) · GW(p)

I am stipulating that P really truly wants the gold to exist (in the same way that you would want there not to exist a bunch of people who are being tortured, ceteris paribus). Whether P should be trusting the oracle is beside the point. The difference between these scenarios is that you are correct in believing that the people being tortured is morally bad. However, your well-being would not be affected by whether the people are being tortured, only by your belief of how likely this is. Of course, you would still try to stop the torture if you could, even if you knew that you would never know whether you were successful, but this is mainly an act of altruism.

My main point is probably better expressed as "Beings with identical mental states must be equally well off". Disagreeing with this seems absurd to me, but apparently a lot of people do not share this intuition.

Also, you could easily eliminate the oracle in the example by just stating that P spontaneously comes to believe the cube exists for no reason. Or we could imagine that P has a perfectly realistic hallucination of the oracle. The fact that P's belief is unjustified does not matter. According to S, the reasons for P's mental state are irrelevant.

Replies from: mwengler
comment by mwengler · 2012-07-09T22:16:07.982Z · LW(p) · GW(p)

Whether P should be trusting the oracle is beside the point.

No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists." You are flummoxed by the paradox of P feeling just as happy due to a false belief in gold as he would based on a true belief in gold, and you are ignoring the thing that ACTUALLY made him happy: which was the oracle telling him the gold was real.

How surprising should it be that ignoring the real-world causes of something produces paradoxes? P's happiness doesn't depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists. And if the gold doesn't exist in reality, P's happiness is not changed, but if the reality that led him to believe the gold existed is reversed, if the oracle tells him (truly or falsely) the gold doesn't exist, then his happiness is changed.

I actually have not a clue what this example's connection to moral realism might be, either supporting it or denying it. But I am pretty clear that what you present as a "real mental result without a physical cause because the gold does not matter" is merely a case of you taking a hypothesized fool at his word and ignoring the REAL physical cause of P's happiness or sadness. Or from a slightly different tack, if P defined "gold exists" as "oracle tells me gold exists" then P's claim that his utility is the gold is equivalent to a claim that his utility is being told there is gold.

P's happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-09T23:11:39.363Z · LW(p) · GW(p)

No, it isn't. You are claiming that P "really" wants the gold to exist, but you are also claiming that P thinks that at least one of the definitions of "the gold exists" is "the oracle said the gold exists."

I do not claim that. I claim that P believes the cube exists because the oracle says so. He could believe it exists because he saw it in a telescope. Or because he saw it fly in front of his face and then away into space. Whatever reason he has for "knowing" the cube exists has some degree of uncertainty. He is happy because he has a strong belief that the gold exists. Moreover, my point stands regardless of where P gets his knowledge. Imagine, for example, that P believes strongly that the cube does not exist, because the existence of the cube violates Occam's razor. It is still the case (in my opinion) that whether he is correct does not alter his well-being.

How surprising should it be that ignoring the real-world causes of something produces paradoxes?

I do not think that this is a paradox; it seems intuitively obvious to me. In fact, I'm not entirely sure that we disagree on anything. You say "P's happiness doesn't depend on the gold existing in reality, but it does depend on something in reality causing him to believe the gold exists." I think others on this thread would argue that P's happiness does change depending on the existence of the gold, even if what the oracle tells him is the same either way.

I actually have not a clue what this example's connection to moral realism might be,

Maybe nothing; I just suspected that moral anti-realists would be less likely to accept S. My main question is just whether other people share my intuition that S is true (and what their reasons for agreeing or disagreeing are).

P's happiness has a real cause in the real world. Because P is an idiot, he misunderstands what that cause means, but even P recognizes that the cause of his happiness is what the oracle told him.

I'm not sure I understand what you're saying. P believes that the oracle is telling him the cube exists because the cube exists. P is of course mistaken, but everything else the oracle told him was correct, so he strongly believes that the oracle will only tell him things because they are the truth. Whether this is a reasonable belief for P to have is not relevant. You seem to be saying that if something has no causal effect on someone, that it cannot affect their well-being. I agree with that, but other people do not agree with that.

comment by Eugine_Nier · 2012-07-10T06:05:56.176Z · LW(p) · GW(p)

If you truly believe this proposition, as opposed to mere belief in belief, you should stop reading LessWrong right now. If you keep reading LessWrong, you are likely to get better at rationality, and in particular at telling whether something is true or false, which will make it harder for you to maintain comfortable beliefs and thus will vastly lower your utility by your definition.

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-11T03:17:32.308Z · LW(p) · GW(p)

I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?

If someone kidnapped me and hooked me up to an experience machine that gave me a simulated perfect life, and then tortured my family for the rest of their lives, I claim that this would be good for me. It would be bad overall because people would be harmed (far in excess of my gains). If I was given this as an option I would not take it because I would be horrified by the idea and because I believe it would be morally wrong, but not because I believe I would be worse off if I took the deal. If someone claimed that taking this deal would be bad for their own well-being, I believe that they would be mistaken.

If someone claimed that the existence of a gold cube in a section of the universe where it would never be noticed by anyone or affect any sentient things could be a morally good thing, I would likewise claim that they are mistaken. I claim this, because regardless of how much they want the cube to exist, or how good they believe the existence of the cube to be, no one's well-being can depend on the existence of the cube. At most, someone's well-being can depend on their belief in the existence of the cube.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-11T07:21:16.868Z · LW(p) · GW(p)

I think you're misunderstanding what I meant. I'm using "Someone's utility" here to mean only how good or bad things are for that person. I am not claiming that people should (or do) only care about their own well-being, just that their well-being only depends on their own mental states. Do you still disagree with my statement given this definition of utility?

I had assumed you meant something like this.

To see if I'm understanding you correctly, would you be in favor of wireheading the entire human race?

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-12T01:10:15.453Z · LW(p) · GW(p)

I would not be in favor of wireheading the human race, but I don't see how that is connected to S. If wireheading all of humanity is bad, it seems clear that it is bad because it is bad for the people being wireheaded. If this is a wireheading scenario where humanity goes extinct as a result of wireheading, then this is also bad because of the hypothetical people who would have valued being alive. There is nothing about S that stops someone from comparing the normal life they would live with a wireheaded life and saying they would prefer the normal life. This is because these two choices involve different mental states for the person, and S does not in itself place any restrictions on which mental states would be better for you to have. Rather, it states that your own mental states are the only things that can be good or bad for you.

If you think S is false, you could additionally claim that wireheading humanity is bad because the fact that humanity is wireheaded is something that almost everybody believes is bad for them, and so if humanity is wireheaded, that is very bad for many people, even if these people are not aware that humanity is wireheaded. But it seems very easy to believe that wireheading is bad for humanity without believing this claim.

Just to make sure I understand your position: Imagine two universes, U1 and U2, like the one in my original post, where P1 and P2 are unsure whether the gold cube exists. In U1 the cube exists, in U2 it does not, but they are otherwise identical (or close enough to identical that P1 and P2 have identical brain states). The Ps truly desire that the cube exist as much as anyone can desire a fact about the universe to be true. Do you claim that P1 is better off than P2? If so, do you really think that this being possible is as obvious as that 2 + 2 ≠ 3? If not, why would someone's well-being be able to depend on something other than their mental states in some situations but not this one? It seems very obvious to me that P1 and P2 have exactly equally good lives, and I am truly surprised that other people's intuitions and beliefs lean strongly the other way.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-12T03:02:05.739Z · LW(p) · GW(p)

Just to make sure I understand your position: Imagine two universes, U1 and U2, like the one in my original post, where P1 and P2 are unsure whether the gold cube exists. In U1 the cube exists, in U2 it does not, but they are otherwise identical (or close enough to identical that P1 and P2 have identical brain states). The Ps truly desire that the cube exist as much as anyone can desire a fact about the universe to be true. Do you claim that P1 is better off than P2?

So would you argue that P2 shouldn't investigate whether the cube exists, because then he would find out that it doesn't and thus become worse off?

Replies from: Trevor_Caverly
comment by Trevor_Caverly · 2012-07-12T04:33:49.954Z · LW(p) · GW(p)

Yes. P2 finding this out would harm him, and couldn't possibly benefit anyone else, so if searching would lead him to believe the cube doesn't exist, it would be ethically better if he didn't search. But the harm to P2 is a result of his knowledge, not the mere fact of the cube's inexistence. Likewise, P1 should investigate, assuming he would find the cube. The reason for this difference is that investigating would have a different effect on the mental states of P1 than it would on the mental states of P2. If the cube in U1 can't be found by P1, then the asymmetry is gone, and neither should investigate.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-12T04:55:27.220Z · LW(p) · GW(p)

Very well, I repeat the advice I gave you above.

If you truly believe this proposition, as opposed to mere belief in belief, you should stop reading LessWrong right now. If you keep reading LessWrong, you are likely to get better at rationality, and in particular at telling whether something is true or false, which [is likely to result in you discovering that a lot of gold cubes don't exist].

comment by James_Miller · 2012-07-08T17:08:22.541Z · LW(p) · GW(p)

Love - an increase in her utility causes an increase in your utility.

Hate - an increase in her utility causes a decrease in your utility.

Indifference - a change in her utility has no influence on your utility.

Love = good.
Hate = evil.
Indifference = how almost everyone feels towards almost everyone.
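One way to make the three definitions concrete (a toy sketch of my own; the linear form and the coupling coefficient are illustrative simplifications, not part of the comment):

```python
# Toy formalization: U_you = own_payoff + c * U_her.
#   c > 0  -> love (her gains raise your utility)
#   c < 0  -> hate (her gains lower your utility)
#   c == 0 -> indifference
def my_utility(own_payoff, her_utility, coupling):
    return own_payoff + coupling * her_utility

her_before, her_after = 10.0, 15.0   # her utility rises by 5

for label, c in [("love", 0.5), ("hate", -0.5), ("indifference", 0.0)]:
    delta = my_utility(0.0, her_after, c) - my_utility(0.0, her_before, c)
    print(label, delta)   # love: +2.5, hate: -2.5, indifference: 0.0
```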

Replies from: Oligopsony, asparisi, Solvent, kajro, wedrifid
comment by Oligopsony · 2012-07-08T17:28:28.135Z · LW(p) · GW(p)

As you've defined them, indifference is a razor-thin line - and I'd say we love each other, mostly.

But scalar "increases in utility" is not typically what our benevolence and malevolence respond to. If I care about positional status but also want other people to be happy independently of that, then there's universal love and universal hate flowing from me alike. Is there an elegant rule distinguishing how utility changes in others are sorted into this? (The obvious things that come to mind don't seem to really work.) And what can I do to convert status threats into sympathy (when appropriate, which let's say it typically is)?

One thing I have noticed myself doing, which I think is a good thing, is thinking of the actual world as moving up or down in status relative to other possible worlds in response to its getting better or worse, and of my relative status as moving up or down with that. But I haven't been doing this in a self-conscious way.

comment by asparisi · 2012-07-08T18:08:21.725Z · LW(p) · GW(p)

So, let's assume your definitions, and also assume a Person X.

Person X likes to hit kids. They enjoy it. They may or may not think about how this decreases the utility of the kids: in fact if hitting kids causes their utility to go up or down, person X doesn't care. They just like hitting kids.

I hate Person X, because I know they like to hit kids. I value kids and think hitting them is damaging, so when X's utility goes up, mine goes down. So I hate X in just the way you say.

Note that Person X doesn't hate the kids, by your definition. They aren't concerned with the children's utility at all; they are actually indifferent.

But I hate Person X. Which makes me the evil one.

That does not add up to normality.

Replies from: TheOtherDave, Solvent
comment by TheOtherDave · 2012-07-08T18:14:59.005Z · LW(p) · GW(p)

I value kids and don't think hitting them is damaging

I'm almost certain that "don't" is not intended.

Replies from: asparisi
comment by asparisi · 2012-07-08T20:40:03.885Z · LW(p) · GW(p)

Edited. Thanks.

comment by Solvent · 2012-07-09T03:03:28.022Z · LW(p) · GW(p)

You're confusing a few different issues here.

So your utility decreases when theirs increases. Say that your love or hate for the adult is L1, and your love or hate for the kid is L2. Utility change for each as a result of the adult hitting the kid is U1 for him and U2 for the kid.

If your utility decreases when he hits the kid, then all we've established is that -L2·U2 > L1·U1. You may love them both equally but think that hitting the kid messes him up more than it makes the adult happy; you'd still be unhappy when the guy hits a kid. But we haven't established that you hate the adult.

If the only thing that makes Person X happy is hitting kids, and you somehow find out directly that his utility has increased, then you can infer from that that he's hit a kid, and that makes you sad. However, this can happen even if you have a positive multiplier for his utility function in yours.

So I think your mistake is saying "I hate Person X, because I know they like to hit kids." You might hate them, but the given definitions don't force you to hate them just because they hit kids.

Put another way, you might not be happy if you heard that they had horrible back pain. You can care for someone, but not like what they're doing.

(Your comment still deserves commendation for presenting an argument in that form.)

Replies from: asparisi
comment by asparisi · 2012-07-09T04:42:37.690Z · LW(p) · GW(p)

I am actually using James' definition of hate, which is "When their utility function goes up, mine goes down."

I suppose that, trivially, this is not entirely accurate of me and Person X. If Person X eats a sandwich and enjoys it, I don't have a problem with that.

But if "hate" is unilateral in that fashion, no one loves or hates anyone: I have yet to encounter any individual who would, for instance, feel worse because someone else is enjoying a tasty sandwich. So instead, I used a more loosely defined variation on their definition, where "hate" can be allowed to occur on one axis of a person's life and not another.

Under this variation, I can hate this person along the kid-hitting axis without hating them along other aspects of their life, which is normal. But hating that person isn't evil, which is part of what I was getting at. I don't feel happier if Person X gets utility from hitting kids, even if I would otherwise value Person X. And I don't think it is evil to hate someone who gets their utility in a really messed-up way.
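
(One way to cash out that per-axis variation, purely as a sketch of my own rather than anything asparisi commits to:)

```python
# Sketch of the "hate along one axis only" variation: instead of one scalar
# weight per person, keep a weight per domain of their life. Domain names and
# numbers are made up for illustration.
x_weights = {"eating sandwiches": 0.0,   # indifferent along this axis
             "hitting kids": -1.0}       # "hated" along this axis only

def my_utility_change(their_gains_by_domain, weights):
    return sum(weights.get(domain, 0.0) * gain
               for domain, gain in their_gains_by_domain.items())

print(my_utility_change({"eating sandwiches": 3.0}, x_weights))  # 0.0
print(my_utility_change({"hitting kids": 3.0}, x_weights))       # -3.0
```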

What might make this more difficult is that I am using a colloquial version of 'evil' but James' particular formulation of 'hate,' which may make things confusing since I don't think James' definition of hate maps onto what we normally refer to as hate.

comment by Solvent · 2012-07-09T02:53:32.023Z · LW(p) · GW(p)

What are you trying to do with these definitions? The first three do a reasonable job of providing some explanation of what love means on a slightly simpler level than most people understand it.

However, the "love=good, hate=evil" can't really be used like that. I don't really see what you're trying to say with that.

Also, I'd argue that love has more to do with signalling than your definition seems to imply.

Replies from: James_Miller
comment by James_Miller · 2012-07-09T03:11:38.524Z · LW(p) · GW(p)

What are you trying to do with these definitions?

Show how a tiny bit of economics can be used to provide definitions, consistent with many people's understanding, of love, hate, good and evil. (I have provided these definitions to my intermediate microeconomics students.)

Evil, I believe, is taking pleasure in other people's pain. I would exclude signaling concerns when deciding whether someone acted out of love.

Replies from: TheOtherDave, tut
comment by TheOtherDave · 2012-07-09T15:03:49.221Z · LW(p) · GW(p)

Huh.

So on your account, if I enjoy watching people suffer, but I nevertheless go out of my way to alleviate suffering in the world because I prefer people not suffer (thereby reducing my own pleasure), I'm evil? And if I don't enjoy watching people suffer, but I go around causing suffering because I prefer that people suffer (again, thereby potentially reducing my own pleasure), I'm not evil?

Did I get that right?

Replies from: James_Miller
comment by James_Miller · 2012-07-09T15:09:48.636Z · LW(p) · GW(p)

So on your account, if I enjoy watching people suffer, but I nevertheless go out of my way to alleviate suffering in the world because I prefer people not suffer (thereby reducing my own pleasure), I'm evil?

Impossible, since utility is that which you maximize; that is, utility is measured by revealed preferences.
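
(For what it's worth, a toy sketch of what "measured by revealed preferences" comes to here; the scenario and the crude scoring rule are mine, not James_Miller's or standard microeconomics.)

```python
# Toy illustration of "utility is measured by revealed preferences": the only
# data are choices, and utility is just a numerical re-description of them,
# so "I enjoy X but reliably choose not-X" has no place in the model.
observed_choices = [("alleviate suffering", "watch people suffer")] * 3  # (picked, rejected)

def revealed_scores(choices):
    """Score each option by how often it was picked over an alternative."""
    scores = {}
    for picked, rejected in choices:
        scores[picked] = scores.get(picked, 0) + 1
        scores.setdefault(rejected, 0)
    return scores

print(revealed_scores(observed_choices))
# {'alleviate suffering': 3, 'watch people suffer': 0} -- the enjoyment talk never enters
```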

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-09T17:25:29.445Z · LW(p) · GW(p)

I'll accept that definition of utility, but what does it have to do with enjoyment?

That is, OK, in this case I believe suffering has net negative utility, which explains my preferring to alleviate it. Am I somehow wrong, then, when I say I enjoy watching people suffer... I only think I enjoy it but I really don't? Or what, exactly?

Replies from: James_Miller
comment by James_Miller · 2012-07-09T19:02:16.525Z · LW(p) · GW(p)

I should have written "Evil, I believe, is taking UTILITY in other people's pain."

Am I somehow wrong, then, when I say I enjoy watching people suffer... I only think I enjoy it but I really don't?

From a rational-actor microeconomic viewpoint this doesn't make sense. But if you believe that enjoyment has some objective, physical basis in the brain, then it just means you are mistaken.

Replies from: prase, TheOtherDave
comment by prase · 2012-07-09T22:01:52.563Z · LW(p) · GW(p)

Torturing a masochist with his consent isn't evil. So you perhaps should have written "Evil, I believe, is taking UTILITY in other people's DISUTILITY." But then, the definition of evil equals your original definition of hate tautologically. Which may or may not be what you've intended.

comment by TheOtherDave · 2012-07-09T20:08:08.141Z · LW(p) · GW(p)

I've made no claims about the basis for enjoyment, physical or otherwise, merely about my ability to recognize when I am enjoying something. But evidently the talk of enjoyment was a red herring to begin with, so I'm happy to drop it here.

comment by tut · 2012-07-09T14:35:58.053Z · LW(p) · GW(p)

Evil, I believe, is taking pleasure in other people's pain.

No, the word for that is sadism. Evil is about how you judge a person('s actions, motivations, etc.), not purely about their experience/values.

comment by kajro · 2012-07-08T21:20:02.289Z · LW(p) · GW(p)

So the more people that enjoy hurting you (an increase in their utility causing a decrease in your utility), the more evil you become (since you hate a larger number of people)? Did I misinterpret this?

comment by wedrifid · 2012-07-08T17:29:38.142Z · LW(p) · GW(p)

Love - an increase in her utility causes an increase in your utility. Hate - an increase in her utility causes a decrease in your utility. Indifference - a change in her utility has no influence on your utility.

Love = good. Hate = evil. Indifference = how almost everyone feels towards almost everyone.

You make a compelling case that evil is a good thing (sometimes).

comment by Grognor · 2012-07-16T14:51:01.071Z · LW(p) · GW(p)

[...] and related-to-rationality enough to deserve its own thread.

I've gotten to thinking that morality and rationality are very, very isomorphic. The former seems to require the latter, and in my experience the latter gives rise to the former, so they may not even be completely distinguishable. The two have a lot in common: both are very difficult for humans due to our haphazard makeup; both have imaginary Ideal versions (respectively: God, and the agent who has only true beliefs, optimal decisions, and infinite computing power), and they seem to be correlated (though it is hard to say for sure); and the folk versions of both are always wrong. By that last point I mean that when someone has an axe to grind, he will say it is moral to X, or rational to X, where really X is just what he wants, whether he is in a position of power or not. Related to that, I've got a pet theory that if you take the high values of each literally, they are entirely uncontroversial, and arguments and tribalism only begin when people start making claims about what each implies, but once again I can't be sure at this juncture.

What say ye, Less Wrong?

Replies from: TimS
comment by TimS · 2012-07-16T15:25:14.633Z · LW(p) · GW(p)

Related to that I've got a pet theory that if you take the high values of each literally, they are entirely uncontroversial

My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right."

But in a particular society or sub-culture, more specific assertions can be uncontroversial - in an unhelpful-for-solving-any-problems kind of way. That was what I took away from Applause Lights.

Replies from: Grognor
comment by Grognor · 2012-07-16T16:13:50.394Z · LW(p) · GW(p)

My sense is that this assertion can be empirically falsified for all levels of abstraction below "Do what is right."

Indeed, this is one of many reasons why I am starting to think "go meta" is really, really good advice.

Edit: To clarify, what I mean is that I think virtue ethics, deontology, utilitarianism, and the less popular ethical theories agree way more than their proponents think they do. At this point this is still a guess.

Replies from: TimS
comment by TimS · 2012-07-16T19:24:46.528Z · LW(p) · GW(p)

I don't follow. Discussing theories of morality is already quite meta from the object level moral decisions we face in our daily lives. Going another level of meta is unlikely to illuminate - it certainly doesn't seem likely to be helpful in doing the impossible.

comment by Waterd · 2012-07-09T01:06:50.813Z · LW(p) · GW(p)

Question: What is the definition of morality? What is morality? For what do humans use this concept, and what motivates humans to better understand morality, whatever it is?

Replies from: TheOtherDave, None
comment by TheOtherDave · 2012-07-09T04:53:58.942Z · LW(p) · GW(p)

As it's used here, the term roughly refers to a framework for ordering actions or states of the world. That is, given a choice between action A1 and A2, an agent with one moral framework might endorse A1 over A2, and an agent with a different moral framework might endorse A2 over A1, either because of some direct property of the actions themselves, or some property of the states (or expected states) of the world that causes or is caused by the performing of those actions.

People can disagree about what properties of an action or state matter for sorting, and even people who agree on what properties matter can disagree on how to sort based on them.
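
(A minimal sketch of that "framework for ordering actions" idea, with made-up frameworks, actions, and properties; nothing here is meant as TheOtherDave's own formalization.)

```python
# Minimal sketch of "a moral framework orders actions": each framework is a
# scoring rule over actions (or their expected consequences), and different
# frameworks can rank the same two actions in opposite orders.
actions = {
    "A1": {"lies_told": 1, "lives_saved": 1},
    "A2": {"lies_told": 0, "lives_saved": 0},
}

def consequence_scorer(props):   # cares only about the resulting state
    return props["lives_saved"]

def rule_scorer(props):          # cares only about a property of the act itself
    return -props["lies_told"]

for label, scorer in [("consequence-ish", consequence_scorer), ("rule-ish", rule_scorer)]:
    ranking = sorted(actions, key=lambda a: scorer(actions[a]), reverse=True)
    print(label, ranking)
# consequence-ish ['A1', 'A2']
# rule-ish ['A2', 'A1']
```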

comment by [deleted] · 2012-07-09T11:03:00.891Z · LW(p) · GW(p)

Morality is the goal system which values positive subjective outcomes for sentient beings.

Replies from: wedrifid
comment by wedrifid · 2012-07-09T11:43:24.977Z · LW(p) · GW(p)

Morality is the goal system which values positive subjective outcomes for sentient beings.

No, that's altruism. Morality isn't nearly so nice (in general). Some morals do give better subjective outcomes for sentient beings (yours probably included), but not all do, and that is not their point.