Requesting clarification on the Metaethics sequence

post by Carinthium · 2013-10-07T13:00:44.353Z · LW · GW · Legacy · 75 comments

My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.

There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Elizier's Metaethics sequence- as far as I can tell, a deontologist could agree with just about everything in the Sequences.

Said deontologist would argue that, to the extent a human universal morality can exist through generalised moral instincts, said instincts tend to be deontological (as supported by scientific studies: a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, whom they could accuse of wanting a consequentialist system while ignoring the moral instincts at the basis of their own speculations.

I'm not completely sure about this, but if I have indeed misunderstood, it seems an important enough misunderstanding to deserve clearing up.

75 comments


comment by shminux · 2013-10-07T15:12:25.967Z · LW(p) · GW(p)

Take some form of consequentialism, precompute a set of actions which cover 90% of the common situations, call them rules, and you get a deontology (like the ten commandments). Which works fine unless you run into the 10% not covered by the shortcuts, and until the world changes significantly enough that what used to be 90% becomes more like 50 or even 20.
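
A toy sketch of that precomputation step (the situations, actions, and scores below are all invented stand-ins, not anything from the thread):

```python
# Toy sketch: derive "rules" by precomputing a consequentialist evaluator's best
# action for each common situation. Situations, actions, and scores are invented.

COMMON_SITUATIONS = {
    "stranger drops wallet": ["return it", "keep it"],
    "friend asks for help": ["help", "refuse"],
    "queue at the shop": ["wait your turn", "push in"],
}

SCORES = {  # stand-in consequentialist evaluation: higher is a better outcome
    ("stranger drops wallet", "return it"): 10, ("stranger drops wallet", "keep it"): -5,
    ("friend asks for help", "help"): 8, ("friend asks for help", "refuse"): -2,
    ("queue at the shop", "wait your turn"): 3, ("queue at the shop", "push in"): -4,
}

# Precompute the best action per situation and freeze the answers as rules.
rules = {s: max(actions, key=lambda a: SCORES[(s, a)])
         for s, actions in COMMON_SITUATIONS.items()}
print(rules)

# The frozen table now behaves like a deontology: instant answers, no re-evaluation.
# It fails exactly where shminux says it does: on situations missing from the table,
# and whenever the world changes so SCORES no longer tracks real consequences.
```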

Replies from: Carinthium
comment by Carinthium · 2013-10-08T01:06:37.127Z · LW(p) · GW(p)

The trouble with this is that it contradicts the Reason for Being Moral In the First Place, as outlined in Elizier's metaethics. Said reason effectively comes down to obeying moral instincts, after all.

WHY said morality came about is irrelevant. What's important is that it's there.

Replies from: shminux
comment by shminux · 2013-10-08T01:47:05.618Z · LW(p) · GW(p)

Said reason effectively comes down to obeying moral instincts, after all.

Presumably you mean the post The Bedrock of Morality: Arbitrary?, which is apparently some form of moral realism, since it refuses to attach e- prefix (for Eliezer) or even h- prefix to his small subset of possible h-shoulds. I have not been able to understand why he does it, and I don't see what h-good dropping it does. But then I have not put a comparable effort into learning or developing metaethics.

WHY said morality came about is irrelevant. What's important is that it's there.

I disagree with that. Morality changes from epoch to epoch, from place to place and from subculture to subculture. There is very little "there", in our instincts. Some vague feelings of attachment, maybe. Most humans can be easily brought up with almost any morality.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T01:57:05.358Z · LW(p) · GW(p)

There are two separate questions here, then: one of your metaethics and its defensibility, and one of Elizier's.

Your Metaethics: Request clarification before we proceed.

Elizier's: Elizier's metaethics has a problem that your defence cannot solve and which, I think you would agree, is not addressed by your argument.

I'm more referring to The Moral Void, and am emphasising that. This shows that the fact that morality changes from culture to culture is almost irrelevant- morality constantly changing does not counter any of Elizier's arguments there.

Replies from: shminux
comment by shminux · 2013-10-08T04:46:04.852Z · LW(p) · GW(p)

Sorry, no idea what you are on about. Must be some inferential gap.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T04:53:52.822Z · LW(p) · GW(p)

On point 1: Why are you moral? The whole argument I've been trying to make this thread is that Elizier's moral ideas, though he doesn't realise it, lead naturally to deontology. You seem to disagree with Elizier's metaethics, so yours becomes a point of curiosity.

Elizier's: Clearly your post doesn't work as a defence of Elizier's metaethics, as it was not meant to be, for the most part. But the last paragraph is an exception.

I'll try a different approach. There is always A morality, even if said morality changes. However, "The Moral Void" and its basic argument still work because people want to be moral even if there are no universally compelling arguments.

Replies from: shminux
comment by shminux · 2013-10-08T05:26:44.023Z · LW(p) · GW(p)

On point 1: Why are you moral?

What do you mean by moral? Why my behavior largely conforms to the societal norms? Because that is how I was brought up and it makes living among humans easier. Or are you asking why the societal norms are what they are?

Replies from: Carinthium
comment by Carinthium · 2013-10-08T05:33:04.918Z · LW(p) · GW(p)

There is a difference between descriptive and prescriptive. Descriptive represents the reasons why we act. Prescriptive represents the reasons why we SHOULD act. Or in this case, having reflected upon the issue, why you want to be moral instead of, say, trying to make yourself as amoral as possible.

Why would you reject that sort of course and instead try to be moral? That's the proper basis of a metaethical theory, from which ethical choices are made.

Replies from: shminux
comment by shminux · 2013-10-08T06:03:02.460Z · LW(p) · GW(p)

I thought I answered that. Descriptive: I was brought up reasonably moral (= conforming to the societal norms), so it's a habit and a thought pattern. Prescriptive: it makes living among humans easier (rudimentary consequentialism). Rejecting these norms and still having a life I would enjoy would be hard for me and requires rewiring my self-esteem to no longer be linked to being a good member of society. Habits are hard to break, and in this case it's not worth it. I don't understand what the fuss is about.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T06:47:35.969Z · LW(p) · GW(p)

You're a rationalist- you've already had some experience at self-rewiring. Plus, if you're a decent liar (and that's not so hard- there's a strong enough correlation between lying ability and self-confidence that you can improve the former through the latter, plus you're intelligent), then you can either use strategic lies to get up the career ladder or skive off social responsibilities and become a hedonist.

Replies from: shminux
comment by shminux · 2013-10-08T14:33:15.781Z · LW(p) · GW(p)

OK, one last reply, since we are not getting anywhere and I keep repeating myself: it does not pay for me to attempt to become "amoral" to get happier. See also this quote. Tapping out.

comment by benkuhn · 2013-10-07T19:49:37.989Z · LW(p) · GW(p)

Possible consequentialist response: our instincts are inconsistent: i.e., our instinctive preferences are intransitive, not subject to independence of irrelevant alternatives, and pretty much don't obey any "nice" property you might ask for. So trying to ground one's ethics entirely in moral instinct is doomed to failure.

There's a good analogy here to behavioral economics vs. utility maximization theory. For much the same reason that people who accept gambles based on their intuitions become money pumps (see: the entire field of behavioral economics), people who do ethics entirely based on moral intuitions become "morality pumps". If you want to not be a morality pump, you essentially have to choose some situations in which you go against your moral instincts. Just as behavioral economics is a good descriptive theory but a poor normative theory of choice under uncertainty, deontology is a good descriptive theory of human moral instincts, but a poor normative theory.
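
A minimal money-pump sketch of the kind benkuhn alludes to, assuming an invented cyclic preference and a nominal trading fee:

```python
# Toy money pump: an agent with cyclic (intransitive) preferences A > B > C > A
# pays a small fee for every "upgrade" and ends up back where it started, poorer.
# The preference cycle, starting item, and fee are all invented for illustration.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is strictly preferred to y

def offer(holding, offered, fee=1):
    """Accept the swap (and pay the fee) iff the offered item is preferred to the current one."""
    if (offered, holding) in prefers:
        return offered, -fee
    return holding, 0

holding, money = "C", 0
for offered in ["B", "A", "C", "B", "A", "C"]:  # the pump just cycles its offers
    holding, cost = offer(holding, offered)
    money += cost

print(holding, money)  # -> C -6: the same item as at the start, six fees lighter
```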

Cf. Joshua Greene's research in moral psychology and the cognitive differences between "characteristically deontologist" and "characteristically consequentialist" judgments. Previously discussed on Less Wrong here.

Replies from: lmm, Carinthium
comment by lmm · 2013-10-08T12:05:33.986Z · LW(p) · GW(p)

For much the same reason that people who accept gambles based on their intuitions become money pumps (see: the entire field of behavioral economics), people who do ethics entirely based on moral intuitions become "morality pumps".

I think this thought is worth pursuing in more concrete detail. If I prefer certainly saving 400 people to .8 chance of saving 500 people, and prefer .2 chance of killing 500 people to certainly killing 100 people, what crazy things can a competing agent get me to endorse? Can you get me to something that would be obviously wrong even deontologically, in the same way that losing all my money is obviously bad even behavioral-economically?

Replies from: gjm
comment by gjm · 2013-10-08T15:15:26.730Z · LW(p) · GW(p)

If you have those preferences, then presumably small enough changes to the competing options in each case won't change which outcome you prefer. And then we get this:

Competing Agent: Hey, Imm. I hear you prefer certainly saving 399 people to a 0.8 chance of saving 500 people. Is that right?

Imm: Yup.

Competing Agent: Cool. It just so happens that there's a village near here where there are 500 people in danger, and at the moment we're planning to do something that will save them 80% of the time but otherwise let them all die. But there's something else we could do that will save 399 of them for sure, though unfortunately the rest won't make it. Shall we do it?

Imm: Yes.

Competing Agent: OK, done. Oh, now, I realise I have to tell you something else. There's this village where 100 people are going to die (aside: 101, actually, but that's even worse, right?) because of a dubious choice someone made. I hear you prefer a 20% chance of killing 499 people to the certainty of killing 100 people; is that right?

Imm: Yes, it is.

Competing Agent: Right, then I'll get there right away and make sure they choose the 20% chance instead.

At this point, you have gone from losing 500 people with p=0.2 and saving them with p=0.8, to losing one person for sure and then losing the rest with p=0.2 and saving them with p=0.8. Oops.

[EDITED to clarify what's going on at one point.]
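
A quick worked-numbers sketch of the two trades above (probabilities taken from the dialogue; the killing/saving distinction a deontologist would draw is deliberately set aside):

```python
# Expected deaths before and after the two trades (toy numbers from the dialogue).
# Start:   500 villagers saved with p = 0.8, otherwise all 500 die.
# Trade 1: switch to saving 399 for certain, so 101 die for certain.
# Trade 2: switch to a gamble where 499 die with p = 0.2, with 1 villager lost regardless.

p_lose = 0.2

start   = p_lose * 500        # 100.0 expected deaths
trade_1 = 101                 # 101 certain deaths
trade_2 = 1 + p_lose * 499    # 100.8 expected deaths

print(start, trade_1, trade_2)

# Worse still, the final position is dominated outcome-by-outcome: in the lucky 80%
# branch one person dies instead of none, and in the unlucky 20% branch 500 die either way.
```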

Replies from: lmm
comment by lmm · 2013-10-08T18:10:32.829Z · LW(p) · GW(p)

Well sure. But my position only makes sense at all because I'm not a consequentialist and don't see killing n people and saving n people as netting out to zero, so I don't see that you can just add the people up like that.

Replies from: gjm
comment by gjm · 2013-10-08T20:13:15.174Z · LW(p) · GW(p)

Perhaps it wasn't clear -- those two were the same village. So I'm not adding up people, I'm not assuming that anything cancels out with anything else. I'm observing that if you have those (inconsistent) preferences, and if you have them by enough of a margin that they can be strengthened a little, then you end up happily making a sequence of changes that take you back to something plainly worse than where you started. Just like getting money-pumped.

Replies from: Carinthium
comment by Carinthium · 2013-10-09T01:27:24.782Z · LW(p) · GW(p)

Firstly, a deontological position distinguishes between directly killing people and not saving them- killing innocent people is generally an objective moral wrong. Your scenario is deceptive because it seems to lmm that innocents will be killed rather than not saved.

More importantly, Elizier's metaethics is based on the premise that people want to be moral. That's the ONLY argument he has for a metaethics that gets around the is-ought distinction.

Say for the sake of argument a person has a course of action compatible with deontology vs. one compatible with consequentialism, and those are their only choices. Shouldn't they ignore the stone tablet and choose the deontological one if that's what their moral intuitions say? Elizier can't justify not doing so without contradicting his original premise.

Replies from: gjm
comment by gjm · 2013-10-09T18:08:13.924Z · LW(p) · GW(p)

(Eliezer.)

So, I wasn't attempting to answer the question "Are deontologists necessarily subject to 'pumping'?" but the different question "Are people who work entirely off moral intuition necessarily subject to 'pumping'?". Imm's question -- if I didn't completely misunderstand it, which of course I might have -- was about the famous framing effect where describing the exact same situation two different ways generates different preferences. If you work entirely off intuition, and if your intuitions are like most people's, then you will be subject to this sort of framing effect and you will make the choices I ascribed to Imm in that little bit of dialogue, and the result is that you will make two decisions both of which look to you like improvements, and whose net result is that more people die. On account of your choices. Which really ought to be unacceptable to almost anyone, consequentialist or deontologist or anything else.

I wasn't attempting a defence of Eliezer's metaethics. I was answering the more specific question that (I thought) Imm was asking.

Replies from: lmm
comment by lmm · 2013-10-11T12:37:20.191Z · LW(p) · GW(p)

I did mean I was making a deontological distinction between saving and killing, not just a framing question (and I didn't really mean that scenario specifically, it was just the example that came to mind - the general question is the one I'm interested in, it's just that as phrased it's too abstract for me). Sorry for the confusion.

comment by Carinthium · 2013-10-08T01:01:27.863Z · LW(p) · GW(p)

Consequentialist judgements are based on emotion in a certain sense just as much as deontological judgements- both come from an emotive desire to do what is "right" (for a broad definition of "right") which cannot be objectively justified using a universally compelling argument. It is true that they come from different areas of the brain, but to call consequentialist judgements "inherently rational" or similar is a misnomer- from a philosophical perspective both are in the same metaphorical boat, as both rest on premises that ultimately cannot be objectively justified (see Hume's Is/Ought).

Assuming a deontological system is based on the premise "These are human instincts, so we create a series of rules to reflect them", then there is no delusion and hence no rationalisation. It is instead a logical extension, they would argue, of rules similar to those Elizier made for being moral in the first place.

Elizier's metaethics, as I think is best summed up in "The Moral Void", is that people have a desire to do what is Right, which is reason enough for morality. This argument cannot be used as an argument for violating moral instincts without creating a contradiction.

Replies from: lmm
comment by lmm · 2013-10-08T11:36:42.748Z · LW(p) · GW(p)

But Yudkowsky does seem to think that we should violate our moral instincts - we should push the fat man in front of the tram, we should be more willing to pay to save 20,000 birds than to save 200. Our position on whether it's better to save 400 people or take a chance of saving 500 people should be consistent with our position on whether it's better to kill 100 people or take a chance of killing 500 people. We should sell all our possessions except the bare minimum we need to live and give the rest to efficient charity.

If morality is simply our desire to do what feels Right, how can it ever justify doing something that feels Wrong?

Replies from: wedrifid, Carinthium
comment by wedrifid · 2013-10-08T11:55:38.945Z · LW(p) · GW(p)

But Yudkowsky does seem to think that... We should sell all our possessions except the bare minimum we need to live and give the rest to efficient charity.

Yudkowsky does not advocate this. Nor does he practice it. In fact he does the opposite---efficient charity gives him money to have more than the bare minimum needed to live (and this does not seem unwise to me).

comment by Carinthium · 2013-10-08T11:38:27.118Z · LW(p) · GW(p)

This is a very good summary of the point I'm trying to make, though not the argument for making it. Better than mine for what it does.

comment by [deleted] · 2013-10-08T15:14:52.993Z · LW(p) · GW(p)

I have a rant on this subject that I've been meaning to write.

Deontology, Consequentialism, and Virtue ethics are not opposed, just different contexts, and people who argue about them have different assumptions. Basically:

Consequence:Agents :: Deontology:People :: Virtue:Humans

To the extent that you are an agent, you are concerned with the consequences of your actions, because you exist to have an effect on the actual world. A good agent does not make a good person, because a good agent is an unsympathetic sociopath, and not even sentient.

To the extent that you are a person (existing in a society), you should follow rules that forbid murder, lying, and leaving the toolbox in a mess, and compel politeness, helping others, and whatnot. A good person does not make a good agent, because what a person should do (for example, help an injured bird) often makes no sense from a consequentialist POV.

To the extent that you are human, you are motivated by Virtue and ideas, because that's just how the meat happens to work.

Replies from: wedrifid, shminux, Benito
comment by wedrifid · 2013-10-08T16:37:03.984Z · LW(p) · GW(p)

Deontology, Consequentialism, and Virtue ethics are not opposed, and people who argue about them have different assumptions. Basically:

Totally agree. In fact, I go as far as to declare that Deontologic value systems and Consequentialist systems can be converted between each other (so long as the system of representing consequentialist values is suitably versatile). This isn't to say such a conversion is always easy and it does rely on reflecting off an epistemic model but it can be done.

To the extent that you are an agent, you are concerned with the consequences of your actions, because you exist to have an effect on the actual world.

I'm not sure this is true. Why can't we call something that doesn't care about consequences an agent? Assuming, of course, that it is a suitably advanced and coherent person. Take a human deontologist who stubbornly sticks to their deontological values, ignores consequences, and dismisses as irrelevant that small part of themselves that feels sad about the consequences. That still seems to deserve being called an agent.

To the extent that you are a person (existing in a society), you should follow rules that forbid murder, lying, and leaving the toolbox in a mess, and compel politeness, helping others, and whatnot. A good person does not make a good agent, because what a person should do (for example, help an injured bird) often makes no sense from a consequentialist POV.

I'd actually say a person shouldn't help an injured bird. Usually it is better from both an efficiency standpoint and a humanitarian standpoint to just kill it and prevent short term suffering and negligible long term prospects of successfully recovering to functioning in the wild. But part of my intuitive experience here is that my intuitions for what makes a 'good person' have been corrupted by my consequentialist values to a greater extent than they have for some others. Sometimes my efforts at social influence and behaviour are governed somewhat more than average by my decision-theory intuitions. For example my 'should' advocates lying in some situations where others may say people 'shouldn't' lie (even if they themselves lie hypocritically).

I'm curious Nyan. You're someone who has developed an interesting philosophy regarding ethics in earlier posts and one that I essentially agree with. I am wondering to what extent your instantiation of 'should' makes no sense from a consequentialist POV. Mine mostly makes sense but only once 'ethical inhibitions' and consideration of second order and unexpected consequences are accounted for. Some of it also only makes sense in consequentialist frameworks where having a preference for negative consequences to occur in response to certain actions is accepted as a legitimate intrinsic value.

Replies from: kalium, Carinthium
comment by kalium · 2013-10-08T20:23:29.905Z · LW(p) · GW(p)

As for helping birds, it depends on the type of injury. If it's been mauled by a cat, you're probably right. But if it's concussed after flying into a wall or window---a very common bird injury---and isn't dead yet, apparently it has decent odds of full recovery if you discourage it from moving and keep predators away for an hour or few. (The way to discourage a bird from moving and possibly hurting itself is to keep it in a dark confined space such as a shoebox. My roommate used to transport pigeons this way and they really didn't seem to mind it.)

Regarding the rest of the post, I'll have to think about it before coming up with a reply.

Replies from: wedrifid
comment by wedrifid · 2013-10-08T23:08:03.003Z · LW(p) · GW(p)

But if it's concussed after flying into a wall or window---a very common bird injury---and isn't dead yet, apparently it has decent odds of full recovery if you discourage it from moving and keep predators away for an hour or few.

Thank you, I wasn't sure about that. My sisters and I used to nurse birds like that back to health where possible but I had no idea what the prognosis was. I know that if we found any chicks that were alive but displaced from the nest they were pretty much screwed once we touched them, due to contamination with human smell causing rejection.

More recently (now that I'm in Melbourne rather than on a farm) the only birds that have hit my window have broken their necks and died. They have been larger birds so I assume the mass to neck-strength ratio is more of an issue. For some reason most of the birds here in the city manage to not fly into windows anywhere near as often as the farm birds. I wonder if that is mere happenstance or micro-evolution at work. Cities have got tons more windows than farmland does, after all.

Replies from: kalium
comment by kalium · 2013-10-08T23:50:50.064Z · LW(p) · GW(p)

Actually the human-scent claim seems to be a myth. Most birds have a quite poor sense of smell. Blog post quoting a biologist. Snopes.com confirms. However, unless they're very young indeed it's still best to leave them alone:

Possibly this widespread caution against handling young birds springs from a desire to protect them from the many well-intentioned souls who, upon discovering fledglings on the ground, immediately think to cart them away to be cared for. Rather than attempting to impress upon these folks the real reason for leaving well enough alone (that a normal part of most fledglings' lives is a few days on the ground before they fully master their flying skills), a bit of lore such as this one works to keep many people away from young birds by instilling in them a fear that their actions will doom the little ones to slow starvation. Lore is thus called into service to prevent a harmful act that a rational explanation would be much less effective in stopping.

Replies from: wedrifid
comment by wedrifid · 2013-10-09T00:06:42.237Z · LW(p) · GW(p)

Oh, we were misled into taking the correct action. Fair enough, I suppose. I had wondered why they were so sensitive and also why the advice was "don't touch" rather than "put on gloves". Consider me enlightened.

(Mind you, the just-so story justifying the myth lacks credibility. It seems more likely that the myth exists for the usual reason myths exist and the positive consequences are pure coincidence. Even so I can take their word for it regarding the observable consequences if not the explanation.)

comment by Carinthium · 2013-10-09T01:30:59.470Z · LW(p) · GW(p)

I can see how to convert a Consequentialist system into a series of Deontological rules with exceptions. However, not all Deontological systems can be converted to Consequentialist systems. Deontological systems usually contain Absolute Moral Wrongs which are not to be done no matter what, even if they will lead to even more Absolute Moral Wrongs.

Replies from: wedrifid
comment by wedrifid · 2013-10-09T08:01:12.756Z · LW(p) · GW(p)

I can see how to convert a Consequentialist system into a series of Deontological rules with exceptions

In the case of consequentialists that satisfy the VNM axioms (the only interesting kind) they need only one Deontological rule, "Maximise this utility function!".

However, not all Deontological systems can be converted to Consequentialist systems. Deontological systems usually contain Absolute Moral Wrongs which are not to be done no matter what, even if they will lead to even more Absolute Moral Wrongs.

I suggest that they can. With the caveat that the meaning attributed to the behaviours and motivations will be different, even though the behaviour decreed by the ethics is identical. It is also worth repeating with emphasis the disclaimer:

This isn't to say such a conversion is always easy and it does rely on reflecting off an epistemic model but it can be done.

The requirement for the epistemic model is particularly critical to the process of constructing the emulation in that direction. It becomes relatively easy (to conceive, not to do) if you use an evaluation system that is compatible with infinitesimals. If infinitesimals are prohibited (I don't see why someone would prohibit that aspect of mathematics) then it becomes somewhat harder to create a perfect emulation.

Of course the above applies when assuming those VNM axioms once again. Throw those away and emulating the deontological system reverts to being utterly trivial. The easiest translation from deontological rules to a VNM-free consequentialist system would be a simple enumeration and ranking of possible permutations. The output consequence-ranking system would be inefficient and "NP-enormous" but the proof-of-concept translation algorithm would be simple. Extreme optimisations are almost certainly possible.
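
A proof-of-concept sketch of that enumerate-and-rank translation (the rulebook, actions, and aftermaths are invented toy stand-ins):

```python
# Proof-of-concept "enumerate and rank" translation: list every (action, aftermath)
# permutation of a tiny toy world, then rank them purely by how a deontological
# rulebook judges the agent's own action in each. Everything here is a stand-in.
from itertools import product

ACTIONS = ["keep promise", "break promise", "lie"]
AFTERMATHS = ["friend pleased", "friend hurt"]

def violations(action):
    """Toy rulebook: count the duties the agent's action breaks."""
    return {"keep promise": 0, "break promise": 1, "lie": 1}[action]

histories = list(product(ACTIONS, AFTERMATHS))
# The output "consequentialist" ranking: fewer violations always outranks more,
# and the aftermath is ignored because the rulebook says nothing about it.
ranking = sorted(histories, key=lambda h: violations(h[0]))
for h in ranking:
    print(violations(h[0]), h)

# An agent maximising this ranking reproduces the deontologist's choices exactly,
# but the table grows combinatorially with the size of the world, hence "NP-enormous".
```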

Replies from: Carinthium
comment by Carinthium · 2013-10-09T10:29:59.559Z · LW(p) · GW(p)

1- Which is by definition not deontological.

2- A fairly common deontological rule is "Don't murder an innocent, no matter how great the benefit." Take the following scenario:

-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B's own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity's sake.

Your conversion would have "Killing innocents intentionally" as an evil, and thus A would be obliged to kill the innocent.

Replies from: wedrifid, Protagoras
comment by wedrifid · 2013-10-09T19:36:34.945Z · LW(p) · GW(p)

1- Which is by definition not deontological.

No! When we are explicitly talking about emulating one ethical system in another a successful conversion is not a tautological failure just because it succeeds.

2- A fairly common deontological rule is "Don't murder an innocent, no matter how great the benefit."
Take the following scenario:

This is not a counter-example. It doesn't even seem to be an especially difficult scenario. I'm confused.

-A has the choice to kill 1 innocent to stop B killing 2 innocents, when B's own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity's sake.

Ok. So when A is replaced with ConsequentialistA, ConsequentialistA will have a utility function which happens to systematically rank world-histories in which ConsequentialistA executes the decision "intentionally kill innocent" at time T as lower than all world-histories in which ConsequentialistA does not execute that decision (but which are identical up until time T).

Your conversion would have "Killing innocents intentionally" as an evil, and thus A would be obliged to kill the innocent.

No, that would be a silly conversion. If A is a deontological agent that adheres to the rule "never kill innocents intentionally" then ConsequentialistA will always rate world histories descending from this decision point in which it kills innocents lower than those in which it doesn't. It doesn't kill the innocent.

I get the impression that you are assuming ConsequentialistA to be trying to rank world-histories as if the decision of B matters. It doesn't. In fact, the only aspects of the world histories that ConsequentialistA cares about at all are which decision ConsequentialistA makes at one time and with what information it has available. Decisions are something that occur within physics and so when evaluating world histories according to some utility function a VNM-consequentialist takes into account that detail. In this case it takes into account no other detail and even among such details those later in time are rated infinitesimal in significance compared to earlier decisions.

You have no doubt noticed that the utility function alluded to above seems contrived to the point of utter ridiculousness. This is true. This is also inevitable. From the perspective of a typical consequentialist ethic we should expect typical deontological value system to be utterly insane to the point of being outright evil. A pure and naive consequentialist when encountering his first deontologist may well say "What the F@#%? Are you telling me that of all the things that ever exist or occur in the whole universe across all of space and time the only consequence that matters to you is what your decision is in this instant? Are you for real? Is your creator trolling me?". We're just considering that viewpoint in the form of the utility function it would take to make it happen.
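
A minimal sketch of what such a contrived utility function could look like, assuming an invented world-history representation; the rule is made lexicographically dominant, which is one way to cash out rating everything else as infinitesimal:

```python
# Minimal sketch of ConsequentialistA's contrived utility function over world-histories
# (the representation below is invented). The rule "never kill innocents intentionally"
# dominates lexicographically; everything else only ever breaks ties.
from dataclasses import dataclass

@dataclass
class WorldHistory:
    a_killed_innocent: bool   # did ConsequentialistA itself intentionally kill an innocent?
    total_deaths: int         # crude summary of everything else that happens

def utility(h: WorldHistory) -> tuple:
    return (0 if h.a_killed_innocent else 1,   # the dominant, rule-shaped term
            -h.total_deaths)                   # ordinary consequences, as a tie-breaker

kill_one_to_stop_b = WorldHistory(a_killed_innocent=True, total_deaths=1)
refuse_and_b_kills = WorldHistory(a_killed_innocent=False, total_deaths=2)

print(max([kill_one_to_stop_b, refuse_and_b_kills], key=utility))
# -> the "refuse" history: ConsequentialistA never kills the innocent, exactly
#    mirroring the deontological agent A, even though more people die downstream.
```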

Replies from: Carinthium
comment by Carinthium · 2013-10-21T10:36:29.275Z · LW(p) · GW(p)

Alright- conceded.

comment by Protagoras · 2013-10-09T16:22:27.635Z · LW(p) · GW(p)

More moves are possible. There is the agent-relative consequentialism discussed by Doug Portmore; if a consequence counts as overridingly bad for A if it involves A causing an innocent death, and overridingly bad for B if it involves B causing an innocent death (but not overridingly bad for A if B causes an innocent death; only as bad as normal failures to prevent preventable deaths), then A shouldn't kill one innocent to stop B from killing 2, because that would produce a worse outcome for A (though it would be a better outcome for B). I haven't looked closely at any of Portmore's work for a long time, but I recall being pretty convinced by him in the past that similar relativizing moves could produce a consequentialism which exactly duplicates any form of deontological theory. I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don't know if he still thinks that.
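
A toy rendering of that relativizing move (the outcome bookkeeping and numbers are invented): the same two outcomes are ranked differently depending on which agent evaluates them, because only the evaluator's own killings are treated as overriding:

```python
# Toy sketch of agent-relative consequentialism: the same two outcomes, evaluated by
# two different agents' utility functions. Only the evaluator's *own* killings count
# as overridingly bad; everyone else's count as ordinary preventable deaths.

def agent_relative_utility(evaluator, killings):
    """killings maps each agent to the number of innocents that agent kills."""
    own = killings.get(evaluator, 0)
    others = sum(n for agent, n in killings.items() if agent != evaluator)
    return (-own, -others)   # own killings dominate lexicographically

a_kills_one = {"A": 1, "B": 0}   # A kills 1 innocent, so B never kills
a_refuses   = {"A": 0, "B": 2}   # A refuses, B kills 2 innocents

for evaluator in ("A", "B"):
    best = max([a_kills_one, a_refuses], key=lambda o: agent_relative_utility(evaluator, o))
    print(evaluator, "prefers", best)

# A's ranking says A shouldn't kill the one innocent; B's ranking prefers the world
# where A does (since then B kills nobody), which is the asymmetry described above.
```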

Replies from: wedrifid
comment by wedrifid · 2013-10-09T19:40:19.940Z · LW(p) · GW(p)

I've never heard of Doug Portmore but your description of his work suggests that he is competent and may be worth reading.

I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don't know if he still thinks that.

This seems overwhelmingly likely. Especially since the alternatives that seem plausible can be conveniently represented as instances of this. This is certainly a framework in which I evaluate all proposed systems of value. When people propose things that are not relative (such crazy things as 'total utilitarianism') then I intuitively think of that in terms of a relative consequentialist system that happens to arbitrarily assert that certain considerations must be equal.

comment by shminux · 2013-10-08T15:28:59.311Z · LW(p) · GW(p)

To the extent that you are human, you are motivated by Virtue and ideas, because that's just how the meat happens to work.

You mean, by habits and instincts, right?

comment by Ben Pace (Benito) · 2013-10-15T19:40:28.387Z · LW(p) · GW(p)

If you were to have a rant, you might have to give more examples, or more thorough ones, because I'm not quite getting your explanation.

I'm just posting this because otherwise there would've been an inferential silence.

comment by Protagoras · 2013-10-07T14:44:11.535Z · LW(p) · GW(p)

I dispute the claim that the default human view is deontological. People show a tendency to prefer to apply simple, universal rules to small scale individual interactions. However, they are willing to make exceptions when the consequences are grave (few agree with Kant that it's wrong to lie to try to save a life). Further, they are generally in favor of deciding large scale issues of public policy on the basis of something more like calculation of consequences. That's exactly what a sensible consequentialist will do. Due to biases and limited information, calculating consequences is a costly and unreliable method of navigating everyday moral situations; it is much more reliable to try to consistently follow rules that usually produce good consequences. Still, sometimes the consequences are dramatic and obvious enough to provide reason to disregard one of the rules. Further, it is rarely clear how to apply our simple rules to the complexities of public policy, and the greater stakes involved justify investing greater resources to get the policy right, by putting in the effort to actually try to figure out the consequences. Thus, I think the evidence as a whole suggests people are really consequentialists; they act like deontologists in small-scale personal decisions because in such decisions deontologists and consequentialists act similarly, not because they are deontologists.

This is not to say that people are perfect consequentialists; I am not particularly confident that people are reliable in figuring out which are the truly exceptional personal cases, or in telling the difference between small scale and large scale cases. But while I think human biases make those judgments (and so some of our moral opinions) unreliable, I think they are best explained by the thesis that we're mostly (highly) fallible consequentialists, rather than the thesis that we're mostly following some other theory. After all, we have plenty of independent evidence that we're highly fallible, so that can hardly be called special pleading.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T01:05:10.426Z · LW(p) · GW(p)

Presumably you think that in a case like the fat man case, the human somehow mistakenly believes the consequences of pushing the fat man will be worse? In some cases you have a good point, but that's one of the ones where your argument is least plausible.

Replies from: Protagoras
comment by Protagoras · 2013-10-08T01:19:40.695Z · LW(p) · GW(p)

I don't think that the person mistakenly believes that the consequences will be sufficiently worse, but something more like that the rule of not murdering people is really really important, and the risk that you're making a mistake if you think you've got a good reason to violate it this time is too high. Probably that's a miscalculation, but not exactly the miscalculation you're pointing to. I'm also just generally suspicious of the value of excessively contrived and unrealistic examples.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T04:59:14.586Z · LW(p) · GW(p)

I'll take two broader examples then- "Broad Trolley cases", cases where people can avert a harm only at the cost of triggering a lesser harm but do not directly cause it, and "Broad Fat Man Cases", which are the same except such a harm is directly caused.

As a general rule, although humans can be swayed to act in Broad Fat Man cases they cannot help but feel bad about it- much less so in Broad Trolley cases. Admittedly, if I remember correctly, this is a case in which humans are inconsistent with themselves, as they can be made to cause such a harm under pressure; but practically none consider it the moral thing to do and most regret it afterwards- the same as near-mode, selfish defections from group interests.

comment by Vladimir_Nesov · 2013-10-08T10:36:12.724Z · LW(p) · GW(p)

(You systematically misspell 'Eliezer', in the post and in the comments.)

Replies from: Carinthium
comment by Carinthium · 2013-10-08T11:03:01.680Z · LW(p) · GW(p)

Sorry about that. Still, it's less important than the actual content.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2013-10-09T01:35:10.582Z · LW(p) · GW(p)

You keep systematically misspelling it, even after having read my comment. It's annoying.

Replies from: Carinthium
comment by Carinthium · 2013-10-09T02:06:47.213Z · LW(p) · GW(p)

Spelling isn't really a big deal compared to actual arguments- my comments are still just as comprehensible.

Replies from: None
comment by [deleted] · 2013-10-09T03:19:36.880Z · LW(p) · GW(p)

No. Bad spelling after being corrected demonstrates disrespect for the community. Your arguments are irrelevant if you don't obey basic courtesy.

Replies from: Vladimir_Nesov, Carinthium
comment by Vladimir_Nesov · 2013-10-09T03:29:12.262Z · LW(p) · GW(p)

Arguments (in general, not focusing on these particular arguments) are not irrelevant, they are just not relevant to this issue.

Replies from: Carinthium
comment by Carinthium · 2013-10-09T03:39:02.099Z · LW(p) · GW(p)

If you mean the issue of metaethics, you have a problem. Arguments are clearly relevant to metaethics or else Elizier's fails by default for being an argument.

If you mean the issue of spelling, see my argument.

comment by Carinthium · 2013-10-09T03:26:09.120Z · LW(p) · GW(p)

1- This is a very clear ad hominem- there is no possible way in which spelling errors would suggest a lower probability of me being right.

2- I meant no disrespect whatsoever. However for me making arguments at this level is difficult and I have to devote a lot of thought to it. If I were to devote thought to correcting spelling errors as well, I would not be able to make intelligent arguments.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-10-09T03:39:15.693Z · LW(p) · GW(p)

there is no possible way in which spelling errors would suggest a lower probability of me being right

In general this is false. For example, frequent spelling errors (which is not the case here) signal inability to attend to detail, and may thus indicate low intelligence or bad epistemic habits. There are alternative possible causes for that, but it's still evidence, unless screened off by knowledge of those alternative causes.

(Apart from that, probability of being right may be irrelevant when evaluating an abstract argument that is not intended to communicate new information apart from suggesting inferences from their own knowledge to the reader.)

Replies from: Carinthium
comment by Carinthium · 2013-10-09T03:49:51.409Z · LW(p) · GW(p)

O.K- that much could be true, so I was slightly wrong there. But even a moron can come up with an intelligent argument in theory. In theory, if the evidence suggested I was a moron but in other ways suggested I was right, then concluding I was right and got lucky would be the correct answer. Therefore, unless you're significantly unsure about the argument it shouldn't really apply- i.e. weak evidence at best.

comment by Vladimir_Nesov · 2013-10-08T20:20:19.570Z · LW(p) · GW(p)

That's why it was in parentheses. Still, you didn't fix it.

comment by mare-of-night · 2013-10-08T00:17:24.532Z · LW(p) · GW(p)

Could it be possible that some people's intuitions are more deontologist or more consequentialist than others? While trying to answer this, I think I noticed an intuition that being good should make good things happen, and shouldn't make bad things happen. Looking back on the way I thought as a teenager, I think I must have been under that assumption then (when I hadn't heard this sort of ethics discussed explicitly). I'm not sure about further back than that, though, so I don't know that I didn't just hear a lot of consequentialist arguments and get used to thinking that way.

I think consequentialism could also come from having an intuition that morality should be universal and logically coherent. Humans often have that intuition, and also other intuitions that conflict with each other, so some form of reconciling intuitions has to happen.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T00:52:54.292Z · LW(p) · GW(p)

No disputes on Paragraph 1, but:

An intuition that morality should be "universal" in your sense is not as common as you might think. In Victorian times there was a hierarchy of responsibilities depending on closeness, which fits modern intuitions except that race has been removed. Confucius considered it the filial duty of a son to cover up his father's crimes. Finally, there are general tribal in-group instincts. All these suggest that the intuition that morality should be universal (as opposed to logically coherent, which is more common) is the "weaker" intuition that should give.

In addition, of course, see Elizier's articles about there being no universally persuasive argument.

Replies from: mare-of-night
comment by mare-of-night · 2013-10-08T00:56:24.942Z · LW(p) · GW(p)

Right, good point. I had Kant on my mind while I was writing the post, and didn't do the mental search I should have to check other sets of ideas.

comment by fubarobfusco · 2013-10-13T15:24:45.114Z · LW(p) · GW(p)

While it's possible to express consequentialism in a deontological-sounding form, I don't think this would yield a central example of what people mean by deontological ethics — because part of what is meant by that is a contrast with consequentialism.

I take central deontology to entail something of the form, "There exist some moral duties that are independent of the consequences of the actions that they require or forbid." Or, equivalently, "Some things can be morally required even if they do no benefit, and/or some things can be morally forbidden even if they do no harm."

That is, deontology is not just a claim about how moral rules should be phrased or taught; it's a claim about what kinds of moral facts can be true.

comment by drethelin · 2013-10-07T14:29:29.954Z · LW(p) · GW(p)

Deontology is not in general incompatible. You could have a deontology that says: God says do exactly what Eliezer Yudkowsky thinks is correct. But most people's deontology does not work that way.

Our instincts being reminiscent of deontology is very much not the same thing as deontology being true.

Replies from: Carinthium, wedrifid
comment by Carinthium · 2013-10-08T00:47:50.643Z · LW(p) · GW(p)

In your metaethics, what does it mean for an ethical system to be "true", then (put in quotations only because it is a vague term at the moment in need of definition)? Elizier's metaethics has a good case for following a morality considered "true" in that it fits human intuitions- but if you abandon that where does it get you?

Replies from: drethelin
comment by drethelin · 2013-10-09T19:33:59.404Z · LW(p) · GW(p)

Deontology being true, in my meaning, is something along the lines of God actually existing and there being a list of things he wants us to do, or a morality that is somehow inherent in the laws of physics that, once we know enough about the universe, everyone should follow. To me a morality that falls out of the balance between the preferences of humans (or sentients in general) is more like utilitarianism.

comment by wedrifid · 2013-10-14T00:39:06.299Z · LW(p) · GW(p)

Deontology is not in general incompatible. You could have a deontology that says: God says do exactly what Eliezer Yudkowsky thinks is correct.

That isn't a deontology. That is an epistemic state. "God says do X" is in the class "Snow is white" not "You should do X". Of course if you add "You should do exactly what God says" then you have a deontology. Well, you would if not for the additional fact "Eliezer Yudkowsky thinks that God saying so isn't a particularly good reason to do it", making the system arguably inconsistent.

comment by MrMind · 2013-10-08T08:18:36.105Z · LW(p) · GW(p)

As far as I understand Eliezer's metaethics, I would say that it is compatible with deontology. It even presupposes it a little bit, since the psychological unity of mankind can be seen as a very general set of deontologies.
I would agree thus that deontology is what human instincts are based on.

Under my further elaboration on said metaethics, that is, the view of morality as common computations plus local patches, deontology and consequentialism are not really opposing theories. In the evolution of a species, morality would be formed as common computations that are passed genetically between generations, thereby forming not so much a set of "I must" as a subtler context of presuppositions. But as the species evolves and gets more and more intelligent, it faces newer and newer challenges, often at a speed that doesn't allow genetic filtering and propagation.
In that case, it seems to me that consequentialism is the only applicable way to find new optimal solutions, sometimes even at odds with older instincts.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T08:49:18.735Z · LW(p) · GW(p)

"Optimal" by what value? Since we don't have an objective morality here, a person only has their Wants (whether moral or not) to decide what counts as optimal. This leads to problems. Take a Hypothetical Case A.

-In Case A there are several options. One option would be the best from a consequentialist perspective, taking all consequences into account. However, taking this option would make the option's taker not only feel very guilty (for whatever reason- there are plenty of possibilities) but also harm their selfish interests in the long run.

This is an extreme case, but it shows the problem at its worst. Elizier would say that doing the consequentialist thing would be the Right thing to do. However, he cannot have any compelling reason to do it based on his reasons for morality- an innate desire to act that way being the only reason he has for it.

Replies from: MrMind
comment by MrMind · 2013-10-09T10:27:29.906Z · LW(p) · GW(p)

"Optimal" by what value?

Well, I intended it in the minimal sense of "maximizing an optimization problem", if the moral quandary could be seen in that way. I was not asserting that consequentialism is the optimal way to find a solution to a moral problem, I stated that it seems to me that consequentialism is the only way to find an optimal solution to a moral problem that our previous morality cannot cover.

Since we don't have an objective morality here, a person only has their Wants (whether moral or not) to decide what counts as optimal.

But we do have an objective morality (in Eliezer's metaethics): it's morality! As far as I can understand, he states that morality is the common human computation to assign values to states of the world around us. I believe that he asserts these two things, besides others:

  • morality is objective in the sense that it's a common fundamental computation, shared by all humans;

  • even if we encounter an alien way to assign value to states of the world (e.g. pebblesorters), we could not call that morality, because we cannot go outside of our moral system; we would have to call it something else, and it would not be morally understandable.

That is: human value computation -> morality; pebblesorters value computation -> primality, which is not: moral, fair, just, etc.

One option would be the best from a consequentialist perspective, taking all consequences into accont. However, taking this option would make the option's taker not only feel very guilty (for whatever reason- there are plenty of possibilities) but harm their selfish interests in the long run.

I agree that a direct conflict between a deontological computation and a consequentialist one cannot be solved normatively by metaethics. At least, not by the one expounded here or the one I subscribe to. However, I believe that it doesn't need to be: it's true that morality, when confronted with truly alien value computations like primality or clipping, looks rather monolithic; however, zoomed in, it can be rather confused.
I would say that in any situation where there's such a conflict, only the individual computation present in the actor's mind could determine the outcome. If you want, computational metaethics is descriptive and maybe predictive, rather than prescriptive.

comment by Nisan · 2013-10-08T06:44:22.034Z · LW(p) · GW(p)

I agree; on my reading, the metaethics in the Metaethics sequence are compatible with deontology as well as consequentialism.

You can read Eliezer defending some kind of utilitarianism here. Note that, as is stressed in that post, on Eliezer's view, morality doesn't proceed from intuitions only. Deliberation and reflection are also important.

Replies from: Carinthium
comment by Carinthium · 2013-10-08T07:11:53.414Z · LW(p) · GW(p)

The problem with what Elizier says there is making it compatible with his reason for being moral. For example:

"And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities. It's just that the brain doesn't goddamn multiply. Quantities get thrown out the window."

However, Elizier's comments on "The Pebblesorters" amongst others make clear that he defines morality based on what humans feel is moral. How is this compatible?

In addition, given that the morality in the Metaethics is fundamentally based on preferences, there are severe problems. Take Hypothetical case A, which is broad enough to cover a lot of plausible scenarios.

A- A hypothetical case where there is an option which would be the best from a consequentialist perspective, but which, for some reason, would leave the person who takes it feeling more guilty overall AND less happy afterwards than the alternative, both in the short run and the long run.

Elizier would say to take the action that is best from a consequentialist perspective. This is indefensible however you look at it- logically, philosophically, etc.

Replies from: Nisan
comment by Nisan · 2013-10-08T17:13:50.595Z · LW(p) · GW(p)

Ok, I can see why you read the Pebblesorters parable and concluded that on Eliezer's view, morality comes from human feelings and intuitions alone. The Pebblesorters are not very reflective or deliberative (although there's that one episode where a Pebblesorter makes a persuasive moral argument by demonstrating that a number is composite.) But I think you'll find that it's also compatible with the position that morality comes from human feelings and intuitions, as well as intuitions about how to reconcile conflicting intuitions and intuitions about the role of deliberation in morality. And, since The Moral Void and other posts explicitly say that such metaintuitions are an essential part of the foundation of morality, I think it's safe to say this is what Eliezer meant.

I'll set aside your scenario A for now because that seems like the start of a different conversation.

Replies from: Carinthium
comment by Carinthium · 2013-10-09T01:18:34.567Z · LW(p) · GW(p)

Elizier doesn't have sufficient justification for including such metaintuitions anyway. Scenario A illustrates this well- assuming that reflecting on the issue doesn't change the balance of what a person wants to do, it doesn't make sense, and Elizier's consequentialism is the equivalent of the stone tablet.

Replies from: Nisan
comment by Nisan · 2013-10-09T05:00:58.263Z · LW(p) · GW(p)

You really ought to learn to spell Eliezer's name.

Anyways, it looks like you're no longer asking for clarification of the Metaethics sequence and have switched to critiquing it; I'll let other commenters engage with you on that.

comment by Eugine_Nier · 2013-10-08T02:00:11.484Z · LW(p) · GW(p)

I suspect the real reason why a lot of people around here like consequentialism is that they (despite their claims to the contrary) alieve that ideas should have a Platonic mathematical backing, and the VNM theorem provides just such a backing for consequentialism.

Replies from: None, Vladimir_Nesov, shminux
comment by [deleted] · 2013-10-08T15:15:44.071Z · LW(p) · GW(p)

VNM

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-10-09T05:05:18.702Z · LW(p) · GW(p)

Thanks, fixed.

comment by Vladimir_Nesov · 2013-10-08T20:35:32.462Z · LW(p) · GW(p)

(I don't have to like consequentialism to be motivated by considerations it offers, such as relative unimportance of what I happen to like.)

comment by shminux · 2013-10-08T15:23:32.619Z · LW(p) · GW(p)

I don't see any link between Platonism and consequentialism.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-10-09T05:14:02.308Z · LW(p) · GW(p)

Basically, the VNM theorem is sufficiently elegant that it causes people to treat consequentialism as the Platonic form of morality.