Pluralistic Moral Reductionism

post by lukeprog · 2011-06-01T00:59:30.115Z

Contents

  Many Moral Reductionisms
  Cognitivism vs. Noncognitivism
  Objective vs. Subjective Morality
  Relative vs. Absolute Morality
    Moore's Open Question Argument
    The Is-Ought Gap
  Moral realism vs. Anti-realism
  Toward Empathic Metaethics
    Notes
    References

Part of the sequence: No-Nonsense Metaethics

Disputes over the definition of morality... are disputes over words which raise no really significant issues. [Of course,] lack of clarity about the meaning of words is an important source of error... My complaint is that what should be regarded as something to be got out of the way in the introduction to a work of moral philosophy has become the subject matter of almost the whole of moral philosophy...
Peter Singer

If a tree falls in the forest, and no one hears it, does it make a sound? If by 'sound' you mean 'acoustic vibrations in the air', the answer is 'Yes.' But if by 'sound' you mean 'an auditory experience in the brain', the answer is 'No.'

We might call this straightforward solution pluralistic sound reductionism. If people use the word 'sound' to mean different things, and people have different intuitions about the meaning of the word 'sound', then we needn't endlessly debate which definition is 'correct'.1 We can be pluralists about the meanings of 'sound'.

To facilitate communication, we can taboo and reduce: we can replace the symbol with the substance and talk about facts and anticipations, not definitions. We can avoid using the word 'sound' and instead talk about 'acoustic vibrations' or 'auditory brain experiences.'
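The move can even be pictured as a lookup table. Here is a minimal sketch in Python (the two reductions and their answers are just the ones from the tree example; nothing here is meant as an analysis of 'sound'):

    # 'Taboo and reduce': stop arguing over the word 'sound' and
    # answer a separate factual question for each stipulated meaning.
    reductions = {
        "acoustic vibrations in the air": True,   # the falling tree does vibrate the air
        "auditory experience in a brain": False,  # no one is around to hear it
    }

    def makes_a_sound(stipulated_meaning: str) -> bool:
        # An answer exists only relative to a stipulated meaning.
        if stipulated_meaning not in reductions:
            raise ValueError("First tell me what you mean by 'sound'.")
        return reductions[stipulated_meaning]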

Still, some definitions can be wrong:

Alex: If a tree falls in the forest, and no one hears it, does it make a sound?

Austere MetaAcousticist: Tell me what you mean by 'sound', and I will tell you the answer.

Alex: By 'sound' I mean 'acoustic messenger fairies flying through the ether'.

Austere MetaAcousticist: There's no such thing. Now, if you had asked me about this other definition of 'sound'...

There are other ways for words to be wrong, too. But once we admit to multiple potentially useful reductions of 'sound', it is not hard to see how we could admit to multiple useful reductions of moral terms.

Many Moral Reductionisms

Moral terms are used in a greater variety of ways than sound terms are. There is little hope of arriving at the One True Theory of Morality by analyzing common usage or by triangulating from the platitudes of folk moral discourse. But we can use stipulation, and we can taboo and reduce. We can use pluralistic moral reductionism2 (for austere metaethics, not for empathic metaethics).

Example #1:

Neuroscientist Sam Harris: Which is better? Religious totalitarianism or the Northern European welfare state?

Austere Metaethicist: What do you mean by 'better'?

Harris: By 'better' I mean 'that which tends to maximize the well-being of conscious creatures'.

Austere Metaethicist: Assuming we have similar reductions of 'well-being' and 'conscious creatures' in mind, the evidence I know of suggests that the Northern European welfare state is more likely to maximize the well-being of conscious creatures than religious totalitarianism.

Example #2:

Philosopher Peter Railton: Is capitalism the best economic system?
Austere Metaethicist: What do you mean by 'best'?
Railton: By 'best' I mean 'would be approved of by an ideally instrumentally rational and fully informed agent considering the question "How best to maximize the amount of non-moral goodness?" from a social point of view in which the interests of all potentially affected individuals are counted equally'.
Austere Metaethicist: Assuming we agree on the meaning of 'ideally instrumentally rational' and 'fully informed' and 'agent' and 'non-moral goodness' and a few other things, the evidence I know of suggests that capitalism would not be approved of by an ideally instrumentally rational and fully informed agent considering the question "How best to maximize the amount of non-moral goodness?" from a social point of view in which the interests of all potentially affected individuals were counted equally.

Example #3:

Theologian Bill Craig: Ought we to give 50% of our income to efficient charities?
Austere Metaethicist: What do you mean by 'ought'?
Craig: By 'ought' I mean 'approved of by an essentially just and loving God'.
Austere Metaethicist: Your definition doesn't connect to reality. It's like talking about atom-for-atom 'indexical identity' even though the world is made of configurations and amplitudes instead of Newtonian billiard balls. Gods don't exist.

But before we get to empathic metaethics, let's examine the standard problems of metaethics using the framework of pluralistic moral reductionism.

Cognitivism vs. Noncognitivism

One standard debate in metaethics is cognitivism vs. noncognitivism. Alexander Miller explains:

Consider a particular moral judgement, such as the judgement that murder is wrong. What sort of psychological state does this express? Some philosophers, called cognitivists, think that a moral judgement such as this expresses a belief.
Beliefs can be true or false: they are truth-apt, or apt to be assessed in terms of truth and falsity. So cognitivists think that moral judgements are capable of being true or false.
On the other hand, non-cognitivists think that moral judgements express non-cognitive states such as emotions or desires. Desires and emotions are not truth-apt. So moral judgements are not capable of being true or false.3

But why should we expect all people to use moral judgments like "Stealing is wrong" to express the same thing?4

Some people who say "Stealing is wrong" are really just trying to express emotions: "Stealing? Yuck!" Others use moral judgments like "Stealing is wrong" to express commands: "Don't steal!" Still others use moral judgments like "Stealing is wrong" to assert factual claims, such as "stealing is against the will of God" or "stealing is a practice that usually adds pain rather than pleasure to the world."

It may be interesting to study all such uses of moral discourse, but this post focuses on addressing cognitivists, who use moral judgments to assert factual claims. We ask: Are those claims true or false? What are their implications?

Objective vs. Subjective Morality

Is morality objective or subjective? It depends which moral reductionism you have in mind, and what you mean by 'objective' and 'subjective'.

Here are some common5 uses of the objective/subjective distinction in ethics:

- Morality is objective1 if moral facts are mind-independent: they do not depend on any mental states at all.
- Morality is objective2 if moral facts do not depend on the opinions of any persons (sentient beings).
- Morality is objective3 if moral facts do not depend on human opinion.

(In each sense, a morality is 'subjective' if it is not 'objective' in that sense.)

Now, consider Harris' reduction of morality to facts about the well-being of conscious creatures. His theory of morality is objective3 and objective2, because facts about well-being are independent of anyone's opinion. Even if the Nazis had won WWII and brainwashed everybody to have the opinion that torturing Jews was moral, it would remain true that torturing Jews does not increase the average well-being of conscious creatures. But Harris' theory of morality is not objective1, because facts about the well-being of conscious creatures are mind-dependent facts.

Or, consider Craig's theory of morality in terms of divine approval. His theory doesn't connect to reality, but still: is it objective or subjective? Craig's theory says that moral facts are objective3, because they don't depend on human opinion (God isn't human). But his theory doesn't say that morality is objective2 or objective1, because for him, moral facts depend on the opinion of a sentient being: God.

A warning: ambiguous terms like 'objective' and 'subjective' are attractors for sneaking in connotations. Craig himself provides an example. In his writings and public appearances, Craig insists that only God-based morality can be objective.6 What does he mean by 'objective'? On a single page,7 he uses 'objective' to mean "independent of people's opinions" (objective2) and also to mean "independent of human opinion" (objective3). I'll assume he means that only God-based morality can be objective3, because God-based morality is clearly not objective2 (Craig's God is a person, a sentient being).

And yet, Craig says that we need God in order to have objective3 morality as if this should be a big deal. But hold on. Even a moral code defined in terms of the preferences of Washoe the chimpanzee is objective3. So not only is Craig's claim that only God-based morality can be objective3 false (Harris' moral theory is also objective3), but it's also trivially easy to come up with a moral theory that is 'objective' in Craig's (apparent) sense of the term.8

Moreover, Harris' theory of morality is objective in a 'stronger' sense than Craig's theory is. Harris' theory is objective3 and objective2, while Craig's theory is merely objective3. Consciously or not, Craig may be using the word 'objective' to sneak in connotations that don't apply to his claims once you pay attention to what he actually means by the word. If Craig told his audience that we need God for morality to be 'objective' in the same sense that morality defined in terms of a chimpanzee's preferences is 'objective', would this still have his desired effect on his audience? I doubt it.

Once you've stipulated your use of 'objective' and 'subjective', it is often trivial to determine whether a given moral reductionism is 'objective' or 'subjective'. But what of it? What force should those words carry after you've tabooed them? Be careful not to sneak in connotations that don't belong.

Relative vs. Absolute Morality

Is morality relative or absolute? Again, it depends which moral reductionism you have in mind, and what you mean by 'relative' and 'absolute'. Again, we must be careful about sneaking in connotations.

Moore's Open Question Argument

"He's an unmarried man, but is he a bachelor?" This is a 'closed' question. The answer is obviously "Yes."

In contrast, said G.E. Moore, all questions of the type "Such and such is X, but is it good?" are open questions. It feels like you can always ask, "Yes, but is it good?" In this way, Moore resists the identification of 'morally good' with any set of natural facts. This is Moore's Open Question Argument. Because some moral reductionisms do identify 'good' or 'right' with a particular X, those reductionisms had better have an answer to Moore.

The Yudkowskian response is to point out that when cognitivists use the term 'good', their intuitive notion of 'good' is captured by a massive logical function that can't be expressed in simple statements like "maximize pleasure" or "act only in accordance with maxims you could wish to be a universal law without contradiction." Even if you think everything you want (or rather, want to want) can be realized by (say) maximizing the well-being of conscious creatures, you're wrong. Your values are more complex than that, and we can't see the structure of our values. That is why it feels like an open question remains no matter which simplistic identification of "Good = X" you choose.

The problem is not that there is no way to identify 'good' or 'right' (as used intuitively, without tabooing) with a certain X. The problem is that X is huge and complicated and we don't (yet) have access to its structure.

But that's the response to Moore after righting a wrong question - that is, when doing empathic metaethics. When doing mere pluralistic moral reductionism, Moore's argument doesn't apply. If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."9

The Is-Ought Gap

(This section rewritten for clarity.)

Many claim that you cannot infer an 'ought' statement from a series of 'is' statements. The objection comes from Hume, who said he was surprised whenever an argument made of is and is not propositions suddenly shifted to an ought or ought not claim, without explanation.10

The solution is to make explicit the bridge from 'ought' statements to 'is' statements.

Perhaps the arguer means something non-natural by 'ought', such as 'commanded by God' or 'in accord with irreducible, non-natural facts about goodness' (see Moore). If so, I would reject that premise of the argument, because I'm a reductionist. At this point, our discussion might need to shift to a debate over the merits of reductionism.

Or perhaps by 'you ought to X' the arguer means something fully natural, such as 'X will maximize the well-being of conscious creatures' or 'X would be approved of by an ideally instrumentally rational and fully informed agent'. Such an 'ought' claim is just an 'is' claim about the natural world, and the argument can be assessed on its merits.

Or, the speaker may have in mind a common ought-reductionism known as the hypothetical imperative. This is an ought of the kind: "If you desire to lose weight, then you ought to consume fewer calories than you burn." (But usually, people leave off the implied if-clause and simply say "You should eat less and exercise more.")

A hypothetical imperative (as some use it) can be translated from 'ought' to 'is' in a straightforward way: "If you desire to lose weight, then you ought to consume fewer calories than you burn" translates to the claim "If you consume fewer calories than you burn, then you will (or are, ceteris paribus, more likely to) fulfill your desire to lose weight."11
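To make the translation pattern concrete, here is a toy sketch in Python; the probabilities are made-up stand-ins, not empirical claims:

    # The 'is'-translation of "if you desire G, you ought to A":
    # doing A makes fulfilling G more likely than not doing A.
    def hypothetical_imperative_holds(p_goal_if_acted: float,
                                      p_goal_if_not: float) -> bool:
        return p_goal_if_acted > p_goal_if_not

    # "If you desire to lose weight, you ought to consume fewer calories
    # than you burn" is then an empirical claim like:
    hypothetical_imperative_holds(p_goal_if_acted=0.8, p_goal_if_not=0.1)  # True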

Or, the speaker may be using 'ought' to communicate something only about other symbols (example: Bayes' Rule), leaving the bridge from 'ought' to 'is' to be built when the logical function represented by his use of 'ought' is plugged into a theory that refers to the world.
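To take the parenthetical example: 'you ought to update your beliefs according to Bayes' Rule' can be read as a claim entirely about symbols, namely that the updated probability must satisfy

    P(H|E) = P(E|H) × P(H) / P(E)

The 'ought' here constrains the relationship between probability terms; it refers to the world only once H and E are interpreted as hypotheses and evidence about the world.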

But one must not fall into the trap of thinking that a definition you've stipulated (aloud or in your head) for 'ought' must match up to your intended meaning of 'ought' (to which you don't have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of 'ought' language can go in circles for centuries, and why any stipulated meaning of 'ought' is a fake utility function. To see clearly to our intuitive concept of ought, we'll have to try empathic metaethics (see below).

But whatever our intended meaning of 'ought' is, the same reasoning applies. Either our intended meaning of 'ought' refers (eventually) to the world of math and physics (in which case the is-ought gap is bridged), or else it doesn't (in which case it fails to refer).12

Moral realism vs. Anti-realism

So, does all this mean that we can embrace moral realism, or does it doom us to moral anti-realism? Again, it depends on what you mean by 'realism' and 'anti-realism'.

In a sense, pluralistic moral reductionism can be considered a robust form of moral 'realism', in the same way that pluralistic sound reductionism is a robust form of sound realism. "Yes, there really is sound, and we can locate it in reality — either as vibrations in the air or as mental auditory experiences, however you are using the term." In the same way: "Yes, there really is morality, and we can locate it in reality — either as a set of facts about the well-being of conscious creatures, or as a set of facts about what an ideally rational and perfectly informed agent would prefer, or as some other set of natural facts."

But in another sense, pluralistic moral reductionism is 'anti-realist'. It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.) And as a reductionist approach to morality, it might also leave no room for moral theories which say there are universally binding moral rules for which the universe (e.g. via a God) will hold us accountable.

What matters are the facts, not whether labels like 'realism' or 'anti-realism' apply to 'morality'.

Toward Empathic Metaethics

But pluralistic moral reductionism satisfies only a would-be austere metaethicist, not an empathic metaethicist.

Recall that when Alex asks how she can do what is right, the Austere Metaethicist replies:

Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question.

Alex may reply to the Austere Metaethicist:

Okay, I'm not sure exactly what I mean by 'right'. So how do I do what is right if I'm not sure what I mean by 'right'?

The Austere Metaethicist refuses to answer this question. The Empathic Metaethicist, however, is willing to go the extra mile. He says to Alex:

You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is.

This may seem like too much work. Would we be motivated to decode the cognitive algorithms producing Albert and Barry's use of the word 'sound'? Would we try to solve 'empathic meta-acoustics'? Probably not. We can simply taboo and reduce 'sound' and then get some work done.

But moral terms and value terms are about what we want. And unfortunately, we often don't know what we want. As such, we're unlikely to get what we really want if the world is re-engineered in accordance with our current best guess as to what we want. That's why we need to decode the cognitive algorithms that generate our questions about value and morality.

So how can the Empathic Metaethicist answer Alex's question? We don't know the details yet. For example, we don't have a completed cognitive neuroscience. But we have some ideas, and we know of some open problems that may admit of progress once more people understand them. In the next few posts, we'll take our first look at empathic metaethics.13

Previous post: Conceptual Analysis and Moral Theory

Notes

1 Some have objected that the conceptual analysis argued against in Conceptual Analysis and Moral Theory is not just a battle over definitions. But a definition is "the formal statement of the meaning or significance of a word, phrase, etc.", and a conceptual analysis is (usually) a "formal statement of the meaning or significance of a word, phrase, etc." in terms of necessary and sufficient conditions. The goal of a conceptual analysis is to arrive at a definition for a term that captures our intuitions about its meaning. The process is to bash our intuitions against others' intuitions until we converge upon a set of necessary and sufficient conditions that captures them all.

But consider Barry and Albert's debate over the definition of 'sound'. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other's due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we'll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?

And let's say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as one we consciously chose because it carved up thingspace well? I doubt it. The IAU's definition of 'planet' is more useful than the folk-intuitions definition of 'planet'. Folk intuitions about 'planet' evolved over thousands of years, and different people have different intuitions which may not always converge. In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.

A passage from Bertrand Russell (1953) is appropriate. Russell said that many philosophers reminded him of

the shopkeeper of whom I once asked the shortest way to Winchester. He called to a man in the back premises:

"Gentleman wants to know the shortest way to Winchester."

"Winchester?" an unseen voice replied.

"Aye."

"Way to Winchester?"

"Aye."

"Shortest way?"

"Aye."

"Dunno."

He wanted to get the nature of the question clear, but took no interest in answering it. This is exactly what modern philosophy does for the earnest seeker after truth. Is it surprising that young people turn to other studies?

2 Compare also to the biologist's 'species concept pluralism' and the philosopher's 'art concept pluralism.' See Uidhir & Magnus (2011). Also see 'causal pluralism' (Godfrey-Smith, 2009; Cartwright, 2007), 'theory concept pluralism' (Magnus, 2009) and, especially, 'metaethical contextualism' (Bjornsson & Finlay, 2010) or 'metaethical pluralism' or 'metaethical ambivalence' (Joyce, 2011). Joyce quotes Lewis (1989), who wrote that some concepts of value refer to things that really exist, and some concepts don't, and what you make of this situation is largely a matter of temperament:

What to make of the situation is mainly a matter of temperament. You can bang the drum about how philosophy has uncovered a terrible secret: there are no values! ... Or you can think it better for public safety to keep quiet and hope people will go on as before. Or you can declare that there are no values, but that nevertheless it is legitimate—and not just expedient—for us to carry on with value-talk, since we can make it all go smoothly if we just give the name of value to claimants that don't quite deserve it... Or you can think it an empty question whether there are values: say what you please, speak strictly or loosely. When it comes to deserving a name, there's better and worse but who's to say how good is good enough? Or you can think it clear that the imperfect deservers of the name are good enough, but only just, and say that although there are values we are still terribly wrong about them. Or you can calmly say that value (like simultaneity) is not quite as some of us sometimes thought. Myself, I prefer the calm and conservative responses. But as far as the analysis of value goes, they're all much of a muchness.

Joyce concludes that, for example, the moral naturalist and the moral error theorist may agree with each other (when adopting each other's own language):

[Metaethical ambivalence] begins with a kind of metametaethical enlightenment. The moral naturalist espouses moral naturalism, but this espousal reflects a mature decision, by which I mean that the moral naturalist doesn't claim to have latched on to an incontrovertible realm of moral facts of which the skeptic is foolishly ignorant, but rather acknowledges that this moral naturalism has been achieved only via a non-mandatory piece of conceptual precisification. Likewise, the moral skeptic champions moral skepticism, but this too is a sophisticated verdict: not the simple declaration that there are no moral values and that the naturalist is gullibly uncritical, but rather a decision that recognizes that this skepticism has been earned only by making certain non-obligatory but permissible conceptual clarifications.

...The enlightened moral naturalist doesn't merely (grudgingly) admit that the skeptic is warranted in his or her views, but is able to adopt the skeptical position in order to gain the insights that come from recognizing that we live in a world without values. And the enlightened moral skeptic goes beyond (grudgingly) conceding that moral naturalism is reasonable, but is capable of assuming that perspective in order to gain whatever benefits come from enjoying epistemic access to a realm of moral facts.

3 Miller (2003), p. 3.

4 I changed the example moral judgment from "murder is wrong" to "stealing is wrong" because the former invites confusion. 'Murder' often means wrongful killing.

5 Also see Jacobs (2002), starting on p. 2.

6 The first premise of one of his favorite arguments for God's existence is "If God does not exist, objective moral values and duties do not exist."

7 Craig (2010), p. 11.

8 It's also possible that Craig intended a different sense of objective than the ones explicitly given in his article. Perhaps he meant objective4: "morality is objective4 if it is not grounded in the opinion of non-divine persons."

9 Also see Moral Reductionism and Moore's Open Question Argument.

10 Hume (1739), p. 469. The famous paragraph is:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

11 For more on reducing certain kinds of normative statements, see Finlay (2010).

12 Assuming reductionism is true. If reductionism is false, then of course there are problems for pluralistic moral reductionism as a theory of austere (but not empathic) metaethics. The clarifications in the last three paragraphs of this section are due to discussions with Wei Dai and Vladimir Nesov.

13 My thanks to Steve Rayhawk and Will Newsome for their feedback on early drafts of this post.

References

Bjornsson & Finlay (2010). Metaethical contextualism defended. Ethics, 121: 7-36.

Cartwright (2007). Hunting Causes and Using Them: Approaches in Philosophy and Economics. Cambridge University Press.

Craig (2010). Five Arguments for God. The Gospel Coalition.

Finlay (2010). Normativity, Necessity and Tense: A Recipe for Homebaked Normativity. In Shafer-Landau (ed.), Oxford Studies in Metaethics 5 (pp. 57-85). Oxford University Press.

Godfrey-Smith (2009). Causal pluralism. In Beebee, Hitchcock, & Menzies (eds.), The Oxford Handbook of Causation (pp. 326-337). Oxford University Press.

Hume (1739). A Treatise of Human Nature. John Noon.

Jacobs (2002). Dimensions of Moral Theory. Wiley-Blackwell.

Joyce (2011). Metaethical pluralism: How both moral naturalism and moral skepticism may be permissible positions. In Nuccetelli & Seay (eds.), Ethical Naturalism: Current Debates. Cambridge University Press.

Lewis (1989). Dispositional theories of value. Part II. Proceedings of the Aristotelian Society, supplementary vol. 63: 113-137.

Magnus (2009). What species can teach us about theory.

Miller (2003). An Introduction to Contemporary Metaethics. Polity.

Russell (1953). The cult of common usage. British Journal for the Philosophy of Science, 12: 305-306.

Uidhir & Magnus (2011). Art concept pluralism. Metaphilosophy, 42: 83-97.

Comments

comment by Jonathan_Graehl · 2011-06-01T05:10:25.824Z

Solid and unsurprising.

comment by lukeprog · 2011-06-01T06:20:29.869Z

Thanks, this is exactly the feedback I was hoping to receive. :)

Basically, I want this post and the last one to be where Less Wrongers can send people whenever they appear confused about standard philosophical debates in moral theory: "Wait, stop. Go read lukeprog's article on this and then let me know if you still think the same thing."

comment by asr · 2011-06-02T04:39:53.531Z

I feel like your austere meta-ethicist is mostly missing the point. It's utterly routine for different people to have conflicting beliefs about whether a given act is moral*. And often they can have a useful discussion, at the end of which one or both participants change their beliefs. These conversations can happen without the participants changing their definitions of words like 'moral', and often without them having a clear definition at all.

[This is my first LW comment -- if I do something wrong, please bear with me]

This suggests that precise definitions or agreement about definitions isn't all that critical. But it's sometimes useful to be able to reason from stipulated and mutually agreed definitions, in which case meta-ethical speculation and reasoning is doing useful work if it offers a menu of crisp, useful, definitions that can be used in discussion of specific moral claims. Relatedly, it's also doing useful work by offering a set of definitions that help people conceptualize and articulate their personal feelings about morality, even absent a concrete first-order question.

And part of what goes into picking definitions is to understand their consequences. A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of 'morality' doesn't pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.

Many mathematical entities have multiple logically equivalent definitions, that are of different utility in different contexts. (E.g., sometimes I want to think about a circle as a locus of points, and sometimes as the solution set to an equation.) In the real world, something similar happens.

When I discuss, say, abortion, with somebody, probably there are multiple working definitions of 'moral' that could be mutually agreed upon for the purpose of the conversation, and the underlying dispute would still be nontrivial and intelligible. But some definitions might be more directly applicable to the discussion -- and philosophical reasoning might be helpful in figuring out what the consequences of various definitions are. For instance, a non-cognitivist definition strikes me intuitively as less likely to be useful -- but I'd be open to an argument showing how it could be useful in a debate.

Probably a great deal of academic writing on meta-ethics is low value. But that's true of most writing on most topics and doesn't show that the topic is pointless. (With academics being major offenders, but not the only offenders.)

*I'm thinking of the individual personal changes in belief that went along with increased opposition to official racism in America over the course of the 20th century. Or opposition to slavery in the 19th.

comment by lukeprog · 2011-06-04T23:56:29.530Z

Welcome to Less Wrong!

Is there a part of your comment that you suspect I disagree with? Or, is there a sentence in my post with which you disagree?

comment by asr · 2011-06-05T08:33:30.782Z

Having had time to mull over -- I think here's something about your post that bothers me. I don't think it's possible to pinpoint a single sentence, but here are two things that don't quite satisfy me.

1) Neither your austere nor your empathetic meta-ethicist seems to be telling me anything I wanted to hear. What I want is a "linguistic meta-ethicist", who will tell me what other competent speakers of English mean when they use "moral" and suchlike terms. I understand that different people mean different things, and I'm fine with an answer which comes in several parts, and with notes about which speakers are primarily using which of those possible definitions.

What I don't want is a brain scan from each person I talk to -- I want an explanation that's short and accessible enough to be useful in conversations. Conventional ethics and meta-ethics have given a bunch of useful definitions. Saying "well, it depends" seems unnecessarily cautious; saying "let's decode your brain" seems excessive for practical purposes.

2) Most of the conversations I'm in that involve terms like "moral" would be only slightly advanced by having explicit definitions -- and often the straightforward terms to use instead of "moral" are very nearly as contentious or nebulous. In your own examples, you have your participants talk about "well-being" and "non-moral goodness." I don't think that's a significant step forward. That's just hiding morality inside the notion of "a good life" -- which is a sensible thing to say, but people have been saying it since Plato, and it's an approach that has problems of its own.

By the way, I do understand that I may not have been your target audience, and that the whole series of posts has been carefully phrased and well organized, and I appreciate that.

comment by [deleted] · 2013-11-18T13:35:36.802Z

I would think that the Hypothetical Imperatives are useful there. You can thus break down your own opinions into material of the form:

"If the set X of imperative premises holds, and the set Y of factual premises holds, then logic Z dictates that further actions W are imperative.

"I hold X already, and I can convince logic Z of the factual truth of Y, thus I believe W to be imperative."

Even all those complete bastards who disagree with your X can thus come to an agreement with you about the hypothetical as a whole, provided they are epistemically rational. Having isolated the area of disagreement to X, Y, or Z, you can then proceed to argue about it.

comment by lukeprog · 2011-06-05T17:54:28.198Z

asr,

Your linguistic metaethicist sounds like the standard philosopher doing conceptual analysis. Did you see my post on 'Conceptual Analysis and Moral Theory'?

I think conversations using moral terms would be greatly advanced by first defining the terms of the debate, as Aristotle suggested. Also, the reason 'well-being' or 'non-moral goodness' are not unpacked is because I was giving brief examples. You'll notice the austere metaethicist said things like "assuming we have the same reduction of well-being in mind..." I just don't have the space to offer such reductions in what is already a long post.

comment by asr · 2011-06-05T18:48:53.757Z

I would find it helpful -- and I think several of the other posters here would as well -- to see one reduction on some nontrivial question carried far enough for us to see that the process can be made to work. If I understand right, your approach requires that speakers, or at least many speakers much of the time, can reduce from disputed, loaded, moral terms to reasonably well-defined and fact-based terminology. That's the point I'd most like to see you spend your space budget on in future posts.

Definitions are good. Precise definitions are usually better than loose definitions. But I suspect that in this context, loose definitions are basically good enough and that there isn't a lot of value to be extracted by increased precision there. I would like evidence that improving our definitions is a fruitful place to spend effort.

I did read your post on conceptual analysis. I just re-read it. And I'm not convinced that the practice of conceptual analysis is any more broken than most of what people get paid to do in the humanities and social sciences. My sense is that the standard textbook definitions are basically fine, and that the ongoing work in the field is mostly just people trying to get tenure and show off their cleverness.

I don't see that there's anything terribly wrong with the practice of conceptual analysis -- so long as we don't mistake an approximate and tentative linguistic exercise for access to any sort of deep truth.

comment by lukeprog · 2011-06-08T06:22:57.992Z

I don't think many speakers actually have an explicit ought-reduction in mind when they make ought claims. Perhaps most speakers actually have little idea what they mean when they use ought terms. For these people, emotivism may roughly describe speech acts involving oughts.

Rather, I'm imagining a scenario where person A asks what they ought to do, and person B has to clarify the meaning of A's question before B can give an answer. At this point, A is probably forced to clarify the meaning of their ought terms more thoroughly than they have previously done. But if they can't do so, then they haven't asked a meaningful question, and B can't answer the question as given.

I would like evidence that improving our definitions is a fruitful place to spend effort.

Why? What I've been saying the whole time is that improving our definitions isn't worth as much effort as philosophers are expending on it.

I'm not convinced that the practice of conceptual analysis is any more broken than most of what people get paid to do in the humanities and social sciences.

On this, we agree. That's why conceptual analysis isn't very valuable, along with "most of what people get paid to do in the humanities and social sciences." (Well, depending on where you draw the boundary around the term 'social sciences.')

I don't see that there's anything terribly wrong with the practice of conceptual analysis...

Do you see something wrong with the way Barry and Albert were arguing about the meaning of 'sound' in Conceptual Analysis and Moral Theory? I'm especially thinking of the part about microphones and aliens.

comment by asr · 2011-06-10T07:09:14.998Z

I agree that emotivism is an accurate description, much of the time, for what people mean when they make value judgments. I would also agree that most people don't have a specific or precise definition in mind. But emotivism isn't the only description and for practical purposes it's often not the most useful. Among other things, we have to specify which emotion we are talking about. Not all disgust is moral disgust.

Value judgments show up routinely in law and in daily life. It would be an enormous, difficult, and probably low-value task to rewrite our legal code to avoid terms like "good cause", "unjust enrichment", "unconscionable contract", and the like. Given that we're stuck with moral language, it's a useful project to pull out some definitions to help focus discourse slightly. But we aren't going to be able to eliminate them. "Morality" and its cousins are too expensive to taboo.

We want law and social standards to be somewhat loosely defined, to avoid unscrupulous actors trying to worm their way through loopholes. We don't want to be overly precise and narrow in our definitions -- we want to leverage the judgement of judges and juries. But conversely, we do want to give them guidance about what we mean by those words. And precedent supplies one sort of guidance, and some definitions give them an additional sort of guidance.

I suspect it would be quite hard to pick out precisely what we as a society mean when we use those terms in the legal code -- and very hard to reduce them to any sort of concrete physical description that would still be human-intelligible. I would be interested to see a counterexample if you can supply one easily.

I have the sense that trying to talk about human judgement and society without moral language would be about like trying to discuss computer science purely in terms of the hardware -- possible, but unnecessarily cumbersome.

One of the common pathologies of the academy is that somebody comes up with a bright idea or a powerful intellectual tool. Researchers then spend several years applying that tool to increasingly diverse contexts, often where the marginal return from the tool is near-zero. Just because conceptual analysis is being over-used doesn't mean that it is always useless! The first few uses of it may indeed have been fairly high-value in aiding us in communicating. The fact that the tool is then overused isn't a reason to ignore it.

Endless wrangles about definitions, I think are necessarily low-value. Working out a few useful definitions or explanations for a common term can be valuable, though -- particularly if we are going to apply those terms in a quasi-formal setting, like law.

comment by Garren · 2011-06-02T17:14:11.802Z

It's utterly routine for different people to have conflicting beliefs about whether a given act is moral*. And often they can have a useful discussion, at the end of which one or both participants change their beliefs. These conversations can happen without the participants changing their definitions of words like 'moral', and often without them having a clear definition at all.

It may be routine in the sense that it often happens, but not routine in the sense that this is a reliable approach to settling moral differences. Often such disputes are not settled despite extensive discussions and no obvious disagreement about other kinds of facts.

This can be explained if individuals are basing their judgments off differing sets of values that partially overlap. Even if both participants are naively assuming their own set of values is the set of moral values, the fact of overlapping will sometimes mean non-moral considerations which are significant to one's values will also be significant for the other's values. Other times, this won't be the case.

For example, many pro-lifers naively assume that everyone places very high value on all human organisms, so they spend a lot of time arguing that an embryo or fetus is a distinct human organism. Anyone who is undecided or pro-choice who shares this value but wasn't aware of the biological evidence that unborn humans are distinct organisms from their mothers may be swayed by such considerations.

On the other hand, many pro-choicers simply do not place equally high value on all human organisms, without counting other properties like sentience. Or — following Judith Jarvis Thomson in "A Defense of Abortion" — they may place equally high value on all human organisms, but place even greater value on the sort of bodily autonomy denied by laws against abortion.

Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.

comment by asr · 2011-06-02T19:21:23.365Z

I agree with all the claims you're making about morality and about moral discussion. But I don't quite see where any of this is giving me any new insights or tools. Sure, people have different but often overlapping values. I knew that. I think most adults who ever have conversations about morality know that. And we know that without worrying too much about the definition of morality and related words.

But I think everything you've said is also true about personal taste in non-moral questions. I and my friends have different but overlapping taste in music, because we have distinct but overlapping sets of desiderata for what we listen to. And sometimes, people get convinced to like something they previously didn't. I want a meta-ethics that gives me some comparative advantage in dealing with moral problems, as compared to other sorts of disagreements. I had assumed that lukeprog was trying to say something specifically about morality, not just give a general and informal account of human motivation, values, and preferences.

Thus far, this sequence feels like a lot of buildup and groundwork that is true but mostly not in much dispute and mostly doesn't seem to help me accomplish anything. Perhaps my previous comment should just have been a gentle nudge to lukeprog to get to the point.

comment by Garren · 2011-06-02T20:00:47.155Z

I want a meta-ethics that gives me some comparative advantage in dealing with moral problems, as compared to other sorts of disagreements.

This may be a case where not getting it wrong is the main point, even if getting it right is a let down.

My own view is quite similar to Luke's, and I find it useful when I hear a moral claim to try sorting out how much of the claim is value-expression and how much is about what needs to be done to promote values. Even if you don't agree about values, it still helps to figure out what someone else's fundamental values are and argue that what they're advocating is out of line with their own values. People tend to be mistaken about how to fulfill their own values more than they are about how to fulfill their own taste in music.

comment by lukeprog · 2011-06-08T06:25:10.089Z

People tend to be mistaken about how to fulfill their own values more than they are about how to fulfill their own taste in music.

Yes.

That is why I can interrogate what somebody means by 'ought' and then often show that by their own definition of ought, what they thought they 'ought' to do is not what they 'ought' to do.

comment by Peterdjones · 2011-06-02T19:27:51.975Z

It may be routine in the sense that it often happens, but not routine in the sense that this is a reliable approach to settling moral differences.

Do you know of anything better?

Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.

OTOH, the problem remains that people act on their values, and that one person's actions can affect another person. Pluralistic morality is terrible at translating into a uniform set of rules that all are beholden to.

comment by Garren · 2011-06-02T19:44:19.142Z

Pluralistic morality is terrible at translating into a uniform set of rules that all are beholden to.

Why is that the test of a metaethical theory rather than the theory which best explains moral discourse? Categorical imperatives — if that's what you're referring to — are one answer to the best explanation of moral discourse, but then we're stuck showing how categorical imperatives can hold...or accepting error theory.

Perhaps 'referring to categorical imperatives' is not the only or even the best explanation of moral discourse. See "The Error in the Error Theory" by Stephen Finlay.

comment by Peterdjones · 2011-06-02T20:59:12.418Z

Why is that the test of a metaethical theory rather than the theory which best explains moral discourse?

Because there is a practical aspect to ethics. Moral discourse involves the idea that people should do the obligatory and refrain from the forbidden -- irrespective of who they are. That needs explaining as well.

comment by Garren · 2011-06-02T21:16:22.373Z

Moral discourse is about what to do, but it doesn't seem to (at least always) be about what everyone must do for no prior reason.

comment by Peterdjones · 2011-06-02T22:51:48.562Z

Uh-huh. Is that an issue of commission rather than omission? Are people not obligated to refrain from theft, murder, and rape, their inclinations notwithstanding?

comment by Garren · 2011-06-03T00:43:55.624Z

If by 'obligated' you mean it's demanded by those who fear being the targets of those actions, yes. Or if you mean exercising restraint may be practically necessary to comply with certain values those actions thwart, yes. Or if you mean doing those things is likely to result in legal penalties, that's often the case.

But if you mean it's some simple fact that we're morally obligated to restrain ourselves from doing certain things, no. Or at least I don't see how that could even possibly be the case, and I already have a theory that explains why people might mistakenly think such a thing is the case (they mistake their own values for facts woven into the universe, so hypothetical imperatives look like categorical imperatives to them).

The 'commission' vs. 'omission' thing is often a matter of wording. Rape can be viewed as omitting to get proper permission, particularly when we're talking about drugging, etc.

comment by Peterdjones · 2011-06-05T21:32:07.249Z

But if you mean it's some simple fact that we're morally obligated to restrain ourselves from doing certain things, no. Or at least I don't see how that could even possibly be the case, and I already have a theory that explains why people might mistakenly think such a thing is the case (they mistake their own values for facts woven into the universe, so hypothetical imperatives look like categorical imperatives to them).

Well, I have a theory about how it could be the case. Objective morality doesn't have to be a fact-like thing that is paradoxically undetectable. It could be based on the other source of objectivity: logic and reason. It's an analytical truth that you shouldn't do to others what you wouldn't want done to yourself. You are obliged to be moral so long as you can reason morally, in the sense that you will be held responsible.

comment by asr · 2011-06-05T23:00:37.010Z

It's an analytical truth that you shouldn't do to others what you wouldn't want done to yourself.

I'm skeptical that this statement is true, let alone an analytic truth. Different people have different desires. I take the Golden Rule to be a valuable heuristic, but no more than that.

What is your reason for believing that it is true as an absolute rule?

comment by BobTheBob · 2011-06-03T17:14:28.087Z

Just to clarify where you stand on norms: Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125? (ie, are we obligated in this sense to believe anything?)

To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts. This step taken, there's no further commitment required to get ethical facts. Obviously, though, there are epistemic issues associated with the latter which are not associated with the former.

Morality as the expression of pluralistic value sets (and the hypothetical imperatives which go along with them) is a very neat explanation of the pattern we see of agreement, disagreement, and partially successful deliberation.

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

You seem to be suggesting that meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal as (roughly) giving a general characterisation of moral rightness, which we all ought to strive for?

comment by Peterdjones · 2011-06-05T21:42:36.064Z

To stick my own neck out: I am a realist about values. I think there are facts about what we ought to believe and do. I think you have to be, to capture mathematical facts.

Facts as in true statements, or facts as in states of affairs?

comment by BobTheBob · 2011-06-06T03:23:36.940Z

Facts in the disappointingly deflationary sense that

It is a fact that P if and only if P (and that's all there is to say about facthood).

This position is a little underwhelming to any who seek a metaphysically substantive account of what makes things true, but it is a realist stance all the same (no?). If you have strong arguments against this or for an alternative, I'm interested to hear.

comment by Garren · 2011-06-03T18:58:38.550Z

Would you say a person is obligated by facts woven into the universe to believe that 68 + 57 = 125? (ie, are we obligated in this sense to believe anything?)

No, I wouldn't say that. It would be a little odd to say anyone who doesn't hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty. Instead, I would affirm:

In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

(I'm leaving 'mathematically correct' vague so different views on the nature of math are accommodated.)

In other words, the obligation relies on a goal. Or we could say normative answers require questions. Sometimes the implied question is so obvious, it seems strange to bother identifying it.

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

Yes.

You seem to be suggesting that meta-ethics aims merely to give a descriptively adequate characterisation of ethical discourse. If so, would you at least grant that many see its goal as (roughly) giving a general characterisation of moral rightness, which we all ought to strive for?

I think that's generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.

comment by BobTheBob · 2011-06-04T03:24:32.166Z

Taking your thoughts out of order,

Would it be fair to extrapolate this, and say that individual variation in value sets provides a good explanation of the pattern we see of agreement and disagreement between individuals as regards moral values - and possibly in quite different domains as well (politics, aesthetics, gardening)?

Yes.

What I was getting at is that this looks like complete moral relativism -- 'right for me' is the only right there is (since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people's values differ). I'm understanding that you're willing to bite this bullet.

I think that's generally the job of normative ethics, and metaethics is a little more open ended than that. I do grant that many people think the point of ethical philosophy in general is to identify categorical imperatives, not give a pluralistic reduction.

I take your point here. I may be conflating ethical and meta-ethical theory. I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?). If we're talking about a defence of, say, non-cognitivism, though, maybe what you say is fair.

No, I wouldn't say that. It would be a little odd to say anyone who doesn't hold a belief that 68 + 57 equals 125 is neglecting some cosmic duty.

This is fair.

Instead, I would affirm: In order to hold a mathematically correct belief when considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

This is an interesting proposal, but I'm not sure what it implies. Is it possible for a rational person to strive to believe anything but the truth? Whether in math or anything else, doesn't a rational person always try to believe what is correct? Or, to put the point another way, isn't having truth as its goal part of the concept of belief? If so, I suggest this collapses to something like

*When considering 68 + 57, we are obligated to believe it equals 125 or some equivalent expression.

or, more plausibly,

*When considering 68 + 57, we ought to believe it equals 125 or some equivalent expression.

But if this is fair I'm back to wondering where the ought comes from.

comment by Garren · 2011-06-04T06:06:10.757Z

What I was getting at is that this looks like complete moral relativism -- 'right for me' is the only right there is

While it is relativism, the focus is a bit different from 'right for me.' More like 'this action measures up as right against standard Y' where this Y is typically something I endorse.

For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving 'the well-being of conscious creatures,' then there's a bit more going on than it just being right for you and me.

Or if I consider a practice morally right for the above reason, but you consider it morally wrong because it falls afoul of Rawls' theory of justice, there's more going on than it just being right for me and wrong for you. It's more like I'm saying it's right{Harris standard} and you're saying it's wrong{Rawls standard}. (...at least as far as cognitive content is concerned; we would usually also be expressing an expectation that others adhere to the standards we support.)

Of course the above are toy examples, since people's values don't tend to line up neatly with the simplifications of philosophers.
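To make the indexing explicit, here is a minimal Python sketch of the idea (the two standards, the dictionary keys, and the numbers are my own toy inventions, not anything from the thread):

    # Judgments indexed by a standard: 'right' is right-according-to-Y,
    # not a bare one-place predicate. Both standards below are toy
    # simplifications of the views they are named after.

    def harris_standard(action):
        # Approves actions that increase well-being on net.
        return action["well_being_delta"] > 0

    def rawls_standard(action):
        # Approves actions that do not worsen the position of the worst-off.
        return action["worst_off_delta"] >= 0

    def judge(action, standard):
        return "right" if standard(action) else "wrong"

    policy = {"well_being_delta": 3.0, "worst_off_delta": -1.0}
    print(judge(policy, harris_standard))  # right {Harris standard}
    print(judge(policy, rawls_standard))   # wrong {Rawls standard}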

(since you seem to be implying there is nothing interesting to be said about the process of negotiation which occurs when people's values differ).

It's not apparent that values differ just because judgments differ, so there's still a lot of interesting work to do in finding out whether disagreements can be explained by differing descriptive beliefs. But, yes, once a disagreement is known to result from a pure difference in values, there isn't a rational way to resolve it. It's like Luke's 'tree falling' example; once we know two people are using different definitions of 'sound,' the best we can do is make people aware of the difference in their claims.

I had in mind theories like Utilitarianism or Kantian ethical theory, which are general accounts of what it is for an action to be good, and do not aim merely to be accurate descriptions of moral discourse (would you agree?).

Yep. While those are interesting standards to consider, it's pretty clear to me that real world moral discourse is wider and more messy than any one normative theory. We can simply declare a normative theory as the moral standard — plenty of people have! — but the next person whose values are a better match for another normative theory is just going to disagree. On what basis do we find that one normative theory is correct when, descriptively, moral pluralism seems to characterize moral discourse?

Is it possible for a rational person to strive to believe anything but the truth?

If being rational consists in doing what it takes to fulfill one's goals (I don't know what the popular definition of 'rationality' is on this site), then it is still possible to be rational while holding a false belief, if a false belief helps fulfill one's goals.

Now typically, false beliefs are unhelpful in this way, but I know at least Sinnott-Armstrong has talked about an 'instrumentally justified' belief that can run counter to having a true belief. The example I've used before is an atheist married to a theist, whose goal of having a happy marriage would in fact be better served if she could take a belief-altering pill and falsely take on her spouse's belief in God.

Or, to put the point another way, isn't having truth as its goal part of the concept of belief? [...] But if this is fair I'm back to wondering where the ought comes from.

Perhaps it comes from the way you view the concept of belief as implying a goal?

Replies from: asr, Peterdjones, BobTheBob
comment by asr · 2011-06-05T22:33:00.974Z · LW(p) · GW(p)

At risk of triggering the political mind-killer, I think there are some potentially problematic consequences of this view.

Once a disagreement is known to result from a pure difference in values, there isn't a rational way to resolve it...the best we can do is make people aware of the difference in their claims.

Suppose we don't have good grounds for keeping one set of moral beliefs over another. Now suppose somebody offers to reward us for changing our views, or punish us for not changing. Should we change our views?

To go from the philosophical to the concrete: There are people in the world who are fanatics who are largely committed to some reading of the Bible/Koran/Little Green Book of Colonel Gaddafi/juche ideology of the Great Leader/whatever. Some of those people have armies and nuclear weapons. They can bring quite a lot of pressure to bear on other individuals to change their views to resemble those of the fanatic.

If rationalism can't supply powerful reasons to maintain a non-fanatical worldview in the face of pressure to self-modify, that's an objection to rationalism. Conversely, altering the moral beliefs of fanatics with access to nuclear weapons strikes me as an extremely important practical project. I suspect similar considerations will apply if you consider powerful unfriendly AIs.

This reminds me of that line of Yeats, that "the best lack all conviction, while the worst are full of passionate intensity." Ideological differences sometimes culminate in wars, and if you want to win those wars, you may need something better than "we have our morals and they have theirs."

To sharpen the point slightly: There's an asymmetry between the rationalists and the fanatics, which is that the rationalists are aware that they don't have a rational justification for their terminal values, but the fanatic does have a [fanatical] justification. Worse, the fanatic has a justification to taboo thinking about the problem, and the rationalist doesn't.

Replies from: Manfred, Garren
comment by Manfred · 2011-06-05T23:23:38.945Z · LW(p) · GW(p)

Just because morality is personal doesn't make it not real. If you model people as agents with utility functions, the reason not to change is obvious - if you change, you won't do all the things you value. Non-fanatics can do that the same as fanatics.

The difference comes when you factor in human irrationality. And sure, fanatics might resist where anyone sane would give in. "We will blow up this city unless you renounce the Leader," something like that. But on the other hand, rational humans might resist techniques that play on human irrationality, where fanatics might even be more susceptible than average. Good cop / bad cop for example.

What about on a national scale, where, say, an evil mastermind threatens to nuke every nation that does not start worshiping the flying spaghetti monster? Well, what a rational society would do is compare benefits and downsides, and worship His Noodliness if it was worth it. Fanatics would get nuked. I fail to see how this is an argument for why we shouldn't be rational.

if you want to win those wars, you may need something better than "we have our morals and they have theirs."

And that's why Strawmansylvania has never won a single battle, I agree. Just because morality is personal doesn't make it unmoving.

Replies from: asr
comment by asr · 2011-06-06T00:07:33.034Z · LW(p) · GW(p)

Just because morality is personal doesn't make it not real. If you model people as agents with utility functions, the reason not to change is obvious - if you change, you won't do all the things you value.

Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?

That doesn't seem right either. Somehow, an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs in response to events. There's something badly wrong with a theory that can't distinguish those cases.

Also, my apologies if this has been already discussed to death on LW or elsewhere -- I spent some time poking and didn't see anything on this point.

Replies from: Manfred
comment by Manfred · 2011-06-06T00:49:32.707Z · LW(p) · GW(p)

Does this imply that if a rational actor has terminal values that are internally consistent and in principle satisfiable, it would always be irrational for the actor to change those values or allow them to change?

No, but it sets a high standard - if you value, say, the company of your family, then modifying to not want that (and therefore not spend much time with your family) costs as much as if you were kept away from your family by force for the rest of your life. So any threats have to be pretty damn serious, and maybe not even the threat of death would work, if you know important secrets or place little value on living on without some key values.
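A toy expected-utility comparison makes the asymmetry concrete (all of the numbers below are illustrative assumptions, nothing principled):

    # By your *current* utility function, self-modifying away a value
    # scores roughly the same as being forcibly deprived of what it
    # values, so only a threat larger than the value itself can win.

    U_FAMILY = 100.0   # value, to current-you, of a life with your family
    U_THREAT = -30.0   # harm threatened if you refuse to self-modify

    refuse = U_FAMILY + U_THREAT  # keep your values, absorb the threat: 70
    comply = 0.0                  # modified-you no longer pursues family time,
                                  # which current-you scores as a total loss

    print(refuse > comply)  # True: the threat must cost more than the
                            # value itself (here, more than 100) to flip this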

an individual improving their moral beliefs as they mature, the notional Vicar of Bray, and Pierre Laval are all substantially different cases of people changing their [terminal] beliefs

I wouldn't call all of those cases of modifying terminal values. From some quick googling (I didn't know about the Vicar of Bray), what the Vicar of Bray cared about was being the vicar of Bray. What Pierre Laval cared about was being the head of the government and not being killed, maybe. So they're maybe not good examples of changing terminal values, as opposed to instrumental ones.

Also "improving their moral beliefs as they mature" is a very odd concept once you think about it. How do you judge whether a moral belief is right to hold correctly without having a correct ultimate belief from the start, to do the judging? It's really an example of how humans are emphatically not rational agents - we follow a bunch of evolved and cultural rules, which can appear to produce consistent behavior, but really have all these holes and internal conflicts. And things can change suddenly, without the sort of rational deliberation described above.

Replies from: torekp, asr
comment by torekp · 2011-06-08T22:56:02.892Z · LW(p) · GW(p)

Also "improving their moral beliefs as they mature" is a very odd concept once you think about it. How do you judge whether a moral belief is right to hold correctly without having a correct ultimate belief from the start, to do the judging?

You could say the same about "improving our standards of scientific inference." Circular? Perhaps, but it needn't be a vicious circle. It's pretty clear that we've accomplished it, so it must be possible.

comment by asr · 2011-06-06T04:06:47.578Z · LW(p) · GW(p)

I would cheerfully agree that humans aren't rational and routinely change their minds about morality for non-rational reasons.

This is one of the things I was trying to get at: the question is when we should change our minds for non-rational reasons, and when we should attempt to change others' minds using non-rational means.

The same examples I mentioned above work for these questions too.

Here's what I had in mind with the reference to the Vicar of Bray. Imagine an individual with two terminal values: "Stay alive and employed" and the reigning orthodoxy at the moment. The individual sincerely believes in both, and whenever they start to conflict, changes their beliefs about the orthodoxy. He is quite sincere in advocating for the ruling ideology at each point in time; he really does believe in divine right of kings, just so long as it's not a dangerous belief to hold.

The beliefs in question are at least potentially terminal moral beliefs. Without delving deep into the history, let's stipulate for the purpose of the conversation that we're talking about a rational actor who has a sequence of terminal moral beliefs about what constitutes a just government, and that these beliefs shift with the political climate.

Now for contrast, let's consider a hypothetical rational but very selfish child. The child's parents attempt and succeed in changing the child's values to be less selfish. They do this by the usual parental tactics of punishment and example-setting, not by rational argument. By your social standard and mine, this is an improvement to the child.

Both the vicar and the child are updating their moral beliefs in response to outside pressure, not rational deliberation. The general consensus is that parents are obligated to bring up their children not to be overly self-centered and that reasoning with children is not a sufficient pedagogic technique. But conversely, the consensus is that coercive government pressure on religion is ignoble.

Is this simply that you and I think "a change in moral beliefs, brought about by non-reasonable means is good (all else equal), if it significantly improves the beliefs of the subject by my standards"?

Replies from: Manfred
comment by Manfred · 2011-06-06T07:36:06.088Z · LW(p) · GW(p)

I'd agree with that. Maybe with some caveats, but generally yes.

Replies from: asr
comment by asr · 2011-06-07T07:53:58.051Z · LW(p) · GW(p)

I think the caveats will turn out to matter a lot. One of the things that human moral beliefs do, in practice, is give other humans some reasons to trust you. If I know that you are committed, for non-instrumental reasons, to avoid manipulating* me into changing my values, that gives me reasons to trust you. Conversely, if your moral view is that it's legitimate to lie to people to make them do what you want, people will trust you less.

Obviously, people have incentives to lie about their true values. I think equally obviously, people are paying attention and looking hard for that sort of hypocrisy.

*This sentence is true for a range of possible expansions of "manipulating".

Replies from: Manfred
comment by Manfred · 2011-06-07T08:31:56.068Z · LW(p) · GW(p)

My statement was more observational than ideal, though. Sure, a rational agent can be averse to manipulating other people (and humans often are too), because agents can care about whatever they want. But that doesn't bear very strongly on how the language is used compared to the fact that in real-world usage I see people say things like "improved his morals" by only three standards: consistency, how much society approves, and how much you approve.

comment by Garren · 2011-06-06T06:03:10.703Z · LW(p) · GW(p)

I think the worry here is that realizing 'right' and 'wrong' are relative to values might make us give up our values. Meanwhile, those who aren't as reflective are able to hold more strongly onto their values.

But let's look at your deep worry about fanatics with nukes. Does their disregard for life have to also be making some kind of abstract error for you to keep and act on your own strong regard for life?

Replies from: asr
comment by asr · 2011-06-06T07:21:51.672Z · LW(p) · GW(p)

I think the worry here is that realizing 'right' and 'wrong' are relative to values might make us give up our values. Meanwhile, those who aren't as reflective are able to hold more strongly onto their values.

Almost. What I'm worried about is that acknowledging or defining values to be arbitrary makes us less able to hold onto them and less able to convince others to adopt values that are safer for us. I think it's nearly tautological that right and wrong are defined in terms of values.

The comment about fanatics with nuclear weapons wasn't to indicate that that's a particular nightmare of mine. It isn't. Rather, that was to get at the point that moral philosophy isn't simply an armchair exercise conducted amongst would-be rationalists -- sometimes having a good theory is a matter of life and death.

It's very tempting, if you are firmly attached to your moral beliefs, and skeptical about your powers of rationality (as you should be!) to react to countervailing opinion by not listening. If you want to preserve the overall values of your society, and are skeptical of others' powers of rational judgement, it's tempting to have the heretic burnt at the stake, or the philosopher forced to drink the hemlock.

One of the undercurrents in the history of philosophy has been an effort to explain why a prudent society that doesn't want to lose its moral footings can still allow dissent, including dissent about important values, that risks changing those values to something not obviously better. Philosophers, unsurprisingly, are drawn to philosophies that explain why they should be allowed to keep having their fun. And I think that's a real and valuable goal that we shouldn't lose sight of.

I'm willing to sacrifice a bunch of other theoretical properties to hang on to a moral philosophy that explains why we don't need heresy trials and why nobody needs to bomb us for being infidels.

comment by Peterdjones · 2011-06-05T21:53:29.113Z · LW(p) · GW(p)

While it is relativism, the focus is a bit different from 'right for me.' More like 'this action measures up as right against standard Y', where Y is typically something I endorse.

I don't see much difference there. Relativist morality doesn't have to be selfish (although the reverse is probably true).

comment by BobTheBob · 2011-06-05T19:59:34.328Z · LW(p) · GW(p)

For example, if you and I both consider a practice morally right and we do so because it measures up that way against the standard of improving 'the well-being of conscious creatures,' then there's a bit more going on than it just being right for you and me.

OK, but what I want to know is how you react to some person (whose belief system is internally consistent) who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him - that there is no sense in which he ought not to have done what he did (assuming his belief system doesn't inveigh against him offending yours)?

Or, to put the point another way, isn't having truth as its goal part of the concept of belief? [...] But if this is fair I'm back to wondering where the ought comes from.

Perhaps it comes from the way you view the concept of belief as implying a goal?

Touche.

Look, what I'm getting at is this. I assume we can agree that

"68 + 57 = 125" is true if and only if 68 + 57 = 125

This being the case, if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, "Why ought I to believe that 68 + 57 = 125?", and B answers, "Because it's true", then B is not really saying anything beyond, "Because it does". B does not answer A's question.

If the substantive answer is something along the lines that it is a mathematical fact, then I am interested to know how you conceive of mathematical facts, and whether there mightn't be moral facts of generally the same ilk (or if not, why). But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside - you'll surely allow this looks pretty dubious at least superficially.

Replies from: Garren
comment by Garren · 2011-06-06T05:54:12.483Z · LW(p) · GW(p)

OK, but what I want to know is how you react to some person (whose belief system is internally consistent) who has just, say, committed a gratuitous murder. Are you committed to saying that there are no objective grounds to sanction him

There's an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.

Such a person would objectively run afoul of a standard against randomly killing people. But let's say he acted according to a standard which doesn't care about that; then we wouldn't be able to tell him he did something wrong by that other standard. Nor could we tell him he did something wrong according to the one correct standard (since there isn't one).

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.

if A, seriously pondering the nature of the compulsion behind mathematical judgement, should ask, "Why ought I to believe that 68 + 57 = 125?", and B answers, "Because it's true", then B is not really saying anything beyond, "Because it does". B does not answer A's question.

Unless A was just asking to be walked through the calculation steps, then I agree B is not answering A's question.

But if you want somehow to reduce this to subjective goals, then it looks to me that mathematics falls by the wayside - you'll surely allow this looks pretty dubious at least superficially.

I'm not sure I'm following the argument here. I'm saying that all normativity is hypothetical. It sounds like you're arguing there is a categorical 'ought' for believing mathematical truths because it would be very strange to say we only 'ought' to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical 'oughts,' there might be others.

Is it something like that?

If so, then I would offer the goal of "in order to be logically consistent." There are some who think moral oughts reduce to logical consistency, so we ought to act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.

Replies from: TimFreeman, BobTheBob, BobTheBob, Peterdjones
comment by TimFreeman · 2011-06-07T04:03:32.002Z · LW(p) · GW(p)

If so, then I would offer the goal of "in order to be logically consistent." There are some who think moral oughts reduce to logical consistency, so we ought to act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory...

You can stop right there. If no theory of morality based on logical consistency is offered, you don't have to do any more.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T11:13:41.257Z · LW(p) · GW(p)

If no logically consistent theory is offered, you don't have to do any more.

I suppose you mean "if no theory of morality based on logical consistency is offered".

Of course, one could make an attempt to research reason-based metaethics before discarding the whole idea.

Replies from: TimFreeman
comment by TimFreeman · 2011-06-07T13:40:52.623Z · LW(p) · GW(p)

I suppose you mean "if no theory of morality based on logical consistency is offered".

Agreed and edited.

Of course, one could make an attempt to research reason-based metaethics before discarding the whole idea.

I observe that you didn't offer a pointer to a theory of morality based on logical consistency.

I agree with Eby: you are a troll. I'm done here.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T14:03:51.775Z · LW(p) · GW(p)

I observe that you didn't offer a pointer to a theory of morality based on logical consistency.

For one thing, I don't think logical consistency is quite the right criterion for reason-based objective morality. Pointing out that certain ideas are old and well documented is offering a pointer, and is not trolling.

comment by BobTheBob · 2011-06-07T02:57:18.832Z · LW(p) · GW(p)

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.

I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I'm not getting it.

I'm not sure I'm following the argument here. I'm saying that all normativity is hypothetical. It sounds like you're arguing there is a categorical 'ought' for believing mathematical truths because it would be very strange to say we only 'ought' to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical 'oughts,' there might be others.

Is it something like that?

This states the thought very clearly -thanks.

If so, then I would offer the goal of "in order to be logically consistent."

I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It's possible this doesn't really engage your thoughts, though.

There are some who think moral oughts reduce to logical consistency, so we ought to act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.

If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn't that an important result?

Replies from: Garren, TimFreeman
comment by Garren · 2011-06-07T04:11:22.931Z · LW(p) · GW(p)

It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.

When a dispute is over fundamental values, I don't think we can give the other side compelling grounds to act according to our own values. Consider Eliezer's paperclip maximizer. How could we possibly convince such a being that it's doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?

Thanks for the link to the Carroll story. I plan on taking some time to think it over.

If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn't that an important result?

It's important to us, but — as far as I can tell — only because of our values. I don't think it's important 'to the universe' for someone to refrain from going on a killing spree.

Another way to put it is that the rationality of killing sprees is dependent on the agent's values. I haven't read much of this site, but I'm getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T10:18:16.989Z · LW(p) · GW(p)

, besides pointing out that its current actions are suboptimal for its goal in the long run?

That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality, or just different values?

Another way to put it is that the rationality of killing sprees is dependent on the agent's values. I haven't read much of this site, but I'm getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.

Like so much material on this site, that tacitly assumes values cannot be reasoned about.

comment by TimFreeman · 2011-06-07T04:00:16.365Z · LW(p) · GW(p)

I cannot provide [a murderer] compelling grounds as to why he ought not to have done what he did... [T]o punish him would be arbitrary.

If you don't want murderers running around killing people, then it's consistent with your values to set up a situation in which murderers can expect to be punished, and one way to do that is to actually punish murderers.

Yes, that's arbitrary, in the same sense that every preference you have is arbitrary. If you are going to act upon your preferences without deceiving yourself, you have to feel comfortable with doing arbitrary things.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T10:28:30.125Z · LW(p) · GW(p)

I think you missed the point quite badly there. The point is that there is no rationally compelling reason to act on any arbitrary value. You gave the example of punishing murderers, but if every value is equally arbitrary, that is no more justifiable than punishing stamp collectors or the left-handed. Having accepted moral subjectivism, you are faced with a choice between acting irrationally or not acting. OTOH, you haven't exactly given moral objectivism a run for its money.

comment by BobTheBob · 2011-06-07T02:47:11.568Z · LW(p) · GW(p)

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.

I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.

I'm not sure I'm following the argument here. I'm saying that all normativity is hypothetical. It sounds like you're arguing there is a categorical 'ought' for believing mathematical truths because it would be very strange to say we only 'ought' to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical 'oughts,' there might be others.

Is it something like that?

This states the thought very clearly -thanks.

If so, then I would offer the goal of "in order to be logically consistent."

I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It's possible this doesn't really engage your thoughts, though. Some people I know think it's just foolish.

There are some who think moral oughts reduce to logical consistency, so we ought to act in a certain way in order to be logically consistent. I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.

As is pointed out in the other thread from your post, plausibly our goal in the first instance is to show that it is rational not to kill people.

comment by Peterdjones · 2011-06-06T13:38:36.182Z · LW(p) · GW(p)

There's an ambiguity here. A standard can make objective judgments, without the selection of that standard being objective. Like meter measurements.

I don't think that works. If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don't think you can have multiple contradictory objective truths.

But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him.

You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.

I don't have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.

It'll work on people who already subscribe to rationality, whereas relativism won't.

Replies from: nshepperd, Garren
comment by nshepperd · 2011-06-06T16:02:59.434Z · LW(p) · GW(p)

What's contradictory about the same object being judged differently by different standards?

Here's a standard: return the width of the object in meters. Here's another: return the number of wavelengths of blue light that make up the width of the object. And another: return the number of electrons in the object.
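For concreteness, a short sketch of those three standards applied to one object (the object and all its numbers are invented; the point is only that the three answers differ without conflicting):

    # Three standards, one object: each returns a different number, and
    # none of the answers contradicts the others.

    WIDTH_M = 0.30               # hypothetical object, 0.3 m wide
    BLUE_WAVELENGTH_M = 470e-9   # roughly 470 nm, a typical blue wavelength
    ELECTRON_COUNT = 8.4e24      # made-up electron count for the object

    standards = {
        "width in meters": lambda: WIDTH_M,
        "width in blue wavelengths": lambda: WIDTH_M / BLUE_WAVELENGTH_M,
        "number of electrons": lambda: ELECTRON_COUNT,
    }

    for name, measure in standards.items():
        print(name, "->", format(measure(), ".3g"))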

You are tacitly assuming that the good guys are in the majority. However, sometimes the minority is in the right (as you and I would judge it), and needs to persuade the majority to change their ways.

No Universally Compelling Arguments seems relevant here.

Replies from: Eugine_Nier, Peterdjones
comment by Eugine_Nier · 2011-06-07T02:21:27.938Z · LW(p) · GW(p)

No Universally Compelling Arguments seems relevant here.

You realize that the linked post applies to arguments about mathematics or physics just as much as about morality.

comment by Peterdjones · 2011-06-06T16:16:55.335Z · LW(p) · GW(p)

What's contradictory about the same object being judged differently by different standards?

Nothing. There's nothing contradictory about multiple subjective truths or about multiple opinions, or about a single objective truth. But there is a contradiction in multiple objective truths about morality, as I said.

Here's a standard: return the width of the object in meters. Here's another: return the number of wavelengths of blue light that make up the width of the object. And another: return the number of electrons in the object.

There isn't any contradiction in multiple objective truths about different things; but the original hypothesis was multiple objective truths about the same thing, i.e. the morality of an action. If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.

Replies from: Garren, nshepperd
comment by Garren · 2011-06-07T04:34:17.662Z · LW(p) · GW(p)

If you are going to say that John-morality and Mary-morality are different things, that is effectively conceding that they are subjective.

The focus doesn't have to be on John and Mary; it can be on the morality we're referencing via John and Mary. By analogy, we could talk about John's hometown and Mary's hometown, without being subjectivists about the cities we are referencing.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T11:03:07.171Z · LW(p) · GW(p)

That isn't analogous, because towns aren't epistemic.

comment by nshepperd · 2011-06-06T17:07:36.930Z · LW(p) · GW(p)

Hmm. Sounds like it would be helpful to taboo "objective" and "subjective". Or perhaps this is my fault for not being entirely clear.

A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the "judgements" of the standard.

I should mention at this point that I use the word "morality" to indicate a particular standard - the morality-standard - that has the properties we normally associate with morality ("approving" of happiness, "disapproving" of murder, etc.). This is the standard I would endorse (by, for example, acting to maximise "good" according to it) were I fully rational and reflectively consistent and non-akrasiac.

So the judgements of other standards are not moral judgements in the sense that they are not statements about the output of this standard. There would indeed be something inconsistent about asserting that other standards made statements about -- ie. had the same output as -- this standard.

Given that, and assuming your objections about "subjectivity" still exist, what do you mean by "subjective" such that the existence of other standards makes morality "subjective", and this a problem?

It already seems that you must be resigned to your arguments failing to work on some minds: there is no god that will strike you down if you write a paperclip-maximising AIXI, for example.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-06T17:41:46.933Z · LW(p) · GW(p)

A standard can be put into the form of sentences in formal logic, such that any formal reasoner starting from the axioms of logic will agree about the "judgements" of the standard.

Yep. Subjective statements about X can be phrased in objectivese. But that doesn't make them objective statements about X.

Given that, and assuming your objections about "subjectivity" still exist, what do you mean by "subjective" such that the existence of other standards makes morality "subjective", and this a problem?

By other standards do you mean other people's moral standards, or non-moral (eg aesthetic standards)?

It already seems that you must be resigned to your arguments failing to work on some minds:

Of course. But I think moral objectivism is better as an explanation, because it explains moral praise and blame as something other than a mistake; and I think moral objectivism is also better in practice because having some successful persuasion going on is better than having none.

Replies from: nshepperd
comment by nshepperd · 2011-06-06T18:06:19.289Z · LW(p) · GW(p)

Yep. Subjective statements about X can be phrased in objectivese. But that doesn't make them objective statements about X.

I don't know what you mean, if anything, by "subjective" and "objective" here, and what they are for.

By other standards do you mean other people's moral standards, or non-moral (eg aesthetic standards)?

Okay... I think I'll have to be more concrete. I'm going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only "moral" standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.

A mind, for example an alien, may be constructed such that it always executes the action that maximises the utility of some other standard. This utility function may be taken to be the "values" of the alien.

Moral praise and blame is not a mistake; whether certain actions result in an increase or decrease in the value of the moral utility function is an analytic fact. It is further an analytic fact that praise and blame, correctly applied, increase the output of the moral utility function, and that if we failed to do that, we would therefore fail to do the most moral thing.
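This setup lends itself to a direct, if crude, formalization. A minimal sketch, in which the particular utility functions, weights, and world-states are my own stand-ins rather than anything canonical:

    from typing import Callable, Dict

    State = Dict[str, float]
    Standard = Callable[[State], float]  # a standard maps world-states to numbers

    def morality_standard(state: State) -> float:
        # Toy stand-in for the morality-standard: favors happiness,
        # heavily penalizes murder.
        return state["happiness"] - 10.0 * state["murders"]

    def paperclip_standard(state: State) -> float:
        return state["paperclips"]

    class Agent:
        """An agent is (idealized as) a pointer to the standard it maximises."""
        def __init__(self, standard: Standard):
            self.standard = standard

        def choose(self, options):
            return max(options, key=self.standard)

    options = [
        {"happiness": 5.0, "murders": 0.0, "paperclips": 1.0},
        {"happiness": 1.0, "murders": 0.0, "paperclips": 9.0},
    ]
    print(Agent(morality_standard).choose(options))   # picks the happier state
    print(Agent(paperclip_standard).choose(options))  # picks the paperclip-rich state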

Replies from: Peterdjones
comment by Peterdjones · 2011-06-06T18:22:05.798Z · LW(p) · GW(p)

I don't know what you mean, if anything, by "subjective" and "objective" here, and what they are for.

By "subjective" I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that -- whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doens't count for anything, because the shape of the world is not something about which Mary has the last word.

By other standards do you mean other people's moral standards, or non-moral (eg aesthetic standards)?

Okay... I think I'll have to be more concrete. I'm going to exploit VNM-utility here, to make the conversation simpler. A standard is a utility function. That is, generally, a function that takes as input the state of the universe and produces as output a number. The only "moral" standard is the morality-standard I described previously. The rest of them are just standards, with no special names right now.

And there is one such standard in the universe, not one per agent?

Replies from: nshepperd
comment by nshepperd · 2011-06-07T01:46:00.379Z · LW(p) · GW(p)

By "subjective" I meant that it is indexed to an individual, and properly so. If Mary thinks vanilla is nice, vanilla is nice-for-Mary, and there is no further fact that can undermine the truth of that -- whereas if Mary thinks the world is flat, there may be some sense in which it is flat-for-Mary, but that doens't count for anything, because the shape of the world is not something about which Mary has the last word.

If Mary thinks the world is flat, she is asserting that a predicate holds of the earth. It turns out it doesn't, so she is wrong. In the case of thinking vanilla is nice, there is no sensible niceness predicate, so we assume she's using shorthand for nice_mary, which does exist, so she is correct. She might, however, get confused and think that nice_mary being true meant nice_x holds for all x, and use nice to mean that. If so, she would be wrong.

Okay then. An agent who thinks the morality-standard says something other than it does, is wrong, since statements about the judgements of the morality-standard are tautologically true.

And there is one such standard in the universe, not one per agent?

There is precisely one morality-standard.

Each (VNM-rational or potentially VNM-rational) agent contains a pointer to a standard -- namely, the utility function the agent tries to maximise, or would try to maximise if they were rational. Most of these pointers within a light year of here will point to the morality-standard. A few of them will not. Outside of this volume there will be quite a lot of agents pointing to other standards.

comment by Garren · 2011-06-07T04:27:10.772Z · LW(p) · GW(p)

If you have multiple contradictory judgements being made by multiple standards, and you deem them to be objective, then you end up with multiple contradictory objective truths. But I don't think you can have multiple contradictory objective truths.

Ok, instead of meter measurements, let's look at cubit measurements. Different ancient cultures represented significantly different physical lengths by 'cubits.' So a measurement of 10 cubits to a Roman was a different physical distance than 10 cubits to a Babylonian.

A given object could thus be 'over ten cubits' and 'under ten cubits' at the same time, though in different senses. Likewise, a given action can be 'right' and 'wrong' at the same time, though in different senses.

The surface judgments contradict, but there need not be any propositional conflict.
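Numerically, with rough, merely illustrative cubit lengths (the exact ancient values are an assumption here):

    # One rod, two cubit standards: 'over ten cubits' and 'under ten
    # cubits' both hold, in different senses, without contradiction.

    ROMAN_CUBIT_M = 0.444        # approximate Roman cubit, in meters
    BABYLONIAN_CUBIT_M = 0.497   # approximate Babylonian cubit, in meters

    rod_m = 4.7  # a hypothetical rod, 4.7 m long

    print(rod_m / ROMAN_CUBIT_M)       # ~10.6 -> over ten cubits (Roman)
    print(rod_m / BABYLONIAN_CUBIT_M)  # ~9.46 -> under ten cubits (Babylonian)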

You are tacitly assuming that the good guys are in the majority, However, sometimes the minority is in the right (as you and I would judge it), and need to persuade the majority to change their ways

Isn't this done by appealing to the values of the majority?

It'll work on people who already subscribe to rationality, whereas relativism won't.

Only if — independent of values — certain values are rational and others are not.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T10:07:29.369Z · LW(p) · GW(p)

Likewise, a given action can be 'right' and 'wrong' at the same time, though in different senses.

Are you sure that people mean different things by 'right' and 'wrong', or are they just using different criteria to judge whether something is right or wrong?

Isn't this done by appealing to the values of the majority?

It's done by changing the values of the majority... by showing the majority that they ought (in a rational sense of ought) to think differently. The point being that if correct reasoning eventually leads to uniform results, we call that objective.

Only if — independent of values — certain values are rational and others are not

Does it work or not? Have majorities not been persuaded that it's wrong, even when convenient, to oppress minorities?

Replies from: Garren
comment by Garren · 2011-06-07T13:33:56.919Z · LW(p) · GW(p)

Are you sure that people mean different things by 'right' and 'wrong', or are they just using different criteria to judge whether something is right or wrong?

What could 'right' and 'wrong' mean, beyond the criteria used to make the judgment?

It's done by changing the values of the majority... by showing the majority that they ought (in a rational sense of ought) to think differently.

Sure, if you're talking about appealing to people to change their non-fundamental values to be more in line with their fundamental values. But I've still never heard how reason can have anything to say about fundamental values.

Does it work or not? Have majorities not been persuaded that it's wrong, even when convenient, to oppress minorities?

So far as I can tell, only by reasoning from their pre-existing values.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-07T14:14:00.397Z · LW(p) · GW(p)

What could 'right' and 'wrong' mean, beyond the criteria used to make the judgment?

"Should be rewarded" and "should be punished". If there was evidence of people saying that the good should be punished, that would indicate that some people are disagreeing about the meaning of good/right. Otherwise, disagreements are about criteria for assigning the term.

So far as I can tell, only by reasoning from their pre-existing values.

But not from all of them (since some of them get discarded), and not only from moral values (since people need to value reason to be reasoned with).

comment by Peterdjones · 2011-06-02T13:00:42.355Z · LW(p) · GW(p)

A philosopher is doing useful work for me if he shows me that a tempting-sounding definition of 'morality' doesn't pick out the set of things I want it to pick out, or that some other definition turns out not to refer to any clear set at all.

That is an important point. People often run on examples as much as or more than they do on definitions, and if their intuitions about examples are strong, that can be used to fix their definitions (i.e. give them revised definitions that serve their intuitions better).

The rest of the post contained good material that needed saying.

comment by JenniferRM · 2011-06-01T15:20:57.103Z · LW(p) · GW(p)

When I have serious conversations with thoughtful religious people who have faith but no major theological training, I find it helpful to think of their statements about "God" as being statements about "all worldly optimization processes stronger than me that I don't have time to understand in very much detail like evolution, entropy, economics, democratic politics, organizational dynamics, similar regularities in the structure of the world that science hasn't started analyzing yet, plus many small activist groups throughout history, and a huge number of specific powerful agents silently influencing my life right now like various investors and celebrities, the local chief of police, the local school principal, my employer, my ancestors, and so on".

I can imagine a relatively simple life heuristic, H, that might successfully navigate this vast and bewildering array of optimization pressures in their life, and can ask "Does God want you to H?" Also, this translation scheme helps me to listen to evangelical radio and learn things from it :-)

I bring this up because it feels to me like you're doing a lot of work to resuscitate ideas from moral philosophy that are significantly helped by "pluralistic reduction" to unpack the ideas into more specific and coherent claims, but you seem to be doing it in a lop-sided way by not unpacking the ideas of "the other side" in a similarly generous manner. Also, to a lesser extent, it seems to be leaving some of "our" ideas unpacked that could probably use some pluralistic reduction but might not look as shiny if unpacked this way.

I guess what I'm trying to say is that "acoustic messenger fairies" in the "ether" seem to me like perfectly adequate placeholder terms if I'm in a conversation with someone whose starting vocabulary uses them as atomic concepts, but if I'm tossing those terms out then "an ideally instrumentally rational and fully informed agent" seems roughly as questionable, given how much difficulty people seem to have when using mind-shaped conceptually-atomic entities in their theories.

Do you think my impression of lopsided conceptual unpacking is accurate? If yes, I'm wondering if you could try to introspect on your writing process and try to articulate how you decided which things to unpack and which to leave fuzzy.

Replies from: lukeprog
comment by lukeprog · 2011-06-04T23:59:01.897Z · LW(p) · GW(p)

I'm not sure what you mean. I unpacked certain concepts as examples, and there are many more we could unpack. Could you say a bit more about what your concern is?

comment by prase · 2011-06-01T10:55:57.557Z · LW(p) · GW(p)

I don't understand the terms "world of is" and "world of is not". Does "talking about the world of is not" mean "deducing from false assumptions", or is there something more to it? Anyway, "talking about the world of is" sounds like the worst kind of continental philosophy babble.

Otherwise, the article is clear, comprehensible, and very readable.

Replies from: lukstafi, Sniffnoy, lukeprog
comment by lukstafi · 2011-06-01T11:13:16.918Z · LW(p) · GW(p)

While "of is, of is not" didn't hurt my understanding that much, the article would be better off without them.

Replies from: wedrifid
comment by wedrifid · 2011-06-01T12:03:29.565Z · LW(p) · GW(p)

While "of is, of is not" didn't hurt my understanding that much, the article would be better off without them.

I agree, and also note that the way luke dismisses the "is not" misses much of the point the phrase is trying to express. If it is going to be discussed at all, it deserves the same kind of parameterizing as 'objective' received.

comment by Sniffnoy · 2011-06-01T18:52:20.276Z · LW(p) · GW(p)

It seems to be essentially a bit of wordplay, in that he uses it to mean two different things. Initially he is contrasting "is/is not" statements with "ought/ought not" statements. Later he talks about things that exist vs. things that don't exist. It doesn't seem to be very helpful though; in the earlier sense, there is no distinction between the "world of is" and the "world of is not". So this seems like it was a bad idea.

Replies from: torekp
comment by torekp · 2011-06-04T02:00:07.907Z · LW(p) · GW(p)

I think there may be a good idea behind it though: view it as a cryptic appeal to Occam's Razor. Various moralists (e.g., Railton, Craig) were shown to be speaking of real things and properties, or of imaginary ones, with their moral language. Why not then hypothesize that all are - albeit less transparently than these two - and do away with the need for a special metaphysics or semantics (or both) for "ought" questions as "opposed" to "is" questions?

comment by lukeprog · 2011-06-08T07:01:14.556Z · LW(p) · GW(p)

Does this make it any clearer?

Replies from: prase
comment by prase · 2011-06-08T11:00:43.633Z · LW(p) · GW(p)

Yes, it does.

comment by RichardChappell · 2011-06-03T19:48:44.626Z · LW(p) · GW(p)

If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."

I just wanted to flag that a non-reductionist moral realist (like myself) is also "not here to debate definitions". See my post on The Importance of Implications. This is compatible with thinking well of the Open Question Argument, if we think we have an adequate grasp of some fundamental normative concept (be it 'good', 'reason', or 'ought' -- I lean towards 'reason', myself, such that to speak of a person's welfare is just to talk about what a sympathetic party has reason to desire for the person's sake).

Note that if we're right to consider some normative concepts to be conceptually primitive (not analytically reducible to non-normative concepts) then your practice of "tabooing" all normative vocabulary actually has the effect of depriving us of the conceptual tools necessary to even talk about the normative sphere. Consequent talk of people's (incl. God's) desires or dispositions is simply changing the subject, on this way of looking at things.

Out of interest: Will you be arguing anywhere in this sequence against non-reductionist moral realism? Or are you simply assuming its falsity from the start, and exploring the implications from there? (Even the latter, more modest project is of course worth pursuing, but I personally would be more interested in the former.) Either way, it'd be good to be clear about this. (You could then skip the silly rhetoric about how what is not "is", must be "is not".)

Replies from: lukeprog
comment by lukeprog · 2011-06-06T00:04:26.700Z · LW(p) · GW(p)

I'm inclined not to write about moral non-naturalism because I'm writing this stuff for Less Wrong, where most people are physicalists.

What does it mean to you to say that something is a 'fundamental normative concept'? As in... non-reducible to 'is' statements (in the Humean sense)?

Replies from: RichardChappell, RichardChappell
comment by RichardChappell · 2011-06-06T01:14:53.239Z · LW(p) · GW(p)

I was thinking of "fundamental" concepts as those that are most basic, and not reducible to (or built up out of) other, more basic, concepts. I do think that normative concepts are conceptually isolated, i.e. not reducible to non-normative concepts, and that's really the more relevant feature so far as the OQA is concerned. But by 'fundamental normative concept' I meant a normative concept that is not reducible to any other concepts at all. They are the most basic, or bedrock, of our normative concepts.

Replies from: torekp
comment by torekp · 2011-06-07T01:22:47.188Z · LW(p) · GW(p)

Given the extremely poor access human beings have to the structure of their own concepts, it's dubious that the methods of analytic philosophy can trace those structures. Moreover, concepts typically "cluster together similar things for purposes of inference" (Yudkowsky), and thus we can re-structure them in light of new discoveries. Concepts that are connected now might be improved by disconnecting them, or vice versa. It is not at all clear that normative concepts are not included in this (Neurath-style) boat.

comment by RichardChappell · 2011-06-06T01:26:09.853Z · LW(p) · GW(p)

I'm inclined not to write about moral non-naturalism because I'm writing this stuff for Less Wrong, where most people are physicalists

Physicalists could (like Mackie) accept the non-naturalist's account of what it would take for something to be genuinely normative, and then simply deny that there are any such properties in reality. I'm much more sympathetic to this hard-headed "error theory" than to the more weaselly forms of naturalism.

Replies from: lukeprog
comment by lukeprog · 2011-06-06T05:42:23.640Z · LW(p) · GW(p)

I think many of our normative concepts fail to refer, but that a class of normative concepts often called hypothetical imperatives do refer, thanks to a rather straightforward reduction as given above. Are hypothetical imperatives not 'genuinely normative' in your sense of the phrase? Do you use the term 'normative' when talking about things other than hypothetical imperatives, and do you think those other things successfully refer?

Replies from: RichardChappell, poqwku
comment by RichardChappell · 2011-06-06T16:50:45.159Z · LW(p) · GW(p)

As I argue elsewhere:

"Hypothetical imperatives thus reveal patterns of normative inheritance. But their highlighted 'means' can't inherit normative status unless the 'end' in question had prior normative worth. A view on which there are only hypothetical imperatives is effectively a form of normative nihilism -- no more productive than an irrigation system without any water to flow through it."

(An earlier part of the post explains why hypothetical imperatives aren't reducible to mere empirical statements of a means-ends relationship.)

I tentatively favour non-naturalist realism over non-naturalist error theory, but my purpose in my previous comment was just to flag the latter option as one that physicalists should take (very) seriously.

Replies from: lukeprog
comment by lukeprog · 2011-06-08T06:59:25.524Z · LW(p) · GW(p)

Error theory

You know this, but for the benefit of others: Roughly, error theory consists of two steps. As Finlay puts it:

(1) Presupposition: moral judgments involve a particular kind of presupposition which is essential to their status as moral; (2) Error: this presupposition is irreconcilable with the way things are

Given my view of conceptual analysis, it shouldn't be surprising that I'm not confident of some error theorists' assertion of step 1. Is a presupposition of moral absolutism 'essential' to a judgment's status as a 'moral' judgment? Is a presupposition of motivational internalism 'essential' to a judgment's status as a 'moral' judgment? I don't know. Moral discourse (unlike carbon discourse) is so confused that I'm not too interested in asserting one fine boundary line around moral terms rather than another.

So if someone thinks a presupposition of supernaturalism is 'essential' to a judgment's status as a 'moral' judgment, then I will claim that supernaturalism is false. But this doesn't make me an error theorist because I don't necessarily agree that a presupposition of supernaturalism is 'essential' to a judgment's status as a 'moral' judgment. I reject step 1 of error theory in this case.

Likewise, if someone thinks a presupposition of moral absolutism or motivational internalism is essential to a judgment's status as a 'moral' judgment, I'll be happy to deny both moral absolutism and motivational internalism, but I wouldn't call myself an error theorist because I reject the claim that moral judgments (by definition, by conceptual analysis) necessarily presuppose moral absolutism or motivational internalism.

But hey, if you convince me that the presumption of motivational internalism in moral discourse is so widespread that talking about 'morality' without it would be like using the term 'phlogiston' to talk about oxygen, then I'll be happy to call myself an error theorist, though none of my anticipations will have changed.

Hypothetical imperatives

I'll reply to a passage from your post on hypothetical imperatives. My reply won't make sense to those who haven't read it:

When we affirm the first premise as a mere hypothetical imperative, we mean it in a sense that does not validate such an inference. We might add, "But of course you shouldn't want to torture children, and so you shouldn't take the means to this atrocious end either."

I think this is because 'should' is being used in different senses. The real modus ponens is:

  1. If you want to torture children, you should_ToTortureChildren volunteer as a babysitter.
  2. You want to torture children.
  3. Therefore, you should_ToTortureChildren volunteer as a babysitter.

Or at least, that is plausibly what some people mean when they assert what looks like a hypothetical imperative. Doubtless, others will turn out to mean something else when pressed.

Now, to respond to Sidgwick:

When (e.g.) a physician says, "If you wish to be healthy you ought to rise early," this is not the same thing as saying "early rising is an indispensable condition of the attainment of health." This latter proposition expresses the relation of physiological facts on which the former is founded; but it is not merely this relation of facts that the word 'ought' imports: it also implies the unreasonableness of adopting an end and refusing to adopt the means indispensable to its attainment.

I could capture this 'unreasonableness' by simply clarifying that from the standpoint of Bayesian rationality, it would be somewhat irrational to expect good health despite not rising early (or so the doctor claims).

But again, I'm not too keen to play the definitions game. If your use of hypothetical imperatives carries more intuitions about their meaning than mine does, then you are free to explain what you mean by hypothetical imperatives and then show how they fit into the physical world. If you can't show how they fit into the world, then you're talking about something that doesn't exist, or else we'll have to replay the physicalism vs. non-physicalism debate, which is another topic.

Right?

Replies from: RichardChappell
comment by RichardChappell · 2011-06-08T18:09:23.694Z · LW(p) · GW(p)

Thanks for this reply. I share your sense that the word 'moral' is unhelpfully ambiguous, which is why I prefer to focus on the more general concept of the normative. I'm certainly not going to stipulate that motivational internalism is true of the normative, though it does seem plausible that there's something irrational about someone who acknowledges that they really ought (all things considered) to phi and yet fails to do so. (I don't doubt that it's possible for someone to form the judgment without any corresponding motivation though, as it's always possible for people to be irrational!)

I trust that we all have a decent pre-theoretic grasp of normativity (or "ought-ness"). The question then is whether this phenomenon that we have in mind (i) is reducible to some physical property, and (ii) actually exists.

Error theory (answering 'no' and 'no' to the two questions above) seems the most natural position for the physicalist. And it sounds like you may be happy to agree that you're really an error theorist about normativity (as I mean it). But then I'm puzzled by what you take yourself to be doing in this series. Why even use moral/normative vocabulary at all, rather than just talking about the underlying natural properties that you really have in mind?

P.S. What work is the antecedent doing in your conditional?

If you want to torture children, you should_ToTortureChildren volunteer as a babysitter.

Why do you even need the modus ponens? Assuming that "should_ToTortureChildren" just means "what follows is an effective means to torturing children", then isn't the consequent just plain true regardless of what you want? (Perhaps only someone with the relevant desire will be interested in this means-ends fact, but that's true of many unconditional facts.)

Replies from: lukeprog
comment by lukeprog · 2011-06-08T22:59:53.700Z · LW(p) · GW(p)

there's something irrational about someone who acknowledges that they really ought (all things considered) to phi and yet fails to do so. (I don't doubt that it's possible for someone to form the judgment without any corresponding motivation though, as it's always possible for people to be irrational!)

Right.

And it sounds like you may be happy to agree that you're really an error theorist about normativity (as I mean it).

Unfortunately, I don't think I'm clear about what you mean by normativity. The only source of normativity I think exists is the hypothetical imperative, which can be reduced to physics by straightforward methods such as the one I used in the original post. I'm not an error theorist about that kind of normativity.

Why even use moral/normative vocabulary at all, rather than just talking about the underlying natural properties that you really have in mind?

This is a good question. Truly, I want to get away from moral vocabulary, and be careful around normative vocabulary. But people already think about these topics in moral and normative vocabulary, which is why I'm trying to solve or dissolve (in this post and its predecessor) some of the usual 'problems' in this space of questions.

After that's done, I don't think it will be most helpful to use moral language. This is evident in the fact that in 15 episodes of my 'morality podcast' I've used almost no moral language at all.

What work is the antecedent doing in your conditional?

Not much, really. I wasn't using the modus ponens to present an argument, but to unpack one interpretation of (some) 'should' discourse. Normative language, like many other kinds of language, is (when used correctly) merely a shortcut for saying something else. I can imagine a language that has no normative language at all. In that language we couldn't say things like "If you want to torture children, you should volunteer as a babysitter" but we could say things like "If you volunteer as a babysitter you will have more opportunities to torture children." The way I'm parsing 'should' in the first sentence, nothing is lost by this translation.

Of course, people use 'should' in a variety of ways, some of which translate into claims about things reducible to physics, others of which translate into claims about things non-reducible to physics, while still others don't seem to translate into cognitive statements at all.

Replies from: RichardChappell
comment by RichardChappell · 2011-06-08T23:48:09.751Z · LW(p) · GW(p)

Thanks, this is helpful. I'm interested in your use of the phrase "source of normativity" in:

The only source of normativity I think exists is the hypothetical imperative

This makes it sound like there's a new thing, normativity, that arises from some other thing (e.g. desires, or means/ends relationships). That's a very realist way of talking.

I take it that what you really want to say is something more like, "The only kind of 'normativity'-talk that's naturalistically reducible and hence possibly true is hypothetical imperatives -- when these are understood to mean nothing more than that a certain means-end relation holds." Is that right?

I'd then understand you as an error theorist, since "being a means-end relationship", like "being red", is not even in the same ballpark as what I mean by "being normative". (It might sometimes have normative importance, but as we learn from Parfit, that's a very different thing.)

Replies from: lukeprog
comment by lukeprog · 2011-06-09T00:03:44.214Z · LW(p) · GW(p)

My thought process on sources of normativity looks something like this:

People claim all sorts of justifications for 'ought' statements (aka normative statements). Some justify ought statements with respect to natural law or divine commands or non-natural normative properties or categorical imperatives. But those things don't exist. The only justification of normative language that fits in my model of the universe is when people use 'ought' language as some kind of hypothetical imperative, which can be translated into a claim about things reducible to physics. There are many varieties of this: many uses of 'ought' terms can be translated into claims about things reducible to physics. If somebody uses 'ought' terms to make claims about things not reducible to physics, then I am suspicious of the warrant for those claims. When I interrogate such warrants, I usually find that the only evidence on offer is folk wisdom, intuitions, and conventional linguistic practice.

Replies from: RichardChappell, poqwku, Peterdjones
comment by RichardChappell · 2011-06-09T01:59:33.647Z · LW(p) · GW(p)

People claim all sorts of justifications for 'ought' statements (aka normative statements).

You still seem to be conflating justification-giving properties with the property of being justified. Non-naturalists emphatically do not appeal to non-natural properties to justify our ought-claims. When explaining why you ought to give to charity, I'll point to various natural features -- that you can save a life for $500 by donating to VillageReach, etc. It's merely the fact that these natural features are justifying, or normatively important, which is non-natural.

Replies from: lukeprog
comment by lukeprog · 2011-06-09T03:09:17.804Z · LW(p) · GW(p)

Sure. So what is it that makes (a) [the fact that you can save a life by donating $500 to VillageReach] normatively justifying, whereas (b) [the fact that you can save a mosquito by donating $2000 to SaveTheMosquitos] is not normatively justifying?

On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we're discussing at the moment is saving human lives, not saving mosquito lives. That's a natural fact. So are the facts about how the English language works and how two English speakers are using their terms.

Replies from: RichardChappell, Vladimir_Nesov, Peterdjones
comment by RichardChappell · 2011-06-09T04:46:42.136Z · LW(p) · GW(p)

It's not entirely clear what you're asking. Two possibilities, corresponding to my above distinction, are:

(1) What (perhaps more general) normatively significant feature is possessed by [saving lives for $500 each] that isn't possessed by [saving mosquitoes for $2000 each]? This would just be to ask for one's fully general normative theory: a utilitarian might point to the greater happiness that would result from the former option. Eventually we'll reach bedrock ("It's just a brute fact that happiness is good!"), at which point the only remaining question is....

(2) In what does the normative significance of [happiness] consist? That is, what is the nature of this justificatory status? What are we attributing to happiness when we claim that it is normatively justifying? This is where the non-naturalist insists that attributing normativity to a feature is not merely to attribute some natural quality to it (e.g. of "being the salient goal under discussion" -- that's not such a philosophically interesting property for something to have. E.g., I could know that a feature has this property without this having any rational significance to me at all).

(Note that it's a yet further question whether our attributions of normativity are actually correct, i.e. whether worldly things have the normative properties that we attribute to them.)

I gather it's this second question you had in mind, but again it's crucial to carefully distinguish them since non-naturalist answers to the first question are obviously crazy.

Replies from: lukeprog
comment by lukeprog · 2011-06-09T05:44:11.396Z · LW(p) · GW(p)

Yup. I'm asking question (2). Thanks again for your clarifying remarks.

comment by Vladimir_Nesov · 2011-06-09T09:11:37.383Z · LW(p) · GW(p)

On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we're discussing at the moment is saving human lives, not saving mosquito lives.

What if you actually should be discussing saving of mosquito lives, but don't, because humans are dumb?

Replies from: lukeprog
comment by lukeprog · 2011-06-09T09:41:16.193Z · LW(p) · GW(p)

I think this is a change of subject, but... what do you mean by 'actually should'?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-06-09T09:43:50.648Z · LW(p) · GW(p)

what do you mean by 'actually should'?

No idea.

Replies from: lukeprog
comment by lukeprog · 2011-06-09T16:06:21.163Z · LW(p) · GW(p)

I take you to mean "what would maximize Luke's utility function" (knowing that 'utility function' is probably just a metaphor when talking about humans) when you say "you actually should..." Of course, my 'utility function' is unknown to both of us.

In that case, it would remain true in our hypothetical scenario that I should-HumanLivesAreGood donate to VillageReach (assuming they're a good charity for saving human lives), while I should-UtilityFunctionLuke donate to SaveTheMosquitos.

(Sorry about formatting; LW comments don't know how to use underscores, apparently.)

comment by Peterdjones · 2011-06-09T12:38:57.996Z · LW(p) · GW(p)

On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we're discussing at the moment is saving human lives, not saving mosquito lives.

But the question then is what goal you should have. It is easy to naturalise norms inasmuch as they are hypothetical and indexed to whatever you happen to be doing (if you want to play chess, you should move the bishop diagonally). The issue is how to naturalise categorical ends, the goals you should have and the rules you should be following irrespective of what you are doing.

That's a natural fact. So are the facts about how the English language works and how two English speakers are using their terms.

Such facts aren't supernatural. OTOH, they fall on the analytical/a priori side of the fence, rather than the empirical side, and that is an important distinction.

comment by poqwku · 2011-06-11T00:08:06.622Z · LW(p) · GW(p)

What reasons are there for doubting the existence of categorical imperatives that do not equally count against the existence of hypothetical imperatives? I can understand rejecting both, I can understand accepting both, but I can't understand treating them differently.

Replies from: lukeprog, Peterdjones
comment by lukeprog · 2011-06-11T00:33:27.131Z · LW(p) · GW(p)

Depends what you mean by 'categorical imperative', and what normative force you think it carries. The categorical imperatives I am used to hearing about can't be reduced into the physical world in the manner by which I reduced a hypothetical imperative in the original post above.

Replies from: poqwku
comment by poqwku · 2011-06-11T05:38:19.362Z · LW(p) · GW(p)

I take it you want to reduce a hypothetical imperative like "If you want to stretch, then you ought to stand" into a physically-kosher causal claim like "standing is causally necessary for satisfying your desire to stretch". Now, I'm skeptical about this reduction, simply because I don't see how a mere causal claim could provide any normative direction whatsoever. But in any case, it seems that you could equally well reduce a categorical imperative like "Regardless of what you personally want, if you are the only one around a drowning child, then you ought to help save it from drowning" into a physically-kosher causal claim like "your help is causally necessary for the survival of the drowning child". Both causal claims are equally physicalistic, and both seem equally relevant to their respective imperative, so both physicalistic reductions seem equally promising.

Of course, the question "why should I assume that the drowning's child survival actually matters?" seems reasonable enough. But so does the question "why should I assume that satisfying my desire to stretch actually matters?" If the first question jeopardizes the reduction of the categorical imperative, then the second question would also jeopardize the reduction of the hypothetical imperative.

Replies from: lukeprog
comment by lukeprog · 2011-06-11T06:44:27.818Z · LW(p) · GW(p)

it seems that you could equally well reduce a categorical imperative like "Regardless of what you personally want, if you are the only one around a drowning child, then you ought to help save it from drowning" into a physically-kosher causal claim like "your help is causally necessary for the survival of the drowning child"

I'm not used to the term 'categorical imperative' being used in this way. Normally, a categorical imperative is one that holds not just regardless of what you want, but regardless of any stated ends at all. Your reduction of this categorical imperative assumes an end of 'survival of the drowning child.' To me, that's what we call a hypothetical imperative, and that's why it can be reduced in that way, by translating normativity into statements about means and ends.

Replies from: poqwku
comment by poqwku · 2011-06-15T00:15:45.863Z · LW(p) · GW(p)

I think you must be mistaken about categorical imperatives and ends.

In the Groundwork, Kant deems rational nature ('humanity' in us) to be an end in itself which can ground a categorical imperative; in the 2nd Critique, he deems the highest good (happiness in proportion to virtue) to be a necessary end of practical reason; and in the Metaphysics of Morals, he deems one's own perfection and the happiness of others to be ends that are also duties. For Kant, the categorical imperative of morality is directed at ends, but it is not a mere hypothetical imperative grounded in subjective ends.

And leaving Kant aside, plenty of moral systems aspiring to categorical force have ends at their center. Classic utilitarianism picks out the greatest overall balance of pleasure over pain as the one and only end of morality. Eudaimonist virtue ethics has as its end a well-lived flourishing life for the agent. Thomist ethics says an intellectual vision of God in the afterlife is the ultimate end of humans. These systems do not traffic in merely hypothetical imperatives: they present these ends as objectively worth pursuing, regardless of the agent's personal preferences.

But if your main point is simply that my 'reduction' assumes that the end ought to be pursued, and that this assumption is a form of cheating, then I agree. I've left the normativity unexplained and unreduced. But then in exactly the same way, your reduction of hypothetical imperatives assumes that effective means to one's ends ought to be taken, and this assumption is also a form of cheating. You have also left the normativity unexplained and unreduced.

So I don't see how hypothetical imperatives are any more fit for naturalistic reduction than categorical imperatives.

Replies from: lukeprog
comment by lukeprog · 2011-06-15T00:53:00.841Z · LW(p) · GW(p)

I'm sure I speak differently about categorical imperatives than Kant does. I haven't read much of Kant, and I don't regret that. In your language, what you call an "end in itself" is what I mean to pick out when I talk about an imperative that "holds... regardless of any stated [arbitrary, desired, subjective] ends at all." I don't really know what it means for something to be an "end in itself". Kant's idea seemed to be that we ought to do X regardless of what anyone wants. Your way (and perhaps Kant's way) of talking about this is to say that we ought to do X regardless of what anyone wants because X leads to a particular "end in itself", whatever that means.

your reduction of hypothetical imperatives assumes that effective means to one's ends ought to be taken, and this assumption is also a form of cheating

My reduction of hypothetical imperatives doesn't assume this. It only translates 'ought' into a prediction about what would realize the specified end. If there's something mysterious left over, I'm curious what you think it is and whether it is real or merely a figment of folk wisdom and linguistic practice.

Replies from: poqwku, Peterdjones
comment by poqwku · 2011-06-15T01:02:06.552Z · LW(p) · GW(p)

So, on your use of 'end', an 'end' cannot be objective and unconditional? I think that's a highly uncommon use of the term.

But if you go this way, it seems like it's less of a reduction of 'ought' and more of a misinterpretation, like reducing 'Santa Claus'-talk into talk about Christmas cheer, or 'God'-talk into talk of love.

After all, one important constraint on any interpretation of any 'ought to X' is that it should be positive towards X as opposed to negative or neutral, in favor of some action or attitude as opposed to against it or indifferent. But a mere predictive causal claim doesn't have any valence at all: it's just a neutral claim about what will probably lead to what, without anything positive or negative. So any attempt to reduce oughts to predictive causal claims seems doomed to failure.

EDIT: For the record, I'm an expressivist about normativity, and I think any attempt to understand it in terms of some actual or hypothetical ontology that could serve as the truth-conditions for a descriptive belief is a mistake. The mystery, I would say, lies in a descriptive interpretation of normativity, not in normativity itself.

Replies from: lukeprog
comment by lukeprog · 2011-06-15T01:35:19.659Z · LW(p) · GW(p)

So, on your use of 'end', an 'end' cannot be objective and unconditional? I think that's a highly uncommon use of the term.

No, I just should have been clearer that when I said "stated end" I meant "subjective, desired end". As commonly used, 'end' includes unconditional ends; I just haven't ever been presented with an argument that persuaded me to think that such ends exist.

one important constraint on any interpretation of any 'ought to X' is that it should be positive towards X as opposed to negative or neutral, in favor of some action or attitude as opposed to against it or indifferent. But a mere predictive causal claim doesn't have any valence at all: it's just a neutral claim about what will probably lead to what, without anything positive or negative. So any attempt to reduce oughts to predictive causal claims seems doomed to failure.

Sure, you can stick that feature into your meaning of 'ought' if you want. But I'm not going too deep into the conceptual analysis game with you. If you want to include positive valence in the meaning of a hypothetical ought, then we could translate like this:

"If you want to stretch, you ought to stand" -> "standing will make it more likely that you fulfill your desire to stretch, and I have positive affect toward you standing so as to fulfill your desire to stretch"

As I said in my post, expressivism is fine - and certainly true of how many speakers use normative language. I happen to be particularly interested in those who use normative language cognitively, and whether their stated moral judgments are true or false, and under which conditions - which is why I'm investigating translations/reductions of normative language into natural statements that have truth conditions.

comment by Peterdjones · 2011-06-21T12:35:18.882Z · LW(p) · GW(p)

I don't really know what it means for something to be an "end in itself".

What do you think human wellbeing is for, then?

comment by Peterdjones · 2011-06-11T21:56:05.648Z · LW(p) · GW(p)

What reasons are there for doubting the existence of categorical imperatives that do not equally count against the existence of hypothetical imperatives?

The set of non-ethical categorical imperatives is non-empty. The set of non-ethical hypothetical imperatives is non-empty. Hypothetical imperatives include instrumental rules (you have to use X to achieve Y), game-playing rules, etc.

Replies from: poqwku
comment by poqwku · 2011-06-15T00:28:41.884Z · LW(p) · GW(p)

How exactly does this answer the question?

The set of non-ethical categorical imperatives is non-empty.

I agree. Epistemic imperatives are categorical, but non-ethical.

The set of non-ethical hypothetical imperatives is non-empty. Hypothetical imperatives include instrumental rules (you have to use X to achieve Y), game-playing rules, etc.

Right, those are examples where non-ethical hypothetical imperatives often show up.

So how does this add up to a reason to think there is a case against categorical imperatives that doesn't equally well count against hypothetical imperatives?

comment by Peterdjones · 2011-06-09T01:04:08.189Z · LW(p) · GW(p)

My thought process on sources of normativity looks something like this:

People claim all sorts of justifications for 'ought' statements (aka normative statements). Some justify ought statements with respect to natural law or divine commands or non-natural normative properties or categorical imperatives.

There are a lot of kinds of normative/"ought" statements. Some relate to games, some to rationality, and so on. Hypothetical "ought" statements do not require any special metaphysical apparatus to explain them; they just require rules and payoffs. Categorical imperatives are another story.

When I interrogate such warrants, I usually find that the only evidence on offer is folk wisdom, intuitions, and conventional linguistic practice.

One man's conventional linguistic practice is another's analytical truth.

Replies from: poqwku
comment by poqwku · 2011-06-11T00:15:28.466Z · LW(p) · GW(p)

Hypothetical "ought" statements do not require any special metaphysical apparatus to explain them, they just require rules and payoffs. Categorical imperatives are another story.

Rules and payoffs explain "ought" statements only if you assume that the rules are worth following and the payoffs worth pursuing. But if hypothetical imperatives can help themselves to such assumptions (assuming e.g. that one's own desires ought to be satisfied), then categorical imperatives can help themselves to such assumptions (assuming e.g. that everyone's desires ought to be satisfied, or that everyone's happiness ought to be maximized, or that everyone ought to develop certain character traits).

Replies from: Peterdjones
comment by Peterdjones · 2011-06-11T21:38:17.771Z · LW(p) · GW(p)

Rules and payoffs explain "ought" statements only if you assume that the rules are worth following and the payoffs worth pursuing.

I don't think so. You ought to use a hammer to drive in nails even if you don't want to drive in nails. Anyone who is playing chess should move the bishop diagonally. That doesn't mean you are playing chess.

Of course those are hypothetical, and non-ethical. It might well be the case that the only categorical imperatives are moral categorical imperatives; that ethics is the only area where you should do things or refrain from things unconditionally.

Replies from: poqwku
comment by poqwku · 2011-06-15T00:27:11.324Z · LW(p) · GW(p)

I don't think so. You ought to use a hammer to drive in nails even if you don't want to drive in nails. Anyone who is playing chess should move the bishop diagonally. That doesn't mean you are playing chess.

Again, you're assuming that the rule 'if you're driving in nails, use a hammer' is worth following, and that the rule 'if you're playing chess, move bishops diagonally' is worth following. A nihilist would reject both of those rules as having any normative authority, and say that just because a game has rules, it doesn't mean that game-players ought to follow those rules; at most it means that lots and lots of rule-violations make the game go away.

comment by poqwku · 2011-06-06T19:33:49.240Z · LW(p) · GW(p)

I don't think hypothetical imperatives can be reduced. The if-ought of a hypothetical imperative is a full-blooded normative claim. But you can't reduce that to a simple if-then about cause and effect.

To see why, consider a nihilist about oughts. She recognizes the causal connections between calorie consumption/burning and weight loss. But she doesn't accept any claim about what people ought to do, even hypothetical imperatives about people who desire weight loss. This seems perfectly coherent: she accepts causal claims, but not normative claims, and there's no contradiction or incoherence there. But this means the causal claims she accepts are not conceptually equivalent to the normative claims she rejects.
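
Put schematically (my gloss of the argument, not a quotation), let C be the causal claim about calories and N the hypothetical imperative about weight loss:

\[
(C \equiv N) \Rightarrow \{C, \neg N\} \text{ is incoherent}; \qquad \{C, \neg N\} \text{ is coherent (the nihilist)}; \qquad \therefore\ C \not\equiv N.
\]

If the causal claim were conceptually equivalent to the normative claim, accepting the one while rejecting the other would be incoherent; since the nihilist's position is coherent, the two claims cannot be conceptually equivalent.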

For another way to see why, consider the causal claim "less calorie consumption and more calorie burning leads to weight loss". This causal claim points in no normative direction. It doesn't recommend anything, or register any approval, or send any positive or negative messages. Of course, we can take it in one direction or another, but only by combining it with separate normative claims:

  1. Less calorie consumption and more calorie burning leads to weight loss.
  2. People ought to take causally efficacious steps to satisfy their desires.
  3. Therefore, if you desire to lose weight, you ought to consume fewer calories and burn more calories.

Premise 2 is what provides the normativity. It points us in the direction of satisfying desires. But we could easily take things in the opposite direction.

  1. Less calorie consumption and more calorie burning leads to weight loss.
  2. People ought to take causally efficacious steps to frustrate their desires.
  3. Therefore, if you desire to lose weight, you ought to consume more calories and burn fewer calories.

Again, premise 2 is what provides the normativity. But it points us in the opposite direction, viz. the direction of frustrating desires.

So it's pretty clear that premise 1 has no normativity in it. Neither of the two 3's can be reduced to it, for we cannot arrive at a 3 without a 2.

Replies from: torekp
comment by torekp · 2011-06-07T00:51:37.569Z · LW(p) · GW(p)

I think stating premise 2 is a little odd. It is a bit "Tortoise and Achilles" all over again. If there's a norm hiding around here, it's an "ought" portrayed by the desire.

Second, conceptual analysis (or conceptual equivalence) is not necessary for reduction. Look at reduction in the sciences for examples: "water is H2O" was established empirically, not by analyzing the concept 'water'.

Replies from: poqwku
comment by poqwku · 2011-06-07T19:14:15.012Z · LW(p) · GW(p)

Well, I'll acknowledge that you could change premise 2 into an inference rule. But notice that you could change either premise 2—the pro-desire-satisfaction one and the pro-desire-frustration one—into an inference rule. Indeed, you could change any normative claim into an inference rule: you could change "people who want to have gay sex ought to go see a trained Baptist minister to get cured" into an inference rule, and then validly go from "I want to have gay sex" to "I ought to go see a trained Baptist minister to get cured". So from the fact that premise 2 could be changed into an inference rule, I don't think anything follows that might jeopardize its status as a full-blooded normative claim.

On the second point, I thought lukeprog was discussing direct conceptual reduction. But if he wants to provide hypothetical imperatives with a synthetic reduction, he'll need a theory of reference capable of explaining why the normative claim turns out to make reference to (and have its truth-conditions provided by) simple causal facts. And on this score, I think hypothetical imperatives and categorical moral imperatives are on an equal footing: since reductionist moral realists have a hard time with synthetic reductions, I would expect reductionist 'instrumental realists' to have a hard time as well.

Replies from: torekp
comment by torekp · 2011-06-08T22:16:37.724Z · LW(p) · GW(p)

For what it's worth, I think what's really being inferred by the advice-giver is:

  1. Granting your (advisee's) starting point, you ought to lose weight.
  2. Less calorie consumption and more calorie burning leads to weight loss.
  3. Therefore you ought to consume fewer calories and burn more calories.

The advisee's desire portrays the starting-point as a truth.

Replies from: poqwku
comment by poqwku · 2011-06-11T00:05:13.390Z · LW(p) · GW(p)

Perhaps so, but then the normativity stems from premise 1, leaving premise 2 as non-normative as ever. But the question is whether premise 2 could be a plausible reduction basis for normative claims.

comment by RichardChappell · 2011-06-03T19:23:57.976Z · LW(p) · GW(p)

Tangentially:

facts about the well-being of conscious creatures are mind-dependent facts

How so? (Note that a proposition may be in some sense about minds without its truth value being mind-dependent. E.g. "Any experience of red is an experience of colour" is true regardless of what minds exist. I would think the same is true of, e.g., "All else equal, pain is bad for the experiencer.")

Replies from: lukeprog, Oligopsony
comment by lukeprog · 2011-06-08T06:28:45.005Z · LW(p) · GW(p)

I'm borrowing the concept 'well-being of conscious creatures' from Sam Harris, who seems to think of it in terms of mind-dependent facts, perhaps involving (e.g.) brain states we might call 'pain' or 'pleasure'.

Replies from: RichardChappell
comment by RichardChappell · 2011-06-08T17:32:26.198Z · LW(p) · GW(p)

That doesn't really answer my question. Let me try again. There are two things you might mean by "mind dependent".

(1) You might just mean "makes some reference to the mind". So, for example, the necessary truth that "Any experience of red is an experience of colour" would also count as "mind-dependent" in this sense. (This seems a very misleading usage though.)

(2) More naturally, "mind dependent" might be taken to mean that the truth of the claim depends upon certain states of mind actually existing. But "pain is bad for people" (like my example above) does not seem to be mind-dependent in this sense.

Which did you have in mind?

Replies from: lukeprog
comment by lukeprog · 2011-06-08T22:35:58.361Z · LW(p) · GW(p)

By saying that "facts about the well-being of conscious creatures are mind-dependent facts," I just mean that statements about the well-being of conscious creatures are made true or false by facts about mind states. A statement about my well-being is mind-dependent in the sense that (as I am using the term) it is a statement about my brain states. A statement about the distance between my chair and my desk is not a statement about brain states, and would be true or false whether or not our Hubble volume still contained any minds.

comment by Oligopsony · 2011-06-04T20:25:34.964Z · LW(p) · GW(p)

I believe the "facts" in question were synthetic ones ("all else being equal, being set on fire is bad for the person set on fire"), not analytic ones ("all else equal, pain is bad for the experiencer").

Replies from: RichardChappell
comment by RichardChappell · 2011-06-04T20:38:03.082Z · LW(p) · GW(p)

It's not analytic that pain is bad. Imagine some crazy soul who thinks that pain is intrinsically good for you. This person is deeply confused, but their error is not linguistic (as if they asserted "bachelors are female"). They could be perfectly competent speakers of the English language, and even logically omniscient. The problem is that such a person is morally incompetent. They have bizarrely mistaken ideas about what things are good (desirable) for people, and this is a substantive (synthetic), not merely analytic, matter.

Perhaps the thought is that contingent (rather than necessary) facts about wellbeing are mind-dependent. That's still not totally obvious to me, but it does at least seem less clearly false than the original (unrestricted) claim.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-05T17:00:16.070Z · LW(p) · GW(p)

The issue is more whether anyone could think pain is good for themselves. One could imagine a situation where pain receptors connect up to pleasure centers, but then it becomes a moot point as to whether that is actually pain.

Replies from: RichardChappell
comment by RichardChappell · 2011-06-05T17:56:46.842Z · LW(p) · GW(p)

Yes, I was imagining someone who thought that unmitigated pain and suffering was good for everyone, themselves included. Such a person is nuts, but hardly inconceivable.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-06-05T18:30:27.673Z · LW(p) · GW(p)

In the not-so-distant past, some surgeons opposed pain-killing medication for post-operative pain, believing that the pain was essential to the healing process.

There are also reports from patients who have had morphine for pain relief that the pain is still there, but it takes the hurting out of it.

Replies from: RichardChappell, poqwku
comment by RichardChappell · 2011-06-05T19:10:18.062Z · LW(p) · GW(p)

Just to clarify: By 'pain' I mean the hurtful aspect of the sensation, not the base sensation that could remain in the absence of its hurting.

In your first paragraph you describe people who take pain to be instrumentally useful in some circumstances, to bring about some other end (e.g. healing) which is itself good. I take no stand on that empirical issue. I'm talking about the crazy normative view that pain is itself (i.e. non-instrumentally) good.

comment by poqwku · 2011-06-06T19:15:33.974Z · LW(p) · GW(p)

I'm with RichardChappell. Deeming pain intrinsically good is pretty easy, though highly unusual. And deeming pain intrinsically value-neutral is even easier, and not all that unusual. Thus there's nothing incoherent about denying that pain is intrinsically bad.

For example, plenty of people think animal pain doesn't matter, not intrinsically anyway. Perhaps Kant is the most famous example. He thinks cruelty to animals is morally wrong only because it is likely to make us cruel to humans. But the animal pain itself is, Kant thinks, irrelevant and perfectly devoid of any value or disvalue.

Certain Stoics would even say that human pain doesn't matter, not intrinsically anyway. It matters only inasmuch as it causally relates to one's own virtue, and it has no intrinsic relevance to what is good.

If someone went even further, reversing common sense and insisting that pain were intrinsically good, that would be unusual. But it wouldn't be incoherent. Not even close. To invent an example, suppose an extremely credulous religious person were told by their leader that pain is intrinsically good. This true-believer would then be convinced that pain is intrinsically good, and they would try to bring about pain in themselves and in others (so long as they didn't violate any other moral rules endorsed by the leader).

comment by Vladimir_Nesov · 2011-06-02T03:10:48.228Z · LW(p) · GW(p)

That's why we need to decode the cognitive algorithms that generate our questions about value and morality. ... So how can the Empathic Metaethicist answer Alex's question? We don't know the details yet. For example, we don't have a completed cognitive neuroscience.

Assume you have complete knowledge of all the details of the way the human brain works, and a detailed trace of the sequence of neurological events that leads people to ask moral questions. Then what?

My only guess is that you look this trace over using your current moral judgment, and decide which changes to the algorithm you expect will make the judgments of this brain better. But this is not an FAI-grade tool for defining morality (unless we have to go the uploads-driven way, in which case you just gradually and manually improve humans for a very long time).

Replies from: lukeprog, BobTheBob
comment by lukeprog · 2011-06-08T07:03:49.459Z · LW(p) · GW(p)

Yes, a completed cognitive neuroscience would certainly not be sufficient for defining the motivational system of an FAI.

comment by BobTheBob · 2011-06-02T03:37:05.536Z · LW(p) · GW(p)

I think you have hit the nail on the head. There may surely be lots of scientifically interesting and useful reasons for investigating the fine details of the brain processes which eventuate in behaviours (or the uttering of words) which we interpret as moral, but it is far from obvious that this kind of knowledge will advance our understanding of morality.

More generally, there is plausibly a tension (at least on the surface) between two dominant themes on this site:

1) Naturalism: All knowledge, including knowledge of what's rational (moral), is scientific. To learn what's rational (moral), our only option is to study our native cognitive endowments.

2) Our/evolution's imperfection: You can't trust your untutored native cognitive endowment to make rational (or moral) judgements. Unless we make an effort not to, we make irrational judgements.

comment by Vladimir_Nesov · 2011-06-02T02:52:27.460Z · LW(p) · GW(p)

The problem is not that there is no way to identify 'good' or 'right' (as used intuitively, without tabooing) with a certain X. The problem is that X is huge and complicated and we don't (yet) have access to its structure.

Strictly speaking, we can exhibit any definition of "good", even one that doesn't make any of the errors you pointed out, and still ask "Is it good?". The criteria for exhibiting a particular definition are ultimately non-rigorous, even if the selected definition is, so we can always examine them further.

Moore's argument might fail in the unintended use case of post-FAI morality not because at some point there might be no more potential for asking the question, but because, as with "Does 2+2 equal 4?", there is a point at which we are certain enough to turn to other projects, even if in principle some uncertainty and lack of clarity in the intended meaning remains. It's not at all clear this will ever happen to morality.

Replies from: Garren
comment by Garren · 2011-06-03T16:11:22.944Z · LW(p) · GW(p)

Strictly speaking, we can exhibit any definition of "good", even one that doesn't make any of the errors you pointed out, and still ask "Is it good?".

Moore was correct that no alternate concrete meaning is identical to 'good'; his mistake was thinking that 'good' had its own concrete meaning. As Paul Ziff put it, good means 'answers to an interest', where the interest is semantically variable.

In math terms, the open question argument would be like asking for the value of f(z) and, when someone answers with f(3), pointing out that f(z) is not the same thing as f(3).
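
A toy instance may help (the particular function is my illustration, not Ziff's example):

\[
f(z) = z + 1, \qquad f(3) = 4.
\]

"Is f(z) equal to 4?" remains open so long as z is a free variable; once we stipulate z = 3, "Is f(3) equal to 4?" is closed. Likewise, "but is it good?" stays open only until the semantically variable interest that 'good' answers to has been fixed.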

I think the 'huge and complicated' X that Luke mentions is supposed to be the set of all inputs to f(z) that a given person is disposed to use. Or maybe the aggregate of all such sets for people.

comment by Perplexed · 2011-06-01T02:05:03.590Z · LW(p) · GW(p)

In The Is-Ought Gap, Luke writes

If someone makes a claim of the 'ought' type, either they are talking about the world of is, or they are talking about the world of is not. If they are talking about the world of is not, then I quickly lose interest because the world of is not isn't my subject of interest.

Ironically, this is where I quickly lost interest in this article, because glib word-play isn't my subject of interest.

Replies from: lukeprog, Jonathan_Graehl
comment by lukeprog · 2011-06-08T07:00:58.714Z · LW(p) · GW(p)

I wasn't trying to be glib. All I'm saying is that if somebody uses 'ought' terms to refer to things that can't be reduced to physics, then they're either talking about things that don't exist or they aren't physicalists - and physicalism vs. non-physicalism is beyond the scope of this particular article.

Replies from: TAG, Perplexed
comment by TAG · 2017-12-13T11:48:01.665Z · LW(p) · GW(p)

Can "the best way to play chess" be reduced to physics? Can it usefully be rduced to physics.

comment by Perplexed · 2011-06-08T14:36:45.957Z · LW(p) · GW(p)

... if somebody uses 'ought' terms to refer to things that can't be reduced to physics ...

Your emphasis on can't. But do you really mean "can't be reduced"? Or are you rather excluding anything that "hasn't been reduced"? In the original posting you seem to be demanding that anyone using "ought language" be prepared to perform the reduction on the spot:

If they are making a claim about the world of is, then I ask them which part of the world of is they are discussing. I ask which ought-reductionism they have in mind.

I'm sorry, Luke. My commitment to reductionism is more of a vague hope for future physicalist explanation. I did not agree to use only that language which I can simultaneously show to be universally tabooable.

Replies from: lukeprog
comment by lukeprog · 2011-06-08T17:27:31.635Z · LW(p) · GW(p)

Perplexed,

I feel you're being uncharitable. Do you really think we're having a disagreement about how reductionism works? I'm not demanding that somebody have a full down-to-atoms reduction in mind whenever they use any term whatever. My target in that paragraph is people who themselves don't think their use of 'ought' will ever reduce into physics - for example Richard Chappell.

Replies from: Perplexed
comment by Perplexed · 2011-06-08T17:40:53.814Z · LW(p) · GW(p)

I feel you're being uncharitable.

I believe that you do feel that. But if you think that excluding people like Chappell from the discussion is a fruitful way to proceed, then I am curious why you believe that what you are discussing is properly termed "Meta-ethics".

ETA: Ok, I'll try for a bit more charity. Why does it matter whether a reduction of "ought" to physics can or cannot be accomplished? Why not simply present your ideas and then point out that they have the virtue of making 'ought' reducible?

It seems to me that you are trying too hard to be a good follower of Eliezer. Take my word for it; that isn't necessary to be respected in this forum. It is possible (though admittedly sometimes expensive) to communicate without first achieving some kind of metaphysical solid ground.

Replies from: lukeprog
comment by lukeprog · 2011-06-08T22:37:27.737Z · LW(p) · GW(p)

I didn't spend more time on reductionism because that is covered in the reductionism sequence. This post is already too long; I don't have the space to re-hash all the arguments for reductionism. Chappell already knows what's in the reductionism sequence, and why he disagrees with it, and that is a different discussion.

Replies from: timtyler
comment by timtyler · 2011-06-09T23:04:45.898Z · LW(p) · GW(p)

Luke's position seems more reasonable here.

I have to say that the "following" around here is kinda irritating to me too.

There's a scene from The Life of Brian that comes to mind - the "you've got to think for yourselves" scene.

comment by Jonathan_Graehl · 2011-06-01T05:09:22.683Z · LW(p) · GW(p)

Can you elaborate?

Replies from: Perplexed
comment by Perplexed · 2011-06-01T17:00:09.576Z · LW(p) · GW(p)

Can you elaborate?

"World of is" vs "World of is not" is a false dichotomy.

"I lose interest" is possibly the worst and most uncharitable of all forms of philosophical rhetoric. (Hence my 'turnabout'.)

Luke gave a false impression of what non-naturalistic ethics comprises by providing only a single example of such a position - a position which was absurd because it presumed the existence of a deity.

Even Hume allowed that works of mathematics need not be "committed to the flames". Mathematics, to my mind, does not deal with the "world of is", neither does it deal with the "world of is not". Yet if someone were to provide a non-reductionist, but axiomatic, 'definition' of 'good' using the methods and practices of mathematical logic, I certainly would not dismiss it as "uninteresting".

Replies from: Matt_Simpson, Matt_Simpson
comment by Matt_Simpson · 2011-06-01T18:30:47.783Z · LW(p) · GW(p)

But mathematics does deal with the "world of is", either potentially or as "rules of thought" (all thoughts are in minds). God, on the other hand, is different.

Replies from: magfrump
comment by magfrump · 2011-06-02T01:44:00.368Z · LW(p) · GW(p)

In my mind, the practice of mathematics is the practice of distinguishing between the "world of is" and the "world of is not": if something is mathematically provable, it is; if something is mathematically disprovable, it is not.

comment by Matt_Simpson · 2011-06-01T18:17:50.794Z · LW(p) · GW(p)

"World of is" vs "World of is not" is a false dichotomy.

Existence and non-existence pretty much exhaust all the possibilities here, no?

comment by Will_Sawin · 2011-06-01T01:41:05.797Z · LW(p) · GW(p)

Consider this dialog:

Student: "Wise master, what ought I do?"

Wise master: "You ought to help the poor by giving 50% of your income to efficient charity and supporting the European-style welfare state."

Student: "Alright."

*student runs off and gives 50% of his or her income to efficient charity and supports the European-style welfare state

This dialog rings true as a fact about ought statements - once we become convinced of them, they do and should constrain our behavior.

But my dialogs and your dialogs contradict each other! Because if "ought" determines our behavior, and we can define what "ought" means, then we can define proper behavior into existence - a construction as absurd as Descartes defining God into existence or Plato defining man as both a hairless featherless biped and a mortal.

We must give up one, and I say give up yours. "ought" is one of those words that we are not free to define - it has a single meaning. Look to its consequences, not its causes.

Replies from: lukeprog, Normal_Anomaly, wedrifid, Peterdjones, Will_Sawin
comment by lukeprog · 2011-06-01T02:10:38.624Z · LW(p) · GW(p)

I'm not sure I understand. Are you saying that we are not free to stipulate definitions for the word-tools we use (when it comes to morality), because you have a conceptual intuition in favor of motivational internalism for the use of 'ought' terms?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T02:43:59.386Z · LW(p) · GW(p)

Wikipedia defines motivational internalism as the belief that:

there is an internal, necessary connection between one's conviction that X ought to be done and one's motivation to do X.

I want to view this as a morally necessary connection. One should do what one ought to do, and this serves as the definition of "ought".

You will note that I am using circular definitions. That is because I can't define "should" except in terms of things that have a hidden "should" in there. But I am trying to access the part of you that understands what I am saying.

The useful analogue is this:

modus ponens: "If you know 'A', and you know 'If A, then B', then you know 'B'"

It's a circular definition getting at something which you can't put into words. I would be wrong to define "If-then" as something else, like maybe "If A, then B" means "75% of elephants with A written on them believe B", because it's already defined.

Does that make any sense?

Replies from: lukeprog, steven0461
comment by lukeprog · 2011-06-01T14:46:32.457Z · LW(p) · GW(p)

Unfortunately, I still don't follow you. Or at least, the only interpretations I've come up with look so obviously false that I resist attributing them to you. Maybe I can grok your disagreement from another angle. Let me try to pinpoint where we disagree. I hope you'll have some time to approach mutual understanding on this issue. When Will Sawin disagrees with me, I pay attention.

Do you agree that there are many words X such that X is used by different humans to mean slightly different things?

Do you agree that there are many words X such that different humans have different intuitions about the exact extension of X, especially in bizarre sci-fi hypothetical scenarios?

Do you agree that many humans use imperative terms like 'ought' and 'should' to communicate particular meanings, with these meanings often being stipulated within the context of a certain community?

I'll stop there for now.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T15:42:32.683Z · LW(p) · GW(p)

Thanks. I'm thinking of doing a post on the discussion section where I can explain where my intuitions come from in more detail.

For your questions:

Yes.

Yes.

I don't really know what the third question means. It seems like the primary use of "ought" and "should" is as part of an attempt to convince people to do what you say they should do. I would say that is the meaning being communicated. There are various ways this could work within the context of a community. Are you saying that you're only trying to convince members of that community?

Replies from: lukeprog
comment by lukeprog · 2011-06-04T21:19:07.456Z · LW(p) · GW(p)

Note: I'm planning to come back to this discussion in a few days. Recently my time has been swamped running SI's summer minicamp.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-04T21:34:43.200Z · LW(p) · GW(p)

I may also write something which expresses my ideas in a new, more concise and clear form.

Replies from: lukeprog
comment by lukeprog · 2011-06-05T23:20:39.835Z · LW(p) · GW(p)

I think that would be the most efficient thing to do. For now, I'll wait on that.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-09T18:59:17.380Z · LW(p) · GW(p)

If you haven't noticed, I just made that post.

Replies from: lukeprog, lukeprog
comment by lukeprog · 2011-06-13T05:58:23.234Z · LW(p) · GW(p)

Any response to this?

comment by lukeprog · 2011-06-10T16:55:31.767Z · LW(p) · GW(p)

Excellent. I'm busy the next few days, but I'll respond when I can, on that thread.

comment by steven0461 · 2011-06-01T03:43:52.356Z · LW(p) · GW(p)

That is because I can't define "should" except in terms of things that don't have a hidden "should" in there.

I think you meant to leave out either the "except" or the "don't"?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T10:57:13.179Z · LW(p) · GW(p)

Correct.

comment by Normal_Anomaly · 2011-06-01T02:09:34.571Z · LW(p) · GW(p)

Your dialogue looks similar to the one about losing weight above. I can define proper behavior given my terminal values. If I want to lose weight, I should eat less. Upon learning this fact, I start eating less. My values and some facts about the world are sufficient to determine my proper behavior. "Defining my behavior into existence" seems no more absurd to me than defining the rational action using a decision theory.

I'm not sure I've explained myself very clearly here. Please advise on what, if anything, I'm saying is hard to understand.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T02:47:33.579Z · LW(p) · GW(p)

If it is the case that you should do what you want, yes.

If you want to punch babies, then you should not punch babies. (x)

If you should lose weight, then you should eat less.

Proper values and some facts about the world are sufficient to determine proper behavior.

What are proper values? Well, they're the kind of values that determine proper behavior.

x: Saying this requires me to know a moral fact. This moral fact is a consequence of an assumption I made about the true nature of reality. But to assume is to stoop lower than to define.

Replies from: Normal_Anomaly, Antisuji, Will_Sawin, Peterdjones
comment by Normal_Anomaly · 2011-06-01T11:06:17.930Z · LW(p) · GW(p)

If you want to punch babies, then you should not punch babies. (x)

This is WillSawin_Should. NormalAnomaly_Should says the same thing, because we're both humans. #$%^$_Should, where #$%^$ is the name of an alien from planet Mog, may say something completely different. You and I both use the letter sequence s-h-o-u-l-d to refer to the output of our own unique should-functions.
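A toy sketch of that picture (all names and values here are invented, purely for illustration):

```python
# Toy model: each speaker's "should" names their own unique should-function;
# only the surface spelling s-h-o-u-l-d is shared.

def normal_anomaly_should(action):
    # Hypothetical stand-in for one human's values.
    return action != "punch babies"

def mog_alien_should(action):
    # A Mog alien's values might be completely different.
    return action == "punch babies"

speakers = {
    "Normal_Anomaly": normal_anomaly_should,
    "Mog alien": mog_alien_should,
}

for name, should in speakers.items():
    print(name, "endorses baby-punching?", should("punch babies"))
```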

Lukeprog, the above is how I understand your post. Is it correct?

Replies from: Will_Sawin, wedrifid
comment by Will_Sawin · 2011-06-01T11:21:58.853Z · LW(p) · GW(p)

No. We both use the letter sequence "should" to direct our actions.

We believe that we should follow the results of our should-functions. We believe that the alien from Mog is wrong to follow the results of his should-function. These are beliefs, not definitions.

Imagine if you said "The sun will rise tomorrow" and I responded:

"This is NormalAnomaly_Will. WillSawin_Will says the same thing, because we're both humans. #$%^$_Will, where #$%^$ is the name of an alien from planet Mog, may say something completely different. You and I both use the letter sequence w-i-l-l to refer to the output of our own unique will-functions."

Replies from: wedrifid
comment by wedrifid · 2011-06-01T12:16:57.533Z · LW(p) · GW(p)

Normal_Anomaly's ontology is coherent. What you describe regarding beliefs is also coherent but refers to a different part of reality space than what Normal is trying to describe.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T15:52:18.019Z · LW(p) · GW(p)

I don't understand what "ontology" and "reality space" mean in this context.

Here's a guess:

You're saying that the word "WillSawin_Should" is a reasonable word to use. It is well-defined, and useful in some contexts. But Plain-Old-Should is also a word with a meaning that is useful in some contexts.

in which case I would agree with you.

Replies from: wedrifid
comment by wedrifid · 2011-06-01T16:07:48.961Z · LW(p) · GW(p)

I was trying to convey that when you speak of beliefs and determination of actions you are describing an entirely different concept than what Normal_Anomaly was describing. To that extent, presenting your statements as a contradiction of Normal's is both a conversational and an epistemic error.

comment by wedrifid · 2011-06-01T11:13:43.690Z · LW(p) · GW(p)

You can write_underscored_names by escaping the _ by preceding it with a \.

comment by Antisuji · 2011-06-01T06:27:04.734Z · LW(p) · GW(p)

So you're defining "should" to describe actions that best further one's terminal values? Or is there an additional "shouldness" about terminal values too?

Also, regarding

Because if "ought" determines our [proper] behavior, and we can define what "ought" means, then we can define proper behavior into existence

in the grandparent, it sounds like you're equivocating between defining what the word "ought" means and changing the true nature of the concept that "ought" usually refers to. (Unless I was wrong to add the "proper" in the quote, in which case I actually don't know what point you were making.) To wit: "ought" is just a word that we can define as we like, but the concept that "ought" usually refers to is an adaptation, and declaring that "ought" actually means something different will not change our actual behavior, except insofar as you succeed in changing others' terminal values.


Incidentally this is a very slippery topic for me to talk about for reasons that I don't fully understand, but I suspect it has to do with my moral intuitions constantly intervening and loudly claiming "no, it should be this way!" and the like. I also strongly suspect that this difficulty is nearly universal among humans.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T11:04:16.801Z · LW(p) · GW(p)

Or is there an additional "shouldness" about terminal values too?

There is.

(Unless I was wrong to add the "proper" in the quote, in which case I actually don't know what point you were making.)

You weren't.

in the grandparent, it sounds like you're equivocating between defining what the word "ought" means and changing the true nature of the concept that "ought" usually refers to.

I do not think I am equivocating. Rather, I disagree with lukeprog about what people are changing when they disagree about morality.

lukeprog thinks that people disagree about what ought means / the definition of ought.

I believe that (almost) everybody thinks "ought" means the same thing, and that people disagree about the concept that "ought" usually refers to.

This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, "ought" is defined the other way - by what you can do when you infer that a statement about "ought" is true.

Is it the case that Katy ought to buy a car? Well, I don't know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.
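A crude sketch of this reverse direction (the code and names are made up, just to illustrate the shape of the claim): the ought-belief is characterized by its output role in choosing actions, not by any input conditions.

```python
# Toy model: an ought-belief is "defined" by what follows from accepting it,
# namely that a rational agent acts on it.

class RationalAgent:
    def __init__(self):
        self.ought_beliefs = set()  # statements like "buy a car"

    def become_convinced_ought(self, action):
        self.ought_beliefs.add(action)

    def act(self):
        # Whatever the agent is convinced it ought to do, it does.
        return sorted(self.ought_beliefs)

katy = RationalAgent()
katy.become_convinced_ought("buy a car")
print(katy.act())  # ['buy a car']
```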

Replies from: prase
comment by prase · 2011-06-01T15:50:09.912Z · LW(p) · GW(p)

I believe that (almost) everybody thinks "ought" means the same thing, and that people disagree about the concept that "ought" usually refers to.

What is the difference between what "ought" means and what it refers to?

Edit:

This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, "ought" is defined the other way - by what you can do when you infer that a statement about "ought" is true.

In the above, do you say that "You ought to do X." is exactly equivalent to the command "Do X!", and "I ought to do X." means "I will do X on the first opportunity and not by accident."?

Is it the case that Katy ought to buy a car? Well, I don't know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.

Ought we base the definition of "ought" on a pretty complicated notion of rationality?

Replies from: Will_Sawin, Will_Sawin
comment by Will_Sawin · 2011-06-01T16:07:52.037Z · LW(p) · GW(p)

In the above, do you say that "You ought to do X." is exactly equivalent to the command "Do X!", and "I ought to do X." means "I will do X on the first opportunity and not by accident."?

To the first one, yes, but they have different connotations.

To the second one, sort of. "I" can get fuzzy here. I have akrasia problems. I should do my work, but I will not do it for a while. If you cut out a sufficiently small portion of my mind then this portion doesn't have the opportunity to do my work until it actually does my work, because the rest of my mind is preventing it.

Furthermore, I am thinking about them more internally: "should" isn't part of predicting actions, it's part of choosing them.

Ought we base the definition of "ought" on a pretty complicated notion of rationality?

It doesn't seem complicated to me. Certainly simpler than lukeprog's definitions.

These issues are ones that should be cleared up by the discussion post I'm going to write in a second.

Replies from: prase, Peterdjones
comment by prase · 2011-06-01T16:13:40.148Z · LW(p) · GW(p)

These issues are ones that should be cleared up by the discussion post I'm going to write in a second.

It seems that my further questions rather ought to wait a second, then.

comment by Peterdjones · 2011-06-01T19:04:42.315Z · LW(p) · GW(p)

"You ought to do X." is exactly equivalent to the command"Do X!"

It isn't equivalent to a moral "ought", since one person can command another to do something they both think is immoral.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T19:53:09.336Z · LW(p) · GW(p)

This would require one of two situations:

a. A person consisting of multiple competing subagents, where the "ought" used by one is not the same as the "ought" used by another.

b. A person with two different systems of morality, one dictating what is moral and the other dictating how much deviation from it they will accept.

In either case you would need two words because there are two different kinds of should in the mind.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-01T20:51:49.691Z · LW(p) · GW(p)

I gave the situation of one person commanding another. You replied with a scenario about one person with different internal systems. I don't know why you did that.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T21:04:27.258Z · LW(p) · GW(p)

It's generally believed that you shouldn't tell people to do things they shouldn't do.

So your problem reduces to the problem of someone who does things that they believe they shouldn't.

If you're not willing to make that reduction, I'll have to think about things further.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-01T21:47:54.514Z · LW(p) · GW(p)

I think it is obvious that it involves someone doing something they think they shouldn't. Which is not uncommon.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T22:02:46.176Z · LW(p) · GW(p)

Which requires either a or b.

comment by Will_Sawin · 2011-06-01T15:55:06.826Z · LW(p) · GW(p)

(yay, I finally caused a confusion that should be really easy to clear up!)

Alice and Bob agree that "Earth" means "that giant thing under us". Alice and Bob disagree about the Earth, though. They disagree about that giant thing under them. Alice thinks it's round, and Bob thinks it's flat.

Replies from: Antisuji, prase
comment by Antisuji · 2011-06-01T18:12:42.076Z · LW(p) · GW(p)

Yes, this is the distinction I had in mind.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T19:57:36.930Z · LW(p) · GW(p)

So do you now think that I do not equivocate?

Replies from: Antisuji
comment by Antisuji · 2011-06-02T04:39:00.107Z · LW(p) · GW(p)

No, I think there is still equivocation in the claim that your dialog and Luke's contradict one another. Luke is talking about the meaning of the word "Earth" and you are talking about the giant thing under us.

I also do not completely buy the assertion that "ought" is special because it has a reverse definition. This assertion itself sounds to me like a top-down definition of the ordinary type, if an unusually complex one.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-02T10:54:05.838Z · LW(p) · GW(p)

Well there are two possible definitions, Luke's and my reverse definition (or top-down definition of the ordinary type).

If you accept both definitions, then you have just proved that the right thing to do is XYZ. One shouldn't be able to prove this just from definitions. Therefore you cannot accept both definitions.

Replies from: torekp
comment by torekp · 2011-06-04T03:01:23.983Z · LW(p) · GW(p)

Let's try an analogy in another normative arena.

Suppose we propose to define rationality extensionally. Scientists study rationality for many decades and finally come up with a comprehensive definition of rationality that becomes the consensus. And then they start using that definition to shape their own inference patterns. "The rational thing to do is XYZ," they conclude, using their definitions.

Where's the problem?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-04T18:29:09.218Z · LW(p) · GW(p)

Because "the rational thing" means "the thing that produces good results", you need a shouldness claim to scientifically study rationality claims, and science cannot produce shouldness claims.

The hidden assumption is something like "Good results are those produced by thinking processes that people think are rational" or something along those lines. If you accept that assumption, or any similar assumption, such a study is valid.

Replies from: torekp
comment by torekp · 2011-06-05T00:40:03.971Z · LW(p) · GW(p)

Let's temporarily segregate rational thought vs. rational action, though they will ultimately need to be reconciled. I think that we can, and must, characterize rational thought first. We must, because "good results" are good only insofar as they are desired by a rational agent. We can, because while human beings aren't very good individually at explicitly defining rationality, they are good, collectively via the scientific enterprise, at knowing it when they see it.

In this context, "science cannot produce shouldness claims" is contentious. Best not to make a premise out of it.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-05T04:20:20.360Z · LW(p) · GW(p)

But why are rational thoughts good? Because they produce rational actions.

The circular bootstrapping way of doing this may not be untenable. (This is similar to arguments of Eliezer's.)

In this context, "science cannot produce shouldness claims" is contentious. Best not to make a premise out of it.

You're making hidden assumptions about what "good" and "rational" mean. Of course, some people accept those assumptions, that's their prerogative. But those assumptions are not true by definition.

Replies from: torekp
comment by torekp · 2011-06-06T00:47:04.193Z · LW(p) · GW(p)

Yes I am making assumptions (they're pretty open) about goodness and rationality. Or better: hypotheses. I advance them because I think theories constructed around them will ultimately cohere better with our most confident judgments and discoveries. Try them and see. By all means try alternatives and compare.

Circular bootstrapping is fine. And sure, the main reason rational thoughts are good is that they typically lead to good actions. But they're still rational thoughts - thus correct - even when they don't.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-06T02:39:12.248Z · LW(p) · GW(p)

I think if you accept that you are making "assumptions" or "hypotheses" you agree with me.

Because you are thinking about the moral issue in a way reminiscent of scientific issues, as a quest for truth, not as a proof-by-definition.

comment by prase · 2011-06-01T16:02:19.970Z · LW(p) · GW(p)

I have difficulty applying the analogy to "ought".

comment by Will_Sawin · 2011-06-01T22:15:10.667Z · LW(p) · GW(p)

I'm not really sure why this was downvoted, compared to everything else I've written on the topic.

Did it have to do with the excessive bolding somehow? Do my claims sound especially stupid stated like this?

Replies from: wedrifid
comment by wedrifid · 2011-06-01T22:41:30.952Z · LW(p) · GW(p)

I'm not really sure why this was downvoted, compared to everything else I've written on the topic.

It seems to completely miss Normal_Anomaly's point, speaking right past him. As to the 'compared to everything else you have written' I refrained from downvoting your replies to myself even though I would have downvoted them if they were replies to a third party. It is a general policy of mine that I find practical, all else being equal.

comment by Peterdjones · 2011-06-01T18:49:57.655Z · LW(p) · GW(p)

What are proper values? Well, they're the kind of values that determine proper behaviour.

Not for objective metaethicists, who seem to be able to escape your circle.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T19:55:58.835Z · LW(p) · GW(p)

This doesn't seem to actually be a term, after a few seconds of googling. Could you provide a link to a description of this philosophy?

comment by wedrifid · 2011-06-01T12:25:13.525Z · LW(p) · GW(p)

We must give up one, and I say give up yours.

I would much prefer to keep Luke's. Basically because it is actually useful when communicating with others who aren't interested in having the other person's values rammed down their throat. Because if you went around saying an ought at me using your definition then obviously you should expect me to reject it regardless of what the content is. Because the way you are using the term is such that it assumes that the recipient is ultimately subject to something that refers to your own mind.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T16:00:32.480Z · LW(p) · GW(p)

So if you tell me I should go do something, and I agree with you, and I never go do that, you would see nothing inconsistent?

I'm totally comfortable with claims of the form "If you believe XYZ normative statements, then you should do W." It should work just as well as conditionals about physical statements.

Replies from: wedrifid
comment by wedrifid · 2011-06-01T16:10:46.946Z · LW(p) · GW(p)

So if you tell me I should go do something, and I agree with you, and I never go do that, you would see nothing inconsistent?

No, that is not something that is implied by my statements.

It is an example of someone not acting according to their own professed ideals and is inconsistent in the same way that all such things are.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T16:20:01.114Z · LW(p) · GW(p)

So you're saying that I am only allowed to use "should" to mean "WillSawin_should". I can't use it to mean "wedrifid_should".

This seems like an odd way to run a conversation to me.

Replies from: wedrifid
comment by wedrifid · 2011-06-01T16:29:54.933Z · LW(p) · GW(p)

So you're saying that I am only allowed to use "should" to mean "WillSawin_should". I can't use it to mean "wedrifid_should".

No, that is another rather bizarre thing which I definitely did not say. Perhaps it will be best for me to just leave it with my initial affirmation of Luke's post:

We must give up one, and I say give up yours.

I would much prefer to keep Luke's.

In my observation Luke's system for reducing moral claims provides more potential for enabling effective communication between agents and a more comprehensive way to form a useful epistemic model of such conversations.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T17:10:30.300Z · LW(p) · GW(p)

No, that is another rather bizarre thing which I definitely did not say. Perhaps it will be best for me to just leave it with my initial affirmation of Luke's post:

So suppose I say:

"I wedrifid_should do X" and then don't do X. Clearly, I am not being inconsistent.

but if I say:

"I should do X" and then don't do X then I am being inconsistent.

Something must therefore prevent me from using "should" to mean "wedrifid_should".

Replies from: Manfred, wedrifid
comment by Manfred · 2011-06-01T22:34:46.491Z · LW(p) · GW(p)

I'd agree that you can (and probably do) use plain old "should" to mean multiple things. The trouble is that this isn't very useful for communication. So when communicating, we humans use heuristics to figure out which "should" is meant.

In the example of the conversation, if I say "you should X" and you say "I agree," then I generally use a shortcut to think you meant Will-should. The obvious reason for this is that if you meant Manfred-should, you would have just repeated my own statement back to me, which would be not communicating anything, and it's a decent shortcut to assume that when people say something they want to communicate. The only other obvious "should" in the conversation is Will-should, so it's a good guess that you meant Will-should.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T23:31:10.459Z · LW(p) · GW(p)

"I agree" generally means the same thing as repeating someone's statement back at them. We can expand:

"You wedrifd_should do X"

"I agree, I will_should do X"

I seem to be making an error of interpretation here if I say things the way you normally say them! Why, in this instance, is it considered normal and acceptable to interpret professed agreement as expressing a different belief than the one being agreed to?

It all seems fishy to me.

Replies from: Manfred
comment by Manfred · 2011-06-02T00:14:25.318Z · LW(p) · GW(p)

Huh, yeah, that is weird. But on thinking about it, I can only think of two situations in which I've heard or used "I agree." One is if there's a problem with an unsure solution, where it means "My solution-finding algorithm also returned that," and the other is if someone offers a suggestion about what should be done, where I seem to be claiming it usually means "My should-finding algorithm also returned that."

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-02T01:48:40.531Z · LW(p) · GW(p)

In the first case, would you say that the Manfred_solution is something or other? You and I mean something different by "solution"?

Of course not.

So why would you do something different for "should"?

Replies from: Manfred
comment by Manfred · 2011-06-02T01:58:56.404Z · LW(p) · GW(p)

Because there's no objective standard against which "should algorithms" can be tested, like there is for "solution-finding algorithms". If there were no objective standard for solutions, I would absolutely stop talking about "the solution" and start talking about the Manfred_solution.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-02T02:02:44.048Z · LW(p) · GW(p)

Didn't you say in the other thread that we can disagree about the proper state of the world?

When we do that, what thing are we disagreeing about? It's certainly not a standard, but how can it be subjective?

That's the objective thing I am talking about.

Replies from: Manfred
comment by Manfred · 2011-06-02T04:51:52.904Z · LW(p) · GW(p)

Hm. I agree that you can disagree about some world-state that you'd like, but I don't understand how we could move that from "we disagree" to "there is one specific world-state that is the standard." So I stand by "no objective standard" for now.

Replies from: Peterdjones, Will_Sawin
comment by Peterdjones · 2011-06-02T12:49:52.994Z · LW(p) · GW(p)

I assume you are talking about proper or desirable world-states rather than actual ones.

comment by Will_Sawin · 2011-06-02T10:50:05.056Z · LW(p) · GW(p)

I didn't say it was the standard.

The idea is this.

If we disagree about what world state is best, there has to be some kind of statement I believe and you don't, right? Otherwise, we wouldn't disagree. Some kind of statement like "This world state is best."

Replies from: Manfred
comment by Manfred · 2011-06-02T22:20:36.029Z · LW(p) · GW(p)

But the difference isn't about some measurable property of the world, but about internal algorithms for deciding what to do.

Sure, to the extent that humans are irrational and can pit one desire against another, arguing about how to determine "best" is not a total waste of time, but I don't think that has much bearing on subjectivity.

I'm losing the thread of the conversation at this point.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-02T22:32:05.903Z · LW(p) · GW(p)

I have no solution to that problem.

comment by wedrifid · 2011-06-01T22:16:56.942Z · LW(p) · GW(p)

Perhaps the meaning of the paragraph you quote wasn't clear - I was trying hard to be polite rather than frank. You seem to be attacking a straw man using rhetorical questions so trivial that I would consider them disingenuous prior to adjusting for things like the illusion of transparency. Your conversation with lukeprog seems like one with more potential for useful communication. He cares about the subject far more than I do.

comment by Peterdjones · 2011-06-01T18:48:06.173Z · LW(p) · GW(p)

But my dialogs and your dialogs contradict each other! Because if "ought" determines our behavior, and we can define what "ought" means, then we can define proper behavior into existence

Moral ideas don't determine behaviour with any great reliability, so there is no analytical or necessary relationship there. If that's what you were getting at.

comment by Will_Sawin · 2011-06-01T18:03:45.826Z · LW(p) · GW(p)

I don't want to spam, but if people haven't noticed, then hopefully this comment will inform them that my first-ever LessWrong post, which might or might not make this clearer, is up.

comment by lukeprog · 2011-06-01T01:08:57.656Z · LW(p) · GW(p)

I have created anchors for the sections, BTW:

Many Moral Reductionisms
Cognitivism vs. Non-Cognitivism
Objective vs. Subjective
Relative vs. Absolute
Moore's Open Question Argument
The Is-Ought Gap
Moral Realism vs. Anti-Realism
Empathic Metaethics

Replies from: Jack
comment by Jack · 2011-08-21T18:36:58.988Z · LW(p) · GW(p)

(Replying to the comment instead of the post to be sure you'll see this)

In "The Is-Ought Gap" you conclude

Either our intended meaning of 'ought' refers (eventually) to the world of math and physics (in which case the is-ought gap is bridged), or else it doesn't (in which case it fails to refer).

I have a lot to say about this particular issue but I'm not sure if you think you've exhausted the issue in this post or if you plan to come back to it. Just to begin with, though, I hope you're aware that the two reasonable camps that take the gap most seriously would both agree with this conclusion. The issue is exactly this: we don't think 'ought' refers.

comment by zefreak · 2011-06-03T19:07:15.164Z · LW(p) · GW(p)

I think you are incorrect with regards to Hume's is-ought gap, although I find its relevance to be somewhat overstated. A hypothetical imperative such as your example relies on an equivocation between 'ought' as (1) a normative injunction and (2) conveying a possible causal pathway from here to there.

-

Here is the incorrect syllogism:

Premise 1: A desires C (is)

Premise 2: B will produce C (is)

Conclusion: A ought to do B (ought)

-

There is a hidden normative premise that is often ignored. It is

Premise 3: A should obtain its desires. (ought)

-

The correct syllogism would then be:

Premise 1 (is): A desires C

Premise 2 (is): B will produce C

Premise 3 (ought): A ought to obtain its desires.

Conclusion: A ought to do B (ought)

-

The necessity of Premise 3 is made clear by use of an admittedly extreme example:

P1: Hitler wants to kill a great number of people

P2: Zyklon B will kill a great number of people

C1: Hitler ought to use Zyklon B to kill a great number of people

While the conclusion is derived from the premises using definition (2) of the word 'ought', few would express it as a normative recommendation.

-

Hume's fact/value dichotomy remains valid. A normative conclusion can only be validly deduced from a group of premises including at least one which is itself normative.
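One way to regiment the point (schematic notation of my own, nothing more): write D(x, c) for "x desires c", P(b, c) for "doing b produces c", and O(x, b) for "x ought to do b". Then:

```latex
\begin{align*}
  &\text{P1 (is):}    && D(A, C) \\
  &\text{P2 (is):}    && P(B, C) \\
  &\text{P3 (ought):} && \forall x\,\forall b\,\forall c\;
        \bigl( D(x, c) \land P(b, c) \bigr) \rightarrow O(x, b) \\
  &\text{C (ought):}  && O(A, B)
\end{align*}
```

Drop P3 and no formula containing O is derivable from P1 and P2 alone, which is just Hume's point.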

comment by Peterdjones · 2011-06-01T17:04:03.510Z · LW(p) · GW(p)

Or, perhaps someone has a moral reductionism in mind during a particular use of 'ought' language. Perhaps by "You ought to be more forgiving" they really mean "If you are more forgiving, this is likely to increase the amount of pleasure in the world."

As you can see, it is not hard to bridge the is-ought gap.

I don't think it is impossible, but it is harder than you are making out. The examples given are not complete syllogisms, or other logical forms. It is easy to validly derive an ought from an is: you start with the factual statement and then invoke a bridging principle of the form:

if [is-statement] then [ought-statement]

However, the argument is not sound unless the bridging statement is true. But the bridging statement is itself a derivation of an ought from an is, so there is a kind of circularity there. You are assuming that the ought-from-is problem has been solved in order to solve it.

As I said, I don't think the situation is hopeless. The bridging premise is not exactly the same thing as a moral argument: it is usually more of a general statement along the lines of "if X increases well-being, it should be done". That provides some scope for an analytical justification of bridging principles.

comment by RichardWein · 2011-09-27T00:31:08.300Z · LW(p) · GW(p)

[Re-post with correction]

Hi Luke,

I've questioned your metaethical views before (in your "desirist" days) and I think you're making similar mistakes now as then. But rather than rehash old criticisms I'd like to make a different point.

Since you claim to be taking a scientific or naturalized approach to philosophy I would expect you to offer evidence in support of your position. Yet I see nothing here specifically identified as evidence, and very little that could be construed as evidence. I don't see how your approach here is significantly different from the intuition-based philosophical approaches that you've criticised elsewhere.

Some people who say "Stealing is wrong" are really just trying to express emotions: "Stealing? Yuck!" Others use moral judgments like "Stealing is wrong" to express commands: "Don't steal!" Still others use moral judgments like "Stealing is wrong" to assert factual claims, such as "stealing is against the will of God" or "stealing is a practice that usually adds pain rather than pleasure to the world."

How do you know this? Where's the evidence? I don't doubt that some people say, "Stealing is wrong because it's against the will of God". But where's the evidence that they use "Stealing is wrong" to mean "Stealing is against the will of God"?

But moral terms and value terms are about what we want.

How do you know? And this seems to contradict your claim above that some people use "Stealing is wrong" to mean "stealing is against the will of God". That's not about what we want. (I say that moral terms are primarily about obligations, not wants.)

Replies from: None
comment by [deleted] · 2011-09-27T00:37:21.273Z · LW(p) · GW(p)

You can edit (pencil on paper icon) your posts, you don't have to delete and repost.

comment by Peterdjones · 2011-06-01T18:37:15.575Z · LW(p) · GW(p)

the Austere Metaethicist replies:

"Tell me what you mean by 'right', and I will tell you what is the right thing to do."

That is, of course, not what is right, but what she thinks is right. So far, so subjective.

You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is.

Again, that is not the right thing, that is just what she thinks. An objective metaethicist could answer the question of what is right.

But moral terms and value terms are about what we want.

No: they are value terms about what we should want and be and do.

And the "we" is important here. Your metaethicists are like therapsists or life coaches or personal shoppers who advise people how to make their individual lives spiffier. But moral action is not solipsistic: moral choices affect other people. That's why we can't stop at "whatever you think is right is right". I don't want one of your metaethicists telling my neighbour how to be a better serial killer.

Replies from: Manfred, torekp, nshepperd
comment by Manfred · 2011-06-01T22:03:19.637Z · LW(p) · GW(p)

I think you're missing the point of why there isn't a universal code of ethics. If two people disagree about the definition of a word, the way forward isn't to jump into the argument on one side or the other. The way forward is to stop using that word that way. This is subjective with respect to the word (if you don't also specify how you're defining it) but we stopped using that word that way anyhow - it's objective with respect to the world, and that's what's important to how people act.

Replies from: Peterdjones, Will_Sawin
comment by Peterdjones · 2011-06-01T22:46:06.190Z · LW(p) · GW(p)

If two people disagree about the definition of a word, the way forward isn't to jump into the argument on one side or the other. The way forward is to stop using that word that way.

If someone uses "cat" to mean "animal that barks", should everyone then stop using "cat"?

This is subjective with respect to the word (if you don't also specify how you're defining it) but we stopped using that word that way anyhow - it's objective with respect to the world, and that's what's important to how people act.

I can't make any sense of that.

Replies from: Manfred, FAWS
comment by Manfred · 2011-06-02T00:07:51.158Z · LW(p) · GW(p)

If someone uses "cat" to mean "animal that barks", should everyone then stop using "cat"?

You're right, it's more complicated. It seems like the solution here is to make word choice a coordination problem, communication being a major goal of language - if a million people use it one way and one person uses it the other way, the one should say "an animal that barks." On the other hand if everyone has the same several definitions for a word, like "sound," then splitting up the word when necessary improves communication.

This is subjective with respect to the word (if you don't also specify how you're defining it) but we stopped using that word that way anyhow - it's objective with respect to the world, and that's what's important to how people act.

I can't make any sense of that.

You complain that letting people specify what they mean by "right" makes "right" subjective where people diverge. But this doesn't make the communication subjective if people replace "right" by an objective criterion for the world, so the bad stuff associated with just drifting off into subjectivity doesn't happen.

Although I guess you could be saying that "right" being subjective is inherently bad. But I would suggest that you're thinking about right_Peter, which is still objective.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T00:37:01.566Z · LW(p) · GW(p)

If everyone has their own notion of right, we still have the Bad Thing that an action can only be allowed or forbidden, not Peter-allowed and Manfred-forbidden.

Replies from: Manfred
comment by Manfred · 2011-06-02T00:59:29.763Z · LW(p) · GW(p)

? So it's impossible for two people to rationally disagree about whether or not an action is forbidden if the external state of the world is the same? I see no reason why "forbidden" in the moral sense should be objective.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T01:02:16.275Z · LW(p) · GW(p)

So it's impossible for two people to rationally disagree about whether or not an action is forbidden if the external state of the world is the same?

If they disagree and they are both being rational, where's the objectivity?

I see no reason why "forbidden" in the moral sense should be objective.

Try explaining to someone that something they like should be forbidden because you don't like it.

Replies from: Manfred
comment by Manfred · 2011-06-02T01:56:14.287Z · LW(p) · GW(p)

So it's impossible for two people to rationally disagree about whether or not an action is forbidden if the external state of the world is the same?

If they disagree and they are both being rational, where's the objectivity?

I agree, it doesn't look like there is much in this concept.

I see no reason why "forbidden" in the moral sense should be objective.

Try explaining to someone that something they like should be forbidden because you don't like it.

Okay. "If you don't stop, I will shoot you."

But seriously, WTF? Is that supposed to be an argument that if something is morally forbidden to one person it should be the same for another person?

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T02:23:11.301Z · LW(p) · GW(p)

I agree, it doesn't look like there is much [objectivity] in this concept.

If you don't go looking for it, you won't find it. As is so often the case on LW, that door has been shut without even trying to see what is behind it.

I see no reason why "forbidden" in the moral sense should be objective

Try explaining to someone that something they like should be forbidden because you don't like it.

Okay. "If you don't stop, I will shoot you."

Arbitrary rules enforced with threats of violence are not an optimal outcome for me.

If you have an option other than

a) subjective laissez faire where serial killers are allowed to do their own thing

or

b) Tyranny

I'd be glad to hear it. I know I have.

But seriously, WTF? Is that supposed to be an argument that if something is morally forbidden to one person it should be the same for another person?

Of course. No one should murder. I'm surprised you find that surprising.

Replies from: Manfred, Nornagest
comment by Manfred · 2011-06-02T05:04:56.334Z · LW(p) · GW(p)

If you don't go looking for it, you won't find it.

Rather than chastising me, why not explain how "forbidden" is objective?

Re: shooting people: it was a joke. The WTF was not with respect to shooting people. It was because your demand was a non sequitur.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T12:48:18.335Z · LW(p) · GW(p)

Rather than chastising me, why not explain how "forbidden" is objective?

For a start: are you aware of the various (googleable) standard defenses of metaethical objectivism?

Replies from: Manfred, Manfred
comment by Manfred · 2011-06-05T22:33:18.416Z · LW(p) · GW(p)

Is there some other place where you have defended metaethical (or even moral) objectivism?

comment by Manfred · 2011-06-02T22:06:56.575Z · LW(p) · GW(p)

I am aware of various defenses and find them all thoroughly lacking, but I doubt I have exhausted even the common possibilities. Go for it.

comment by Nornagest · 2011-06-02T06:42:28.277Z · LW(p) · GW(p)

Arbitrary rules enforced with threats of violence is not an optimal outcome for me. If you have an option other than

a) subjective laissez faire where serial killers are allowed to do their own thing

or

b) Tyranny

I'd be glad to hear it. I know I have.

Assuming you've generalized that properly and aren't seriously arguing the most egregious false dichotomy I've seen in weeks, I'm afraid that condemning the set of ethics based on social or personal consequences as "tyranny" amounts to dismissing an entire school of thought on aesthetic grounds. Forgive me if I don't find such a thing particularly convincing.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T12:46:22.947Z · LW(p) · GW(p)

As I stated ("I know I have"), I don't think a) and b) are the only options either. I don't see why the decision to call force-based ethics "tyranny" counts as "aesthetic". I don't see why you are so hostile to reason-based ethics. You and various other people seem to think the rational/objective approach to ethics needs to be dispensed with, but you don't say why.

comment by FAWS · 2011-06-02T00:52:56.652Z · LW(p) · GW(p)

If someone uses "cat" to mean "animal that barks", should everyone then stop using "cat"?

In conversations with that particular person, assuming they can't easily be persuaded to change their usage? Yes, definitely.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T01:05:36.571Z · LW(p) · GW(p)

That's hardly an optimal outcome. They are making a mistake, although it seems no one wants to admit that.

Replies from: FAWS
comment by FAWS · 2011-06-02T11:27:47.804Z · LW(p) · GW(p)

Obviously the "optimal outcome" would be the easy persuasion I mentioned. Do you think someone misusing that word justifies arbitrary high effort in persuasion, or drastic measures?

comment by Will_Sawin · 2011-06-01T22:19:12.191Z · LW(p) · GW(p)

Why do you think it's the definition of the word that's at issue?

Replies from: Manfred
comment by Manfred · 2011-06-01T22:53:38.643Z · LW(p) · GW(p)

Because it is possible for people to disagree about whether something is right or wrong without disagreeing about the state of the world.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-01T23:24:27.580Z · LW(p) · GW(p)

Why must all disagreements be disagreements about the state of the world?

It seems to me like there are two kinds of disagreements, positive disagreements about the state of the world, and normative disagreements about the proper state of the world.

Nothing blows up when I believe that.

Knowing the fact that you just stated does not make fighting wars over morality seem less reasonable. In fact, it makes them seem more reasonable. Do you disagree? Do you think it makes sense to fight wars over a definition?

Replies from: Manfred
comment by Manfred · 2011-06-02T00:10:16.963Z · LW(p) · GW(p)

It seems to me like there are two kinds of disagreements, positive disagreements about the state of the world, and normative disagreements about the proper state of the world.

Sure. But if people start arguing over what's right, they should argue over the proper state of the world, not over what's "right."

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T00:11:26.548Z · LW(p) · GW(p)

I don't see much difference between "right" and "proper".

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-02T01:50:25.536Z · LW(p) · GW(p)

He could also mean that we have to argue about states of the world.

But what else would we argue about, normatively? Abstract concepts, say, like "drugs are bad". But then I would agree with him.

So I think we agree.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T02:24:36.967Z · LW(p) · GW(p)

Again, I don't follow you.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-02T02:31:38.697Z · LW(p) · GW(p)

So perhaps he is saying that people should argue over the proper state of the world and not over the right XYZ, for some concept XYZ.

For instance, people should argue over the proper state of the world, not the right flavor of ice cream.

That is a true statement, there is certainly no objective right flavor of ice cream.

It is the most reasonable explanation I can come up with.

comment by torekp · 2011-06-04T02:10:47.730Z · LW(p) · GW(p)

I agree that Luke's approach has at times seemed implausibly individualistic. Moral reasoning is interpersonal from the get-go.

comment by nshepperd · 2011-06-02T02:26:18.564Z · LW(p) · GW(p)

You're missing the point. The empathic metaethicist is trying to figure out what she means by 'right'. Assuming she's a well-adjusted human being, that's probably the same as what you mean by 'right', so with any luck we'll work out what you mean by 'right' as well (and hence, what "is right"). But we're not asking Alex what she thinks peterdjones.getMeaning("right").getExtension() is.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T02:31:25.228Z · LW(p) · GW(p)

That isn't a good theoretical argument that "right" has only a subjective definition, and it isn't practically as good as being able to make individual notions of moral rightness more correct, where they need fixing.

Replies from: nshepperd
comment by nshepperd · 2011-06-02T03:50:54.280Z · LW(p) · GW(p)

Whatever you mean by "only a subjective definition", I'm probably not trying to argue that.

Do you think you mean something other than what is right when you say "right"? If not, then replace "Alex" with "Peterdjones". Do you still think the empathic metaethicist is going to get the wrong answer when they try to figure out what you mean by "right"?

comment by Ichoran · 2013-11-19T23:17:57.053Z · LW(p) · GW(p)

Although I think this series of posts is interesting and mostly very well reasoned, I find the discussion about objectivity to be strangely crafted. At the risk of arguing about definitions: the hierarchy you lay out about objectivity is only remotely related to what I mean by objective, and my sense is that it doesn't cohere very well with common usage.

First, there seems to be no better reason to split off objective1 than to split off an objectiveA, meaning "software-independent facts". Okay, so I can't say anything objective about my web browser, just because we've said I can't. Why is this helpful? The only reason to split this out is if you are some sort of dualist; otherwise the mind is a computational phenomenon just like DNA replication or whatnot.

Second, as Emile already pointed out, nowhere in the hierarchy is uniqueness addressed, yet this is the clearest conventional distinction between subjectivity and objectivity. 5+7 = 12 for everyone. "Mint chocolate chip ice cream is better than rocky road ice cream" is not the case for everyone (in the conventional sense, anyway). So these things are all colloquially objective:

  • Rocky road has more chocolate than mint chocolate chip
  • The author of this post enjoys mint chocolate chip more than rocky road
  • My IPv4 address has a higher value than does lesswrong.org
  • The Bible describes God endorsing the consumption of only certain animals

Referring to God doesn't make things non-objective in the standard sense, presuming God exists. Of course, without a way to measure God's preferences, you may lose your theoretical objectivity, but any other single source or self-consistent group (e.g. the Pope) can fill in as a source for objective answers to what would otherwise be subjective questions.

The issue isn't whether that is subjective or objective; it's whether that method of gaining objectivity is practical and useful.

And since humans are the only sentient beings, I really fail to see what the practical distinction between 2 and 3 is, once you split off God (or any other singularly identifiable entity).

So I strongly suggest that this section ought to be rethought. Objectivity seems central to this sort of moral reductionism, and so it is worth using definitions that are not too misleading. Either the definitions should change, or there should be much more motivation about why we care about the distinctions between any of the definitions you've offered.

comment by theyangist · 2013-03-15T17:03:17.619Z · LW(p) · GW(p)

In jest, I'm going to accuse you of plagiarizing my work, then tell you two problems that I have with the approach that you've outlined, and then wax e-peen and say that mine is similar, but more instructive on moral discourse among all users of it.

My problem here is that we already have a common language (in terms of wants) which reduces "should" and provides the kind of plurality that you're seeking out of this approach. So there's no need to claim, "I'm using 'is good' to mean P," and then eke out a true statement whose truth is a matter of lexical elaboration, when instead people use moral language to alter others' or their own behavior, perspective, etc. without all of that theorizing on top of it. Most people don't make moral arguments on the basis of grand (meta)ethical stances. But it seems like everyone would have to be a deep ethicist to get any traction out of this theory, and that would mean that it really only explains how experts use moral language, but not how everyday people do. But why would one have to be deep about ethics to prescribe that someone do something? Can't people prescribe an action and justify it without appealing to a definition of 'goodness' at all?

My last issue here is a potential paradox that I spotted when I made two pluralistic moral reductionists confront each other:

PMR1: "Is Harris's defined 'being good' better than Craig's defined 'being good'?"

PMR2: "What do you mean by 'better'?"

PMR1: "I mean whatever you mean when you say 'better' in questions like this."

PMR2: "But by 'better', I mean whatever you mean when you say 'better' in questions like this."

comment by Vladimir_Nesov · 2011-06-02T02:17:01.522Z · LW(p) · GW(p)

Moral facts are objective3 if they are made true or false by facts independent of the opinions of humans, otherwise they are subjective3.

Are the words of this comment subjective3 (if we drop the specifier "moral" for a moment, given that this idea is not defined in this context)? They are determined by my reasoning, but they also obey the laws of physics. The notion of "independent" is not easy to make precise.

Replies from: lukeprog, wedrifid
comment by lukeprog · 2011-06-08T07:08:03.924Z · LW(p) · GW(p)

Right. It would take a lot of unpacking to make precise the term 'mind-independent.'

comment by wedrifid · 2011-06-02T11:43:26.714Z · LW(p) · GW(p)

Are the words of this comment subjective3 (if we drop the specifier "moral" for a moment, given that this idea is not defined in this context)? They are determined by my reasoning, but they also obey the laws of physics. The notion of "independent" is not easy to make precise.

I had similar thoughts when reading that passage.

There is (also) a certain sense in which the concept 'objectively subjective' is relevant here.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2021-06-25T12:39:47.047Z · LW(p) · GW(p)

"He's an unmarried man, but is he a bachelor?" This is a 'closed' question. The answer is obviously "Yes."

This is a false claim, unfortunately. Bachelor is not merely an "unmarried man"; it is an "unmarried man who could've been married in his society" (as all the long-discussed things like "#My 5-year-old son is a bachelor" and "#The Pope is a bachelor" show). ETA: the part beginning with "who" is probably a presupposition rather than an assertion ("The Pope is not a bachelor" is only felicitous if used as the metalinguistic "The Pope cannot be described by the word 'bachelor'", not if used in the literal sense "The Pope is married although it is not allowed").

Austere Metaethicist: Your definition doesn't connect to reality. It's like [? · GW] talking about atom-for-atom 'indexical identity' even though the world is [? · GW] made of configurations and amplitudes instead of Newtonian billiard balls. Gods don't exist. [? · GW]

This one is also not obviously true. We can ask what Sherlock Holmes would approve of despite the fact that he never existed (and I can imagine a morality that says "good is what Sherlock approves of" - a strange morality though it would be). Why can't we take "an essentially just and loving God" as a similar literary character?

comment by imbatman · 2013-05-01T20:15:07.354Z · LW(p) · GW(p)

Did the next few posts, which Luke mentioned would be about empathic metaethics, ever get written? I don't see them anywhere.

Replies from: lukeprog
comment by lukeprog · 2013-09-01T20:09:38.686Z · LW(p) · GW(p)

Now that I'm running MIRI, I'll probably never write it, but here's where I was headed.

comment by Wei Dai (Wei_Dai) · 2011-06-15T01:26:16.812Z · LW(p) · GW(p)

I'm commenting on the post-change "Is-Ought" section. It seems to me that most of the examples given of "ought" reductions do not support the conclusion that "the is-ought gap can be bridged", because the reductions are wrong. Anyone can propose a naturalistic definition of "ought", but at a minimum, to be right a translation of an "ought" statement into an "is" statement has to preserve the truth value of the "ought" statement, and most of the reductions listed fail to do so.

Take the first example:

"X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z."

If you give me any specific proposal for Y and Z, I'm pretty sure I can find an X such that "you ought to X" is obviously false and "X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z" is true, or vice versa.

comment by XiXiDu · 2011-06-05T18:51:11.080Z · LW(p) · GW(p)

Luke, what is the meta-purpose of this sequence? It seems that you are trying to dissolve the persisting confusion about ethics, but what is the underlying purpose? I doubt this sequence is a terminal goal of yours, am I wrong? The reason for my doubt is that, before you started writing this sequence, you decided to join the visiting fellows program at the Singularity Institute for Artificial Intelligence. Which makes me infer that the purpose of this sequence has to do with friendly AI research.

If I am right that the whole purpose of this sequence has to do with friendly AI research, then how is it useful to devote so many resources to explaining the basics, instead of trying to figure out how to design, or define mathematically, something that could extrapolate human volition?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-06-05T21:45:06.188Z · LW(p) · GW(p)

If I am right that the whole purpose of this sequence has to do with friendly AI research,

Luke confirms your guess in the comments section of this post.

then how is it useful to devote so much resources to explaining the basics, instead of trying to figure out how to design, or define mathematically, something that could extrapolate human volition?

I can see a couple of benefits of going over the basics first. One, it lets Luke confirm that his understanding of the basics is correct, and two, it can interest others to work in the same area (or if they are already interested, can help bring them up to the same level as Luke). I would like to see Luke continue this sequence. (Unless of course he already has some good new ideas about FAI, in which case write those down first!)

Replies from: XiXiDu, lukeprog
comment by XiXiDu · 2011-06-06T08:26:59.245Z · LW(p) · GW(p)

But how does ethics matter for friendly AI? If a friendly AI is going to figure out what humans desire, by extrapolating their volition, might it conclude that our volition is immoral and therefore undesirable?

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-06T11:09:46.214Z · LW(p) · GW(p)

What morality unit would it have other than humans' volition?

If it has another, separate volition unit, yes.

If not, then only if humans fundamentally self-contradict, which seems unlikely, because biological systems are pretty robust to that.

Replies from: XiXiDu
comment by XiXiDu · 2011-06-06T11:35:18.846Z · LW(p) · GW(p)

What morality unit would it have other than humans' volition?

I am not sure what a 'morality unit' is supposed to be or how it would be different from a volition unit. Morality is either part of our volition, instrumental, or an imperative. In each case one could ask what we want and arrive at morality.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-06T15:51:09.329Z · LW(p) · GW(p)

What I'm saying is that: If Clippy tried to calculate our volition, he would conclude that our volition is immoral. (Probably. Maybe our volition IS paperclips.)

But if we programmed an AI to calculate our volition and use that as its volition, and our morality as its morality, and so on, then it would not find our volition immoral unless we find our volition immoral, which seems unlikely.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-06T16:04:55.855Z · LW(p) · GW(p)

An AI that was smarter than us might deduce that we were not applying the Deep Structure of our morality properly because of bias or limited intelligence. It might conclude that human morality requires humans to greatly reduce their numbers in order to lessen the impact on other species, for instance.

comment by lukeprog · 2011-06-08T07:13:05.898Z · LW(p) · GW(p)

I can see a couple of benefits of going over the basics first. One, it lets Luke confirm that his understanding of the basics is correct, and two, it can interest others to work in the same area (or if they are already interested, can help bring them up to the same level as Luke).

Correct!

comment by Emile · 2011-06-02T12:34:23.838Z · LW(p) · GW(p)

Here are some common uses of the objective/subjective distinction in ethics:

  • Moral facts are objective1 if they are made true or false by mind-independent facts, otherwise they are subjective1.
  • Moral facts are objective2 if they are made true or false by facts independent of the opinions of sentient beings, otherwise they are subjective2.
  • Moral facts are objective3 if they are made true or false by facts independent of the opinions of humans, otherwise they are subjective3.

Hmm, that doesn't cover the way I understand "objective" and "subjective" - I see them as referring to whether the answer to a question varies from one person to another, i.e. whether they are a function of the person speaking. "Is that play interesting?" is subjective4, "Does that play follow the three unities of classical drama?" is objective4 - so to match your pattern it would be something like "Moral facts are objective4 if they are made true or false by facts independent of any single human, otherwise they are subjective4".

i.e. it seems to me that some things are considered "objective" even though they are merely social customs that may not hold in another society or another age.

But my thinking around this isn't very clear; I'm mostly reacting to the fact that none of the definitions you listed seemed to fit the way I understood the words.

Replies from: nshepperd, torekp, Peterdjones, lukeprog
comment by nshepperd · 2011-06-08T11:50:45.284Z · LW(p) · GW(p)

It seems to me that this subjective/objective difference can be somewhat dissolved by noticing (or, if you like, postulating) that "Is that play interesting?" is a different question when asked of Alice than when asked of Bob. The real question is, probably, in one case "Is that play interesting to Alice?" and in the other case "Is that play interesting to Bob?".

In sentences like "Do you like vanilla ice cream?" it's more explicit and clear that the meaning of "you" in it must be inferred from context (specifically, the identity of the questionee).

I would postulate that in general a "question" is directly about facts about the universe or mathematics (hopefully grounded in anticipation of sensory experience), but the sentence the question is asked in may contain ambiguous words which require context to interpret. And further, if the sentence together with the context is unambiguous, it indicates a "real question".

In either case of the first example, the question has a single truth value which doesn't change depending on who you ask, though Alice and Bob may react differently to the sound waves (or light patterns) that are understood as "Is that play interesting?".

Perhaps the sentences usually called "subjective4 questions" would be those containing such ambiguous words as "interesting" or "sexy", in regard to which people are likely to suffer from the mind projection fallacy... I don't know.
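The proposal can be sketched as a two-step pipeline (a toy illustration; the data and function names are invented for this sketch): first resolve the context-dependent words against the context, then evaluate the resulting unambiguous question, which has a single truth value:

```python
# Toy sketch: an indexical question is resolved against context before
# evaluation. The data and names here are invented for illustration only.

# A fixed fact about each person: how they react to the play.
finds_interesting = {"Alice": True, "Bob": False}

def resolve(sentence, context):
    """Map an ambiguous sentence plus context to the 'real question'."""
    if sentence == "Is that play interesting?":
        # 'interesting' works like 'you': its referent is the questionee.
        return ("interesting-to", context["questionee"])
    raise ValueError("unknown sentence")

def evaluate(real_question):
    """The real question has one answer, whoever evaluates it."""
    _kind, person = real_question
    return finds_interesting[person]

# Same sentence, two contexts -> two different real questions,
# each with a single observer-independent truth value.
print(evaluate(resolve("Is that play interesting?", {"questionee": "Alice"})))  # True
print(evaluate(resolve("Is that play interesting?", {"questionee": "Bob"})))    # False
```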

Replies from: Peterdjones
comment by Peterdjones · 2011-06-08T13:37:13.302Z · LW(p) · GW(p)

It seems to me that this subjective/objective difference can be somewhat dissolved by noticing (or, if you like, postulating) that "Is that play interesting?" is a different question when asked of Alice than when asked of Bob. The real question is, probably, in one case "Is that play interesting to Alice?" and in the other case "Is that play interesting to Bob?".

I don't see why that would count as a dissolution. The subjective is defined as varying with individuals and the objective is defined as not so varying. All that your restatements as ".."to Alice", "..to Bob" do is make that dependence explicit.

Perhaps the sentences usually called "subjective4 questions" would be those containing such ambiguous words as "interesting" or "sexy",

These words look ambiguous if you expect them to have a single value. They can be unambiguous if they have well defined values for different people. Subjective-ness isn't ambiguity.

Replies from: nshepperd
comment by nshepperd · 2011-06-09T10:30:35.441Z · LW(p) · GW(p)

I guess I just don't think it's interesting or in any way special that there are questions that are usually asked by including non-verbal information, or that there are words that refer to "the recipient of this message".

Wrt "ambiguity" all I mean is that you don't know what the speaker intended the word to refer to until you know who they were talking to. Steve said "Is that play interesting?" to Alice, implying he wants to know whether Alice found the play interesting. Responding with Bob's opinion of the play would be unhelpful, which is why you need the context. Maybe "ambiguous" isn't the best word for that. Whatever.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-09T12:25:46.715Z · LW(p) · GW(p)

I guess I just don't think it's interesting or in any way special that there are questions that are usually asked by including non-verbal information, or that there are words that refer to "the recipient of this message".

That would be OK if a) there were a clear distinction between the two categories and/or b) nothing much rode on the distinction.

But neither is the case wrt morality. a) We don't have "usual" practices with regard to moral language: committed objectivists speak one way, subjectivists another, and many others are undecided. b) It is hard to conceive of anything more important than morality -- and the two ways of speaking mean something different. Alice can't wish Bob to be punished just for doing something that's wrong-for-Alice.

comment by torekp · 2011-06-04T02:46:43.231Z · LW(p) · GW(p)

I agree that a very common meaning of objective/subjective is along your lines. In metaethics, it seems the most discussed. But I'd suggest a function of the speaker's beliefs and attitudes, not just a function of the speaker. My left hand has five fingers: this truth is a function of the speaker, but if we want to communicate effectively in English, we'd better call this fact objective.

comment by Peterdjones · 2011-06-08T10:51:12.297Z · LW(p) · GW(p)

I see them as referring to whether the answer to a question varies from one person to another, i.e. whether they are a function of the person speaking. "Is that play interesting?" is subjective4...

Answers can vary because people make mistakes about objective issues. You need to specify ideal agents to define objectivity, or to define subjectivity as answers that properly vary with individuals, i.e. the individual has the last word on their favourite flavour of ice cream.

comment by lukeprog · 2011-06-08T07:13:49.510Z · LW(p) · GW(p)

Certainly, there are far more than 3 uses of the objective/subjective distinction! Check the footnotes for a pointer to others.

comment by Garren · 2011-06-02T00:22:32.962Z · LW(p) · GW(p)

Sounds like a form of speaker relativism, with the 'empathic' project being about going beyond merely saying that people are expressing different fundamental standards, values, etc., to developing ways to bring those out into the open.

comment by Peterdjones · 2011-06-01T17:55:31.043Z · LW(p) · GW(p)

In a sense, pluralistic moral reductionism can be considered a robust form of moral 'realism', in the same way that pluralistic sound reductionism is a robust form of sound realism. "Yes, there really is sound, and we can locate it in reality — either as vibrations or as mental auditory experiences."

In a sense it can't be considered robust realism. The two meanings of "sound" don't lead to any confusion in practice: we know that powerful hi-fis produce a lot of sound1 and we know that deaf people standing in front of them won't hear any sound2.

However, morality is tied to the actions of oneself and others. There's no coherent situation in which gay marriage can be both right and wrong, because there is no coherent way it can be both allowed and forbidden. In practice we tend to just average out preferences when it comes to defining law and allocating resources...but the preferences in question might as well be subjective. It is difficult to see what is being bought by the objectivity in pluralistic objective reductionism.

But in another sense, pluralistic moral reductionism is 'anti-realist'. It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.)

Of course normative pluralism doesn't follow from descriptive pluralism. Some uses of moral terms may be wrong. You said Craig's theological morality was wrong. If reducing and naturalising morality doesn't allow you to say anybody is right or wrong, what is the point?

And as a reductionist approach to morality, it might also leave no room for moral theories which say there are universally binding moral rules for which the universe (e.g. via a God) will hold us accountable.

Or, e.g., not via God. Unless there are: if you can say one theory is wrong, you can say N-1 are wrong and arrive at one not-wrong theory by elimination. So I don't see how your approach supports pluralism as a fixed principle: it can only depend on how things pan out in practice.

Replies from: Manfred, wedrifid
comment by Manfred · 2011-06-01T22:09:01.385Z · LW(p) · GW(p)

There's no coherent situation in which gay marriage can be both right and wrong

There's also no coherent situation in which something can be sound and not sound.

You said Craigs theological morality was wrong.

Even worse than wrong (wrong here meaning wrong(Manfred))! It follows from a probably false premise, and so is undefined when phrased as coming from God.

If reducing and naturalising morality doesn't allow you to say anybody is right or wrong, what is the point?

Ending the arguments over this stuff.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-01T22:44:02.622Z · LW(p) · GW(p)

There's also no coherent situation in which something can be sound and not sound.

Sure there is: unheard compression waves.

Ending the arguments over this stuff.

Should we end all arguments by giving up on any fact of the matter?

Replies from: Manfred, gjm
comment by Manfred · 2011-06-01T23:48:33.041Z · LW(p) · GW(p)

There's also no coherent situation in which something can be sound and not sound.

Sure there is (uses multiple definitions)

So then let's do the same for the gay marriage example.

Should we end all arguments by giving up on any fact of the matter?

We should end all arguments about subjects where there is no fact of the matter.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T00:07:01.768Z · LW(p) · GW(p)

So then let's do the same for the gay marriage example.

I don't think you can. The different meanings of "sound" are disambiguated by context.

We should end all arguments about subjects where there is no fact of the matter.

OK. Then we just need to have the argument about whether there is a fact of the matter. Oh... we are.

Replies from: Manfred
comment by Manfred · 2011-06-02T00:17:56.367Z · LW(p) · GW(p)

Not anymore!

comment by gjm · 2011-06-01T23:54:35.936Z · LW(p) · GW(p)

If "unheard compression waves" counts as an instance in which something is "sound and not sound", then gay marriage can equally be "right and wrong" if, e.g., it is prohibited by the Bible but isn't any obstacle to maximizing net preference satisfaction.

No doubt you want to protest: No, but there really is a single absolute truth about what's right and what's wrong, whereas there really isn't a single absolute truth about what counts as "sound". Maybe so, but that's a highly disputable matter and you might do well to offer some actual arguments.

comment by wedrifid · 2011-06-01T22:51:02.259Z · LW(p) · GW(p)

If reducing and naturalising morality doesn't allow you to say anybody is right or wrong, what is the point?

To convince people that it is ok for me to smack Urist McHatedRival in the face with a sharp rock and take all his stuff. And probably his daughters. The same as any other discussion about morality.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-01T23:02:14.470Z · LW(p) · GW(p)

In that case, I'll make a note to steal all your stuff, since you wouldn't be so hypocritical as to object.

Replies from: wedrifid
comment by wedrifid · 2011-06-02T05:08:35.657Z · LW(p) · GW(p)

Are you telling me you are a Urist sympathiser and also practitioner of witchcraft? You ought not say such things. Come tribe, we should slay him too before the wrath of Odin falls upon us.

Moralizing is all about using hypocrisy effectively to achieve personal gains through social influence.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T12:42:22.141Z · LW(p) · GW(p)

You haven't argued that in any way. The point of my parable was that, while it may be easy to condemn other people's uses of morality, it is much harder to do without when you feel victimised yourself.

Replies from: wedrifid
comment by wedrifid · 2011-06-02T14:55:50.519Z · LW(p) · GW(p)

You haven't argued that in any way.

Or you simply haven't understood.

The point of my parable was that, while it may be easy to condemn other people's uses of morality, it is much harder to do without when you feel victimised yourself.

And your point would have been relevant if I was condemning the uses of morality rather than describing them for the sake of giving a straight answer to a rhetorical question. Your 'parable' misses the mark. A "devil's advocate" against a straw man, as it were.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T15:02:03.548Z · LW(p) · GW(p)

You haven't argued that in any way.

Or you simply haven't understood.

Haven't understood what? Please repeat whatever argument you believe I have missed.

And your point would have been relevant if I was condemning the uses of morality rather than describing them for the sake of giving a straight answer to a rhetorical question.

Your comments about reality read like a contentious claim to me. Retroactively calling a contentious claim a "description" does nothing to remove the need for argumentative support.

Replies from: wedrifid
comment by wedrifid · 2011-06-02T15:31:59.240Z · LW(p) · GW(p)

Your comments about reality read like a contentious claim to me. Retroactively calling a contentious claim a "description" does nothing to remove the need for argumentative support.

This is still not making sense as a reply in the context. If you had expressed disagreement with a claim of mine then we could use arguments to try to persuade people about the subject. But if you present a refutation of something I didn't say then it is an error for me to present arguments for whatever straw man you happened to attack.

The following is the claim I do make:

One of the points of engaging in extensive debate about systems of morality is that while doing so you have the opportunity to influence the way your community thinks about how people should behave. This allows you to gain practical advantages for yourself and cause harm to your rivals.

I'm not going to provide an extended treatise on that subject here - it isn't appropriate for the context. But if you do actually disagree with me then that gives you a clear position to argue against and I will leave you to do so without refutation.

Replies from: Peterdjones
comment by Peterdjones · 2011-06-02T19:09:51.365Z · LW(p) · GW(p)

One of the points of engaging in extensive debate about systems of morality is that while doing so you have the opportunity to influence the way your community thinks about how people should behave. This allows you to gain practical advantages for yourself and cause harm to your rivals.

I disagree with your original claim that gaining advantage is the point of engaging in moral debate. Your revised claim, that it is only one of the points, is uninteresting, since any tool can be misused.

comment by TimFreeman · 2011-06-01T17:05:33.361Z · LW(p) · GW(p)

If someone makes a claim of the 'ought' type, either they are talking about the world of is, or they are talking about the world of is not.

When people are talking about 'ought', they frequently mean something that's different from 'is' but is like 'is' in that it's a primary concept. For them, 'ought' is not something that can be defined in terms of 'is'.

So IMO people who are talking about 'ought' often really are talking about the world of 'ought', and that's about all you can say about it.

If they are talking about the world of is not, then I quickly lose interest because the world of is not isn't my subject of interest.

You're entitled to be uninterested in the world of 'ought' as a primary concept as well. I am not interested in it either, so I can't defend the point of view of these 'ought' believers. I have repeatedly had conversations with them, so I am sure they exist.

comment by TheAncientGeek · 2014-11-25T21:50:45.874Z · LW(p) · GW(p)

Why stop with morality? For any debate about some X, you can say that there is no real disagreement, and that the two parties are just talking about two different things, X1 and X2, which you arrive at by treating one set of theoretical claims as the definition of X1 and the other as the definition of X2.

I don't think this dissolution of disputes is a good idea in general, because I think theories aren't definitions, and I think there are real disputes, and I'm suspicious of universal solvents. I like the sound example, but I don't like the morality example: pluralism is bad for the same reason that subjectivism is bad... morality has a practical aspect... laws are passed, sanctions handed out, and those either happen or don't.

comment by luke_turner · 2012-04-24T15:56:09.249Z · LW(p) · GW(p)

I'm confused. If this article promotes pluralistic moral reductionism, why does Luke M. make a statement that sounds like a singular reduction rather than a plural one? I mean this statement here:

But moral terms and value terms are about what we want.

Isn't this a reduction of morality to a single thing, specifically, "what we want", i.e., desire?

Isn't this defining morality as the practice of acting on our desires? And does not this definition contradict other definitions/reductions?

comment by syzygy · 2012-03-08T07:37:01.770Z · LW(p) · GW(p)

Am I correct in (roughly) summarizing your conclusion in the following quote?

Yes, there really is morality, and we can locate it in reality — either as a set of facts about the well-being of conscious creatures, or as a set of facts about what an ideally rational and perfectly informed agent would prefer, or as some other set of natural facts.

If so, what is the logical difference between your theory and moral relativism? What if a person's set of natural facts for morality is "those acts which the culture I was born into deems to be moral"?

comment by lukeprog · 2011-06-24T19:53:55.459Z · LW(p) · GW(p)

Update: I've had the pleasure of discussing some of the topics from this post in episode 87 of video-podcast 'Truth-Driven Thinking.'

comment by lukeprog · 2011-06-14T17:37:34.653Z · LW(p) · GW(p)

I'm about to rewrite the section on the is-ought gap, for clarity, so here's a copy of the original text:

Many claim that you cannot infer an 'ought' statement from a series of 'is' statements. The objection comes from Hume, who wrote that he was surprised whenever an argument made of is and is not propositions suddenly shifted to an ought or ought not claim without explanation.

Many of Hume's followers concluded that the problem is not just that this shift happens without adequate explanation, but that one can never derive an 'ought' claim from a series of 'is' claims.

But why should this be? If someone makes a claim of the 'ought' type, either they are talking about the world of is, or they are talking about the world of is not. (What they're talking about is plausibly reducible to physics, or else it isn't.) If they are talking about the world of is not, then I quickly lose interest because the world of is not isn't my subject of interest. If they are making a claim about the world of is, then I ask them which part of the world of is they are discussing. I ask which ought-reductionism they have in mind.

Often, they have in mind a common ought-reductionism known as the hypothetical imperative. This is an ought of the kind: "If you desire to lose weight, then you ought to consume fewer calories than you burn." (But usually, people leave off the implied if statement, and simply say "You should eat less and exercise more.")

A hypothetical imperative (a kind of ought statement) reduces in a straightforward way to a prediction about reality (a kind of is statement). "If you desire to lose weight, then you ought to consume fewer calories than you burn" translates to the claim "If you consume fewer calories than you burn, then you will (or are, ceteris paribus, more likely to) fulfill your desire to lose weight."

Or, perhaps someone has a moral reductionism in mind during a particular use of 'ought' language. Perhaps by "You ought to be more forgiving" they really mean "If you are more forgiving, this is likely to increase the amount of pleasure in the world."

As you can see, it is not hard to bridge the is-ought gap. 'Ought' statements either collapse into the world of is (as with hypothetical imperatives or successful moral reductionisms), or else they collapse into the world of is not (as with Craig's moral theory of divine approval).
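Stated compactly (in notation the quoted text itself does not use), the hypothetical-imperative reduction reads:

$$\text{ought}(a \mid D) \;\equiv\; P(D \mid a) > P(D \mid \lnot a)$$

that is, "given desire D, you ought to do a" translates to "doing a makes the fulfillment of D more likely, ceteris paribus, than not doing a."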

comment by smijer · 2011-06-06T02:28:22.976Z · LW(p) · GW(p)

Stealing often means wrongful taking of property... but point well taken.

Replies from: lukeprog
comment by lukeprog · 2011-06-08T07:10:33.864Z · LW(p) · GW(p)

True! Maybe I need a different example.

comment by lukstafi · 2011-06-01T11:03:53.612Z · LW(p) · GW(p)

I miss the discussion (on LW in general) of an approach to ethics that strives to determine what actions should be unlawful for an agent, as opposed to, say, what probability distribution over actions is optimal for an agent. (And I don't mean "deontologic", as the "unlawfulness" can be predicated on the consequences.) If you criticize this comment for confusion of "descriptive ethics vs. normative ethics vs. metaethics", try to be constructive.

Replies from: gjm, wedrifid
comment by gjm · 2011-06-01T15:09:55.861Z · LW(p) · GW(p)

Please explain why you think there should be more of that on LW.

Replies from: lukstafi, Peterdjones
comment by lukstafi · 2011-06-02T12:42:37.415Z · LW(p) · GW(p)

What: discussion of the "social contract" aspect of ethics, for example of the right not to have one's options (sets of actions) constrained beyond a threshold X, and of what that threshold should be, e.g. the property that actions which infringe the right-X of others are forbidden-X.

Why should there be more of that on LW: (1) it is as practical and important an aspect of ethics as the self-help aspect, (2) it seems to be simpler than determining optimal well-being conditions.

Replies from: wedrifid, gjm
comment by wedrifid · 2011-06-02T15:07:27.248Z · LW(p) · GW(p)

Why should there be more of that on LW: (1) it is as practical and important an aspect of ethics as the self-help aspect

It would call for a series of 'self-help'-style posts explaining:

  • The benefits of creating boundaries between your own identity and other people's declarations of wrongness.
  • The art of balancing freedom with political expedience when dealing with other agents who are attempting to coerce you socially.
  • How to maintain internal awareness of the distinction between what you do not do for fear of social consequences vs what you do not do because of your own ethical values.
  • The difference between satisfying the preferences of others vs acquiescing to their demands. Included here would be how to deal with those who haven't developed the ability to express their own desires except indirectly via the declarations of what it is 'right' for others to do.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-06-03T22:51:29.374Z · LW(p) · GW(p)

On the other hand, when I look at self-help I see something I will continue to delay, or progress at slowly but at a constant rate, because my current situation seems quite similar to my ideal situation. I think that once you reach that point, you start a more constant but passive process of improvement.

comment by gjm · 2011-06-02T13:18:44.414Z · LW(p) · GW(p)

Your #1 doesn't seem to me to be a good justification for having more of it on LW. Lots of things are practical and important but don't belong on LW.

Your #2 seems to me wrong; deciding what's actually right and wrong is very much not "simpler than determining optimal well-being conditions", for the following reasons. (a) It's debatable whether it's even meaningful (since many people here are moral nonrealists or relativists of one sort or another). (b) There is no obvious way to reach agreement on what actually influences what's right and what's wrong. Net preference satisfaction? The will of a god? Obeying some set of ethical principles somehow built into the structure of the universe? Or what? (c) Most of the theories held by moral realists about what actually matters make it extraordinarily difficult to determine, in difficult cases, whether a given thing is right or wrong. Utilitarianism requires you to sum (or average, or something) the utilities of perhaps infinitely many beings, over a perhaps infinite extent of time and space. The theory Luke calls "desirism" requires you to work out the consequences of having many agents adopt any possible set of preferences. Intuitionist theories and divine-command theories make the details of what's right and wrong entirely inaccessible. Etc.

Now, perhaps in fact you have some specific meta-ethical theory in mind such that, if that theory is true, then the ethical calculations become manageable. In that case, you might want to say what that meta-ethical theory is and why you think it makes the calculations manageable :-).

comment by Peterdjones · 2011-06-01T19:11:52.680Z · LW(p) · GW(p)

My answer to that question is that that is what morality is actually about, and that personal preference-optimisation is something else.

Replies from: gjm
comment by gjm · 2011-06-01T21:31:18.402Z · LW(p) · GW(p)

I don't think Luke, at least, is conflating morality with personal preference-optimization. He's saying: Different people have different notions of "should"-ness, and if someone says "What should I do?" then giving them a good answer has to begin with working out what notion of "should" they're working with. That applies whether "should" is being used morally or prudentially or both.

Also: What makes a moral agent a moral agent is having personal preferences that give substantial weight to moral considerations. And what such an agent is actually deciding, on any given occasion, is what serves his/her/its goals best: it's just that among the important goals are things like "doing what is right" and "not doing what is wrong". So, actually, for a moral agent "personal preference-optimization" will sometimes involve a great deal of "what morality is actually about".

Replies from: Peterdjones
comment by Peterdjones · 2011-06-01T21:46:06.354Z · LW(p) · GW(p)

There's an important difference between saying preferences may or may not include moral values, and saying morality is, by definition, preference-maximisation.

Replies from: gjm
comment by gjm · 2011-06-01T22:00:06.623Z · LW(p) · GW(p)

Yup, there is. Did anyone say that morality is, by definition, preference-maximization?

Replies from: Peterdjones
comment by Peterdjones · 2011-06-01T22:47:33.824Z · LW(p) · GW(p)

Yes.

Replies from: gjm
comment by gjm · 2011-06-01T23:43:58.116Z · LW(p) · GW(p)

Do please feel free to provide more information.

comment by wedrifid · 2011-06-01T11:59:43.986Z · LW(p) · GW(p)

I miss the discussion (on LW in general) of an approach to ethics that strives to determine what actions should be unlawful for an agent, as opposed to, say, what probability distribution over actions is optimal for an agent. (And I don't mean "deontologic", as the "unlawfulness" can be predicated on the consequences.) If you criticize this comment for confusion of "descriptive ethics vs. normative ethics vs. metaethics", try to be constructive.

I don't criticize your comment on the basis of any confusion. It appears to be more or less a coherent indication of preference. I criticize it based on considering the state which you desire to be both abhorrent and not (sufficiently) lacking here.

Replies from: lukstafi
comment by lukstafi · 2011-06-01T14:22:19.389Z · LW(p) · GW(p)

Do you find the "classification problem" variant of the "optimization problem" already repugnant, or is it something deeper?

Replies from: wedrifid, wedrifid
comment by wedrifid · 2011-06-01T14:39:01.058Z · LW(p) · GW(p)

Do you find the "classification problem" variant of the "optimization problem" already repugnant, or is it something deeper?

Classification vs optimization is not necessarily a feature I was commenting on.

comment by wedrifid · 2011-06-01T14:35:48.359Z · LW(p) · GW(p)

Do you find the "classification problem" variant of the "optimization problem" already repugnant, or is it something deeper?

The degree of bullshit that is intrinsic to such conversations when engaged in by human participants may be a contributing factor.

comment by Psychosmurf · 2014-01-03T19:32:14.595Z · LW(p) · GW(p)

The Yudkowskian response is to point out that when cognitivists use the term 'good', their intuitive notion of 'good' is captured by a massive logical function that can't be expressed in simple statements

This is the weakest part of the argument. Why should anybody believe that there is a super complicated function that determines what is 'good'? What are the alternative hypotheses?

I can think of a much simpler hypothesis that explains all of the relevant facts. Our brains come equipped with a simple function that maps "is" statements to "ought" statements. Thus, we can reason about "ought" statements just like we do with "is" statements.

The special thing about this function is that there is nothing special about it at all. It is absolutely trivial. Any "ought" statement can potentially be inferred from any "is" statement. Therefore, "ought" statements can never be conditioned by evidence. This explains not only why there is lots of disagreement among people about what is "good" and that our beliefs about what constitutes "good" will be very complicated, but also that there will be no way to resolve these disagreements.
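The "never conditioned by evidence" claim can be made precise with a small Bayesian sketch (the hypotheses and numbers are invented for illustration, not anything from the comment): if every observation is equally compatible with every ought-hypothesis, the likelihood is flat, and the posterior never moves from the prior:

```python
# Toy Bayesian sketch: if any "ought" can be inferred from any "is",
# every observation is equally likely under every ought-hypothesis,
# so updating leaves the priors untouched. Hypotheses and numbers are
# invented for illustration.

priors = {"charity is good": 0.5, "paperclips are good": 0.5}

def likelihood(is_evidence, ought_hypothesis):
    # The trivial is->ought mapping: a flat likelihood everywhere.
    return 1.0

def update(priors, is_evidence):
    """One step of Bayes' rule over the ought-hypotheses."""
    unnorm = {h: p * likelihood(is_evidence, h) for h, p in priors.items()}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

print(update(priors, "people give to charity"))
# {'charity is good': 0.5, 'paperclips are good': 0.5} -- no movement:
# "ought" beliefs cannot be conditioned by "is" evidence.
```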

Replies from: TheAncientGeek, LawChan
comment by TheAncientGeek · 2014-11-16T22:55:27.190Z · LW(p) · GW(p)

The situation is more complex than a set of objective moral truths that everyone agrees on, but it is also more complex than complete divergence. There is a basis for convergence on the wrongness of murder and theft and some other things. Complete divergence would mean it would be irrational to even try to find common ground and enshrine it in law.

comment by LawrenceC (LawChan) · 2014-11-16T21:17:21.891Z · LW(p) · GW(p)

This is the weakest part of the argument. Why should anybody believe that there is a super complicated function that determines what is 'good'? ... Our brains come equipped with a simple function that maps "is" statements to "ought" statements. Thus, we can reason about "ought" statements just like we do with "is" statements

I think the claim isn't that there is a super-complicated function that determines what is 'good', but that the mapping from 'is' statements to 'ought' statements in the human brain is extremely complicated. If we claim that what is 'good' is what our brain considers 'good', though, we merely encapsulate this complexity in a convenient black box.

That's not to say that it's not a solution, though: have you looked into desire utilitarianism? What you're proposing here is really similar to what (as I understand it) that school of moral philosophy claims. If you have time, Fyfe's A Better Place is a good introduction.

comment by Peterdjones · 2013-01-07T01:23:01.888Z · LW(p) · GW(p)

But whatever our intended meaning of 'ought' is, the same reasoning applies. Either our intended meaning of 'ought' refers (eventually) to the world of maths and physics (in which case the is-ought gap is bridged), or else it doesn't (in which case it fails to refer).12

The is-ought problem is an epistemic problem. Being informed that some A is ultimately, ontologically, the same as some B does not tell me how that A entails that B. If I cannot see how an "is" implies an "ought", being informed that the "ought" ultimately refers to states of the world -- states of the world far too complex for me to include in my epistemic calculations -- does not help. I can't cram a (representation of a) world-state into my brain, and being told that the is-ought problem would dissolve under the unlikely circumstance that I could doesn't help either. The ontological claim that is's and ought's ultimately have the same referents can only be justified by some epistemic procedure. That is the only way any ontological claim is justified.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-07T19:13:43.962Z · LW(p) · GW(p)

You really shouldn't be using your own comments as evidence in an argument. It makes your reasoning appear... just a little motivated.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-07T21:54:58.232Z · LW(p) · GW(p)

An argument works or it doesn't.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-07T22:03:04.298Z · LW(p) · GW(p)

That's true. Which means you really should have brought this argument up and resolved it, instead of making this argument and then declaring the matter unresolved.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-07T22:18:44.073Z · LW(p) · GW(p)

I didn't declare anything unresolved. I have argued that PMR does not close the is-ought gap. AFAIC that stands until someone counterargues. But hey, you could always downvote it out of visibility.