Less Wrong views on morality?

post by hankx7787 · 2012-07-05T17:04:21.187Z · LW · GW · Legacy · 146 comments

Do you believe in an objective morality capable of being scientifically investigated (a la Sam Harris *or others*), or are you a moral nihilist/relativist? There seems to be some division on this point. I would have thought Less Wrong to be well in the former camp.

 

Edit: There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris *or others*)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.

Comments sorted by top scores.

comment by [deleted] · 2012-07-05T17:42:30.011Z · LW(p) · GW(p)

There seems to be some division on this point.

I might be mistaken, but I got the feeling that there's not much of a division. The picture I've got of LW on meta-ethics is something along the lines of: values exist in people's heads, and those are real, but if there were no people there wouldn't be any values. Values are to some extent universal: since most people care about similar things, some values behave as if they were objective. If you want to categorize it - though I don't know what you would get out of that - it's a form of nihilism.

An appropriate question when discussing objective and subjective morality is:

  • What would an objective morality look like, vs. a subjective one?
Replies from: Jack, DanArmak
comment by Jack · 2012-07-05T19:30:30.195Z · LW(p) · GW(p)

People here seem to share anti-realist sensibilities but then balk at the label and do weird things for anti-realists like treat moral judgments as beliefs, make is-ought mistakes, argue against non-consequentialism as if there were a fact of the matter, and expect morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics.

I'm not saying it can never be reasonable for an anti-realist to do any of those things, but it certainly seems like belief in subjective or non-cognitive morality hasn't filtered all the way through people's beliefs.

Replies from: TimS, None, Viliam_Bur, komponisto, bryjnar
comment by TimS · 2012-07-05T19:42:17.760Z · LW(p) · GW(p)

I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI. I don't think a moral anti-realist is likely to think an AGI can be friendly to me and to Aristotle. It might not even be possible to be friendly to me and any other person.

Replies from: Jack
comment by Jack · 2012-07-05T20:02:28.237Z · LW(p) · GW(p)

I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI

Well that seems like the most dangerous instance of motivated cognition ever.

Replies from: ChristianKl
comment by ChristianKl · 2012-07-06T18:25:09.259Z · LW(p) · GW(p)

It seems like an issue that's important to get right. Is there a test we could run to see whether it's true?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-07T05:28:08.699Z · LW(p) · GW(p)

Yes, but only once. ;)

Replies from: RobertLumley
comment by RobertLumley · 2012-07-07T15:54:56.535Z · LW(p) · GW(p)

Did you mean to link to this comment?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-07-07T23:16:08.706Z · LW(p) · GW(p)

Thanks, fixed.

comment by [deleted] · 2012-07-06T10:23:38.792Z · LW(p) · GW(p)

People here seem to share anti-realist sensibilities but then balk at the label

When I explain my meta-ethical standpoint to people in general, I usually avoid using phrases or words such as "there is no objective morality" or "nihilism" because there is usually a lot of emotional baggage; oftentimes they go "ah, so you think everything is permitted", which is not really what I'm trying to convey.

do weird things for anti-realists like treat moral judgments as beliefs, make is-ought mistakes, argue against non-consequentialism as if there were a fact of the matter, and expect morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics.

In a lot of cases you are absolutely correct, but there are times when I think people on LW are trying to answer "what do I think is right?" This becomes a question of self-knowledge: e.g., to what degree am I aware of what motivates me, and can I formulate what I value?

Replies from: Jack, mwengler
comment by Jack · 2012-07-06T13:27:35.682Z · LW(p) · GW(p)

When I explain my meta-ethical standpoint to people in general, I usually avoid using phrases or words such as "there is no objective morality" or "nihilism" because there is usually a lot of emotional baggage; oftentimes they go "ah, so you think everything is permitted", which is not really what I'm trying to convey.

Terms like "moral subjectivism" are often associated with 'naive undergraduate moral relativism' and I suspect a lot of people are trying to avoid affiliating with the latter.

comment by mwengler · 2012-07-10T06:57:41.874Z · LW(p) · GW(p)

When I explain my meta-ethical standpoint to people in general, I usually avoid using phrases or words such as "there is no objective morality" or "nihilism" because there is usually a lot of emotional baggage; oftentimes they go "ah, so you think everything is permitted", which is not really what I'm trying to convey.

So you don't think everything is permitted?

How do you convey thinking there is no objective truth value to any moral statement and then convey that something is forbidden?

Replies from: None
comment by [deleted] · 2012-07-10T10:51:33.911Z · LW(p) · GW(p)

How do you convey thinking there is no objective truth value to any moral statement and then convey that something is forbidden?

Sure, I can. Doing something that is forbidden results in harsh consequences (imposed by other agents) - that is the only meaningful definition I can come up with. Can you come up with any other useful definition?

Replies from: mwengler
comment by mwengler · 2012-07-10T20:06:40.108Z · LW(p) · GW(p)

I like to stick with other people's definitions and not come up with my own. Merriam-Webster for example:

1: not permitted or allowed

Thanks for being my straight man! :)

Replies from: None
comment by [deleted] · 2012-07-22T21:03:01.753Z · LW(p) · GW(p)

Frankly speaking, I got a bit annoyed while reading your response the first time. So I decided to answer it later, when I wouldn't just scream blue!

I might have misinterpreted your meaning, but it seems like you present a straw man of my argument. I was trying to make concepts like forbidden and permitted pay rent - even in a world where there is no objective morality - as well as show that our (or at least my) intuition about "forbiddenness" and "permittedness" is derived from the kind of consequences they result in. It's not as though something can be not permitted in a group and yet have no bad consequences if performed.

Replies from: mwengler
comment by mwengler · 2012-07-22T23:28:25.881Z · LW(p) · GW(p)

The largest rent I can ever imagine getting from terms which are in wide and common use is to use them to mean the same things everybody else means when using them. To me, it seems coming up with private definitions for public words decreases the value of these words.

I was trying to make concepts like forbidden and permitted pay rent - even in a world where there is no objective morality,

There are many words used to make moral statements. When you declare that no moral statement can be objectively true, then I don't think it makes sense to redefine all these words so they now get used in some other way. I doubt you will ever convince me to agree to the redefining of words away from their standard definitions because to me that is just a recipe for confusion.

I have no idea what is "straw man" about any of my responses here.

comment by Viliam_Bur · 2012-07-06T08:37:24.737Z · LW(p) · GW(p)

treat moral judgments as beliefs, make is-ought mistakes, argue against non-consequentialism

A few examples could help me understand what you mean, because right now I don't have a clue.

expect morality to be describable in terms of a coherent and consistent set of rules instead of an ugly mess of evolved heuristics

I guess the goal is to simplify the mess as much as possible, but not more. To find a smallest set of rules that would generate a similar result.

comment by komponisto · 2012-07-05T22:54:37.149Z · LW(p) · GW(p)

Well said.

comment by bryjnar · 2012-07-05T22:41:14.591Z · LW(p) · GW(p)

I agree. I can't figure out clearly enough exactly what Eliezer's metaethics is, but there definitely seem to be latent anti-realist sympathies floating around.

comment by DanArmak · 2012-07-05T18:59:22.179Z · LW(p) · GW(p)

Agreed.

I just posted a more detailed description of these beliefs (which are mine) here.

If anyone here believes in an objectively existing morality I am interested in dialogue. Right now it seems like a "not even wrong", muddled idea to me, but I could be wrong or thinking of a strawman.

comment by cousin_it · 2012-07-05T17:13:49.892Z · LW(p) · GW(p)

After reading lots of debates on these topics, I'm no longer sure what the terms mean. Is a paperclip maximizer a "moral nihilist"? If yes, then so am I. Same for no.

Replies from: private_messaging, TimS
comment by private_messaging · 2012-07-06T20:12:05.944Z · LW(p) · GW(p)

A paperclip maximizer is something that, as you know, requires an incredible amount of work put into defining what a paperclip is (if that is even possible without fixing a model). It consequently has an incredibly complex moral system - a very stupid one, but incredibly complex nonetheless. Try something like an equation-solver instead.

comment by TimS · 2012-07-05T17:15:39.127Z · LW(p) · GW(p)

I suspect the OP is asking whether you are a moral realist or anti-realist.

Replies from: cousin_it
comment by cousin_it · 2012-07-05T17:16:53.538Z · LW(p) · GW(p)

Okay, is a paperclip maximizer a moral realist?

Replies from: Jack, hankx7787
comment by Jack · 2012-07-05T17:38:00.250Z · LW(p) · GW(p)

I see no reason to think a paperclip maximizer would need to have any particular meta-ethics. There are possible paperclip maximizers that are moral realists and ones that aren't. As a rule of thumb, an agent's normative ethics - that is, what it cares about, be it human flourishing or paperclips - does not logically constrain its meta-ethical views.

Replies from: cousin_it
comment by cousin_it · 2012-07-05T18:19:10.123Z · LW(p) · GW(p)

That's a nice and unexpected answer, so I'll continue asking questions I have no clue about :-)

If metaethics doesn't influence paperclip maximization, then why do I need metaethics? Can we point out the precise difference between humans and paperclippers that gives humans the need for metaethics? Is it the fact that we're not logically omniscient about our own minds, or is it something deeper?

Replies from: Jack, Manfred, TimS
comment by Jack · 2012-07-05T18:59:50.512Z · LW(p) · GW(p)

Perhaps I misunderstood. There are definitely possible scenarios in which metaethics could matter to a paperclip maximizer. It's just that answering "what meta-ethics would the best paperclip maximizer have?" isn't any easier than answering "what is the ideal metaethics?". Varying an agent's goal structure doesn't change the question.

That said, if you think humans are just like paperclip maximizers except that they're trying to maximize something else, then you're already 8/10ths of the way to moral anti-realism (Come! Take those last two steps, the water is fine!).

Of course it's also the case that meta-ethics probably matters more to humans than paperclip maximizers: In particular metaethics matters for humans because of individual moral uncertainty, group and individual moral change, differences in between individual moralities, and the overall complexity of our values. There are probably similar possible issues for paperclip maximizers-- like how should they resolve uncertainty over what counts as a paperclip or deal with agents that are ignorant of the ultimate value of paperclips-- and thinking about them pumps my anti-realist intuitions.

comment by Manfred · 2012-07-05T21:47:20.328Z · LW(p) · GW(p)

Is it the fact that we're not logically omniscient about our own minds, or is it something deeper?

Well, there's certainly that. Also, human algorithms for decision-making can feel different from simply looking up a utility - the algorithm can be something more like a "treasure map" for locating morality, looking out at the world in a way that can feel as if morality was a light shining from outside.

comment by TimS · 2012-07-05T19:18:18.821Z · LW(p) · GW(p)

Consider dealings with agents that have morals that conflict with your own. Obviously, major value conflicts preclude co-existence. Let's assume it is a minor conflict - Bob believes consuming cow's milk and beef at the same meal is immoral.

It is possible to develop instrumental or terminal values to resolve how much you tolerate Bob's different value - without reference to any meta-ethical theory. But I think that meta-ethical considerations play a large role in how tolerance of value conflict is resolved - for some people, at least.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-07-05T21:18:36.872Z · LW(p) · GW(p)

Obviously, major value conflicts preclude co-existence.

Not obvious. (How does this "preclusion" work? Is it the best decision available to both agents?)

Replies from: TimS
comment by TimS · 2012-07-05T23:26:11.165Z · LW(p) · GW(p)

Well, if I don't include that sentence, someone nitpicks by saying:

How does one tolerate Hitler McHitler the murdering child-molester?

I was trying to preempt that by making it clear that McH gets imprisoned or killed, even by moral anti-realists (unless they are exceptionally stupid).

comment by hankx7787 · 2012-07-05T17:25:22.895Z · LW(p) · GW(p)

I would certainly say a paperclip maximizer morality falls in the former camp (objective, able to be scientifically investigated, real), although I'm not intimately familiar with the realist/anti-realist terminology.

Replies from: TimS
comment by TimS · 2012-07-05T17:49:10.865Z · LW(p) · GW(p)

Hank, why would Clippy believe that maximizing paperclips is based on something external to its own mind? Clippy could just as easily be programmed to desire staples, and Clippy is probably intelligent enough to know that.

That said, I think Jack's general point about the relationship between ethics and meta-ethics is probably right.

Replies from: hankx7787
comment by hankx7787 · 2012-07-05T18:01:51.501Z · LW(p) · GW(p)

Presumably Clippy has a hard-coded utility function sitting in his source code somewhere. It's a real set of 0s and 1s sitting on a disk somewhere, and we could open the source file and investigate the code.

Clippy's value system is a specific objective, pre-programmed utility function that's inherent in his design and independent/prior to any knowledge or other cognitive content Clippy eventually gains or invents.

And yes, it could have been easily changed (until such a time as Clippy is all grown up and would prevent such change) to make him desire staples. But then that's a different design and we'd probably better call him Stapley at that point.
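To make this concrete, here is a minimal toy sketch (the function name, the world-state representation, and the numbers are just made-up stand-ins, not anyone's actual agent design): a hard-coded utility function is an artifact sitting in code, and anyone with read access can inspect it the way they would inspect any other physical fact.

    # Toy sketch, not a real agent design: Clippy's terminal value as literal,
    # inspectable code. The list-of-strings world state is a made-up stand-in.

    def clippy_utility(world_state):
        """Hard-coded terminal value: strictly more paperclips is strictly better."""
        return world_state.count("paperclip")

    # Anyone can open the source file and read off what the agent ultimately values.
    print(clippy_utility(["paperclip", "paperclip", "stapler"]))  # -> 2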

Replies from: TimS
comment by TimS · 2012-07-05T19:04:36.629Z · LW(p) · GW(p)

Ideally, one would like objective facts to be universally compelling. If Clippy shows its source code to me, or to another AGI, neither of us would update in favor of believing that paper-clip maximizing is an appropriate terminal value.

Replies from: hankx7787
comment by hankx7787 · 2012-07-05T23:44:02.992Z · LW(p) · GW(p)

Ah, no, I don't mean "objective morality" in the sense of something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean what I said above, something in reality that's mind-independent and can be investigated scientifically - a definite "is" from which we can make true "ought" statements relative to that "is".

See drethelin's comment below.

Replies from: buybuydandavis, Jack, TimS
comment by buybuydandavis · 2012-07-06T00:48:09.561Z · LW(p) · GW(p)

something in reality that's mind-independent and can be investigated scientifically

Clippy's code is his mind.

comment by Jack · 2012-07-06T00:26:53.154Z · LW(p) · GW(p)

No. The physical instantiation of a utility function is not an argument for moral realism. On the complete contrary, defining moral actions as "whatever an agent's utility function says" is straightforward, definitional, no-bones-about-it moral subjectivism.

Put it this way: the paperclip maximizer is not going to approve of your behavior.

comment by TimS · 2012-07-05T23:50:53.140Z · LW(p) · GW(p)

Hank, I definitely don't think there's any activity that (1) can reasonably be labeled "scientific investigation" and (2) can solve the is-ought divide.

Replies from: hankx7787
comment by hankx7787 · 2012-07-06T00:12:43.578Z · LW(p) · GW(p)

I didn't think you would :) I'm curious about the consensus on LW, though. But incidentally, what do you think of Thou Art Godshatter?

Replies from: TimS
comment by TimS · 2012-07-06T01:16:43.659Z · LW(p) · GW(p)

First, that essay is aimed primarily at those who think dualism is required in order to talk about morality at all - obviously that's not the discussion we are having.

Second, the issue is not whether there are (1) universal (2) morally relevant (3) human preferences that (4) have been created by evolution. The answer to that question is yes (i.e. hunger, sexual desire). But that alone does not show that there is a universal way for humans to resolve moral dilemma.

If we study and quantify the godshatter to the point that we can precisely describe "human nature," we aren't guaranteed in advance to know that appeals to human nature will resolve every moral dilemma. If reference to human nature doesn't, then evolutionary preference doesn't prove moral realism.

Replies from: hankx7787
comment by hankx7787 · 2012-07-06T01:50:30.062Z · LW(p) · GW(p)

I'm not sure why "universality" is really that important here. Suppose we are just talking about one person: why can't they reduce their value judgments down to their own precisely described nature to resolve every moral dilemma they face? With a read-out of their actual terminal values defined by the godshatter, they can employ the usual consequentialist expected utility calculus to solve any question, in principle.
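For concreteness, here is a minimal sketch of that "usual consequentialist expected utility calculus" - the action names, probabilities, and utilities below are made-up stand-ins for a person's read-out terminal values and beliefs, not anything from an actual read-out:

    # Toy sketch with hypothetical numbers: pick the action with the highest
    # expected utility, given beliefs P(outcome | action) and terminal values U.

    P = {"act_A": {"good": 0.7, "bad": 0.3},
         "act_B": {"good": 0.4, "bad": 0.6}}
    U = {"good": 10.0, "bad": -5.0}  # stand-in for the godshatter read-out

    def expected_utility(action):
        return sum(prob * U[outcome] for outcome, prob in P[action].items())

    best = max(P, key=expected_utility)
    print(best, expected_utility(best))  # -> act_A 5.5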

Replies from: TimS
comment by TimS · 2012-07-06T02:09:36.943Z · LW(p) · GW(p)

This says it better than I could.

Replies from: hankx7787
comment by hankx7787 · 2012-07-06T02:21:40.844Z · LW(p) · GW(p)

But that's just a confusion between two different meanings of "objective vs. subjective".

People apparently tend to interpret "objective" as something "universal" in the sense of like some metaphysical Form of Good, as opposed to "subjective" meaning "relative to a person". That distinction is completely stupid and wouldn't even occur to me.

I'm using it in the sense of, something relative to a person but still "a fact of reality able to be investigated by science that is independent/prior to any of the mind's later acquisition of knowledge/content", versus "something that is not an independent/prior fact of reality, but rather some later invention of the mind".

Replies from: Jack
comment by Jack · 2012-07-06T02:50:45.117Z · LW(p) · GW(p)

So let's clear something up: the two attributes objective/subjective and universal/relative are logically distinct. You can have objective relativism ("What is moral is the law, and the law varies from place to place") and subjective universalism ("What is moral is just our opinions, but we all have the same opinions").

a fact of reality able to be investigated by science that is independent/prior to any of the mind's later acquisition of knowledge/content", versus "something that is not an independent/prior fact of reality, but rather some later invention of the mind".

The attribute "objective" or "subjective" in meta-ethics refers to the status of moral judgments themselves not descriptive facts about what moral judgments people actually make or the causal/mental facts that lead people to make them. Of course it is the case that people make moral judgments, that we can observe those judgments and can learn something about the brains that make them. No one here is denying that there are objective facts about moral psychology. The entire question is about the status of the moral judgments themselves. What makes it true when I say "Murder is immoral"? If your answer references my mind your answer is subjectivist, if your answer is "nothing" than you are a non-cognitivist or an error theorist. All those camps are anti-realist camps. Objectivist answers include "the Categorical Imperative" and "immorality supervenes on human suffering".

Replies from: TimS
comment by TimS · 2012-07-06T03:07:56.439Z · LW(p) · GW(p)

Is there accessible discussion out there of why one might expect a real-world correlation between objectivity and universality?

I see that subjective universalism is logically coherent, but I wouldn't expect it to be true - it seems like too much of a coincidence that nothing objective requires people have the same beliefs, yet people do anyway.

Replies from: Jack
comment by Jack · 2012-07-06T03:20:43.504Z · LW(p) · GW(p)

Lots of things can cause a convergence of belief other than objective truth, e.g. adaptive fitness. But certainly, objectivity usually implies universality.

comment by fubarobfusco · 2012-07-05T20:38:30.814Z · LW(p) · GW(p)

Morality is a human behavior. It is in some ways analogous to trade or language: a structured social behavior that has developed in a way that often approximates particular mathematical patterns.

All of these can be investigated both empirically and intellectually: you can go out and record what people actually do, and draw conclusions from it; or you can reason from first principles about what sorts of patterns are mathematically possible; or both. For instance, you could investigate trade either beginning from the histories of actual markets, or from principles of microeconomics. You could investigate language beginning from linguistic corpora and historical linguistics ("What sorts of language do people actually use? How do they use it?"); or from formal language theory, parsing, generative grammar, etc. ("What sorts of language are possible?")

Some of the intellectual investigation of possible moralities we call "game theory"; others, somewhat less mathematical but more checked against moral intuition, "metaethics".

Asking whether there are universal, objective moral principles is a little like asking whether there are universal, objective principles of economics. Sure, in one sense there are: but they're not the sort of applied advice that people making moral or economic claims are usually looking for! There are no theorems of economics that give applied advice such as "real estate is always a good investment," and there are no theorems of morality that say things like "it's never okay to sleep with your neighbor's wife".

Replies from: mwengler
comment by mwengler · 2012-07-10T07:06:08.079Z · LW(p) · GW(p)

... is a little like asking whether there are universal, objective principles of economics. Sure, in one sense there are: but they're not the sort of applied advice that people making moral or economic claims are usually looking for!

It seems in economics, without much of a stretch, you have things like:

  • buy low and sell high

  • don't produce items for sale at price x that cost you y>x to produce.

  • don't buy a productive asset that produces c cash per year for more than c/i, where i is the risk-free interest rate (a worked instance follows below)

I'm not saying I can do this trick with morality, but economics seems to produce piles of actionable results.
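To make the third rule concrete with made-up numbers: an asset paying c = $1,000 per year, against a risk-free rate of i = 5%, is worth paying at most c/i = $1,000 / 0.05 = $20,000 for; pay more and you would do better simply lending at the risk-free rate.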

comment by ArisKatsaris · 2012-07-05T17:30:35.325Z · LW(p) · GW(p)

In summary, my own current position (which I keep wanting to make into a fuller post) is:

If factual reality F can represent a function F(M) -> M from moral instructions to moral instructions (e.g. given the fact that burning people hurts them, F("it's wrong to hurt people")-> "It's wrong to burn people"), then there may exist universal moral attractors for our given reality -- these would represent objective moralities that are true for a vast set of different moral starting positions. Much like you reach the Sierpinski Triangle no matter the starting shape.

This would however still not be able to motivate an agent that starts with an empty set of moral instructions.
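A minimal toy sketch of the fixed-point idea (my illustration only, not the actual proposal; the update rule and the instruction strings are made-up stand-ins): repeatedly apply a fact-conditioned update F to a set of moral instructions until it stops changing, and the resulting set is an "attractor" that many different starting sets converge to.

    # Toy sketch: iterate a fact-conditioned update over a set of moral
    # instructions until a fixed point (an "attractor") is reached.

    def F(instructions):
        """Hypothetical update: apply the fact 'burning people hurts them'."""
        updated = set(instructions)
        if "it's wrong to hurt people" in updated:
            updated.add("it's wrong to burn people")
        return frozenset(updated)

    def find_attractor(start, max_steps=100):
        current = frozenset(start)
        for _ in range(max_steps):
            updated = F(current)
            if updated == current:  # no further change: fixed point reached
                return current
            current = updated
        return current

    print(sorted(find_attractor({"it's wrong to hurt people"})))

Note, as the comment says, that an agent starting from the empty instruction set just stays at the empty set: the iteration can extend values the agent already has, but it cannot conjure motivation from nothing.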

Replies from: mwengler
comment by mwengler · 2012-07-06T16:13:41.005Z · LW(p) · GW(p)

This would however still not be able to motivate an agent that starts with an empty set of moral instructions.

That sounds likely to me.

The other things that sound possible to me are that we could ultimately determine things like:

We ought not build dangerously uncontrolled systems (AI, nuclear reactors, chemical reactors) or tolerate those who do. We ought not get sentiences to do things for us through promises of physical pain or other disutility (threats and coercion).

We may have a somewhat different understanding of what ought means by that point, just as we have a different sense of what time and position and particle and matter and energy mean from having done physics. For example, "ought" might be how we refer to a set of policies that produce an optimum tradeoff between the short-term interests of the individual and the short-term interests of the group, perhaps designed so that a system in which the individuals followed these rules would have productivity growth at some maximum rate, or would converge on some optimum measure of individual freedom, or both. Maybe "ought" would suggest such wide-ranging agreement that individuals who don't follow these rules must be adjusted or restrained, because their unmodified or unrestrained cost to the group is so clearly "unfair," even possibly dangerous.

I am not even Leonardo da Vinci here trying to describe what the future of science might look like. I am some tradesman in Florence in da Vinci's time trying to describe what the future of science might look like. My point isn't that any of the individual details should be the ones we learn when we finally perfect "the moral method" (in analogy with the scientific method), but rather that the richness of what COULD happen makes it very hard to say never, and that someone being told about the possibilities of "objective science" 1000 years ago would have been pretty justified in saying "we will never know whether the sun will rise tomorrow; we will never be able to derive 'will happen' from 'did happen'" (which I take to be the scientific analogue of "you can't derive 'ought' from 'is'").

Replies from: torekp
comment by torekp · 2012-07-13T00:21:13.419Z · LW(p) · GW(p)

Well said. I'd go further:

The "is-ought" dichotomy is overrated, as are the kindred splits between normative and descriptive, practice and theory, etc. I suggest that every "normative" statement contains some "descriptive" import and vice versa. For example, "grass is green" implies statements like "if you want to see green, then other things being equal you should see some grass", and "murder is immoral" implies something like "if you want to be able to justify your actions to fellow humans in open rational dialogue, you shouldn't murder." Where the corresponding motivation (e.g. "wanting to see green") is idiosyncratic and whimsical, the normative import seems trivial and we call the statement descriptive. Where it is nearly-universal and typically dear, the normative import looms large. But the evidence that there are two radically different kinds of statement - or one category of statement and a radically different category of non-statement - is lacking. When philosophers try to produce such evidence, they usually assume a strong form of moral internalism which is not itself justifiable.

comment by [deleted] · 2012-07-05T17:26:41.664Z · LW(p) · GW(p)

I'm just wondering if you have read the metaethics sequence.

Replies from: hankx7787
comment by hankx7787 · 2012-07-05T17:29:23.433Z · LW(p) · GW(p)

Yeah, and also the Complexity of value sequence.

comment by Jack · 2012-07-06T21:35:03.163Z · LW(p) · GW(p)

There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris or others)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.

So first of all, that's not what Sam Harris means so stop invoking him. Second of all, give an example of what kind of facts you would refer to in order to decide whether or not murder is immoral.

If you are referring to facts about your brain/mind then your account is subjectivist. Nothing about subjectivism says we can't investigate people's moral beliefs scientifically.

Now it is the case that if you define morality as "whatever that thing in my brain that tells me what is right and wrong says" there is in some sense an "is from which you can get an ought". But this is not at all what Hume is talking about. Hume is talking about argument and justification. His point is that an argument with only descriptive premises can't take you to a normative conclusion. But note that your "is" potentially differs from individual to individual. I suppose you could use it to justify your own moral beliefs to yourself, but that does not moral realism make. What you can't do is use it to convince anyone else.

This discussion is getting rather frustrating because I don't think your beliefs are actually wrong. You're just a) refusing to use or learn standard terminology that can be quickly picked up by glancing at the Stanford Encyclopedia of Philosophy, and b) thinking that whether or not we can learn about evolved or programmed utility-function-like things is a question related to whether or not moral realism is true. I'm a very typical moral anti-realist but I still think humans have lots of values in common, that there are scientific ways to learn about those values, and that this is a worthy pursuit.

If you still disagree I'd like to hear what you think people in my camp are supposed to believe.

Replies from: buybuydandavis, hankx7787
comment by buybuydandavis · 2012-07-07T23:39:54.725Z · LW(p) · GW(p)

I'm a very typical moral anti-realist but I still think humans have lots of values in common, that there are scientific ways to learn about those values, and that this is a worthy pursuit.

Me too. We're not identical, but we have similarities, and understanding them will likely allow us to better achieve mutual satisfaction.

This is my disappointment with Harris. He ran a fundamentally sound project into the ditch by claiming, entirely unscientifically, that his utilitarian ideal was the one word of moral truth - Thou shalt serve the well being of all conscious creatures - that is the whole of law!

comment by hankx7787 · 2012-07-11T00:09:25.054Z · LW(p) · GW(p)

So first of all, that's not what Sam Harris means so stop invoking him.

I'm not sure what you're referring to here, but here's my comment explaining how this relates to Sam Harris.

If you are referring to facts about your brain/mind then your account is subjectivist. Nothing about subjectivism says we can't investigate people's moral beliefs scientifically.

I addressed this previously, explaining that I am using 'objective' and 'subjective' in the common sense way of 'mind-independent' or 'mind-dependent' and explained in what specific way I'm doing that (that is, the proper basis of terminal values, and thus the rational basis for moral judgments, are hard-wired facts of reality that exist prior to, and independent of, the rest of our knowledge and cognition - and that the proper basis of terminal values is not something that is invented later, as a product of, and dependent on, later acquired/invented knowledge and chains of cognition). You just went on insisting that I'm using the terminology wrong purely as a matter of the meaning in technical philosophy.

This discussion is getting rather frustrating because I don't think your beliefs are actually wrong. You're just a) refusing to use or learn standard terminology that can be quickly picked up by glancing at the Stanford Encyclopedia of Philosophy, and b) thinking that whether or not we can learn about evolved or programmed utility-function-like things is a question related to whether or not moral realism is true. I'm a very typical moral anti-realist but I still think humans have lots of values in common, that there are scientific ways to learn about those values, and that this is a worthy pursuit.

You do not have to demand, as you've been doing throughout this thread, that I only use words to refer to things that you want them to mean, when I am explicitly disclaiming any intimacy with the terms as they are used in technical philosophy and making a real effort to taboo my words in order to explain what I actually mean. Read the article on Better Disagreement and try to respond to what I'm actually saying instead of trying to argue over definitions.

Now it is the case that if you define morality as "whatever that thing in my brain that tells me what is right and wrong says" there is in some sense an "is from which you can get an ought".

Ok, great. That's kind of what I mean, but it's more complicated than that. What I'm referring to here are actual terminal values written down in reality, which is different from 1) our knowledge of what we think our terminal values are, and 2) our instrumental values, rationally derived from (1), and 3) our faculty for moral intuition, which is not necessarily related to any of the above.

To answer your previous question,

Second of all, give an example of what kind of facts you would refer to in order to decide whether or not murder is immoral.

One must 1) scientifically investigate the nature of their terminal values, 2) rationally derive their instrumental values as a relation between (1) and the context of their current situation, and 3) arrive at either a general principle or an answer to the specific instance of murder in question based on (1) and (2), and act accordingly.

But this is not at all what Hume is talking about. Hume is talking about argument and justification. His point is that an argument with only descriptive premises can't take you to a normative conclusion. But note that your "is" potentially differs from individual to individual. I suppose you could use it to justify your own moral beliefs to yourself, but that does not moral realism make. What you can't do is use it to convince anyone else.

I don't understand why people insist on equating 'objective morality' with something magically universal. We do not have a faculty of divination with which to perceive the Form of the Good existing out there in another dimension. If that's what Hume is arguing against, then his argument is against a straw man as far as I'm concerned. Just because I'm pointing out an idea for an objective morality that differs from individual to individual doesn't make it any less 'objective' or 'real' - unless you're using those terms specifically to mean some stupid, mystical 'universal morality' - instead of the terms just meaning objective and real in the common-sense way. Trying to find a morality that is universal among all people or all mind designs is impossible (unless you're just looking at stuff like this, which could be useful), and if that's what you're doing, or that's what you're taking up a position against, then either you're working on the wrong problem, or you're arguing against a stupid straw man position.

What you can't do is use it to convince anyone else.

For the particular idea I've been putting forward here, people's terminal values relate to one another in the following kinds of ways:

1) Between normal humans there is a lot in common.
2) You could theoretically reach into their brain and mess with the hardware in which their terminal values are encoded.
3) You can still convince and trade based on instrumental values, of course.
4) Humans seem to have terminal values which actually refer to other people, whether it's simply finding value in the perception of another human's face, various kinds of bonding, pleasurable feelings following acts of altruism, etc.

Replies from: TimS, Jack, torekp, Eugine_Nier
comment by TimS · 2012-07-11T00:24:21.411Z · LW(p) · GW(p)

You do not have to demand, as you've been doing throughout this thread, that I only use words to refer to things that you want them to mean, when I am explicitly disclaiming any intimacy with the terms as they are used in technical philosophy and making a real effort to taboo my words in order to explain what I actually mean. Read the article on Better Disagreement and try to respond to what I'm actually saying instead of trying to argue over definitions.

Hank,

If you don't use the technical jargon, it is not clear what you mean, or if you are using the same meaning every time you use a term, or whether your meaning captures what it gestures at in a meaningful, non-contradictory way.

To give a historical example, thinkers once thought they knew what infinity meant. Then different infinite sets that were "obviously" different in size were shown to be the same size. But not all infinite sets were the same size. Now, we know that the former usage of infinity was confused and precise references to infinite sets need some discussion of cardinality.

In short, you can't deviate from a common jargon and also complain that people are misunderstanding you - particularly when your deviations sometimes appeal to connotations of the terms that your particular usages do not justify.

Edit: Remove comment re: editing

Replies from: hankx7787
comment by hankx7787 · 2012-07-11T00:29:15.028Z · LW(p) · GW(p)

In short, you can't deviate from a common jargon and also complain that people are misunderstanding you

Yes I can - if 1) I use the word in its basic common-sense way, and then, as a bonus in case people are confusing the common-sense usage with some other technical meaning, 2) I specifically say "I'm not intimately familiar with the technical jargon, so here is what I mean by this", and then I explain specifically what I mean.

Replies from: TimS
comment by TimS · 2012-07-11T02:33:22.305Z · LW(p) · GW(p)

Hank, I'm sorry - I was a little too harsh. My general difficulty is that I don't think you endorse what Jack called universal relativism. If you don't, then

I addressed this previously, explaining that I am using 'objective' and 'subjective' in the common sense way of 'mind-independent' or 'mind-dependent'

and

I don't understand why people insist on equating 'objective morality' with something magically universal.

don't go well together.

It is the case that objective != universal, but objective things tend to cause universality. If you have a reason why universality isn't caused by objective fact in this case, you should state it.

comment by Jack · 2012-07-21T01:33:25.066Z · LW(p) · GW(p)

I don't understand why people insist on equating 'objective morality' with something magically universal.

I would not equate it with anything magical or universal. Certainly people have tried to ground morality in natural facts, though it is another question whether or not any has succeeded. And certainly it is logically possible to have a morality that is objective but relative, though few find that avenue plausible. What proponents of objective morality (save you) all agree about is that moral facts are made true by things other than people's attitudes. This does not mean that they don't think people's attitudes are relevant for moral judgments. For instance, there are moral objectivists who are preference utilitarians (probably most preference utilitarians are moral objectivists, actually); they think there is one objective moral obligation: maximize preference satisfaction (with whatever alterations they favor). In order to satisfy that obligation one then has to learn about the mental states of people. But they are moral objectivists because they think that what renders "Maximizing preference satisfaction is morally obligatory" true is that it is a brute fact about the natural world (or dictated by reason, or the Platonic Form of Good, etc.).

This is the path Sam Harris takes. He takes as an objective moral fact that human well-being is what we should promote and then examines how to do that through neuroscience. He isn't arguing that we should care about human well-being because there is a module in our brains for doing so. If he were his argument would be subjectivist under the usage of analytic philosophy, even though he is characterized correctly by reviewers as an objectivist.

This is part of why your position was so confusing to me. You were waving the banner of someone who makes the classic objectivist mistake and calling yourself an objectivist while denying that you were making the mistake.

Since you keep bringing up things like "Form of the Good", I assume you can see how hard it would be to justify objective moral judgments (in the way I'm using the term) naturalistically, and Hume certainly is taking aim at both naturalistic and supernaturalistic (or Platonic/Kantian, whatever) justifications for objective morality.

If that's what Hume is arguing against, then his argument is against a straw man as far as I'm concerned. Just because I'm pointing out an idea for an objective morality that differs from individual to individual doesn't make it any less 'objective' or 'real' - unless you're using those terms specifically to mean some stupid, mystical 'universal morality

Well, his argument isn't a strawman; it's just an argument against actual moral objectivists, not you. You'll encounter a lot of apparent strawmen if you go around adopting the labels of philosophical positions you don't actually agree with. No offense.

1) Between normal humans there is a lot in common.
2) You could theoretically reach into their brain and mess with the hardware in which their terminal values are encoded.
3) You can still convince and trade based on instrumental values, of course.
4) Humans seem to have terminal values which actually refer to other people, whether it's simply finding value in the perception of another human's face, various kinds of bonding, pleasurable feelings following acts of altruism, etc.

My point isn't that the situation is hopeless, just that people's moral beliefs are different from beliefs about other aspects of reality. You can't present arguments or evidence to change people's minds and resolve moral disagreements the way, at least in principle, one does with objective facts like in the natural sciences. That's because those facts are personal. The reason I say "x is moral" has to do with my brain and the reason you say "x is immoral" has to do with your brain. Subjectivist philosophers say the question "is x moral?" is subjective because it depends on the brain you ask.

Treating morality as if it were no different from the typical examples of "objective facts" obscures this crucial difference and that's why traditional terminology is the way it is. Tabooing words is often helpful but in this case you're collapsing subjectivism into objectivism and obscuring the crucial conceptual differences (so much so that you're affiliating with writers you actually disagree with). Most professional analytic philosophers are moral objectivists of the kind I've described. It isn't some kind of obscure, monastic position I'm trying to shoe-horn you into. Most people, even professionals, make the mistake I'm talking about. And it's a dangerous one. So dealing with the problem by using terms that dissolve differences on this question seems like a bad idea. And I'm not trying to be a semantic totalitarian here but it seems natural that someone interested in these questions would want to be able to understand how most people talk about the question and how their own views would be described. I wasn't asking you to take a college course, just read the basic encyclopedia entries to the subject you started a post about. I don't think that is unreasonable. In any case, I've tried to structure the above responses in a way that takes into account our differing usage. Hopefully, I brought some clarity and flexibility to the task.

comment by torekp · 2012-07-11T23:24:23.873Z · LW(p) · GW(p)

I am using 'objective' and 'subjective' in the common sense way of 'mind-independent' or 'mind-dependent' and explained in what specific way I'm doing that (that is, the proper basis of terminal values, and thus the rational basis for moral judgments, are hard-wired facts of reality that exist prior to, and independent of, the rest of our knowledge and cognition - and that the proper basis of terminal values is not something that is invented later, as a product of, and dependent on, later acquired/invented knowledge and chains of cognition).

For what it's worth, not only is your usage a common one, I think it is consistent with the way some philosophers have discussed meta-ethics. Also, I particularly like the narrow way you construct 'mind-dependent'. It seems to me that the facts that I am capable of reason, that I understand English, and that I am not blind are all "objective" in common sense speak, even though they are, in the broadest possible sense of the phrase, mind-dependent. This illustrates the need for care about what kind of mind-dependence makes for subjectivity.

comment by Eugine_Nier · 2012-07-11T07:51:38.240Z · LW(p) · GW(p)

We do not have a faculty of divination with which to perceive the Form of the Good existing out there in another dimension.

Why is this obvious? After all we do have a faculty of divination with which to perceive the Form of Truth.

comment by [deleted] · 2012-07-06T20:44:38.054Z · LW(p) · GW(p)

There seems to be some confusion - when I say "an objective morality capable of being scientifically investigated (a la Sam Harris or others)" - I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent/prior to any later knowledge or cognitive content - and thus can be investigated scientifically

I think you are bringing up two separate questions:

  • Can science tell us what we value? This question does not rely on whether morality is universal, any more than the scientific investigation of hippos' food preferences relies on elephants having the same preferences.

  • Can science tell us what to value? If I have not misunderstood Harris, his central claim in The Moral Landscape is that science can. Harris has been criticized for not actually showing that, but rather showing that if one presupposes that maximum "well-being" (suitably defined) is morally good - and suffering bad - then science can tell us what is a morally good/bad action. But this is no different from claiming that if we define moral goodness as the number of paperclips there are, then science can tell us what is a good/bad action.

Replies from: hankx7787
comment by hankx7787 · 2012-07-07T04:47:06.257Z · LW(p) · GW(p)

The latter question is the relevant one.

I have many problems with his book, but I think he is fundamentally taking the perfect approach: rejecting both intrinsicist religious dogmatism and subjectivist moral relativism, and putting forward a third path - an objective morality discoverable by science. You're right though, he just presupposes "well-being" as the standard and doesn't really try to demonstrate that scientifically. Eliezer's Complexity of value sequence is the only place I've seen anyone begin to approach this properly (although I have some problems with him as well).

Replies from: None
comment by [deleted] · 2012-07-07T15:58:18.218Z · LW(p) · GW(p)

but I think he is fundamentally taking the perfect approach: rejecting both intrinsicist religious dogmatism and subjectivist moral relativism, and putting forward a third path - an objective morality discoverable by science.

I see, but as I asked before, what would count as "an objective morality discoverable by science"? What would the world look like if objective morality existed vs. if it did not? You need to know what you are looking for, or at least have a crude sketch of how objective morality would work.

comment by [deleted] · 2012-07-06T11:21:44.857Z · LW(p) · GW(p)

If Euthyphro's dilemma proves religious morality to be false, it also does the same to evolutionary morality: http://atheistethicist.blogspot.com/2009/02/euthyphro-and-evolutionary-ethics.html

Replies from: hankx7787, beriukay
comment by hankx7787 · 2012-07-09T12:16:05.434Z · LW(p) · GW(p)

Honestly, I don't know why in the world people jump from 'an objective basis for values pre-programmed by evolution' to 'do whatever any of your intuitions say without thinking about it'. To equate those two things is completely stupid and not remotely the point here, so your entire line of argument in this comment thread is a wreck.

comment by beriukay · 2012-07-08T09:42:55.447Z · LW(p) · GW(p)

If you're saying that we can't trust the morality that evolution instilled into us to be actually good, then I'd say you are correct. If you're saying that evolutionary ethicists believe that our brain has evolved an objective morality module, or somehow has latched onto a physics of objective morality... I would like to see examples of such arguments.

Replies from: None
comment by [deleted] · 2012-07-08T12:08:43.967Z · LW(p) · GW(p)

I am saying evolutionary morality as a whole is an invalid concept that is irrelevant to the subject of morality.

Actually, I can think of a minutely useful aspect of evolutionary morality: It tells us the evolutionary mechanism by which we got our current intuitions about morality is stupid because it is also the same mechanism that gave lions the intuition to (quoting the article I linked to) 'slaughter their step children, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom)'.

If the mechanism by which we got our intuitions about morality is stupid, then we learn that our intuitions are completely irrelevant to the subject of morality. We also learn that we should not waste our time studying such a stupid mechanism.

Replies from: beriukay, Jack, Zetetic
comment by beriukay · 2012-07-09T08:53:46.498Z · LW(p) · GW(p)

I think I'd agree with everything you said up until the last sentence. Our brains are, after all, what we do our thinking with. So everything good and bad about them should be studied in detail. I'm sure you'd scoff if I turned your statement around on other poorly evolved human features. Like, say, there's no point in studying the stupid mechanism of the human eye, and that the eye is completely irrelevant to the subject of optics.

Replies from: None
comment by [deleted] · 2012-07-09T11:32:56.873Z · LW(p) · GW(p)

Nature exerts selective pressure against organisms that have a poor perception of their surroundings, but there is no equivalent selective pressure when it comes to morality. This is the reason why the difference between the human eye and the lion eye is not as significant as the difference between the human intuitions about morality and the lion's intuitions about morality.

If evolution made the perception of the surroundings as wildly variable as that of morality across different species, I would have made an argument saying that we should not trust what we perceive and we should not bother to learn how our senses work. Similarly, if evolution had exerted selective pressure against immoral organisms, I would have agreed that we should trust our intuitions.

Replies from: mwengler, beriukay
comment by mwengler · 2012-07-10T19:54:01.311Z · LW(p) · GW(p)

Nature exerts selective pressure against organisms that have a poor perception of their surroundings, but there is no equivalent selective pressure when it comes to morality.

What an absolutely wild theory!

Humans' domination of the planet is totally mediated by the astonishing level of cooperation between humans. Matt Ridley in The Rational Optimist even reports evidence that the ability to trade is an evolutionary adaptation of humans that is more unique to humans even than language is. Humans are able to live together, without killing each other, at densities orders of magnitude higher than other primates.

The evolutionary value of an effective moral system seems overwhelmingly obvious to me, so it will be hard for me to see where we might disagree.

My claims are: it is human morality that is the basic set of rules for humans to interact. With the right morality, our interaction leads to superior cooperation, superior productivity, and superior numbers. Any one of these would be enough to give the humans with the right morality an evolutionary advantage over humans with a less effective morality. For example if we didn't have a highly developed respect for property, you couldn't hire workers to do as many things: you would spend too much protecting your property from them. If we didn't have such an orientation against doing violence against each other except under pretty limited circumstances, again, cooperative efforts would suffer a lot.

This is the reason why the difference between the human eye and the lion eye is not as significant as the difference between the human intuitions about morality and the lion's intuitions about morality.

It certainly seems the case that our moral intuitions align much better with dogs and primates than with lions.

But plenty of humans have decimated their enemies, and armies even to this day tend to rape every woman in sight.

comment by beriukay · 2012-07-12T08:29:46.515Z · LW(p) · GW(p)

But of course evolution made perception of the surroundings as wildly variable as morality. There are creatures with zero perception, and creatures with better vision (or heat perception, or magnetic or electric senses, or hearing or touch...) than we'll ever have. Even if humans were the only species with morality, arguing about variability doesn't hold much weight. How many things metabolize arsenic? There are all kinds of singular evolutions that this argument seems unable to handle, just because of the singularity of the case.

comment by Jack · 2012-07-09T14:03:39.615Z · LW(p) · GW(p)

So I certainly agree that facts about evolution don't imply moral facts. But the way you talk seems to imply you think there are other ways to discover moral facts. But I doubt there are objective justifications for human morality that are any better than justifications for lion morality. In terms of what we actually end up valuing, biological evolution (along with cultural transmission) is hugely important. Certainly a brain module for altruism is not a confirmation of altruistic normative facts. But if we want to learn about human morality as it actually exists (say, for the purpose of programming something to act accordingly) it seems very unlikely that we would want to neglect this research area.

comment by Zetetic · 2012-07-08T14:01:51.015Z · LW(p) · GW(p)

I initially wrote up a bit of a rant, but I just want to ask a question for clarification:

Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?

I'm worried that you don't because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)

Replies from: None, AlonzoFyfe
comment by [deleted] · 2012-07-08T15:46:56.772Z · LW(p) · GW(p)

IMO, what each of us values for ourselves may be relevant to morality. What we intuitively value for others is not.

I have to admit I have not read the metaethics sequences. From your tone, I feel I am making an elementary error. I am interested in hearing your response.

Thanks

Replies from: Zetetic, mwengler
comment by Zetetic · 2012-07-10T02:58:29.137Z · LW(p) · GW(p)

I'm not sure if it's elementary, but I do have a couple of questions first. You say:

what each of us values for ourselves may be relevant to morality

This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.

Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper-bound to what can be in the extensional definition of "morality" if "morality" is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter, and furthermore; what we can and do ascribe value to is dictated by neurology.

Not only that, but there is a well-known phenomenon that complicates naive (without input from neuroscience) moral decision making: the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy - we can only use a very finite amount of computational power to try and predict the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, there is the fact that human valuation is multi-layered - we have at least three valuation mechanisms, and their interaction isn't yet fully understood. Also see Glimcher et al., Neuroeconomics and the Study of Valuation. From that article:

10 years of work (that) established the existence of at least three interrelated subsystems in these brain areas that employ distinct mechanisms for learning and representing value and that interact to produce the valuations that guide choice (Dayan & Balliene, 2002; Balliene, Daw, & O’Doherty, 2008; Niv & Montague, 2008).

The mechanisms for choice valuation are complicated, and so are the constraints for human ability in decision making. In evaluating whether an action was moral, it's imperative to avoid making the criterion "too high for humanity".

One last thing I'd point out has to do with the argument you link to, because you do seem to be being inconsistent when you say:

What we intuitively value for others is not.

Relevant to morality, that is. The reason is that the argument cited rests entirely on intuition for what others value. The hypothetical species in the example is not a human species, but a slightly different one.

I can easily imagine an individual from a species described along the lines of the author's hypothetical reading the following:

If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to not eat their babies, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would not be good. We could not brag that humans evolved a disposition to be moral because morality would be whatever humans evolved a disposition to do.

And being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote if you haven't. It's quite an interesting (and relevant) short story.

So, I have a bit more to write but I'm short on time at the moment. I'd be interested to hear if there is anything you find particularly objectionable here though.

comment by mwengler · 2012-07-10T18:33:22.914Z · LW(p) · GW(p)

What would probably help is if you said what you thought was relevant to morality, rather than only telling us about things you think are irrelevant. It would make it easier to interpret your irrelevancies.

comment by AlonzoFyfe · 2012-07-09T11:00:33.042Z · LW(p) · GW(p)

Evolutionary Biology might be good at telling us what we value. However, as GE Moore pointed out, ethics is about what we SHOULD value. What evolutionary ethics will teach us is that our minds/brains are malleable. Our values are not fixed.

And the question of what we SHOULD value makes sense because our brains are malleable. Our desires - just like our beliefs - are not fixed. They are learned. So, the question arises, "Given that we can mold desires into different forms, what SHOULD we mold them into?"

Besides, evolutionary ethics is incoherent. "I have evolved a disposition to harm people like you; therefore, you deserve to be harmed." How does a person deserve punishment just because somebody else evolved a disposition to punish him?

Do we solve the question of gay marriage by determining whether the accusers actually have a genetic disposition to kill homosexuals? And if we discover they do, we leap to the conclusion that homosexuals DESERVE to be killed?

Why evolve a disposition to punish? That makes no sense.

What is this practice of praise and condemnation that is central to morality? Of deserved praise and condemnation? Does it make sense to punish somebody for having the wrong genes?

What, according to evolutionary ethics, is the role of moral argument?

Does genetics actually explain such things as the end of slavery, and a woman's right to vote? Those are very fast genetic changes.

The reason that the Euthyphro argument works against evolutionary ethics is that - regardless of what evolution can teach us about what we do value - it teaches us that our values are not fixed. Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer. Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires - resulting in an institution where the question of the difference between right and wrong is the question of the difference between what we should and should not praise or condemn.

Replies from: mwengler, Zetetic, None
comment by mwengler · 2012-07-10T19:22:23.196Z · LW(p) · GW(p)

Its lunchtime so for fun I will answer some of your rhetorical questions.

Evolutionary Biology might be good at telling us what we value. However, as GE Moore pointed out, ethics is about what we SHOULD value.

Unless GE Moore is either an alien or an artificial intelligence, he is telling us what we should value from a human brain that values things based on its evolution. How will he be able to make any value statement and tell you with a straight face that his valuing that thing has NOTHING to do with his evolution?

Besides, evolutionary ethics is incoherent. "I have evolved a disposition to harm people like you; therefore, you deserve to be harmed." How does a person deserve punishment just because somebody else evolved a disposition to punish him?

My disposition to harm people is triggered approximately in proportion to my judgement that a person has harmed or will harm me or someone I care about. My disposition doesn't speak, but neither does my disposition to presume, based on experience, that the sun will rise tomorrow. What the speaking part of me says about the second is that being able to predict the future based on the past is an incredibly effective way to understand the universe, so much so that it seems the universe's continuity from the past to the future is a feature of the universe, not just a feature of the tools my mind has developed to understand the universe. About my supposedly incoherent disposition to harm someone who is threatening my wife or my sister, I would invite you to consider life in a society where this disposition did not exist. Violent thieves would run roughshod over the non-violent, who would stand around naked, starving, and puzzled: "what can we do about this, after all?"

Do we solve the question of gay marriage by determining whether the accusers actually have a genetic disposition to kill homosexuals? And if we discover they do, we leap to the conclusion that homosexuals DESERVE to be killed?

This sentence seems somewhat incoherent but I'll address what I think are some of the interesting issues it evokes, if not quite brings up.

First, public open acceptance of homosexuality is a gigantic and modern phenomenon. If nothing else, it proves that an incredibly large number of humans DO NOT have any such genetic urge to kill homosexuals, or even to give them dirty looks when walking by them on the street, for that matter. So if there is a lesson here about concluding moral "oughts" from moral "is-es", it is that anybody who previously concluded that homicidal hatred of homosexuals was part of human genetic moral makeup was using insanely flawed methods for understanding genetic morality.

I would say that all attempts to derive ought from is, to design sensible rules for humans living and working together, should be approached with a great deal of caution and humility, especially given the clear tendency towards erroneous conclusions that may also be in our genes. But I would also say that any attempt at determining useful and valuable rules for living and working together which completely ignores what we might learn from evolutionary morality is "wrong" to do so, and that any additional human suffering that occurs because these people willfully ignore useful scientific facts is blood on their hands.

What is this practice of praise and condemnation that is central to morality? Of deserved praise and condemnation? Does it make sense to punish somebody for having the wrong genes?

Well, it makes sense to restrict the freedom of anybody who does more social harm than social good if left unrestrained. It doesn't matter whether the reason is bad genes or some other reason. We shoot a lion who is loose and killing suburbanites. You don't have to call it punishment, but what if you do? It is still a gigantically sensible and useful thing to do.

Many genes produce tendencies in people that are moderated by feedback from the world. I have a tendency to be really good at linear algebra and math and building electronic things that work. Without education this might have gone unnoticed. Without positive accolades, I might have preferred to play the electric guitar. Perhaps someone who has a tendency to pick up things he likes and keep them, or to strike out at people who piss him off, will have behavior which is also moderated by his genes AND his environment. Perhaps training him to get along with other people will be the difference between an incarcerated petty thief and a talented corporate raider or linebacker.

The thing that is central to morality is inducing moral behavior. Praise and condemnation are not central, they are two techniques which may or may not help meet that end, and given the fact that they have been enhanced by evolution, I'm guessing they actually do work in a lot of circumstances.

What, according to evolutionary ethics, is the role of moral argument?

Moral argument writ small is a band of humans hashing out how they will get along running on the savannah. This has probably been going on long enough to be evolutionarily meaningful. How do we share the meat and the furs from the animal we cooperatively killed? Who gets to have sex with whom, and how do we change that result to something we like better? What do we do about that guy who keeps pooping in the water supply? The evidence that "talking about it" is useful is the incredibly high level of cooperative organization that humans demonstrate as compared to any other animal. Social insects are the only creatures I know of that even come close, and their high levels of organization took tens or hundreds of thousands of years to refine, while the productivity of the human corporation, or of anything we have built using a steam engine or a transistor, has all been accomplished in 100 years or so.

Does genetics actually explain such things as the end of slavery, and a woman's right to vote? Those are very fast genetic changes.

Does genetics explain an artificial heart? The 4-minute mile? Walking on the moon without dying? The heart is evolved, as are our ability to run, our need to breathe, and our need for gravity. Without knowing exactly what the answer is, these non-moral and very recent examples bear a similar relationship to our genetics as do the recent moral examples in the question. Sorry to not answer this one, except by tangent.

Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.

What can answer it if evolutionary ethics cannot? A science fiction story like Jesus, Moses, or Scientology that everybody decides to pretend is a morally relevant truth?

ALL your moral questioning and intuitions about right and wrong, about the ability or lack of it for evolutionary investigations to provide answers, it seems to me it is all coming from your evolved brain interacting with the world. Which is what the brain evolved to do. By what reasoning are you able to separate your moral intuitions, which you seem to think are useful for evolving your moral values, from the moral intuitions your evolved brain makes?

Are you under the impression that it is the moral CONCLUSIONS that are evolved? It is not. The brain is a mechanism, some sort of information processor. Evolution occurs when a processor of one type outcompetes a processor of another type. The detailed moral conclusions reached by the mechanism that evolved are just that: new results coming from an old machine from some mixture of inputs, some of which are novel and some of which are same-old-same-old.

Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires - resulting in an institution where the question of the difference between right and wrong is the question of the difference between what we should and should not praise or condemn.

And you think this is somehow an alternative to an evolutionary explanation? Go watch the neurobiologists sussing out all the different ways that learning takes place in brains and see if you can tell me where the evolutionary part stopped, because to me it looked like learning algorithms are just beautifully evolved with a compactness which is exceptional, and still unduplicated in silicon, which is millions of times faster than brains.

That was fun. Lunch is over. Back to writing android apps.

comment by Zetetic · 2012-07-10T11:51:04.233Z · LW(p) · GW(p)

First, I do have a couple of nitpicks:

Why evolve a disposition to punish? That makes no sense.

That depends. See here for instance.

Does it make sense to punish somebody for having the wrong genes?

This depends on what you mean by "punish". If by "punish" you mean socially ostracize and disallow mating privileges, I can think of situations in which it could make evolutionary sense, although as we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense.

In any event, what you've written is pretty much orthogonal to what I've said; I'm not defending what you're calling evolutionary ethics (nor am I aware of indicating that I hold that view, if anything I took it to be a bit of a strawman). Descriptive evolutionary ethics is potentially useful, but normative evolutionary ethics commits the naturalistic fallacy (as you've pointed out), and I think the Euthyphro argument is fairly weak in comparison to that point.

The view you're attacking doesn't seem to take into account the interplay between genetic, epigenetic and cultural/memetic factors in how moral intuitions are shaped and can be shaped. It sounds like a pretty flimsy position, and I'm a bit surprised that any ethicist actually holds it. I would be interested if you're willing to cite some people who currently hold the viewpoint you're addressing.

The reason that the Euthyphro argument works against evolutionary ethics is that - regardless of what evolution can teach us about what we do value - it teaches us that our values are not fixed.

Well, really it's more neuroscience that tells us that our values aren't fixed (along with how the valuation works). It also has the potential to tell us to what degree our values are fixed at any given stage of development, and how to take advantage of the present degree of malleability.

Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.

Of course; under your usage of evolutionary ethics this is clearly the case. I'm not sure how this relates to my comment, however.

Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires

I agree that it's pretty obvious that social reinforcement is important because it shapes moral behavior, but I'm not sure if you're trying to make a central point to me, or just airing your own position regardless of the content of my post.

comment by [deleted] · 2012-07-09T11:52:59.561Z · LW(p) · GW(p)

Yay for my favorite ethicist signing up for LessWrong!

comment by TheOtherDave · 2012-07-06T01:29:29.063Z · LW(p) · GW(p)

Reading your edit... I believe that there exists some X such that X developed through natural selection, X does not depend on any particular knowledge, X can be investigated scientifically, and for any moral intuition M possessed by a human in the real world, there's a high probability that M depends on X such that if X did not exist, M would not exist either. (Which is not to say that X is the sole cause of M, or that two intuitions M1 and M2 can't both derive from X such that M1 and M2 motivate mutually exclusive judgments in certain real-world situations.)

The proper relationship of X to the labels "objective morality," "moral nihilism", "moral relativism," "Platonic Form of Good", "is statement" and "ought statement" is decidedly unclear to me.

comment by Dolores1984 · 2012-07-06T16:44:58.956Z · LW(p) · GW(p)

Well, an awful lot of what we think of as morality is dictated, ultimately, by game theory. Which is pretty universal, as far as I can tell. Rational-as-in-winning agents will tend to favor tit-for-tat strategies, from which much of morality can be systematically derived.
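
To make the tit-for-tat point concrete, here is a minimal sketch (my own illustration, not part of the original comment) of an iterated prisoner's dilemma in which tit-for-tat plays against unconditional strategies. The payoff numbers, the strategy set, and the round count are illustrative assumptions.

```python
# Iterated prisoner's dilemma sketch: tit-for-tat vs unconditional strategies.

PAYOFFS = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strat_a, strat_b, rounds=200):
    """Return the total payoff each strategy earns over an iterated game."""
    a_sees, b_sees = [], []          # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(a_sees)
        move_b = strat_b(b_sees)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        a_sees.append(move_b)
        b_sees.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {
        "tit_for_tat": tit_for_tat,
        "always_defect": always_defect,
        "always_cooperate": always_cooperate,
    }
    for name_a, a in strategies.items():
        for name_b, b in strategies.items():
            sa, sb = play(a, b)
            print(f"{name_a:>16} vs {name_b:<16} {sa:>4} : {sb}")
    # Tit-for-tat sustains full cooperation with cooperators and loses to a
    # defector by only one round's worth of payoff, which is roughly why
    # reciprocal strategies did so well in Axelrod's tournaments.
```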

Replies from: Lightwave
comment by Lightwave · 2012-07-07T10:43:56.359Z · LW(p) · GW(p)

from which much of morality can be systematically derived

Not all of it, though, because you still need some "core" or "terminal" values that you use to decide what counts as a win. In fact, all the stuff that's derived from game theory seems to be what we call instrumental values, and they're in some sense the less important ones; the larger portion of the arguments about morality end up being about those "terminal" values, if they even exist.

Replies from: None
comment by [deleted] · 2012-07-07T16:03:17.526Z · LW(p) · GW(p)

You are talking about different things. Dolores is talking about the "why should I cooperate instead of cheating" kind of morality. You, on the other hand, are talking about meta-ethics, that is: what is the meaning of right and wrong, what is value, etc.

Replies from: Dolores1984
comment by Dolores1984 · 2012-07-07T20:12:54.509Z · LW(p) · GW(p)

Indeed. Terminal values are also... pretty personal, in my book. Very similar across the whole of neurologically intact humanity, maybe, but if someone's are different from yours, good bloody luck talking them out of them.

comment by drethelin · 2012-07-05T21:06:00.799Z · LW(p) · GW(p)

I believe in objective relative moralities.

Replies from: TimS
comment by TimS · 2012-07-05T21:57:22.040Z · LW(p) · GW(p)

What does believing that entail?

Replies from: drethelin
comment by drethelin · 2012-07-05T22:47:48.419Z · LW(p) · GW(p)

I think different intelligent entities will have different values, but that it's objectively possible to determine what these are and what actions are correct for which ones. I also think most people's stated values are only an approximation of their actual values.

Replies from: TimS, hankx7787, Jack
comment by TimS · 2012-07-05T23:22:52.035Z · LW(p) · GW(p)

objectively possible to determine what these [values] are

I agree that it is possible to figure out an agent's terminal values by observing their behavior and such, but I don't understand what work the word "objectively" is doing in that sentence.

Replies from: Jack, drethelin
comment by Jack · 2012-07-06T00:29:35.709Z · LW(p) · GW(p)

I don't understand what work the word "objectively" is doing in that sentence.

Most people, as this thread has exhibited, don't understand what the word means, or at least not what it means in phrases like "objective moral facts".

Replies from: TimS
comment by TimS · 2012-07-06T01:28:12.950Z · LW(p) · GW(p)

Given the amount of discussion of applied-morality concepts like Friendliness and CEV, I had higher expectations.

comment by drethelin · 2012-07-06T02:30:28.761Z · LW(p) · GW(p)

Basically it means that even though moralities may be subjective I think statements like "that's wrong" or "that's the right thing to do" are useful, even if at base meaningless.

Replies from: mwengler
comment by mwengler · 2012-07-10T14:14:34.564Z · LW(p) · GW(p)

The idea that a meaningless statement can be useful represents a fundamental misunderstanding of what the word "meaningless" means.

If a statement is useful, it must have meaning, or else there would be nothing there to use.

Replies from: Jack
comment by Jack · 2012-07-10T14:30:34.009Z · LW(p) · GW(p)

I think he means "don't refer to anything" rather than "meaningless".

comment by hankx7787 · 2012-07-05T23:53:28.864Z · LW(p) · GW(p)

This is exactly what I mean.

comment by Jack · 2012-07-06T01:00:53.581Z · LW(p) · GW(p)

"Objective" means "mind-independent" so if you're looking at someone's mind to determine those values they're, by definition, subjective. When we use the words "objective" and "subjective" in meta-ethics we're almost always using them in this way and now questioning, say, whether or not there are objective facts about other people's minds.

Replies from: AlonzoFyfe, mwengler, Will_Newsome
comment by AlonzoFyfe · 2012-07-09T12:03:31.005Z · LW(p) · GW(p)

If "objective" is "mind independent", then are facts ABOUT minds not objective? We cannot have a science that discusses, for example, how the pre-frontal lobe functions because no such claim can be mind-independent?

For every so-called subjective statement, there is an objective statement that says exactly the same thing from a different point of view. If I say, "spinach, yumm", there is a corresponding objective statement, "Alonzo likes spinach", that says exactly the same thing.

So, why not just focus on the objective equivalent of every subjective statement? Why pretend that there is a difference that makes any difference?

Replies from: Jack
comment by Jack · 2012-07-09T12:56:34.327Z · LW(p) · GW(p)

Why pretend that there is a difference that makes any difference?

Because it makes a huge difference in our understanding of morality. "Alonzo expresses a strong distaste for murder" is a very different fact than "Murder is immoral" (as commonly understood), no?

ETA: Of course, given that I don't think facts like "murder is immoral" exist I'm all about focusing on the other kind of fact. But it's important to get concepts and categories straight because those two facts are not necessarily intensionally or extensionally equivalent.

Replies from: AlonzoFyfe
comment by AlonzoFyfe · 2012-07-09T22:20:04.245Z · LW(p) · GW(p)

Yes. "Water is made up of two hydrogen atoms and an oxygen atom" is a different fact than "the Earth and Venus are nearly the same size". It does not bring science to its knees.

Replies from: Jack
comment by Jack · 2012-07-09T23:48:10.659Z · LW(p) · GW(p)

And the next time someone says that there are astronomical facts about the chemical make-up of water I will correct them as well. Which is to say I don't know what your point is and can only imagine you think I am arguing for something I am not. Perhaps it's worth clarifying things before we get glib?

Replies from: AlonzoFyfe
comment by AlonzoFyfe · 2012-07-10T00:11:00.458Z · LW(p) · GW(p)

In which case, you will be making a point - not that there are different facts, but that there are different languages. Of course, language is an invention - and there is no natural law that dictates the definition of the word "astronomy".

It is merely a convention that we have adopted a language in which the term "astronomy" does not cover chemical facts. But we could have selected a different language - and there is no law of nature dictating that we could not.

And, yet, these facts about language - these facts about the ways we define our terms - do not cause science to fall to its knees either.

So, what are you talking about? Are you talking about morality, or are you talking about "morality"?

Replies from: Jack, mwengler
comment by Jack · 2012-07-10T00:46:53.442Z · LW(p) · GW(p)

It is merely a convention that we have adopted a language in which the term "astronomy" does not cover chemical facts.

I suppose that is true... but surely that doesn't render the word meaningless? In the actual world, where words mean the things they mean and not other things that they could have meant in a world with different linguistic conventions, "astronomy" still means something like "the study of celestial bodies", right? Surely people asking for astronomical facts about airplanes, as if they were celestial bodies, is a sign of confusion and ought to be gently corrected, no?

And, yet, these facts about language - these facts about the ways we define our terms - does not cause science to fall to its knees either.

Where in the world did you get the notion that I wanted science on its knees or that I thought it was? I'm as kinky as the next guy but I quite like science where it is. I'm completely bamboozled by this rhetoric. Do you take me for someone who believes God is required for morality or some other such nonsense? If so, let me be clear: moral judgments are neither natural nor supernatural objective facts. They are the projection of an individual's preferences and emotions that people mistake for externally existing things, much as people mistake cuteness for an intrinsic property of babies when in fact it is simply the projection of our affinity for babies that makes them appear cute-to-us. That does not mean that there are not facts about moral judgments, or that science is not on strong and worthy grounds when gathering such facts.

So, what are you talking about? Are you talking about morality, or are you talking about "morality"?

My chief concern in my initial comment to which you replied was getting everyone straight on what the meta-ethical terminology means. People enjoy freelancing with the meanings of words like "objective", "subjective", and "relative", and it creates a terrible mess when talking about metaethics because no one knows what anyone else is talking about. I didn't have any kind of straightforward factual disagreement with the original commenter, bracketing the fact that I wasn't quite sure what their position was and whether they in fact thought they had succeeded in solving a two-thousand-year-old debate by discovering an objective foundation for morality when they had in fact just rediscovered moral subjectivism with some choice bits of ev-psych thrown in. Note that hankx7787, at least, does seem to think Sam Harris has found an objective and scientific foundation for morality, so it seems this blustering isn't all semantics. Maybe words have meanings after all.

Replies from: AlonzoFyfe
comment by AlonzoFyfe · 2012-07-10T12:31:06.802Z · LW(p) · GW(p)

Here is the general form of my argument.

A person says, "X" is true of morality or of "moral judgments" in the public at large. This brings the talk of an objective morality to its knees. I answer that X is also true if science "or of "truth judgments" in the public at large. But it does not bring all talk of objectivity n science to its knees. Therefore, the original argument is invalid.

A case in point: whether something is moral depends on your definition of "moral". But there is no objective way to determine the correct definition of "moral". Therefore, there is no chance of an objective morality.

Well, whether Pluto is a planet depends on your definition of "planet". There is no way to determine an objectively correct definition of "planet". Yet, planetology remains a science.

Yes, many moral judgments are projections of an individual's likes and dislikes treated as intrinsic properties. But, then, many of their perceptions and observations are theory-laden. This does not eliminate the possibility of objectivity in science. We simply incorporate these facts about our perceptions into our objective account.

The original post to which I responded did not provide a helpful definition. Defining "subjective" as "mind independent" denies the fact that minds are a part of the real world, and we can make objectively true and false claims about minds. Values may not exist without minds, but minds are real. They are a part of the world. And so are values.

Every "subjective" claim has an "objective" claim that says exactly the same thing.

Replies from: Jack
comment by Jack · 2012-07-10T14:16:07.375Z · LW(p) · GW(p)

A case in point: whether something is moral depends on your definition of "moral". But there is no objective way to determine the correct definition of "moral". Therefore, there is no chance of an objective morality.

Well, whether Pluto is a planet depends on your definition of "planet". There is no way to determine an objectively correct definition of "planet". Yet, planetology remains a science.

Of course I never made such an argument, so this rebuttal is rather odd.

Your point of course leads to the question: what makes science objective? I would argue for two candidates, though some might say they are the same, and I'm happy to hear others. 1) Scientific theories make predictions about our future experiences, constraining them. When a scientific theory is wrong we have unexpected experiences which lead us to reject or revise that theory. 2) Science reveals the universe's causal structure, which gives us power to manipulate one variable which in turn alters another. If we are unable to do this as our theory expects, we reject or revise that theory. The process leads to ever-more effective theories which, at their limit, model objective reality. This, it seems to me, is how science is objective, though again, I'm happy to hear other theories. Now: what are the predictions a moral theory makes? What experiments can I run to test it?

Whether or not Pluto is a planet might not have an "objective definition" (whatever that means), but it sure as heck has an objective trajectory through space that can be calculated with precision using Newtonian physics. You can specify a date and an astronomer can tell you Pluto's position at that date. But there is no objective method for determining what a person should and should not do in an ethical dilemma.

Yes, many moral judgments are projections of an individual's likes and dislikes treated as intrinsic properties.

No, my position is that all moral judgments are these kinds of projections. If there were ones that weren't, I wouldn't be an anti-realist.

But, then, many of their perceptions and observations are theory-laden. This does not eliminate the possibility of objectivity in science. We simply incorporate these facts about our perceptions into our objective account.

This is neither here nor there. Scientific observations are theory-laden and to some point under-determined by the evidence. But ethical theories are in no way constrained by any evidence of any kind.

Defining "subjective" as "mind independent" denies the fact that minds are a part of the real world,

You mean "objective" and no it doesn't. It just denies that moral judgments are part of the world outside the mind.

and we can make objectively true and false claims about minds.

It does not deny that.

Values may not exist without minds, but minds are real. They are a part of the world. And so are values.

I agree that values exist, I just think they're subjective.

Every "subjective" claim has an "objective" claim that says exactly the same thing.

You're either using these words differently than I am or you're totally wrong.

I'm just gonna leave the wikipedia entry on ethical subjectivism here and see if that clarifies things for anyone.

comment by mwengler · 2012-07-10T14:22:14.840Z · LW(p) · GW(p)

I think the fact that astronomy means astronomy and not chemistry among rational conversationalists is as significant as the fact that the chess piece that looks sort of like a horse is the one rational chess players use as the knight.

I don't think there is anything particularly significant in almost all labels; their positive use is that you can manipulate concepts and report on your results to others using them.

Replies from: Jack
comment by Jack · 2012-07-10T14:26:50.286Z · LW(p) · GW(p)

But try to move your pawn like a knight and see what happens.

comment by mwengler · 2012-07-10T14:16:39.972Z · LW(p) · GW(p)

"Objective" means "mind-independent" so if you're looking at someone's mind to determine those values they're, by definition, subjective.

Not quite, I don't think. If you are looking at different well-functioning, well-informed minds to get the truth value of a statement, and you get different results from different minds, then the statement is subjective. If you can "prove" that all well-functioning, well-informed minds would give you the same result, then you have "proved" that the statement is objective.

In principle I could look at the mind of a good physicist to determine whether electrons repel each other, and the fact that my method for making the determination was to look at someone's mind would not be enough to change the statement "electrons repel each other" into a subjective statement.

Replies from: Jack
comment by Jack · 2012-07-10T14:21:55.817Z · LW(p) · GW(p)

It's not about the method of discovery but the truth-making features. You could look at the mind of a good physicist to determine whether electrons repel each other, but that's not what makes "electrons repel each other" true. In contrast, what makes a moral judgment true according to subjectivism is the attitudes of the person who makes the moral judgment.

comment by Will_Newsome · 2012-07-06T11:48:43.257Z · LW(p) · GW(p)

Does an ontologically privileged transcendental God count as a mind? 'Cuz you'd think meta-ethical theism counts as belief in objective moral truths. So presumably "mind-independent" means something like "person-mind-or-finite-mind-independent"?

Replies from: Jack
comment by Jack · 2012-07-06T13:12:39.365Z · LW(p) · GW(p)

Divine command theories of morality are often called "theological subjectivism". That's another example of a universal but subjective theory. But, say, Thomist moral theory is objectivist (assuming I understand it right).

Replies from: mwengler
comment by mwengler · 2012-07-10T14:17:45.419Z · LW(p) · GW(p)

That's funny, the wikipedia article listed 'most religiously based moral theories' as examples of moral realism.

Replies from: Jack
comment by Jack · 2012-07-10T14:24:42.850Z · LW(p) · GW(p)

Most religiously based moral theories aren't divine command theory, as far as I know.

comment by Vaniver · 2012-07-05T17:51:32.280Z · LW(p) · GW(p)

Objective morality? Yes, in the sense that game theory is objective. No, in the sense that payoff matrices are subjective.

comment by AlonzoFyfe · 2012-07-06T22:20:29.857Z · LW(p) · GW(p)

I believe it is possible to scientifically determine whether people generally have many and strong reasons to promote or inhibit certain desires through the use of social tools such as praise, condemnation, reward, and punishment. I also believe that this investigation would make sense of a wealth of moral practices, such as the three categories of action (obligation, prohibition, and non-obligatory permission), excuse, the four categories of culpability (intentional, knowing, reckless, negligent), supererogatory action, and - of course - the role of praise, condemnation, reward, and punishment.

comment by TCB · 2012-07-06T06:42:35.643Z · LW(p) · GW(p)

I agree with what seems to be the standard viewpoint here: the laws of morality are not written on the fabric of the universe, but human behavior does follow certain trends, and by analyzing these trends we can extract some descriptive rules that could be called morals.

I would find such an analysis interesting, because it'd provide insight into how people work. Personally, though, I'm only interested in what is, and I don't care at all about what "ought to be". In that sense, I suppose I'm a moral nihilist. The LessWrong obsession with developing prescriptive moral rules annoys me, because I'm interested in truth-seeking above all other things, and I've found that focusing on what "ought to be" distracts me from what is.

comment by Eugine_Nier · 2012-07-06T06:37:36.649Z · LW(p) · GW(p)

I suspect that there exists an objective morality capable of being investigated, but not using the methods commonly known as science.

What we currently think of as objective knowledge comes from one of two methods:

1) Start with self-evident axioms and apply logical rules of inference. The knowledge obtained from this method is called "mathematics".

2) The method commonly called the "scientific method". Note that thanks to the problem of induction the knowledge obtained using this method can never satisfy method 1's criterion for knowledge.

I suspect investigating morality will require a third method, and that the is-ought problem is analogous to the problem of induction in that it will stop moral statements from being scientific (just as scientific statements aren't mathematical) but ultimately won't prevent a reasonably objective investigation of morality.

Replies from: pragmatist
comment by pragmatist · 2012-07-06T12:21:04.775Z · LW(p) · GW(p)

This is pretty close to my view on the matter.

comment by Nornagest · 2012-07-06T02:45:39.852Z · LW(p) · GW(p)

I'd be extremely surprised if there turned out to be some Platonic ideal of a moral system that we can compare against. But it seems fairly clear to me that the moral systems we adopt influence factors which can be objectively investigated, i.e. happiness in individuals (however defined) or stability in societies, and that moral systems can be productively thought of as commensurable with each other along these axes. Since some aspects of our emotional responses are almost certainly innate, it also seems clear to me that the observable qualities of moral systems depend partly on more or less fixed qualities rather than the internal architecture of the moral system in question.

However, it seems unlikely to me that all of these fixed qualities are human universals, i.e. that there are going to be universally relevant "is" values from which we can derive solutions to arbitrary "ought" questions. Certain points within human mind-design-space are likely to respond differently than others to given moral systems, at least on the object level. Additionally, I think it's unlikely that the observable output of moral systems depends purely on their hosts' fixed qualities: identity maintenance and related processes set up feedback loops, and we can also expect other active moral systems nearby to play a role in their mutual success.

I'd expect, but cannot prove, the success of a moral system in guaranteeing the happiness of its adherents or the stability of their societies to be governed more by local conditions and biology (species-wide or of particular humans) and less by game-theoretic considerations. Conversely, I'd expect the success of a moral system in handling other moral systems to have more of a game-theoretic flavor, and higher meta-levels to be more game-theoretic still.

I have no idea where any of this places me in the taxonomy of moral philosophy.

comment by Sly · 2012-07-06T01:45:42.258Z · LW(p) · GW(p)

I am an anti-realist, and pretty much find myself agreeing with DanArmak and Jack.

comment by mwengler · 2012-07-06T08:48:15.986Z · LW(p) · GW(p)

Until a few days ago I would have said I'm a nihilist, even though a few days ago I didn't know that was the label for someone who didn't believe that moral statements could be objective facts.

Now I would say a hearty "I don't know" and assign almost a 50:50 chance that there are objective moral "ought" statements.

Then in the last few days I was reminded that 1) scientific "objective facts" are generally dependent on unprovable assumptions, like the utility of induction, and the idea that what a few electrons did last Thursday can be generalized to a bunch of electrons we haven't looked at (and won't likely ever look at) and generalized into the future. That is, I do think it is a "scientific fact" that electrons in Alpha Centauri will repel each other next week. 2) 2000 or 3000 years ago, people's intuitions were for the most part that "science" and "morality" were "real," that is, objective facts that we spent some effort trying to figure out and understand. And back then, we did not have that much more success with the scientific ones than with the moral ones. But since then we got REALLY REALLY GOOD at science. So now, I look at scientific facts and I look at moral "facts" and I think, wow, science is clearly head and shoulders above morality in reliability, so we really should use a weaker term for moral statements, like "opinion" or "preference" instead of "truth."

But I cannot know that morality will not finally catch up with science. Certainly our knowledge of how the mind works is being advanced quickly right now.

As to the "is-ought" divide, it SEEMS impassable. But so does/is the "I believe what I saw" and "it applies to the future and to other things I didn't see." divide (induction). If I am willing to cross the induction divide on faith because science works so well, it is possible that moral understanding will get that good, and I will have as much reason to cross the is-ought divide on faith then as I have to cross induction divide on faith now. My basis for crossing the is-ought divide would be essentially a demonstration that people who crossed the divide were making all the moral progress, and the fruit of the moral progress was quite outstanding. That is, analagous to my reasons for crossing the induction divide in science is all the technology that people who cross that divide can build. If we get to a point where moral progress is that great, and the people making the progress crossed the is-ought divide, then I'll have to credit it.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-07-06T21:30:54.046Z · LW(p) · GW(p)

unprovable assumptions, like the utility of induction

One might call induction an "undeniable assumption" instead: we cannot do without it; it's part of what we are. As a matter of human nature (and, indeed, the nature of all other animals (and computer programs) capable of learning), we do use induction, regardless of whether we can prove it. Some of the best evidence for induction might be rather crudely anthropic: we, who implicitly and constantly use induction, are here; creatures with anti-inductive priors are not here.

comment by buybuydandavis · 2012-07-06T00:58:10.039Z · LW(p) · GW(p)

Objective morality is like real magic - people who want real magic just aren't satisfied with the magic that's real.

comment by Manfred · 2012-07-05T17:14:35.149Z · LW(p) · GW(p)

Moral relativism all the way. I mean something by morality, but it might not be exactly the same as what you mean.

Of course, moral relativism doesn't single out anything (like changing other people) that you shouldn't do, contrary to occasional usage - it just means you're doing so for your own reasons.

Nor does it mean that humans can't share pretty much all their algorithms for finding goals, due to a common heritage. And this would make humans capable of remarkable agreement about morality. But to call that an objective morality would be stretching it.

comment by [deleted] · 2012-07-06T05:33:55.937Z · LW(p) · GW(p)

The capability to be scientifically investigated is entirely divorced from the existence or discoverability of the thing scientifically investigated. We can devote billions of years and dollars in searching for things that do not and never did and never will exist. If investigation guaranteed existence, I'd investigate a winning lottery ticket in my wallet twice a day.

"Might is Right" is a morality in which there is no distinction between is and ought.

comment by timtyler · 2012-07-06T00:55:54.864Z · LW(p) · GW(p)

Natural selection favours some moralities more than others. The ones we see are those that thrive. Moral relativism mostly appears to ignore such effects.

Replies from: TimS
comment by TimS · 2012-07-06T01:21:19.308Z · LW(p) · GW(p)

It isn't an accident that there are no universal mandatory incest moralities out there in recorded history. That's not actually enough to prove moral realism is true.

In short, there are universal morally relevant human preferences created by evolution (i.e. hunger, sex drive). That doesn't show that evolutionarily created preferences resolve all moral dilemmas.

Replies from: mwengler, timtyler
comment by mwengler · 2012-07-10T14:31:54.282Z · LW(p) · GW(p)

In short, there are universal morally relevant human preferences created by evolution (i.e. hunger, sex drive). That doesn't show that evolutionarily created preferences resolve all moral dilemmas.

The scientific method has hardly resolved all scientific dilemmas. So if there are real things in science, 'resolving all dilemmas' is not a requirement for scientific realism, so it would seem it shouldn't be a requirement for moral realism.

"Descriptive" statements about morality (e.g. 'some, but not all, people think incest is wrong') is objective. The only real question is whether "normative" ethics can be objective. 'people think incest is wrong' is a descriptive statement. 'incest is wrong' is a normative statement. The moral realism question is really whether any normative statement can be objectively true. The intuition pump for thinking "maybe yes" comes not from incest statements, but rather I think from statements like "humans shouldn't pick an 8 year old at random and chop off his limbs with a chainsaw just to see what that looks like." Incest statements are like pumping your intuition about scientific realism by considering statements like "Wave Function Collapse is how we get probabilistic results in real experiments." If you are wondering whether there is ANY objective truth, start with the obvious ones like "the sun will rise tomorrow" and "hacking the arms of reasonably chosen children to see what that looks like is wrong."

Replies from: TimS
comment by TimS · 2012-07-10T16:16:59.496Z · LW(p) · GW(p)

It's hard for me to reconcile this statement with your response to timtyler above. I agree with your response to him. But consider the following assertion:

"Every (moral) decision a human will face has a single choice that is most consistent with human nature."

To me, that position implies that moral realism is true. If you disagree, could you explain why?

comment by timtyler · 2012-07-06T10:23:21.338Z · LW(p) · GW(p)

Resolving all moral dilemmas is not really part of moral realism, though.

For instance, checking with S.E.P., we have:

[Moral realists] hold, at least some moral claims actually are true. That much is the common (and more or less defining) ground of moral realism.

It's easy to come up with some true claims about morality. Morality is part of biology - and is thus subject to the usual forms of scientific enquiry.

Replies from: mwengler
comment by mwengler · 2012-07-10T14:37:04.871Z · LW(p) · GW(p)

It's easy to come up with some true claims about morality. Morality is part of biology - and is thus subject to the usual forms of scientific enquiry.

This is a bad example, because "moral realism" really refers to normative moral statements, not descriptive ones.

I don't think there is any interesting controversy in describing what people think is wrong. The interesting controversy is whether anything is actually wrong or not. The problem with "Morality is part of biology" is that it is ambiguous at best; many people would see that as a descriptive statement, not one telling them "therefore you ought to do what your biology tells you to do."

Best to work with unambiguous statements since the requirement is "at least some."

"Killing randomly chosen children to see what it feels like is wrong" is a normative moral statement, that if objectively true means morality realists are right.

"Most people think killing randomly chosen childern to see what it feels like is wrong" is a descriptive statement that is objective, but doesn't tell you whether you ought to kill randomly chosen children or not.

Replies from: timtyler
comment by timtyler · 2012-07-10T22:40:58.660Z · LW(p) · GW(p)

I think, at this point, a scientist would ask what you actually meant by normative moral statements. I.e. what you mean by "right" and "wrong". I figure if you are sufficiently clear about that, the issue is dissolved, one way or the other.

Replies from: mwengler
comment by mwengler · 2012-07-11T00:14:52.959Z · LW(p) · GW(p)

Would that scientist be hoping I had something to add beyond what is in wikipedia? Because unless that scientist tells me what he doesn't understand about normative moral statements in philosophy that isn't easily found on the web, I wouldn't know how to improve on the wikipedia article.

Replies from: timtyler
comment by timtyler · 2012-07-11T01:10:02.793Z · LW(p) · GW(p)

The article offers a bunch of conflicting definitions - from social science, economics and elsewhere. Until there's a properly-formed question, it's hard to say very much about the answer.

Replies from: mwengler
comment by mwengler · 2012-07-11T04:48:56.302Z · LW(p) · GW(p)

OK, here you go then.

In philosophy, normative statements affirm how things should or ought to be, how to value them, which things are good or bad, which actions are right or wrong. Normative is usually contrasted with positive (i.e. descriptive, explanatory, or constative) claims when describing types of theories, beliefs, or propositions. Positive statements are factual statements that attempt to describe reality.

For example, "children should eat vegetables", and "those who would sacrifice liberty for security deserve neither" are normative claims. On the other hand, "vegetables contain a relatively high proportion of vitamins", "smoking causes cancer", and "a common consequence of sacrificing liberty for security is a loss of both" are positive claims. Whether or not a statement is normative is logically independent of whether it is verified, verifiable, or popularly held.

http://en.wikipedia.org/wiki/Normative#Philosophy

Replies from: timtyler
comment by timtyler · 2012-07-11T23:31:28.937Z · LW(p) · GW(p)

That doesn't help too much with classifying things into categories of "right" and "wrong". Either one defines these terms as relative to some unspecified agent's preferences, or one gives them a naturalistic definition - e.g. as the preferences associated with universal instrumental values. Then there's the issue of which type of definition is more practical or useful.

Replies from: mwengler
comment by mwengler · 2012-07-12T14:41:20.653Z · LW(p) · GW(p)

My point a few comments ago was that moral realism is the theory that moral statements are real, not that statements about morality are real. Statements about unicorns are real: "unicorns are cute white horses with pointy horns that can only be seen by virgins" is a real statement about unicorns. Unicorns are NOT real.

Any argument or disagreement in this chain arises from what is purely some sort of disagreement about how to use some terms. I don't mean to suggest that the content of moral realism or normative vs descriptive is right or true or real, but I do have rather a thing about using words and terms and other labels in the standard ways they have been used.

For whatever reason, @timtyler considers the standard definitions of either moral relativism or normative to be nonsensical or incomplete or problematic in some serious way. Bully for him. In my opinion, it makes no sense to argue against what the standard definition of various terms are by pointing out that the concepts defined have problems.

Rather than redefining words like moral realism and normative that have quite a long history of meaning what wikipedia describes pretty clearly they mean, I suggest that people who want to create better concepts than these should call them something else, and not argue that the standard definitions are not the standard definitions because they are stupid or wrong or whatever.

comment by RobertLumley · 2012-07-05T17:23:52.267Z · LW(p) · GW(p)

I do not like the word "morality". It is very ambiguously defined. The only version of morality that I even remotely agree with is consequentialism/utilitarianism. I can't find compelling arguments for why other people should think this way though, and I think morality ultimately comes down to people arguing about their different terminal values, which is always pointless.

Replies from: shminux, DanArmak
comment by shminux · 2012-07-05T22:52:23.038Z · LW(p) · GW(p)

morality ultimately comes down to people arguing about their different terminal values, which is always pointless.

Arguing about them may not be fruitful, but exposing truly terminal values tends to be illuminating even to the person instinctively and implicitly holding certain values as terminal.

comment by DanArmak · 2012-07-05T19:04:51.302Z · LW(p) · GW(p)

Arguing about terminal values is pointless, of course. But morality does have a good and useful definition, although some people use the word differently, which muddles the issue.

'Morality' (when I use the word) refers to a certain human behavior (with animal analogues). Namely, the judging of (human) behaviors and actions as 'right' or 'wrong'. There are specific mechanisms for that in the human brain - for specific judgments, but more so for feeling moral 'rightness' or 'wrongness' even when the judgments are largely culturally defined. These judgments, and consequent feelings and reactions, are a human universal which strongly influences behavior and social structures. And so it is interesting and worthy of study and discussion. Furthermore, since humans are capable of modifying their behavior to a large extent after being verbally convinced of new claims, it is worthwhile to discuss moral theories and principles.

Replies from: buybuydandavis, RobertLumley
comment by buybuydandavis · 2012-07-07T23:33:14.594Z · LW(p) · GW(p)

You're about where I am.

We have some built-in valuations of behavior, and reactions to those evaluations. Humans, like animals, judge behavior with varying degrees of approval/disapproval, including rewards/punishments. Where we are likely different from animals is that we judge higher order behavior as well - not just the behavior, but the moral reaction to the behavior, then the moral reaction to the moral reaction to the behavior, etc.

Of course morality is real and can be studied scientifically, just like anything else about us. The first thing to notice on studying it is that we don't have identical morality, just as we don't have identical genes or identical histories. Some of the recent work on morality shows it comes in certain dimensions - fairness, autonomy, purity, group loyalty, etc. - and people tend to weigh these different factors consistently in their own judgments, but differently compared to the judgments of others. I interpret that as us having relatively consistent pattern matching algorithms that identify dimensions of moral saliency, but less consistent weighting of those different dimensions.

The funny thing is that what is termed "objective morality" is transparently nonsense once you look scientifically at morality. We're not identical - obviously. We don't have identical moralities - obviously. Any particular statistic of all actual human moralities, for any population of humans, will be just one of infinitely many possible statistics - obviously. The attempt to "scientifically" identify the One True Statistic, a la Harris, is nonsense on stilts.
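
To make the "consistent pattern matching, inconsistent weighting" picture concrete, here is a minimal sketch of my own (not from the research alluded to above): the same saliency features for an action, combined with different per-person weights, yield different overall judgments. The dimension names echo the ones mentioned above; every number and name is invented for illustration.

```python
# Same moral-saliency features, different personal weights -> different judgments.

ACTION_FEATURES = {   # how strongly a hypothetical action triggers each dimension
    "fairness": 0.8,
    "autonomy": 0.2,
    "purity": 0.6,
    "group_loyalty": 0.1,
}

PERSON_WEIGHTS = {    # how much each hypothetical person cares about each dimension
    "alice": {"fairness": 0.9, "autonomy": 0.7, "purity": 0.1, "group_loyalty": 0.3},
    "bob":   {"fairness": 0.4, "autonomy": 0.2, "purity": 0.9, "group_loyalty": 0.8},
}

def judgment(features, weights):
    """Weighted sum of saliency features: higher means 'more wrong' to that person."""
    return sum(features[dim] * weights[dim] for dim in features)

for person, weights in PERSON_WEIGHTS.items():
    print(person, round(judgment(ACTION_FEATURES, weights), 2))
```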

comment by RobertLumley · 2012-07-05T20:43:36.395Z · LW(p) · GW(p)

That's a meta-level discussion of morality, which I agree is perfectly appropriate. But unless someone is already a utilitarian, very few, if any, arguments will make ey one.

Replies from: DanArmak
comment by DanArmak · 2012-07-05T20:59:01.719Z · LW(p) · GW(p)

Why would I want to make someone a utilitarian? I'm not even one myself. I am human; I have different incompatible goals and desires which most likely don't combine into a single utility function in a way that adds up to normality.

comment by confusecious · 2012-07-08T21:50:19.163Z · LW(p) · GW(p)

A creator would be most concerned about the continuing existence of protoplasm over the changing situations it would incur over the millennia. For the non-thinking organisms, ecological effects would predominate. Sociological precepts would influence the evolution of apparent morality, then idealized by the religious and the philosophical. A scientific study of morality would involve correlations between anticipated historical outcomes and societal values.