Strong moral realism, meta-ethics and pseudo-questions.
post by Roko · 2010-01-31T20:20:47.159Z · LW · GW · Legacy · 181 comments
On Wei_Dai's complexity of values post, Toby Ord writes:
There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.
There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.
The kind of moral realist positions that apply Occam's razor to moral beliefs are a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position that I used to have some degree of belief in is:
Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.
But most modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts", for varying definitions of "fact" that typically fade away into meaninglessness on closer examination and actually make the same empirical predictions as antirealism.
Suppose you take up Eliezer's "realist" position: arrangements of spacetime, matter and energy can be "good" in the sense that Eliezer has a "long-list" style definition of goodness up his sleeve, one that decides even contested object-level moral questions (like whether abortion should be allowed), and that, applied to any arrangement of spacetime, matter and energy, notes to what extent it fits the criteria on the long list and decrees goodness or not (possibly as a scalar rather than a binary value).
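To make the "long list" picture concrete, here is a minimal sketch (an editor's illustration only; the criteria, weights and names are hypothetical placeholders, not anything Eliezer has actually written down). Goodness is just a check against a fixed set of criteria, returning a scalar score:

```python
# A toy "long-list" goodness evaluator (hypothetical entries and weights).
from typing import Callable, Dict, List, Tuple

WorldState = Dict[str, object]  # stand-in for an arrangement of matter and energy

LONG_LIST: List[Tuple[Callable[[WorldState], bool], float]] = [
    (lambda w: bool(w.get("babies_saved", 0)), 1.0),
    (lambda w: not w.get("gratuitous_suffering", False), 2.0),
    # ... thousands more entries; nothing rewards keeping this list short
]

def goodness(world: WorldState) -> float:
    """Scalar 'goodness': how well the world fits the fixed list of criteria."""
    return sum(weight for criterion, weight in LONG_LIST if criterion(world))

print(goodness({"babies_saved": 3}))  # 3.0: meets both criteria above
```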
This kind of "moral realism" behaves, to all intents and purposes, like antirealism.
- You don't favor shorter long-list definitions of goodness over longer ones. The criteria for choosing the list have little to do with its length, and more to do with what a human brain emulation, with such-and-such modifications to make it believe only and all relevant true empirical facts, would decide once it had reached reflective moral equilibrium.
- Agents who have a different "long list" definition cannot be moved by the fact that you've declared your particular long list "true goodness".
- There would be no reason to expect alien races to have discovered the same long list defining "true goodness" as you.
- An alien with a different "long list" than you, upon learning the causal reasons for the particular long list you have, is not going to change their long list to be more like yours.
- You don't need to use probabilities and update your long list in response to evidence; quite the opposite: you want it to remain unchanged.
I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for "fact" that seems to be allowed to have a value even once all of the empirically grounded surrounding concepts have been fixed. These might be concepts such as "Would aliens also think this thing?", "Can it be discovered by an independent agent who hasn't communicated with you?", "Do we apply Occam's razor?", etc.
Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only one left is the ungrounded category marker itself, and some people like to stick this on their object level morals and call themselves "realists".
Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map we should argue about are parts that map onto a part of the territory.
181 comments
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T20:31:18.035Z · LW(p) · GW(p)
I think there's an ambiguity between "realism" in the sense of "these statements I'm making about 'what's right' are answers to a well-formed question and have a truth value" and "the subject matter of moral discourse is a transcendent ineffable stuff floating out there which compels all agents to obey and which could make murder right by having a different state". Thinking that moral statements have a truth value is cognitivism, which sounds much less ambiguous to me, and that's why I prefer to talk about moral cognitivism rather than moral realism.
As a moral cognitivist, I would look at your diagram and disagree that the Baby-Eating Aliens and humans have different views of the same subject matter, rather, we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.
I have a pending post-to-write on how, to the best of my knowledge, there are only two sorts of things that can make a proposition "true", namely physical events and logical implications, and of course mixtures of the two. I mention this because we have a legitimate epistemic preference for simpler hypotheses about the causes of physical events, but no such thing as an epistemic preference for "simpler axioms" when we are talking about logical facts. We may have an aesthetic preference for simpler axioms in math, but that is not the same thing. If there's no preference for simpler assumptions, that doesn't mean the issue is not a factual one, but it may suggest that we are dealing with logical facts rather than physical facts (statements which are made true by which conclusions follow from which premises, rather than the state of a causal event).
Added: Since I have a definite criterion for something being a "fact", I defend the notion of fact-ness against the charge of being a floating extra.
Replies from: komponisto, Furcas, gregconen, Roko, Roko↑ comment by komponisto · 2010-01-31T21:12:59.145Z · LW(p) · GW(p)
I think there's an ambiguity between "realism" in the sense of "these statements I'm making are answers to a well-formed question and have a truth value" and "morality is a transcendent ineffable stuff floating out there which compels all agents to obey and could make murder right by having a different state".
Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"
Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".
I don't recall exactly, and I haven't yet bothered to look it up, but I believe when you first introduced your metaethics, there were people (myself among them, I think) who objected not to your actual meta-ethical views, but to the way you vigorously denied that you were a "relativist"; and you misunderstood them/us as objecting to the theory itself (I think you may even have thrown in an accusation of not comprehending the logical subtleties of Löb's Theorem).
What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans. Thus, it is automatically subject to the "chauvinism" objection with respect to e.g. Babyeaters: we prefer one thing, they prefer another -- why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer". But people find that answer unpalatable -- and one reason they might is because it would seem to imply that different human cultures should similarly run right over each other if they don't think they share the same values. Now, we may not like the term "relativism", but it seems to me that this "chauvinism" objection is one that you (and I) need to take at least somewhat seriously.
Replies from: Eliezer_Yudkowsky, Nick_Tarleton, Roko, blacktrance, Vladimir_Nesov, Unknowns↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T22:52:38.660Z · LW(p) · GW(p)
Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view.
No, it's not. The naive, common-sense human view is that sneaking into Jane's tent while she's not there and stealing her water-gourd is "wrong". People don't end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion - that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong - is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.
What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans
I agree that this constitutes relativism, and deny that I am a relativist.
why should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer".
See above. The correct answer is "Because children shouldn't die, they should live and be happy and have fun." Note the lack of any reference to humans - this is the sort of logical fact that humans find compelling, but it is not a logical fact about humans. It is a physical fact that I find that logic compelling, but this physical fact is not, itself, the sort of fact that I find compelling.
This is the part of the problem which I find myself unable to explain well to the LessWrongians who self-identify as moral non-realists. It is, admittedly, more subtle than the point about there not being transcendent ineffable stuff, but still, there is a further point and y'all don't seem to be getting it...
Replies from: komponisto, ciphergoth↑ comment by komponisto · 2010-02-01T01:15:37.790Z · LW(p) · GW(p)
I agree that this constitutes relativism, and deny that I am a relativist.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
I have the same feeling, from the other direction.
I feel like I completely understand the error you're warning against in No License To Be Human; if I'm making a mistake, it's not that one. I totally get that "right", as you use it, is a rigid designator; if you changed humans, that wouldn't change what's right. Fine. The fact remains, however, that "right" is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn't decide to single out and call "right", and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, "This is a nice thing we've got going here; let's preserve it."
Yes, of course that doesn't constitute a general license to look at the brains of whatever species you happen to be a member of to decide what's "right"; if the Babyeaters or Pebblesorters did this, they'd get the wrong answer. But that doesn't change the fact that there's no way to convince Babyeaters or Pebblesorters to be interested in "rightness" rather than babyeating or primality. It is this lack of a totally-neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.
And yes, of course, it's a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness -- that's why moral realism is a mistake!
↑ comment by Paul Crowley (ciphergoth) · 2010-01-31T23:19:20.339Z · LW(p) · GW(p)
I promise to take it seriously if you need to refer to Löb's theorem in your response. I once understood your cartoon guide and could again if need be.
If we concede that when people say "wrong", they're referring to the output of a particular function to which we don't have direct access, doesn't the problem still arise when we ask how to identify what function that is? In order to pin down what it is that we're looking for, in order to get any information about it, we have to interview human subjects. Out of all the possible judgment-specifying functions out there, what's special about this one is precisely the relationship humans have with it.
↑ comment by Nick_Tarleton · 2010-01-31T21:48:51.060Z · LW(p) · GW(p)
Yes -- and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It's what people are automatically going to think you're talking about if you go around shouting "Yes Virginia, there are moral facts after all!"
Agreed that this is important. (ETA: I now think Eliezer is right about this.)
Meanwhile, the general public has a term for the view that you and I share: they call it "moral relativism".
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands "moral relativism" to exclude (b), and I don't think there's any short term in common (not philosophical) usage that includes the conjunction of (a) and (b).
What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.
Eliezer doesn't define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans. See No License to be Human.
Replies from: komponisto↑ comment by komponisto · 2010-01-31T22:05:26.443Z · LW(p) · GW(p)
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands "moral relativism" to exclude (b)
I think that's uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person's mouth can ever be wrong is scarcely worth discussing.
Eliezer doesn't define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans.
The fact that this computation just so happens to be instantiated by humans and nothing else in the known universe cannot be a coincidence; surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.
Replies from: Alicorn, Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T23:05:52.357Z · LW(p) · GW(p)
surely there's a causal relation between humans' instantiating the computation and Eliezer's referring to it.
Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification. We shouldn't save babies because-morally it's the human thing to do but because-morally it's the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."
Replies from: Tyrrell_McAllister, ciphergoth, byrnema, komponisto↑ comment by Tyrrell_McAllister · 2010-01-31T23:36:40.839Z · LW(p) · GW(p)
The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like "If we wanted to eat babies, then that would be the right thing to do."
The moral relativist who says that doesn't really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say "morality" where other relativists would say "the morality that humans in fact have".
You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn't take it to mean that they were wrong to be relativists.
If there is an advantage to the relativists' use of "morality", it is that their use doesn't prejudge the question of whether all humans implement the same compulsive logic.
Replies from: Rain↑ comment by Paul Crowley (ciphergoth) · 2010-01-31T23:24:23.861Z · LW(p) · GW(p)
Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you're part of.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T23:39:49.693Z · LW(p) · GW(p)
Yup, and so long as I'm going to be a moral absolutist anyway, why be that sort of moral absolutist?
↑ comment by byrnema · 2010-02-01T00:01:29.111Z · LW(p) · GW(p)
I agree that it seems as though I just don't understand. Sometimes, I feel perched on the edge of understanding, feel a little dizzy, and decide I don't understand.
I don't claim to be representative in any way, but my stumbling block seems to be this idea about how saving babies is right. Since I don't feel strongly that saving babies is "right", whenever you write, "saving babies is the right thing to do", I translate this as, "X is the right thing to do" where X is something that is right, whatever that might mean. I leave that as a variable to see if it gets answered later.
Then you write, "What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness - in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness."
How is wrongness or rightness baked into a subject matter?
↑ comment by komponisto · 2010-02-01T01:37:58.358Z · LW(p) · GW(p)
Of course there's a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it's not appealed to as the moral justification
Of course it isn't, because we're doing meta-ethics here, and don't yet have access to the notion of "moral justification"; we're in the process of deciding which kinds of things will be used as "moral justification".
It's your metamorality that is human-dependent, not your morality; see my other comment.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T01:44:33.472Z · LW(p) · GW(p)
Now I'm confused. I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.
Since we don't have conscious access to our premises, and we haven't finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that's not like a philosopher of pure emptiness constructing justificationness from scratch by appealing to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)
Replies from: komponisto, TheAncientGeek↑ comment by komponisto · 2010-02-01T03:53:06.367Z · LW(p) · GW(p)
I don't understand how you can have preferences that you use to decide what ought to count as a "moral justification" without already having a moral reference frame.
Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you'll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn't mean you're appealing to the conclusion you're trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.
Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn't necessarily refer explicitly to "humans" as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn't imply that the AI itself is appealing to "human values" in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.
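A minimal sketch of the distinction komponisto is drawing, with entirely hypothetical names: the object-level program the AI runs never mentions humans, while the meta-level step that selected that program consulted (something like) the programmers' own brains.

```python
# Editor's sketch (hypothetical names): object level vs. meta level.

def morality_algorithm(world_state: dict) -> str:
    """Object level: computes the action to take; makes no reference to humans."""
    return "save_baby" if world_state.get("baby_in_danger") else "do_nothing"

def program_the_ai(programmer_brain: dict):
    """Meta level: the programmers look at what their own brains find compelling
    in order to decide which object-level algorithm to write into the AI."""
    if programmer_brain.get("finds_compelling") == "saving babies":
        return morality_algorithm   # selected because of facts about humans
    raise ValueError("these programmers would have built something else")

ai_policy = program_the_ai({"finds_compelling": "saving babies"})
print(ai_policy({"baby_in_danger": True}))  # 'save_baby'; no 'human' appears in the computation
```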
↑ comment by TheAncientGeek · 2014-05-29T15:01:09.322Z · LW(p) · GW(p)
That would be epistemic preferences. It's epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.
↑ comment by Roko · 2010-01-31T21:38:17.284Z · LW(p) · GW(p)
why should we do what we prefer rather than what they prefer?
Because Eliezer made the ingenious move of redefining "should" to mean "do what we prefer".
It is an internally consistent way of using language; it is just somewhat unusual.
Replies from: Eliezer_Yudkowsky, timtyler, Nick_Tarleton↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T23:02:26.966Z · LW(p) · GW(p)
I truly and honestly say to you, Roko, that while you got most of my points, maybe even 75% of my points, there seems to be a remaining point that is genuinely completely lost on you. And a number of other people. It is a difficult point. People here are making fun of my attempt to explain it using an analogy to Löb's Theorem, as if that was the sort of thing I did on a whim, or because of being stupid. But... my dear audience... really, by this point, you ought to be giving me the benefit of the doubt about that sort of thing.
Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
Replies from: SilasBarta, Roko↑ comment by SilasBarta · 2010-01-31T23:21:49.336Z · LW(p) · GW(p)
Well, you did make a claim about what is the right translation when speaking to babyeaters:
we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating
But there has to be some standard by which you prefer the explanation "we mistranslated the term 'morality'" to "we disagree about morality", right? What is that? Presumably, one could make your argument about any two languages, not just ones with a species gap:
"We and Spaniards are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as "morality" in both cases. Morality is about how to protect freedoms, not restrict them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the Spaniards would agree with us about what is moral, we would agree with them about what is familydutyhonoring."
ETA: A lot of positive response to this, but let me add that I think a better term in the last place would be something like "morality-to-Spaniards". The intuition behind the original phrasing was to show how you can redefine Spanish standards of morality as not "morality", but rather just "things that we place different priority on".
But it's clearly absurd there: the correct translation of ética is not "ethics-to-Spaniards", but rather, just plain old "ethics". And the same reasoning should apply to the babyeather case.
Replies from: gregconen↑ comment by gregconen · 2010-02-02T13:38:02.039Z · LW(p) · GW(p)
To go a step further, moral disagreement doesn't require a language barrier at all.
"We and abolitionists are talking about a different subject matter and it is an error of the "computer translation programs" that the word comes out as "morality" in both cases. Morality is about how to create a proper relationship between races, everyone knows that and they happen to be right. If we could get past difficulties of the "translation", the abolitionists would agree with us about what is moral, we would agree with them about what is abolitionism."
↑ comment by Roko · 2010-01-31T23:39:54.802Z · LW(p) · GW(p)
Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.
he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans
No, I understand that your long list wouldn't change if humanity itself changed, that if I altered every human to like eating babies, that wouldn't make babyeating right, in the Eliezer world.
"the rightness computation that humanity just happens to instantiate" is different from "whatever computation humanity instantiates"
Is there something else I am missing?
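The distinction Roko states here can be put as a toy program (an editor's sketch with hypothetical values, not Eliezer's actual formulation): a snapshot taken once versus a live lookup of whatever humans currently value.

```python
# Editor's toy contrast (hypothetical values): rigid vs. indexical reference.

CURRENT_HUMAN_VALUES = {"save_babies": True}      # what humanity values right now

# "The rightness computation humanity happens to instantiate": read off once,
# then fixed. Rewiring humans later does not change this function's outputs.
RIGHTNESS_SNAPSHOT = dict(CURRENT_HUMAN_VALUES)

def right_rigid(action: str) -> bool:
    return RIGHTNESS_SNAPSHOT.get(action, False)

# "Whatever computation humanity instantiates": re-read at call time, so it
# tracks any change made to human brains. This is the relativist reading.
def right_indexical(action: str) -> bool:
    return CURRENT_HUMAN_VALUES.get(action, False)

CURRENT_HUMAN_VALUES["eat_babies"] = True          # suppose every human is altered
print(right_rigid("eat_babies"))       # False: "right" did not change
print(right_indexical("eat_babies"))   # True: the live lookup did
```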
↑ comment by Nick_Tarleton · 2010-01-31T21:53:19.091Z · LW(p) · GW(p)
Because Eliezer made the ingenious move of redefining "should" to mean "do what we prefer".
He doesn't define it indexically (like this) or in terms of humans; as I understand it, he defines it in terms of an objective computation that happens to be instantiated by humans (No License to be Human).
↑ comment by blacktrance · 2014-05-29T16:26:46.935Z · LW(p) · GW(p)
As I understand it, relativism doesn't mean "refers explicitly to particular agents". Suppose there's a morality-determining function that takes an agent's terminal values and their psychology/physiology and spits out what that agent should do. It would spit different things out for different agents, and even more different things for different kinds of agents (humans vs babyeaters). Nevertheless, this would not quite be moral relativism because it would still be the case that there's an objective morality-determining function that is to be applied to determine what one should do. Moral relativism would not merely say that there's no one right way one should act, it would also say that there's no one right way to determine how one should act.
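A sketch of the function described above, with hypothetical names and toy values: the procedure is the same for every agent (which is the sense of "objective" at issue here), but its outputs differ with the values fed in.

```python
# Editor's sketch of a morality-determining function (hypothetical names/values).

def what_should(terminal_values: dict, psychology: dict, situation: dict) -> str:
    """One fixed procedure applied to any agent; outputs vary with the inputs."""
    # psychology is accepted but unused in this toy version
    actions = situation["available_actions"]
    return max(actions, key=lambda a: terminal_values.get(a, 0))

human = {"save_babies": 10, "eat_babies": -10}
babyeater = {"save_babies": -10, "eat_babies": 10}
situation = {"available_actions": ["save_babies", "eat_babies"]}

print(what_should(human, {}, situation))      # 'save_babies'
print(what_should(babyeater, {}, situation))  # 'eat_babies'
```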
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T16:30:00.937Z · LW(p) · GW(p)
It's not objective, because its results differ with differing terminal values. An objective morality machine would tell you what you should do, not tell you how to satisfy your values. IOW, morality isn't decision theory.
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T16:42:01.446Z · LW(p) · GW(p)
An objective morality machine would tell you what you should do, not tell you how to satisfy your values
Why must the two be mutually exclusive? Why can't morality be about satisfying your values? One could say that morality properly understood is nothing more than the output of decision theory, or that outputs of decision theory that fall in a certain area labeled "moral questions" are morality.
Replies from: komponisto, TheAncientGeek↑ comment by komponisto · 2014-05-29T19:24:30.554Z · LW(p) · GW(p)
Why can't morality be about satisfying your values?
Because that isn't how the term "morality" is typically used by humans. The "morality police" found in certain Islamic countries aren't life coaches. The Ten Commandments aren't conditional statements. When people complain about the decaying moral fabric of society, they're not talking about a decline in introspective ability.
Inherent to the concept of morality is the external imposition of values. (Not just decisions, because they also want you to obey the rules when they're not looking, you see?) Sociologically speaking, morality is a system for getting people to do unfun things by threatening ostracization.
Decision theory (and meta-decision-theory etc.) does not exist to analyze this concept (which is not designed for agents); it exists to replace it.
Replies from: TheAncientGeek, bogus, blacktrance↑ comment by TheAncientGeek · 2014-05-29T20:34:18.731Z · LW(p) · GW(p)
Morality done right is about the voluntary and mutual adjustment of values (or rather, of the actions expressing them).
Morality done wrong can go two ways: one failure mode is hedonism, where the individual takes no notice of the preferences of others; the other is authoritarianism, where "society" (or rather, its representatives) imposes values that no one likes or has a say in.
↑ comment by bogus · 2014-05-31T14:59:32.460Z · LW(p) · GW(p)
Because that isn't how the term "morality" is typically used by humans. The "morality police" found in certain Islamic countries aren't life coaches. The Ten Commandments aren't conditional statements. ... Inherent to the concept of morality is the external imposition of values.
Morality is about all of these things, and more besides. Although "outer" morality as embodied in moral codes and moral exemplars is definitely important, if there were no inner values for humans to care about in the first place, no one would be going around imposing them on others, or even debating them in any way.
And it is a fact about the world that most basic moral values are shared among human societies. Morality may or may not be objective, but it is definitely intersubjective in a way that looks 'objective' to the casual observer.
↑ comment by blacktrance · 2014-05-29T20:39:47.815Z · LW(p) · GW(p)
"Morality" is used by humans in unclear ways and I don't know how much can be gained from looking at common usage. It's more sensible to look at philosophical ethical theories rather than folk morality - and there you'll find that moral internalism and ethical egoism are within the realm of possible moralities.
↑ comment by TheAncientGeek · 2014-05-29T16:53:09.003Z · LW(p) · GW(p)
Note the word objective.
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T17:05:21.653Z · LW(p) · GW(p)
An objective morality machine would tell you the One True Objective Thing TheAncientGeek Should Do, given your values, but this thing need not be the same as The One True Objective Thing Blacktrance Should Do. The calculations it performs are the same in both cases (which is what makes it objective), but the outputs are different.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T18:22:11.793Z · LW(p) · GW(p)
You are misusing "objective". How does your usage differ from telling me what I should do subjectively? How can true-for-me-but-not-for-you clauses fail to indicate subjectivity? How can it be coherent to say there is one truth, only it is different for everybody?
Replies from: Vaniver, blacktrance↑ comment by Vaniver · 2014-05-29T18:44:56.586Z · LW(p) · GW(p)
A person's height is objectively measurable; that does not mean all people have the same height.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T19:01:33.860Z · LW(p) · GW(p)
"True about person P" is objective.
"True for person P about X" is subjective.
Subjectivity is multiple truths about one thing, ie multiple claims about one thing, which are indexed to individuals, and which would be contradictory without the indexing.
Replies from: Vaniver, blacktrance↑ comment by Vaniver · 2014-05-29T19:28:48.762Z · LW(p) · GW(p)
In this discussion, I understand there to be three positions:
- There is one objectively measurable value system.
- There is an objectively measurable value system for each agent.
- There are not objectively measurable value systems.
The 'objective' and 'subjective' distinction is not particularly useful for this discussion, because it confuses the separation between 'measurable' and 'unmeasurable' (1+2 vs. 3) and 'universal' and 'particular' (1 vs. 2+3).
But even 'universal' and 'particular' are not quite the right words- Clippy's particular preference for paperclips is one that Clippy would like to enforce on the entire universe.
Replies from: komponisto↑ comment by komponisto · 2014-05-29T19:40:35.067Z · LW(p) · GW(p)
No one holds 3. 1 is ambiguous; it depends on whether we're speaking "in character" or not. If we are, then it follows from 2 ("there is one objectively measurable value system, namely mine").
The trouble with Eliezer's "metaethics" sequence is that it's written in character (as a human), and something called "metaethics" shouldn't be.
Replies from: Vaniver, nshepperd↑ comment by Vaniver · 2014-05-29T19:50:31.611Z · LW(p) · GW(p)
No one holds 3.
It is not obvious to me that this is the case.
[edit to expand]: I think that when a cognitivist claims "I'm not a relativist," they need to have a position like 3 to identify as relativism. Perhaps it is an overreach to use 'value system' instead of 'morality' in the description of 3, a choice driven more by my allergy to the word 'morality' than by a desire to be correct or communicative.
1 is ambiguous; it depends on whether we're speaking "in character" or not. If we are, then it follows from 2 ("there is one objectively measurable value system, namely mine").
One could be certain that God's morality is correct, but be uncertain what God's morality is.
The trouble with Eliezer's "metaethics" sequence is that it's written in character (as a human), and something called "metaethics" shouldn't be.
I agree with this assessment.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T22:02:16.469Z · LW(p) · GW(p)
Yes. He has strong intuitions that his own moral intuitions are really true, combined with strong intuitions that morality is this very localized, human thing that doesn't exist elsewhere. So he defines morality as what humans think morality is... what I don't know isn't knowledge.
↑ comment by nshepperd · 2014-05-30T02:20:18.357Z · LW(p) · GW(p)
The trouble with Eliezer's "metaethics" sequence is that it's written in character (as a human), and something called "metaethics" shouldn't be.
People always write in character. If you try to use some different definition of "morality" than normal for talking about metaethics, you'll reach the wrong conclusions because, y'know, you're quite literally not talking about morality any more.
Replies from: komponisto↑ comment by komponisto · 2014-05-30T04:59:04.549Z · LW(p) · GW(p)
Language is different from metalanguage, even if both are (in) English.
You shouldn't be using any definition of "morality" when talking about metaethics, because on that level the definition of "morality" isn't fixed; that's what makes it meta.
My complaint about the sequence is that it should have been about the orthogonality thesis, but instead ended up being about rigid designation.
Replies from: TheAncientGeek, nshepperd↑ comment by TheAncientGeek · 2014-05-30T08:27:17.406Z · LW(p) · GW(p)
You should use a definition, but one that doesn't beg the question.
↑ comment by nshepperd · 2014-05-30T05:20:44.078Z · LW(p) · GW(p)
I can't make sense of that. Isn't the whole point of metaethics to create an account of what this morality stuff is (if it's anything at all) and how the word "morality" manages to refer to it? If metaethics wasn't about morality it wouldn't be called metaethics, it would be called, I dunno, "decision theory" or something.
And if it is about morality, it's unclear how you're supposed to refer to the subject matter (morality) without saying "morality". Or the other subject matter (the word "morality") to which you fail to refer if you start talking about a made-up word that's also spelled "m o r a l i t y" but isn't the word people actually use.
My complaint about the sequence is that it should have been about the orthogonality thesis, but instead ended up being about rigid designation.
I remember it as being about both. (exhibit 1, exhibit 2. The latter was written before EY had heard of rigid designators, though. It could probably be improved these days.)
↑ comment by blacktrance · 2014-05-29T19:23:11.238Z · LW(p) · GW(p)
Subjectivity is multiple truths about one thing
Agreed. What I should do is a separate thing from what you should do, even though they're the same type of thing and may be similar in many ways.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T20:11:25.217Z · LW(p) · GW(p)
What you morally should do to me has to take me into account, and vice versa. Otherwise you are using morality to mean hedonism.
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T20:50:39.956Z · LW(p) · GW(p)
In one sense, this is trivial. I have to take you into account when I do something to you, just like I have to take rocks into account when I do something to them. You're part of a state of the world. (It may be the case that after taking rocks into account, it doesn't affect my decision in any way. But my decision can still be formulated as taking rocks into account.)
In another sense, whether I should take your well-being into account depends on my values. If I'm Clippy, then I shouldn't. If I'm me, then I should.
Otherwise you are using morality to mean hedonism.
Hedonism makes action-guiding claims about what you should do, so it's a form of morality, but it doesn't by itself mean that I shouldn't take you into account - it only means that I should take your well-being into account instrumentally, to the degree it gives me pleasure. Also, the fulfillment of one's values is not synonymous with hedonism. A being incapable of experiencing pleasure, such as some form of Clippy, has values but acting to fulfill them would not be hedonism.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T21:45:35.328Z · LW(p) · GW(p)
Whether or not you morally-should take me into account does not depend on your values; it depends on what the correct theory of morality is. "Should" is not an unambiguous term with a free variable for "to whom". It is an ambiguous term, and morally-should is not hedonistically-should, is not practically-should... etc.
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T21:52:10.369Z · LW(p) · GW(p)
Unless the correct theory of morality is that morally-should is the same thing as practically-should, in which case it would depend on your values.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T22:12:25.803Z · LW(p) · GW(p)
A sentence beginning "unless the correct theory is" does not refute a sentence including "depends on what the correct theory is"...
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T22:16:04.033Z · LW(p) · GW(p)
If the correct theory of morality is that morally-should is the same as practically-should, then "whether or not you morally-should take me into account does not depend on your values" is false.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T22:30:19.727Z · LW(p) · GW(p)
Whether or not morality depends on your values depends on what the correct theory of morality is.
↑ comment by blacktrance · 2014-05-29T18:44:43.066Z · LW(p) · GW(p)
Saying it's true-for-me-but-not-for-you conflates two very different things: truth being agent-relative and descriptive statements about agents being true or false depending on the agent they're referring to. "X is 6 feet tall" is true when X is someone who's 6 feet tall and false when X is someone who's 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar - "X is the right thing for TheAncientGeek to do" is an objectively true (or false) statement, regardless of who's evaluating you. Encountering "X is the right thing to do if you're Person A and the wrong thing to do if you're Person B" and thinking morality subjective is the same sort of mistake as if you encountered the statement "Person A is 6 feet tall and Person B is not 6 feet tall" and concluded that height is subjective.
Replies from: TheAncientGeek, komponisto↑ comment by TheAncientGeek · 2014-05-29T19:12:13.352Z · LW(p) · GW(p)
See my other reply.
Indexing statements about individuals to individuals is harmless. Subjectivity comes in when you index statements about something else to individuals.
Morally relevant actions are actions which potentially affect others.
Your morality machine is subjective because I don't need to feed in anyone else's preferences, even though my actions will affect them.
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T19:24:29.858Z · LW(p) · GW(p)
Other people's preferences are part of states of the world, and states of the world are fed into the machine.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T19:44:17.473Z · LW(p) · GW(p)
Not part of the original spec!!!
Replies from: blacktrance↑ comment by blacktrance · 2014-05-29T20:02:51.544Z · LW(p) · GW(p)
Fair enough. In that case, the machine would tell you something like "Find out expected states of the world. If it's A, do X. If it's B, do Y".
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T21:34:55.903Z · LW(p) · GW(p)
It may well, but that is a less interesting and contentious claim. It's fairly widely accepted that the sum total of ethics is inferrable from (supervenes on) the sum total of facts.
↑ comment by komponisto · 2014-05-29T19:10:41.724Z · LW(p) · GW(p)
Morality is similar - "X is the right thing for TheAncientGeek to do" is an objectively true (or false) statement, regardless of who's evaluating you.
Not so! Rather, "X is the right thing for TheAncientGeek to do given TheAncientGeek's values" is an objectively true (or false) statement. But "X is the right thing for TheAncientGeek to do" tout court is not; it depends on a specific value system being implicitly understood.
Replies from: blacktrance, TheAncientGeek↑ comment by blacktrance · 2014-05-29T20:30:50.812Z · LW(p) · GW(p)
"X is the right thing for TheAncientGeek to do" is synonymous with "X is the right thing for TheAncientGeek to do according to his (reflectively consistent) values". You may not want him to act in accordance with his values, but that doesn't change the fact that he should - much like in the standard analysis of the prisoner's dilemma, each prisoner wants the other to cooperate, but has to admit that each of them should defect.
↑ comment by TheAncientGeek · 2014-05-29T19:26:26.412Z · LW(p) · GW(p)
Same mistake. Only actions that affect others are morally relevant, from which it follows that rightness cannot be evaluated from one person's values alone.
Maximizing one's values solipsistically is hedonism, not morality.
Replies from: komponisto↑ comment by komponisto · 2014-05-29T19:28:54.828Z · LW(p) · GW(p)
Notice I didn't use the term "morality" in the grandparent. Cf. my other comment.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2014-05-29T20:24:06.875Z · LW(p) · GW(p)
But the umpteenth grandparent was explicitly about morality.
↑ comment by Vladimir_Nesov · 2010-02-01T09:02:31.842Z · LW(p) · GW(p)
What makes the theory relativist is simply the fact that it refers explicitly to particular agents -- humans.
Unfortunately, it's not that easy. An agent, given by itself, doesn't determine preference. It probably does so to a large extent, but not entirely. There is no subject matter of "preference" in general. "Human preference" is already a specific question that someone has to state, that doesn't magically appear from a given "human". A "human" might only help (I hope) to pinpoint the question precisely, if you start in the general ballpark of what you'd want to ask.
I suspect that "Vague statement of human preference"+"human" is enough to get a question of "human preference", and the method of using the agent's algorithm is general enough for e.g. "Vague statement of human preference"+"babyeater" to get a precise question of "babyeater preference", but it's not a given, and isn't even expected to "work" for more alien agents, who are compelled by completely different kinds of questions (not that you'd have a way of recognizing such "error").
The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although what humans are may be used as a tool in the definition of human preference, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.
[W]hy should we do what we prefer rather than what they prefer? The correct answer is, of course, "because that's what we prefer"
That's not a justification. They may turn out to do something right, where you were mistaken, and you'll be compelled to correct.
Replies from: komponisto↑ comment by komponisto · 2010-02-01T11:17:00.693Z · LW(p) · GW(p)
The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself.
Yes.
↑ comment by Unknowns · 2010-02-01T04:17:42.058Z · LW(p) · GW(p)
As it is commonly understood, Eliezer is definitely NOT a moral relativist.
Replies from: komponisto, TheAncientGeek, Kevin↑ comment by komponisto · 2010-02-01T04:22:14.829Z · LW(p) · GW(p)
(Downvoted for denying my claim without addressing my argument. That's very annoying.)
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2010-02-03T17:38:42.136Z · LW(p) · GW(p)
re: denying claim without addressing argument
IMO, such comments are acceptable when the commenter is of high enough status in the community. Obviously I'd prefer they address the argument, but I consider myself better off just knowing that certain people agree or disagree.
ADDED: Note, I am merely stating my personal preference, not insisting that my personal preference become normatively binding on LW. I also happen to agree with Komponisto's judgment that Unknowns previous comment was unhelpful.
Replies from: komponisto, MrHen↑ comment by komponisto · 2010-02-03T17:50:36.485Z · LW(p) · GW(p)
I disagree.
ETA: Note that an implication of what you said is that replying in that manner constitutes an assertion of higher status than the other person; this is exactly why it is irritating.
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2010-02-03T18:31:13.404Z · LW(p) · GW(p)
I think assertions of higher status can sometimes be characterized as justifiable or even desirable. Eliezer does this all the time. The alternative to "stating disagreement while failing to address the details of the argument," is often to ignore the comment altogether. (Also, see edit to my previous comment before replying further.)
Replies from: komponisto↑ comment by komponisto · 2010-02-03T18:37:57.059Z · LW(p) · GW(p)
Well, if you agree with me about that particular comment, maybe it would have been preferable to wait for an occasion where you actually disagreed with my judgment to make this point?
(This would help cut down on "fake disagreements", i.e. disagreements arising out of misunderstanding.)
Replies from: MichaelBishop↑ comment by Mike Bishop (MichaelBishop) · 2010-02-03T18:49:53.607Z · LW(p) · GW(p)
Agreed.
↑ comment by MrHen · 2010-02-03T18:06:27.743Z · LW(p) · GW(p)
I think the manner in which komponisto was calling Eliezer a moral relativist deserves a more thorough answer. If I make an off-handed remark and someone disagrees with me, I find an off-handed remark fair. If I spend three paragraphs and get, "No," as a response I will be annoyed.
In this case, I side with komponisto.
↑ comment by TheAncientGeek · 2014-05-29T15:12:29.228Z · LW(p) · GW(p)
Not individual-level relativism, or not group-level relativism?
↑ comment by Furcas · 2010-01-31T21:18:49.957Z · LW(p) · GW(p)
I think it would do us all a lot of good (and it would be a lot clearer) to use the word 'morality' to mean all the implications that follow from all terminal values, much as we use the word 'mathematics' to mean all the theorems that follow from all axioms. This would force us to specify which kind of morality we're talking about.
For example, it would be meaningless to ask if I should steal from the rich. It would only be meaningful to ask if I me-should steal from the rich (i.e. if it follows from my terminal values), or if I you-should steal from the rich (i.e. if it follows from your terminal values), or if I us-should steal from the rich (i.e. if it follows from the terminal values we share), or if I Americans-should steal from the rich (i.e. if it follows from the terminal values that Americans share), etc.
I know I'm not explaining anything you don't already know, Eliezer; my point is that your use of the words 'morality' and 'should' has been confusing quite a few people. Or perhaps it would be more accurate to say that your use of those words has failed to extricate certain people from their pre-existing confusion.
Replies from: Eliezer_Yudkowsky, byrnema↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T23:13:57.169Z · LW(p) · GW(p)
But then morality does not have as its subject matter "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."
Instead, it has primarily as its subject matter a list of ways to transform the universe into paperclips, cheesecake, needles, orgasmium, and only finally, a long way down the list, into eudaimonium.
I think this is not the subject matter that most people are talking about when they talk about morality. We should have a different name for this new subject, like "decision theory".
Replies from: Furcas, Matt_Simpson↑ comment by Furcas · 2010-01-31T23:42:16.466Z · LW(p) · GW(p)
I think this is not the subject matter that most people are talking about when they talk about morality.
True, as long as they're talking about the stuff that is implied by their terminal values.
However, when they start talking about the stuff that is implied by other people's (or aliens', or AIs') terminal values, the meaning they attach to the word 'morality' is a lot closer to the one I'm proposing. They might say things like, "Well, female genital mutilation is moral to Sudanese people. Um, I mean, errr, uh...", and then they're really confused. This confusion would vanish (or at least, would be more likely to vanish) if they were forced to say, "Well, female genital mutilation is Sudanese-moral but me-immoral."
Ideally, to avoid all confusion we should get rid of the word morality completely, and have everyone speak in terms of goals and desires instead.
Replies from: Jordan↑ comment by Jordan · 2010-02-01T06:39:11.935Z · LW(p) · GW(p)
Agreed. If it happened that there were only a few different sets of terminal values in existence, then I would be OK with assigning different words to the pursuit of those different sets. One of those words could be 'moral'. However, as is, the set of all terminal values represented by humans is too fractured and varied.
A large chunk of the list Eliezer provides in the above comment probably is nearly universal to humanity, but the entire list is not, and there are certainly many disputes on the relative ordering (especially as to what is on top).
↑ comment by Matt_Simpson · 2010-02-01T23:45:44.338Z · LW(p) · GW(p)
But then morality does not have as its subject matter....
I think you can keep that definition: define morality and morality-human. However, at least in the metaethics sequence, it would have done a lot of good to distinguish between morality-Joe and morality-Jane even if you were eventually going to argue that the two were equivalent. Once you're finished arguing that point, however, go on using the term "morality" the way you want to.
I only say this because of my own experience. I didn't really understand the metaethics sequence when I first read it. I was also struggling with Hume at the time, and it was actually that struggle that led me to make the connection between what an agent "should" do and decision theory. Only later I realized that was exactly what you were doing, and I chalk part of it up to confusing terminology. If you dig through some of the original posts, I was (one of many?) confusing your arguments for classical utilitarianism.
On the other hand, I may not be representative. I'm used to thinking of agent's utility functions through economics, so the leap to should-X/morality-X connected to X's utility function was a small one, relatively speaking.
↑ comment by byrnema · 2010-01-31T21:43:18.949Z · LW(p) · GW(p)
I thought there was no way I could ever understand what Eliezer had written, but you've provided a clue. Should I translate this:
Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.
as this?
Human-morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is human-moral, we would agree with them about what is babyeating-moral, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.
Also, what was especially perplexing, translate:
"What should be done with the universe" invokes a criterion of preference, "should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe [...] They do the babyeating thing, we do the right thing;
as:
"What should be done with the universe" invokes a criterion of preference, "human-should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe [...] They do the babyeating-right thing, we do the human-right thing; ?
Replies from: Furcas
↑ comment by Furcas · 2010-01-31T22:18:32.816Z · LW(p) · GW(p)
Should I translate this: [...] as this? [...]
Yes.
Also, what was especially perplexing, translate: "[...] as: [...] ?
Yes!
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T23:14:15.386Z · LW(p) · GW(p)
No. See other replies.
Replies from: Furcas, nolrai, Rain↑ comment by Furcas · 2010-02-01T00:05:39.509Z · LW(p) · GW(p)
I understand and agree with your point that the long list of terminal values that most humans share isn't the 'right' one in virtue of being what humans value. If Omega altered the brain of every human so that we had completely different values, 'morality' wouldn't change.
Therefore, to be perfectly precise, byrnema would have to edit her comment to substitute the long list of values that humans happen to share for the word 'human', and the long list of values that Babyeaters happen to share for the word 'babyeating'.
So yeah, I get why someone who doesn't want to create this kind of confusion in his interlocutors would avoid saying "human-right" and "human-moral". The problem is that you're creating another kind of confusion.
Replies from: byrnema↑ comment by byrnema · 2010-02-01T00:37:40.051Z · LW(p) · GW(p)
If Omega altered the brain of every human so that we had completely different values, 'morality' wouldn't change.
Is this because morality is reserved for a particular list -- the list we currently have -- rather than being a token for any list that could be had?
Replies from: Furcas↑ comment by Furcas · 2010-02-01T00:49:32.611Z · LW(p) · GW(p)
It's because [long list of terminal values that current humans happen to share]-morality is defined by the long list of terminal values that current humans happen to share. It's not defined by the list of terminal values that post-Omega humans would happen to have.
Is arithmetic "reserved for" a particular list of axioms or for a token for any list of axioms? Neither. Arithmetic is its axioms and all that can be computed from them.
↑ comment by nolrai · 2010-02-02T22:22:59.724Z · LW(p) · GW(p)
See, I think you're misunderstanding his response. I mean, that's the only way I can interpret it to make sense.
Your insistence that it is not the right interpretation is very odd. I get that you don't want to trigger people's cooperation instincts, but that's the only framework in which talking about other beings makes sense.
The morality you are talking about is the human-now-extended morality (well, closer to the less-wrong-now-extended morality), in that it is the morality that results from extending from the values humans currently have. Now you seem to have a categorization that needs to classify your own morality as different from others' in order to feel right about imposing it? So you categorize it as simply morality, but your morality is not necessarily my morality, and so that categorization feels iffy to me. It's certainly closer to mine than to the babyeaters', but I have no proof it is the same. Calling it simply Morality papers over this.
↑ comment by Rain · 2010-02-09T19:53:28.083Z · LW(p) · GW(p)
You're wrong. Despite how much I'd like to have a universal, ultimate, true morality, you can't create it out of whole cloth by defining it as "what-humans-value". That's pretending there's no reason to look up, because, "Look! It's right there in front of you. So be sure not to look up."
↑ comment by gregconen · 2010-02-01T22:30:17.225Z · LW(p) · GW(p)
Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating, and we would agree about the physical fact that we find different sorts of logical facts to be compelling.
This simply pushes the problem back one level, by making the word "morality" descriptive instead of normative. Morality is X and babyeating is Y. But how should one choose between morality and babyeating? Now, instead of a moral anti-realist, I'm a moral realist, a babyeating realist, and normative judgement anti-realist.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-02T05:59:09.386Z · LW(p) · GW(p)
But how should one choose between morality and babyeating?
Channeling my inner Eliezer, the answer is obviously that you should choose morality (since "should" is just "morality" as a verb).
Now, instead of a moral anti-realist, I'm a moral realist, a babyeating realist, and normative judgement anti-realist.
No, because normative judgement = morality.
This is almost starting to make sense, except... Suppose I say this to a babyeater: "We should sign a treaty banning the development and use of antimatter weapons." What could that possibly mean? Or if one murderer says to another "We should dump the body in the river." he is simply stating a factual falsehood?
I wonder if this is a good summary of our disagreement with Eliezer:
1. His proposed definitions of "morality" and especially "should" and "ought" are objectionable. They are just not what we mean when we use those words.
2. He classifies his metaethics as realism whereas we would classify it as anti-realism.
Out of these two, 1 is clearly both a bigger problem and where Eliezer is more obviously wrong. I really don't understand why he sticks to his position there.
Replies from: Vladimir_Nesov, Douglas_Knight, TheAncientGeek↑ comment by Vladimir_Nesov · 2010-02-02T22:44:57.351Z · LW(p) · GW(p)
This is almost starting to make sense, except... Suppose I say this to a babyeater: "We should sign a treaty banning the development and use of antimatter weapons." What could that possibly mean?
Supposedly the acceptable plan would both be the right thing to do and the babyeating thing to do at the same time: right given the presence and influence of babyeaters, and babyeating given the presence and influence of humans. So, when it is said, "Let us sign this treaty.", humans sign it, because it should be done, and babyeaters also do so, because it's a babyeating thing to do. The contract is chosen to compel both parties.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-03T04:25:41.823Z · LW(p) · GW(p)
I agree with your explanation of the intended semantics of the sentence, which is also my explanation. What I disagree with is the suggestion that we denote that meaning using "Let us sign this treaty." instead of "We should sign this treaty." I believe the intended meaning is more naturally expressed using the second sentence, and trying to redefine the word "should" so that the second sentence means something else and we're forced to use the first sentence to express the same meaning, is wrong.
Also, since the first sentence is imperative instead of declarative, I'm not sure that it doesn't mean something else already, so that now you're hijacking two words instead of one.
Replies from: torekp↑ comment by torekp · 2010-02-07T21:19:48.201Z · LW(p) · GW(p)
There can be a separable sense of "should" that indicates rationality. Thus, "we should sign the treaty" can be an interesting truth for both parties when the "should" is that of rationality, and true for both parties but only interesting from the human side when the "should" is a moral should.
This commits one to what philosophers call moral externalism, namely, the view that what is morally required is not necessarily rationally required. Which is not a reason to reject the view, but I expect it will be criticized.
↑ comment by Douglas_Knight · 2010-02-02T08:52:02.159Z · LW(p) · GW(p)
He characterizes his metaethics as realism whereas we would characterize it as anti-realism.
Where does he characterize it as realism? When he chooses the word, he always chooses "cognitivism"; if someone else says "realism," he doesn't object, but he makes sure to define it to match cognitivism and indicates that there are other notions of realism that he doesn't endorse.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-02T10:59:29.968Z · LW(p) · GW(p)
Thanks for pointing out the error. I changed it to "classify".
↑ comment by TheAncientGeek · 2014-05-29T14:51:04.293Z · LW(p) · GW(p)
"Should" has many meanings. Which moral system I believe in is a meta-level question, not an object-level one, and probably implies an epistemic-should or rational-should rather than a moral-should.
Likewise, not all normative judgement is morality. What you should do to maximise personal pleasure, or make money, or "win" in some way, is generally not what you morally-should do.
↑ comment by Roko · 2010-01-31T21:07:13.709Z · LW(p) · GW(p)
I agree that your view on meta-ethics differs from mine only in terminology, not in content.
You're using "moral" where I would use "my particular preferences". From a Rhetoric point of view, this has obvious advantages.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-31T23:08:59.108Z · LW(p) · GW(p)
No, as far as I can tell he's using "moral" to refer to CEV.
Which I think underestimates how parochial people are when they typically use the word "moral".
Replies from: Roko, ShardPhoenix↑ comment by Roko · 2010-02-01T11:42:34.908Z · LW(p) · GW(p)
CEV is not even guaranteed to output anything, if there is insufficient convergence.
So Eliezer's rigid list must be defined by some other criterion, presumably decided by him. For example, he could use his own individual extrapolated volition with a certain preferred measure over the free variables of extrapolation, unless CEV gave a coherent output, in which case he could use CEV.
Another problem with using CEV is that its output could be sensitive to minor changes in the human population of Earth. But in Eliezer's metaethics, "Should" is a rigid designator, so he would have to pick a specific time, precise to the exact second, and that time had better be in the past; otherwise "Should" would be nonrigid - you would be able to change "Should" by doing something that affected people.
Replies from: wnoise↑ comment by ShardPhoenix · 2010-02-01T10:14:42.942Z · LW(p) · GW(p)
I'd assume he imagines CEV as being pretty similar to his own particular preferences, though - otherwise, shouldn't he adjust his preferences already?
The main reason I don't like the way Eliezer uses terms like "morality" is that it feels like he's trying to redefine "morality" to mean "what I, Eliezer Yudkowsky, personally want", which doesn't make for enlightening discussion.
↑ comment by Roko · 2010-01-31T21:04:25.044Z · LW(p) · GW(p)
Baby-Eating Aliens and humans have different views of the same subject matter
In the end, the subject matter that matters in an interaction between baby eaters and humans is what to do with the universe, and that's the one I'm talking about.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T21:06:59.609Z · LW(p) · GW(p)
But I just described two kinds of subject matter that are the only two kinds of subject matter I know about: physical facts and mathematical facts. "What should be done with the universe" invokes a criterion of preference, "should", which compels humans but not Babyeaters. If you look at the fact that the Babyeaters are out trying to make a different sort of universe, and the fact that the humans are out trying to make the universe look the way it should, and you call these two facts a "disagreement", I don't understand what physical fact or logical fact is supposed to be the common subject matter which is being referred-to. They do the babyeating thing, we do the right thing; that's not a subject matter.
Replies from: Alicorn, ata, Wei_Dai, loqi, Roko, TheAncientGeek↑ comment by Alicorn · 2010-02-01T01:09:04.335Z · LW(p) · GW(p)
The rampant dismissal of so many restatements of your position has tempted me to try my own. Tell me if I've got it right or not:
There is a topic, which covers such subtopics as those listed here, which is the only thing in fact referred to by the English word "morality" and associated terms like "should" and "right". It is an error to refer to other things, like eating babies, as "moral" in the same way it would be an error to refer to black-and-white Asian-native ursine creatures as "lobsters": people who do it simply aren't talking about morality. Once the subject matter of morality is properly nailed down, and all other facts are known, there's no room for disagreement about morality, what ought to be done, what actions are wrong, etc. any more than there is about the bachelorhood of unmarried men. However, it happens that the vast majority of kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating. Humans, as a matter of a rather lucky causal history, do care about morality, in much the same way that pebblesorters care about primes - it's just one of the things we're built to find worth thinking about and working towards. By a similar token, we are responsive to arguments about features of situations that give them moral character of one sort or another.
Replies from: Eliezer_Yudkowsky, aausch↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T01:26:08.882Z · LW(p) · GW(p)
...sounds mostly good so far. Except that there's plenty of justification for thinking about morality besides "it's something we happen to think about". They're just... well... there's no other way to put this... perfectly valid, moving, compelling, heartwarming, moral justifications. They're actually better justifications than being compelled by some sort of ineffable transcendent compellingness stuff - if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to! (I think this may be the part Roko still doesn't get.) Also, the "lucky causal history" isn't luck at all, of course.
It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended, to the extent of hearing out each other's arguments and proceeding on the assumption that we actually are disagreeing about something.
Replies from: Zack_M_Davis, Alicorn, Unknowns, LauraABJ, Alicorn, byrnema, TheAncientGeek↑ comment by Zack_M_Davis · 2010-02-01T03:59:33.101Z · LW(p) · GW(p)
this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended
Yes, but do you see why people get annoyed when you build that courtesy into your terminology?
↑ comment by Alicorn · 2010-02-02T06:00:31.521Z · LW(p) · GW(p)
I'm curious about how your idea handles an edge case. (I am merely curious - not to downplay curiosity, but you shouldn't consider it a reason to devote considerable brain-cycles on its own if it'd take considerable brain-cycles to answer, because I think your appropriation of moral terminology is silly and I won't find the answer useful for any specific purpose.)
The edge case: I have invented an alien species called the Zaee (for freeform roleplaying game purposes; it only recently occurred to me that they have bearing on this topic). The Zaee have wings, and can fly starting in early childhood. They consider it "loiyen" (the Zaee word that most nearly translates as "morally wrong") for a child's birth mother to continue raising her offspring (call it a son) once he is ready to take off for the first time; they deal with this by having her entrust her son to a friend, or a friend of the father, or, in an emergency, somebody who's in a similar bind and can just swap children with her. Someone who has a child without a plan for how to foster him out at the proper time (even if it's "find a stranger to swap with") is seen as being just as irresponsible as a human mother who had a child without a clue how she planned to feed him would be (even if it's "rely on government assistance").
There is no particular reason why a Zaee child raised to adulthood by his biological mother could not wind up within the Zaee-normal range of psychology (not that they'd ever let this be tested experimentally); however, they'd find this statement about as compelling as the fact that there's no reason a human child, kidnapped as a two-year-old from his natural parents and adopted by a duped but competent couple overseas, couldn't grow up to be a normal human: it still seems a dreadful thing to do, and to the child, not just to the parents.
When Zaee interact with humans they readily concede that this precept of theirs has no bearing on any human action whatever: human children cannot fly. And in the majority of other respects, Zaee are like humans - if you plopped a baby Zaee brain in a baby human body (and resolved the body dysphoria and aging rate issues) and he grew up on Earth, he'd be darned quirky, but wouldn't be diagnosed with a mental illness or anything.
Other possibly relevant information: when Zaee programmers program AIs (not the recursively self-improving kind; much more standard-issue sci-fi types), they apply the same principle, and don't "keep" the AIs in their own employ past a certain point. (A particular tradition of programming frequently has its graduates arrange beforehand to swap their AIs.) The AIs normally don't run on mobile hardware, which is irrelevant anyway, because the point in question for them isn't flight. However, Zaee are not particularly offended by the practice of human programmers keeping their own AIs indefinitely. The Zaee would be very upset if humans genetically engineered themselves to have wings from birth which became usable before adulthood and this didn't yield a change in human fostering habits. (I have yet to have cause to get a Zaee interacting with another alien species that can also fly in the game for which they were designed, but anticipate that if I did so, "grimly distasteful bare-tolerance" would be the most appropriate attitude for the Zaee in the interaction. They're not very violent.)
And the question: Are the Zaee "interested in morality"? Are we interested in ? Do the two referents mean distinct concepts that just happen to overlap some or be compatible in a special way? How do you talk about this situation, using the words you have appropriated?
↑ comment by Unknowns · 2010-02-01T07:28:36.221Z · LW(p) · GW(p)
Eliezer, I don't understand how you can say that the "lucky causal history" wasn't luck, unless you also say "if humans had evolved to eat babies, babyeating would have been right."
If it wouldn't have been right even in that event, then it took a stupendous amount of luck for us to evolve in just such a way that we care about things that are right, instead of other things.
Either that or there is a shadowy figure.
Replies from: aleksiL↑ comment by aleksiL · 2010-02-01T16:43:14.380Z · LW(p) · GW(p)
As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.
Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.
I think Eliezer sees translating the babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".
Replies from: Unknowns↑ comment by LauraABJ · 2010-02-01T03:32:52.178Z · LW(p) · GW(p)
Ah, so moral justifications are better justifications because they feel good to think about. Ah, happy children playing... Ah, lovers reuniting... Ah, the Magababga's chief warrior being roasted as dinner by our chief warrior who slew him nobly in combat...
I really don't see why we should expect 'morality' to extrapolate to the same mathematical axioms if we applied CEV to different subsets of the population. Sure, you can just define the word morality to include the sum total of all human brains/minds/wills/opinions, but that wouldn't change the fact that these people, given their druthers and their own algorithms, would morally disagree. Evolutionary psychology is a very fine just-so story for many things that people do, but people's, dare I say, aesthetic sense of right and wrong is largely driven by culture and circumstance. What would you say if Omega looked at the people of Earth and said, "Yes, there is enough agreement on what 'morality' is that we need only define 80,000 separate logically consistent moral algorithms to cover everybody!"
↑ comment by Alicorn · 2010-02-01T02:28:12.592Z · LW(p) · GW(p)
They're actually better justifications
"Better" by the moral standard of betterness, or by a standard unconnected to morality itself?
if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to!
Want to respond to because you happen to be the sort of creature that likes and is interested in these facts, or for some reason external to morality and your interest therein?
It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance
Why does this seem like a "drastic" assumption, even given your definition of "morality"?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T02:31:53.933Z · LW(p) · GW(p)
I don't see why I'd want to use an immoral standard. I don't see why I ought to care about a standard unconnected to morality. And yes, I'm compelled by the sort of logical facts we name "moral justifications" physically-because I'm the sort of physical creature I am.
It's drastic because it closes down the possibility of further discourse.
Replies from: Alicorn, Rain↑ comment by Alicorn · 2010-02-01T02:32:59.876Z · LW(p) · GW(p)
Is there some way in which this is not all fantastically circular?
Replies from: Psy-Kosh, Eliezer_Yudkowsky↑ comment by Psy-Kosh · 2010-02-01T03:31:30.605Z · LW(p) · GW(p)
How about something like this: There's a certain set of semi-abstract criteria that we call 'morality'. And we happen to be the sorts of beings that (for various reasons) happen to care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by "should"? It sure seems like that's a term we use simply to point to the same morality criteria/computation. In other words, "should we care about morality" seems to translate to "is it moral to care about morality", or "apply the morality function to 'care about morality' and check the output".
It would seem also that the answer is yes, it is moral to care about morality.
Some other creatures might somewhere care about something other than morality. That's not a disagreement about any facts or theory or anything, it's simply that we care about morality and they may care about something like "maximize paperclip production" or whatever.
But, of course, morality is better than paper-clip-ality. (And, of course, when we say "better", we mean "in terms of those criteria we care about"... ie, morality again.)
It's not quite circular. We and the paperclipper creatures wouldn't really disagree about anything. They'd say "turning all the matter in the solar system into paperclips is paperclipish", and we'd agree. We'd say "it's more moral not to do so", and they'd agree.
The catch is that they don't give a dingdong about morality, and we don't give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by "better", they'd agree. But then, the criterion that we were referring to by the term "better" is simply not something the paperclippers care about.
"We happen to care about it" is not the justification. "It's moral" is the justification. It's just that our criterion for valid moral justification is, well... morality. Which is as it should be. Etc etc.
Morality seems to be an objective criterion. Actions can be judged good or bad in terms of morality. We simply happen to care about morality instead of something else. And this is indeed a good thing.
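Here is a minimal toy sketch of the structure being described, with made-up stand-in functions (the real criteria are of course vastly more complicated than one-line predicates): two agents agree on what each criterion outputs for the same action, yet act differently because each is moved only by its own criterion.

```python
# Toy stand-ins; the names and one-line predicates are illustrative only.

def morality(action):
    """Stand-in for the (enormously complicated) human criterion."""
    return action != "convert solar system to paperclips"

def paperclippishness(action):
    """Stand-in for the paperclipper's criterion."""
    return action == "convert solar system to paperclips"

action = "convert solar system to paperclips"

# Both agents agree on the facts about both criteria...
assert morality(action) is False
assert paperclippishness(action) is True

# ...but each acts only on its own criterion.
humans_do_it = morality(action)             # False: humans refuse
clippers_do_it = paperclippishness(action)  # True: the paperclippers proceed
print(humans_do_it, clippers_do_it)
```

There is no step at which the two parties disagree about any fact; the "disagreement" is entirely in which function each one is built to act on.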
Replies from: byrnema, Alicorn, RomanDavis↑ comment by byrnema · 2010-02-01T04:02:53.919Z · LW(p) · GW(p)
I don't understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere where you guys talk about morality.
I don't understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.
I'm not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn't something I worry about anyway. But I would like to part with the admonition that I don't see any reason why LW should be separating so many words from their original meanings -- "good", "better", "should", etc. It doesn't seem to be clarifying things even for you guys.
I think that when something is understood -- really understood -- you can write it down in words. If you can't describe an understanding, you don't own it.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-02-01T04:17:35.253Z · LW(p) · GW(p)
Huh? I'm asserting that most people, when they use words like "morality", "should" (in a moral context), "better" (ditto), etc., are pointing at the same thing. That is, we think this sort of thing partly captures what people actually mean by the terms. Now, we don't have full self-knowledge, and our morality algorithm hasn't finished reflecting (that is, hasn't finished reconsidering itself, etc.), so we have uncertainty about what sorts of things are or are not moral... But that's a separate issue.
As far as the rest... I'm pretty sure I understand the basic idea. Anything I can do to help clarify it?
How about this: "morality is objective, and we simply happen to be the sorts of beings that care about morality as opposed to, say, evil psycho alien bots that care about maximizing paperclips instead of morality"
Does that help at all?
↑ comment by Alicorn · 2010-02-01T03:36:41.183Z · LW(p) · GW(p)
It looks circular to me. Of course, if you look hard enough at any views like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which someone goes with, but this is such a small circle. It's right to care about morality and to be moral because morality says so and morality possesses the sole capacity to identify "rightness", including the rightness of caring about morality.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-02-01T03:54:53.753Z · LW(p) · GW(p)
It's almost, well, I hate to say this, but more a matter of definitions.
ie, what do you MEAN by the term "right"?
Just keep poking your brain about that, and keep poking your brain about what you mean by "should" and what you actually mean by terms like "morality" and I think you'll find that all those terms are pointing at the same thing.
It's not so much "there's this criteria of 'rightness' that only morality has the ability to measure" but rather an appeal to morality is what we mean when we say stuff like "'should' we do this? is it 'right'?" etc...
The situation is more, well, like this:
Humans: "Morality says that, among other things, it's more better and moral to be, well, moral. It is also moral to save lives, help people, bring joy, and a whole lot of other things"
Paperclipers: "having scanned your brains to see what you mean by these terms, we agree with your statement."
Paperclippers: "Converting all the matter in your system into paperclips is paperclipish. Further, it is better and paperclipish to be paperclipish."
Humans: "having scanned your minds to determine what you actually mean by those terms, we agree with your statement."
Humans: "However, we don't care about paperclipishness. We care about morality. Turning all the matter of our solar system (including the matter we are composed of) into paperclips is bad, so we will try to stop you."
Paperclippers: "We do not care about morality. We care about paperclipishness. Resisting the conversion to paperclips is unpaperclipish. Therefore we will try to crush your resistance."
This is very different from what we normally think of as circular arguments, which would be of the form of "A, therefore B, therefore A, QED", while the other side would be "no! not A"
Here, all sides agree about stuff. It's just that they value different things. But the fact that humans value the stuff isn't the justification for valuing that stuff. The justification is that it's moral. The fact is just that we happen to be moved by arguments like "it's moral", unlike the wicked paperclippers, who only care about whether it's paperclipish or not.
Replies from: Breakfast↑ comment by Breakfast · 2010-02-01T05:56:54.095Z · LW(p) · GW(p)
But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you're already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-02-01T06:05:32.888Z · LW(p) · GW(p)
"should"
What do you mean by "should"? Do you actually mean anything by it other than an appeal to morality in the first place?
Replies from: Breakfast↑ comment by Breakfast · 2010-02-01T06:37:15.917Z · LW(p) · GW(p)
Well, that's not necessarily a moral sense of 'should', I guess -- I'm asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.
It's generally the contention of moralists and paperclipists that there's always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn't seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don't already.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-02-01T06:46:51.782Z · LW(p) · GW(p)
What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?
As far as your statement about both moralists and paperclippers thinking there are "good reasons"... the catch is that the phrase "good reasons" is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well... good, as opposed to evil.
A paperclipper, however, is not concerned at all about that standard. A paperclipper cares about what, well, maximizes paperclips.
It's not that it should do so, but simply that it doesn't care what it should do. Being evil doesn't bother it any more than failing to maximize paperclips bothers you.
Being evil is clearly worse (where by "worse" I mean, well, immoral, bad, evil, etc...) than being good. But the paperclipper doesn't care. But you do (as far as I know. If you don't, then... I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?
Replies from: Breakfast↑ comment by Breakfast · 2010-02-01T07:05:18.510Z · LW(p) · GW(p)
What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?
By that I mean rationally motivating reasons. But I'd be willing to concede, if you pressed, that 'rationality' is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say 'rationality' is incongruent with the set of values you mean when you say 'morality,' then it appears you have no grounds on which to persuade me to be directed by morality.
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I'm not sure if the position you're espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn't seem to get us any more traction when it comes to answering "Why should I be moral?"
But you do (as far as I know. If you don't, then... I think you scare me).
Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?
Replies from: Psy-Kosh, Douglas_Knight↑ comment by Psy-Kosh · 2010-02-01T14:59:02.657Z · LW(p) · GW(p)
Rationality is basically "how to make an accurate map of the world... and how to WIN (where win basically means getting what you "want" (where want includes all your preferences, stuff like morality, etc etc...)
Before rationality can tell you what to do, you have to tell it what it is you're trying to do.
If your goal is to save lives, rationality can help you find ways to do that. If your goal is to turn stuff into paperclips, rationality can help you find ways to do that too.
I'm not sure I quite understand what you mean by "rationally motivating" reasons.
As far as objectively compelling to any sentient (let me generalize that to any intelligent being)... Why should there be any such thing? "Doing this will help ensure your survival" "But... what if I don't care about this?"
"doing this will bring joy" "So?"
etc etc... There are No Universally Compelling Arguments
↑ comment by Douglas_Knight · 2010-02-01T07:44:32.308Z · LW(p) · GW(p)
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being.
According to the original post, strong moral realism (the above) is not held by most moral realists.
Replies from: Breakfast↑ comment by Breakfast · 2010-02-01T14:41:15.284Z · LW(p) · GW(p)
Well, my "moral reasons are to be..." there was kind of slippery. The 'strong moral realism' Roko outlined seems to be based on a factual premise ("All...beings...will agree..."), which I'd agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of ... moral imperative to accept moral imperatives -- by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-02-01T15:18:49.527Z · LW(p) · GW(p)
What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought do this, you ought not do that" that is basically the definition of ought.
I'm saying"this oughtness, whatever it is, is the same thing that you mean when you talk about 'morality'. So "ought I be moral?" directly translates to "is it moral to be moral?"
I'm not saying "only morality has the authority to answer this question" but rather "uh... 'is X moral?' is kind of what you actually mean by ought/should/etc, isn't it? ie, if I do a bit of a trace in your brain, follow the word back to its associated concepts, isn't it going to be pointing/labeling the same algorithms that "morality" labels in your brain?
So basically it amounts to "yes, there're things that one ought to do... and there can exist beings that know this but simply don't care about whether or not they 'ought' to do something."
It's not that another being refuses to recognize this so much as they'd be saying "So what? we don't care about this 'oughtness' business." It's not a disagreement, it's simply failing to care about it.
Replies from: Breakfast↑ comment by Breakfast · 2010-02-01T16:29:11.012Z · LW(p) · GW(p)
What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought do this, you ought not do that" that is basically the definition of ought.
I'd object to this simplification of the meaning of the word (I'd argue that 'ought' means lots of different things in different contexts, most of which aren't only reducible to categorically imperative moral claims), but I suppose it's not really relevant here.
I'm pretty sure we agree and are just playing with the words differently.
There are certain things one ought to do -- and by 'ought' I mean you will be motivated to do those things, provided you already agree that they are among the 'things one ought to do'
and
There is no non-circular answer to the question "Why should I be moral?", so the moral realists' project is sunk
seem to amount to about the same thing from where I sit. But it's a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that "there are things one ought to do".
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-02-02T04:50:23.404Z · LW(p) · GW(p)
What's failing?
"what is 2+3?" has an objectively true answer.
The fact that some other creature might instead want to know the answer to the question "what is 6*7?" (which also has an objectively true answer) is irrelevant.
How does that make "what is 2+3?" less real?
Similarly, how does the fact that some other beings might care about something other than morality make questions of the form "what is moral? what should I do?" non-objective?
It's nothing to do with agreement. When you ask "ought I do this?", well... to the extent that you're not speaking empty words, you're asking SOME specific question.
There is some criterion by which "oughtness" can be judged... that is, the defining criterion. It may be hard for you to articulate, it may only be implicitly encoded in your brain, but to the extent that the word is a label for some concept, it means something.
I do not think you'd argue too much against this.
I make an additional claim: That that which we commonly refer to in these contexts by words like "Should", "ought" and so on is the same thing we're referring to when we say stuff like "morality".
To me "what should I do?" and "what is the moral thing to do?" are basically the same question, pretty much.
"Ought I be moral?" thus would translate to "ought I be the sort of person that does what I ought to do?"
I think the answer to that is yes.
There may be beings that agree with that completely but take the view of "but we simply don't care about whether or not we ought to do something. It is not that we disagree with your claims about whether one ought to be moral. We agree we ought to be moral. We simply place no value in doing what one 'ought' to do. Instead we value certain other things." But screw them... I mean, they don't do what they ought to do!
(EDIT: minor changes to last paragraph.)
Replies from: Alicorn↑ comment by Alicorn · 2010-02-02T04:53:53.496Z · LW(p) · GW(p)
"what is 2+3?" has an objectively true answer. The fact that some other creature might instead want to know the answer to the question "what is 6*7?" (which also has an objectively true answer) is irrelevant.
I just want to know, what is six by nine?
Replies from: Psy-Kosh↑ comment by RomanDavis · 2010-05-24T16:22:55.938Z · LW(p) · GW(p)
Oh shit. I get it. Morality exists outside of ourselves in the same way that paperclips exist outside clippies.
Babyeating is justified by some of the same impulses as baby saving: protecting one's own genetic line.
It's not necessarily as well motivated by the criterion of saving sentient creatures from pain, but you might be able to make an argument for it. Maybe if you took the opposite path and said not that pain was bad, but that sentience / long life / grandchildren was good, and baby eating was a "moral decision" for having grandchildren.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2010-05-24T16:45:44.649Z · LW(p) · GW(p)
First part yes, rest... not quite. (or maybe I'm misunderstanding you?)
"Protecting one's own genetic line" would be more the evolutionary reason. ie, part of the process that led to us valuing morality as opposed to valuing paperclips. (or, hypothetically fictionally alternately, part of the process that led to the Babyeaters valuing babyeating instead of valuing morality.)
But that's not exactly a moral justification as much as it is part of an explanation of why we care about morality. We should save babies... because! ie, Babies (or people in general, for that matter) dying is bad. Killing innocent sentients, especially those that have had the least opportunity to live, is extra bad. The fact that I care about this is ultimately in part explained via evolutionary processes, but that's not the justification.
The hypothetical Babyeaters do not care about morality. That's kind of the point. It's not that they've come to different conclusions about morality as much as the thing that they value isn't quite morality in the first place.
Replies from: RomanDavis↑ comment by RomanDavis · 2010-05-29T16:45:00.426Z · LW(p) · GW(p)
I... don't think so. One theory of morality is that killing/death is bad. Sure, that's at least a component of most moral systems, but there are certain circumstances under which killing is good or okay. Such as if the person you're killing is a Nazi or a werewolf, or if they are a fetus you could not support to adulthood, or trying to kill you, or a death row inmate guilty of a crime by rule of law.
Justifications for killing are often moral.
Babyeaters are, in a way that at least bears similarities to human morality, justified by giving the fewer remaining children a chance at a life with the guidance of adult babyeaters, and more resources, since they don't have to compete against millions of their siblings.
This allows babyeaters to develop something like empathy, affection, bonding, love, and happiness for the surviving babyeater kind. Without this, babyeaters would be unable to make a babyeater society, and it's really easy to apply utilitarianism to it in the same way utilitarian theory can be applied to human morality.
It's also justified because it's a sacrifice of your own genetic line, rather than the eating of other babyeaters' children, which is the kind of thing a grandchildren-maximizer would do. The needs of the many > the wants of the few, which also plays a part in various theories of morality.
I'd say they reached the same conclusion that we did about most things; it's just that they took a necessary and important moral sacrifice and turned it into a ritual that is now detached from morality.
It damn well sounds like we're talking about the same thing. The only objection I can think of is that they're aliens and that that would be highly improbable, but if morality is just an evolutionary optimization strategy among intelligent minds, even something that could be computed mathematically, then it isn't necessarily any more unlikely than that certain parts of human and plant anatomy would follow the Fibonacci sequence.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T03:35:12.566Z · LW(p) · GW(p)
Only in the sense that "2 + 2 = 4" is not fantastically circular.
Replies from: prase↑ comment by prase · 2010-02-03T13:04:31.356Z · LW(p) · GW(p)
In some sense, the analogy between morality and arithmetic is right. On the other hand, the meaning of arithmetic can be described precisely enough that everybody means the same thing by using that word. Here, I don't know exactly what you mean by morality. Yes, saving babies, not committing murder and all that stuff, but when it comes to details, I am pretty sure that you will often find yourself disagreeing with others about what is moral. Of course, in your language, any such disagreement means that somebody is wrong about the fact. What I am uncomfortable with is the lack of an unambiguous definition.
So, there is a computation named "morality", but nobody knows exactly what it is, and nobody gives methods for discovering new details of the still-incomplete definition. Fair enough, but I don't see any compelling argument for attaching words to only partly defined objects, or for caring too much about them. It seems to me that this approach pictures morality as ineffable stuff, although of a different kind than the standard bad philosophy does.
↑ comment by Rain · 2010-02-09T20:44:39.558Z · LW(p) · GW(p)
It seems you've encountered a curiosity-stopper, and are no longer willing to consider changes to your thoughts on morality, since that would be immoral. Is this the case?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-10T00:53:33.837Z · LW(p) · GW(p)
Wha? No. But you'd have to offer me a moral reason, as opposed to an immoral one.
Replies from: Alicorn↑ comment by Alicorn · 2010-02-10T01:00:52.475Z · LW(p) · GW(p)
How about amoral reasons? Are those okay?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-10T12:51:31.045Z · LW(p) · GW(p)
...I'd like to see an example?
Replies from: Alicorn↑ comment by byrnema · 2010-02-01T01:43:09.660Z · LW(p) · GW(p)
However, it happens that the vast majority kinds of possible minds don't give a crap about morality, and while they might agree with us about what they should do, they wouldn't find that motivating.
What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should? Would your position hold that it is unlikely for them to have a different list or that they must be mistaken about the list -- that caring about what you "should" do means having the list we have?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T01:48:21.481Z · LW(p) · GW(p)
What about the minds that disagree with us about what they should do, and yet do care about doing what they think they should?
How'd they end up with the same premises and different conclusions? Broken reasoning about implications, like the human practice of rationalization? Bad empirical pictures of the physical universe leading to poor policy? If so, that all sounds like a perfectly ordinary situation.
Replies from: byrnema↑ comment by byrnema · 2010-02-01T02:03:59.101Z · LW(p) · GW(p)
How'd they end up with the same premises and different conclusions?
They care about doing what is morally right, but they have different values. The baby-eaters, for example, thought it was morally right to optimize whatever they were optimizing with eating the babies, but didn't particularly value their babies' well-being.
Replies from: orthonormal↑ comment by orthonormal · 2010-02-01T02:40:53.511Z · LW(p) · GW(p)
Er, you might have missed the ancestor of this thread. In the conflict between fundamentally different systems of preference and value (more different than those of any two humans), it's probably more confusing than helpful to use the word "should" for the other one. Thus we might introduce another word, should2, which stands in relation to the aliens' mental constitution (etc) as should stands to ours.
This distinction is very helpful, because we might (for example) conclude from our moral reasoning that we should respect their moral values, and then be surprised that they don't reciprocate, if we don't realize that that aspect of should needn't have any counterpart in should2. If you use the same word, you might waste time trying to argue that the aliens should do this or respect that, applying the kind of moral reasoning that is valid in extrapolating should; when they don't give a crap for what they should do, they're working out what they should2 do.
(This is more or less the same argument as in Moral Error and Moral Disagreement, I think.)
Replies from: byrnema↑ comment by byrnema · 2010-02-01T02:52:07.878Z · LW(p) · GW(p)
I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should?" I clearly mean should2 here.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-02-01T05:11:16.203Z · LW(p) · GW(p)
I'm not sure. How can there be any confusion when I say they "do care about doing what they think they should?" I clearly mean should2 here.
I think it's perfectly clear. Eliezer seems to disapprove of this usage and I think he claims that it is not clear, but I'm less sure of that.
I propose that a moral relativist is someone who likes this usage.
↑ comment by TheAncientGeek · 2014-05-29T15:35:29.044Z · LW(p) · GW(p)
There remains a third option in addition to evolutionarily hardwired stuff and ineffable, transcendent stuff.
↑ comment by aausch · 2010-02-01T01:42:39.650Z · LW(p) · GW(p)
This is the interpretation I also have of Eliezer's view, and it confuses me, as it applies to the story.
For example, I would expect aliens which do not value morality to be significantly more difficult to communicate with.
Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.
I interpreted the story as showing aliens which, as a quirk of their history and culture, have significant holes in their morality - holes which, given enough time, I would expect will disappear.
Replies from: orthonormal↑ comment by orthonormal · 2010-02-01T02:48:49.155Z · LW(p) · GW(p)
Also, the back story for the aliens gives a plausible argument for their actions as arising from a different path towards the same ultimate morality.
Really? Although babyeater_should coincides with akon_should on the notion of "toleration of reasonable mistakes" and on the Prisoner's Dilemma, it seems clear from the story that these functions wouldn't converge on the topic of "eating babies". (If the Superhappies had their way, both functions would just be replaced by a new "compromise" function, but neither the Babyeaters nor the humans want that, and it appears to be the wrong choice according to both babyeater_should and akon_should.)
↑ comment by ata · 2010-02-01T10:47:36.330Z · LW(p) · GW(p)
I haven't finished reading your meta-ethics sequence, so I apologize in advance if this is something that you've already addressed, but just from this exchange, I'm wondering:
Suppose that instead of talking about humans and Babyeaters, we talk about groups of humans with equally strong feelings of morality but opposite ideas about it. Suppose we take one person who feels moral when saving a little girl from being murdered, and another person who feels moral when murdering a little girl as punishment for having being raped. This seems closely analogous to your "Morality is about how to save babies, not eat them, everyone knows that and they happen to be right." It would sound just as reasonable to say that everybody knows that morality is about saving children rather than murdering them, but sadly, it's not the case that "everybody knows" this: as you know, there are cultures existing right now where a girl would be put to death by honestly morally-outraged elders for the abominable sin of being raped, horrifying though this fact is.
So let's take two people (or two larger groups of people, if you prefer) from each of these cultures. We could have them imagine these actions as intensely as possible, and scan their brains for relevant electrical and chemical information, find out what parts of the brain are being used and what kinds of emotions are active. (If a control is needed, we could scan the brain of someone intensely imagining some action everyone would consider irrelevant to morality, such as brushing one's teeth. I don't think there are any cultures that deem that evil, are there?) If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word "morality" with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling? Or would you conclude that this is a situation where two people are talking about the same subject matter but have drastically opposing ideas about it?
If the latter is the case, then I do think I get the point of the Babyeater thought experiments: although they appear to us to have some mechanism of making moral judgments (judgments that we find horrible), this mechanism serves different cognitive functions for them than our moral intuition does for us, and it originated in them for different reasons. Therefore, they cannot be reasonably considered to be differently-calibrated versions of the same feature. Is that right?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T18:14:23.487Z · LW(p) · GW(p)
If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word "morality" with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling?
Depends. If the child-murderer knew everything about the true state of affairs and everything about the workings of their own inner mind, would they still disagree with the child-rescuer? If so, then it's pretty futile to pretend that they're talking about the same subject matter when they talk about that-which-makes-me-experience-a-feeling-of-being-justified. It would be like if one species of aliens saw green when contemplating real numbers and another species of aliens saw green when contemplating ordinals; attempts to discuss that-which-makes-me-see-green as if it were the same mathematical subject matter are doomed to chaos. By the way, it looks to me like a strong possibility is that reasonable methods of extrapolating volitions will give you a spread of extrapolated-child-murderers some of which are perfectly selfish hedonists, some of which are child-rescuers, and some of which are Babyeaters.
And yes, this was the approximate point of the Babyeater thought experiment.
↑ comment by Wei Dai (Wei_Dai) · 2010-01-31T21:25:33.688Z · LW(p) · GW(p)
But I just described two kinds of subject matter that are the only two kinds of subject matter I know about: physical facts and mathematical facts.
Suppose I ask
- What is rationality?
- Is UDT the right decision theory?
- What is the right philosophy of mathematics?
Am I asking about physical facts or logical/mathematical facts? It seems like I'm asking about a third category of "philosophical facts".
We could say that the answer to "what is rationality" is whatever my meta-rationality computes, and hence reduce it to a physical+logical fact, but that really doesn't seem to help at all.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T22:49:02.176Z · LW(p) · GW(p)
These all sound to me like logical questions where you don't have conscious access to the premises you're using, and can only try to figure out the premises by looking at what seem like good or bad conclusions. But with respect to the general question of whether we are talking about (a) the way events are or (b) which conclusions follow from which premises, it sounds like we're doing the latter. Other "philosophical" questions (like 'What's up with the Born probabilities?' or 'How should I compute anthropic probabilities?') may actually be about (a).
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-01T09:24:46.234Z · LW(p) · GW(p)
Your answer seemed wrong to me, but it took me a long time to verbalize why. In the end, I think it's a map/territory confusion.
For comparison, suppose I'm trying to find the shortest way from home to work by visualizing a map of the city. I'm doing a computation in my mind, which can also be viewed as deriving implications from a set of premises. But that computation is about something external; and the answer isn't just a logical fact about what conclusions follow from certain premises.
When I ask myself "what is rationality?" I think the computation I'm doing in my head is also about something external to me, and it's not just a logical question where I don't have conscious access to the premises that I'm using, even though that's also the case.
So my definition of moral realism would be that when I do the meta-moral computation of asking "what moral premises should I accept?", that computation is about something that is not just inside my head. I think this is closer to what most people mean by the phrase.
Given the above, I think your meta-ethics is basically a denial of moral realism, but in such a way that it causes more confusion than clarity. Your position, if translated into the "shortest way to work" example, would be if someone told you that there is no fact of the matter about the shortest way to work because the whole city is just a figment of your imagination, and you reply that there is a fact of the matter about the computation in your mind, and that's good enough for you to call yourself a realist.
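To make the analogy concrete, here is a minimal sketch (my own toy example, not anything from the thread) of the shortest-route computation: it runs entirely inside my head (or this program), but what makes its answer right or wrong is the external city the graph is supposed to map.

```python
# Hypothetical city map; the intersections and streets are made up for illustration.
from collections import deque

city = {
    "home":   ["park", "bridge"],
    "park":   ["home", "work"],
    "bridge": ["home", "market"],
    "market": ["bridge", "work"],
    "work":   ["park", "market"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: fewest street segments from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route(city, "home", "work"))  # ['home', 'park', 'work']
```

If the real city differs from this graph, the computation is simply wrong about the territory -- which is the sense of "aboutness" at issue: the question being asked was never "what follows from this graph?" but "what is the shortest way through the actual city?"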
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T09:47:22.619Z · LW(p) · GW(p)
When I ask myself "what is rationality?" I think the computation I'm doing in my head is also about something external to me
Well, if you're asking about human rationality, then the prudent-way-to-think involves lots of empirical info about the actual flaws in human cognition, and so on. If you're asking about rationality in the sense of probability theory, then the only reference to the actual that I can discern is about anthropics and possibly prudent priors - things like the Dutch Book Argument are math, which we find compelling because of our values.
If you think that we're referring to something else - what is it, where is it stored? Is there a stone tablet somewhere on which these things are written, on which I can scrawl graffiti to alter the very fabric of rationality? Probably not - so where are the facts that the discourse is about, in your view?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-01T10:32:11.001Z · LW(p) · GW(p)
I think "what is rationality" (and by that I mean ideal rationality) is like "does P=NP". There is some fact of the matter about it that is independent of what premises we choose to, or happen to, accept. I wish I knew where these facts live, or exactly how it is that we have any ability to determine them, but I don't. Fortunately, I don't think that really weakens my argument much.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T10:42:58.222Z · LW(p) · GW(p)
This is exactly what I refer to as a "logical fact" or "which conclusions follow from which premises". Wasn't that clear?
Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links, i.e., if the axioms are true of a model then the theorem is true of that model. Which is, I think, conventional in mathematics, but I suppose it could be less obvious.
In the case of P!=NP, you'll still need some axioms to prove it, and the axioms will identify the subject matter - they will let you talk about computations and running time, just as the Peano axioms identify the subject matter of the integers. It's not that you can make 2 + 2 = 5 by believing differently about the same subject matter, but that different axioms would cause you to be talking about a different subject matter than what we name the "integers".
Is this starting to sound a little familiar?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-01T12:22:10.323Z · LW(p) · GW(p)
Actually, I guess it could be a bit less clear if you're not already used to thinking of all math as being about theorems derived from axioms which are premise-conclusion links
But that's not all that math is. Suppose we eventually prove that P!=NP. How did we pick the axioms that we used to prove it? (And suppose we pick the wrong axioms. Would that change the fact that P!=NP?) Why are we pretty sure today that P!=NP without having a chain of premise-conclusion links? These are all parts of math; they're just parts of math that we don't understand.
ETA: To put it another way, if you ask someone who is working on the P!=NP question what he's doing, he is not going to answer that he is trying to determine whether a specific set of axioms proves or disproves P!=NP. He's going to answer that he's trying to determine whether P!=NP. If those axioms don't work out, he'll just pick another set. There is a sense in which the problem is about something that is not identified by any specific set of axioms he happens to hold in his brain, and any set of axioms he does pick is just a map of a territory that's "out there". But according to your meta-ethics, there is no "out there" for morality. So why does it deserve to be called realism?
Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we're not sure how that works)?
Replies from: Eliezer_Yudkowsky, nolrai↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-01T17:43:48.391Z · LW(p) · GW(p)
I don't see anything here that is not a mixture of physical facts and logical facts (that is, truths about causal events and truths about premise-conclusion links). Physical computers within our universe may be neatly described by compact axioms. Logic (in my not-uncommon view) deals with semantic implication: what is true in a model given that the axioms are true of it. If you prove P!=NP using axioms that happen to apply to the computers of this universe then P!=NP for them as well, and the axioms will have been picked out to be applicable to real physics - a mixture of physical fact and logical fact. I don't know where logical facts are stored or what they are, just as I don't yet know what makes the universe real, although I repose some confidence that the previous two questions are wrong - but so far I'm standing by my view that truths are about causal events, logical implications, or some mix of the two.
Axioms are that which mathematicians use to talk about integers instead of something else. You could also take the perspective of trying to talk about groups of two pebbles as they exist in the real world, and wanting your axioms to correspond to their behavior. But when you stop looking at the real world and close your eyes and try to do math, then in order to do math about something, like about the integers, about these abstract objects of thought that you abstracted away from the groups of pebbles, you need axioms that identify the integers in mathspace. And having thus gained a subject of discourse, you can use the axioms to prove theorems that are about integers because the theorems hold wherever the axioms hold. And if those axioms are true of physical reality from the appropriate standpoint, your conclusions will also hold of groups of pebbles.
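For concreteness, here is roughly what such an identification looks like in the integer case: the standard first-order axioms for successor and addition, and a derivation of 2 + 2 = 4 from them (this is just textbook Peano arithmetic, included purely as an illustration):

\[
\begin{aligned}
&\text{(S1)}\quad \forall x.\ S(x) \neq 0 &&\text{(S2)}\quad \forall x\,y.\ S(x) = S(y) \rightarrow x = y \\
&\text{(A1)}\quad \forall x.\ x + 0 = x &&\text{(A2)}\quad \forall x\,y.\ x + S(y) = S(x + y)
\end{aligned}
\]

Writing \(2 = S(S(0))\) and \(4 = S(S(S(S(0))))\):

\[
S(S(0)) + S(S(0)) \;\stackrel{\text{A2}}{=}\; S\big(S(S(0)) + S(0)\big) \;\stackrel{\text{A2}}{=}\; S\big(S(S(S(0)) + 0)\big) \;\stackrel{\text{A1}}{=}\; S(S(S(S(0)))).
\]

Any structure of which these axioms are true (the subject matter they pick out) is a structure of which the theorem is true as well; and to whatever extent groups of pebbles satisfy the axioms, the conclusion carries over to pebbles.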
Perhaps more to the point, do you agree that there is a coherent meta-ethical position that does deserve to be called moral realism, which asserts that moral and meta-moral computations are about something outside of individual humans or humanity as a whole (even if we're not sure how that works)?
That depends; is morality a subject matter that we need premises to identify in subjectspace, in order to talk about morality rather than something else, stored in that same mysterious place as 2 + 2 = 4 being true of the integers but needing axioms to talk about the integers in the first place? Or are we talking about transcendent ineffable compelling stuff? The first view is, I think, coherent; I should think so, it's my own. The second view is not.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-02-02T02:04:54.789Z · LW(p) · GW(p)
I don't see anything here that is not a mixture of physical facts and logical facts (that is, truths about causal events and truths about premise-conclusion links).
Eliezer, a couple of comments ago I switched my focus from whether there are more than just physical and logical facts to whether "morality" refers to something independent of humanity, like (as I claimed) "rationality", "integer" and "P!=NP" do. Sorry if I didn't make that clear, and I hope I'm not being logically rude here, but the topic is confusing to me and I'm trying different lines of thought. (BTW, what kind of fact is it that there are only two kinds of facts?)
Quoting some background from Wikipedia:
When the Peano axioms were first proposed, Bertrand Russell and others agreed that these axioms implicitly defined what we mean by a "natural number". Henri Poincaré was more cautious, saying they only defined natural numbers if they were consistent; if there is a proof that starts from just these axioms and derives a contradiction such as 0 = 1, then the axioms are inconsistent, and don't define anything.
My question is, how can these questions even arise in our minds unless we already have a notion of "natural number" that is independent of the Peano axioms? There is something about integers that compels us to think about them, and the compelling force is not a set of axioms that is stored in our minds or spread virally from one mathematician to another.
Maybe the compelling force is that in the world that we live in, there are objects (like pebbles) whose behaviors can be approximated by the behavior of integers. I (in apparent disagreement with you) think this isn't the only compelling force (i.e., aliens who live in a world with no discrete objects would still invent integers), but it's enough to establish that when we talk about integers we're talking about something at least partly outside of ourselves.
To restate my position, I think it's unlikely that "morality" refers to anything outside of us, but many people do believe that, and I can't rule it out conclusively myself (especially given Toby Ord's recent comments).
↑ comment by loqi · 2010-02-01T10:01:13.920Z · LW(p) · GW(p)
The problem I have with this use of the words "should" and "good" is that it treats them like semantic primitives, rather than functions of context. We use them in explicitly delimited contexts all the time:
- "If you want to see why the server crashed, you should check the logs."
- "You should play Braid, if platformers are your thing."
- "You should invest in a quality fork, if you plan on eating many babies."
- "They should glue their pebble heaps together, if they want them to retain their primality."
Since I'm having a hard time parting with the "should" of type "Goal context -> Action on causal path to goal", the only sense I can make out of your position is that "if your goal is [extensional reference to the stuff that compels humans]" is a desirable default context.
If you agree that "What should be done with the universe" is a different question than "What should be done with the universe if we want to maximize entropy as quickly as possible", then either you're agreeing that what we want causally affects should-ness, or you're agreeing that the issue isn't really "should"'s meaning, it's what the goal context should be when not explicitly supplied. And you seem to be saying that it should be an extensional reference to commonplace human morality.
↑ comment by Roko · 2010-01-31T21:35:36.906Z · LW(p) · GW(p)
I didn't say that the babyeaters/humans have a factual disagreement. They have a war or a treaty, which is the really important prediction here.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T22:46:53.180Z · LW(p) · GW(p)
Mm... I can agree that a treaty has subject matter and is talked about by both parties, and refers to subsequent physical events. It has a treaty-kept-condition which is not quite the same thing as its being "true". (Note: in the original story, no treaty was actually discussed with the Babyeaters.) Where does that put it on a fact/opinion chart?
↑ comment by TheAncientGeek · 2014-05-29T15:22:45.898Z · LW(p) · GW(p)
It looks like you can disagree about values as well as facts.
comment by ShardPhoenix · 2010-02-01T10:28:29.157Z · LW(p) · GW(p)
This would all be a lot clearer if, in these sorts of discussions, we avoided using the dangling "should".
In other words, don't just say that "X should do Y", say that "X should do Y, in order for some specifiable condition to be fulfilled". That condition could be their preferences, your preferences, CEV's preferences if you believe in such a thing, or whatever. Oh yeah, and "...in order to be moral" is ambiguous and thus doesn't count.
comment by Vladimir_Nesov · 2010-01-31T20:48:13.893Z · LW(p) · GW(p)
Now that was confusing...
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-31T23:05:30.432Z · LW(p) · GW(p)
I'm very surprised to hear you say that - the subject matter seems clearer to me now than it was before.
comment by TheAncientGeek · 2014-05-29T16:11:55.559Z · LW(p) · GW(p)
Metaethical realism/objectivism makes the prediction that, under some conditions, agents will converge on ethical beliefs. The OP by [deleted] seems to be arguing that realism doesn't have any object-level consequences. Which is half true: absent a method of arriving at object-level truth, it doesn't; with one, it does.
comment by TheAncientGeek · 2014-05-29T15:49:25.127Z · LW(p) · GW(p)
"Should" has many meanings. Which moral system I believe in is a meta-level question, not an object-level one, and probably implies an epistemic-should or rational-should rather than a moral-should.
Likewise, not all normative judgement is morality. What you should do to maximise personal pleasure, or to make money, or to "win" in some way, is generally not what you morally should do.
comment by [deleted] · 2010-02-02T18:17:44.706Z · LW(p) · GW(p)
If morality is encapsulated by a formal system, by Gödel's second theorem there will exist statements--moral statements--which are simultaneously true and not true. Can such a system reject either moral relativism or moral absolutism without contradicting itself?
Replies from: ata↑ comment by ata · 2010-02-06T03:00:05.822Z · LW(p) · GW(p)
If a formal system proves even a single statement together with its negation, then you can prove any statement (and its opposite) in that system, and it is therefore useless. This was known before Gödel. His insight was that in a system that is consistent (and complex enough to represent arithmetic), there will be some proposition x such that you can prove neither x nor ~x. That's not "simultaneously true and not true" ("true" is in the territory, a formal system is the map); it just means that neither x nor ~x is provable within the system.
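For reference, the standard statements of the two theorems (informally, and assuming the usual technical conditions on the theory \(T\)):

\[
\begin{aligned}
&\textbf{First incompleteness theorem: } \text{if } T \text{ is consistent, recursively axiomatizable, and interprets basic arithmetic,}\\
&\qquad \text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T.\\
&\textbf{Second incompleteness theorem: } \text{under mild additional assumptions on } T,\ T \nvdash \mathrm{Con}(T).
\end{aligned}
\]

Neither theorem says that anything is "simultaneously true and not true"; both are claims about what the system can and cannot prove.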
In any case, I think this is fairly irrelevant to moral philosophy, because Gödel's theorems are about formal systems representing number theory. I suppose you could somehow represent empirical statements (including moral statements, if we agreed on exactly what facts about reality they signify) in that form — take a structure representing the entire universe as an axiom, and deduce theorems from there — but that's rather impractical for obvious reasons, and there's nothing that really suggests that this provides any analogous insights about simpler and more feasible modes of reasoning. In fact, you could change your statement to talk about any area of knowledge (say, "If science is encapsulated by a formal system..." "If aesthetics is encapsulated...") and it would make just as much sense (or just as little, rather).