Why didn't people (apparently?) understand the metaethics sequence?
post by ChrisHallquist · 2013-10-29T23:04:25.408Z · LW · GW · Legacy · 231 comments
There seems to be a widespread impression that the metaethics sequence was not very successful as an explanation of Eliezer Yudkowsky's views. It even says so on the wiki. And frankly, I'm puzzled by this... hence the "apparently" in this post's title. When I read the metaethics sequence, it seemed to make perfect sense to me. I can think of a couple things that may have made me different from the average OB/LW reader in this regard:
- I read Three Worlds Collide before doing my systematic read-through of the sequences.
- I have a background in academic philosophy, so I independently arrived at a thought similar to Richard Chappell's linking of Eliezer's metaethics to rigid designators.
231 comments
Comments sorted by top scores.
comment by Ishaan · 2013-10-30T03:36:48.930Z · LW(p) · GW(p)
I think what confuses people is that he
1) claims that morality isn't arbitrary and we can make definitive statements about it
2) Also claims no universally compelling arguments.
The confusion is resolved by realizing that he defines the words "moral" and "good" as roughly equivalent to human CEV.
So according to Eliezer, it's not that humans think love, pleasure, and equality are Good and paperclippers think paperclips are Good. It's that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy. The Paperclipper doesn't think paperclips are good...it simply doesn't care about good, instead pursuing paperclippy.
Thus, moral relativism can be decried while "no universally compelling arguments" can be defended. Under this semantic structure, Paperclipper will just say "okay, sure...killing is immoral, but I don't really care as long as it's paperclippy."
Thus, arguments about morality among humans are analogous to Pebblesorter arguments about which piles are correct. In both cases, there is a correct answer.
It's an entirely semantic confusion.
I suggest that ethicists ought to have different words for the various different rigorized definitions of Good to avoid this sort of confusion. Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?
Edit: At the very least, CEV is one rigorization of Eliezer-Good, even if it doesn't articulate everything about it. There are multiple levels of rigor and naivety that may be involved here. Eliezer-good is more rigorous than "good" but might not capture all the subtleties of the naive conception. CEV is more rigorous than Eliezer-good, but it might not capture the full range of subtleties within Eliezer-good (and it's only one of multiple ways to rigorize Eliezer-good...consider Coherent Aggregate Volition, for example, as an alternative rigorization of Eliezer-good).
Replies from: RobbBB, Tyrrell_McAllister, Eugine_Nier, TheAncientGeek, TheAncientGeek, passive_fist, TAG
↑ comment by Rob Bensinger (RobbBB) · 2013-10-30T07:29:06.841Z · LW(p) · GW(p)
I think what confuses people is that he 1) claims that morality isn't arbitrary and we can make definitive statements about it 2) Also claims no universally compelling arguments.
How does this differ from gustatory preferences?
1a) My preference for vanilla over chocolate ice cream is not arbitrary -- I really do have that preference, and I can't will myself to have a different one, and there are specific physical causes for my preference being what it is. To call the preference 'arbitrary' is like calling gravitation or pencils 'arbitrary', and carries no sting.
1b) My preference is physically instantiated, and we can make definitive statements about it, as about any other natural phenomenon.
2) There is no argument that could force any and all possible minds to like vanilla ice cream.
I raise the analogy because it seems an obvious one to me, so I don't see where the confusion is. Eliezer views ethics the same way just about everyone intuitively views aesthetics -- as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation -- facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
It's an entirely semantic confusion.
I don't know what you mean by this. Obviously semantics matters for disentangling moral confusions. But the facts I outlined above about how ice cream preference works are not linguistic facts.
Replies from: Ishaan, buybuydandavis
↑ comment by Ishaan · 2013-10-30T18:25:58.821Z · LW(p) · GW(p)
Good[1]: The human consensus on morality, the human CEV, the contents of a Friendly AI's utility function, "sugar is sweet, love is good". There is one correct definition of Good. "Pebblesorters do not care about good or evil, they care about grouping things into primes. Paperclippers do not care about good or evil, they care about paperclips".
Good[2]: An individual's morality, a special subset of an agent's utility function (especially the subset that pertains to how everyone ought to act). "I feel sugar is yummy, but I don't mind if you don't agree. However, I feel love is good, and if you don't agree we can't be friends."... "Pebblesorters think making prime-numbered pebble piles is good. Paperclippers think making paperclips is good". (A pebblesorter might selfishly prefer to maximize the number of pebble piles that they make themselves, but the same pebblesorter believes everyone ought to act to maximize the total number of pebble piles, rather than selfishly maximizing their own pebble piles. A perfectly good pebblesorter seeks only to maximize pebbles. Selfish pebblesorters hoard resources to maximize their own personal pebble creation. Evil pebblesorters knowingly make non-prime abominations.)
so I don't see where the confusion is.
Do you see what I mean by "semantic" confusion now? Eliezer (like most moral realists, universalists, etc) is using Good[1]. Those confused by his writing (who are accustomed to descriptive moral relativism, nihilism, etc) are using Good[2]. The maps are actually nearly identical in meaning, but because they are written in different languages it's difficult to see that the maps are nearly identical.
I'm suggesting that Good[1] and Good[2] are sufficiently different that people who often talk about morality ought to have different words for them. This is one of those "If a tree falls in the forest, does it make a sound?" debates, which are utterly useless because they center entirely around the definition of sound.
Eliezer views ethics the same way just about everyone intuitively views aesthetics -- as a body of facts that can be empirically studied and are not purely a matter of personal opinion or ad-hoc stipulation -- facts, though, that make ineliminable reference to the neurally encoded preferences of specific organisms, facts that are not written in the sky and do not possess a value causally independent of the minds in question.
Yup, I agree completely; that's exactly the correct way to think about it. The fact that you are able to give a definition of what ethics is while tabooing words like "good", "bad", and "moral" is the reason you can simultaneously uphold Good[2] with your gustatory analogy and still understand that Eliezer doesn't disagree with you even though he uses Good[1].
Most people's thinking is too attached to words to do that, so they get confused. Being able to think about what things are without referencing any semantic labels is a skill.
↑ comment by buybuydandavis · 2013-10-30T08:51:44.716Z · LW(p) · GW(p)
I raise the analogy because it seems an obvious one to me, so I don't see where the confusion is.
Your analysis clearly describes some of my understanding of what EY says. I use "yummy" as a go-to analogy for morality as well. But, EY also seems to be making a universalist argument, at least for "normal" humans. Because he talks about abstract computation, leaving particular brains behind, it's just unclear to me whether he's a subjectivist or a universalist.
The "no universally compelling argument" applies to Clippy versus us, but is there also no universally compelling argument with all of "us" as well?
Replies from: Jack, RobbBB
↑ comment by Jack · 2013-10-30T11:29:11.678Z · LW(p) · GW(p)
"Universalist" and "Subjectivist" aren't opposed or conflicting terms. "Subjective" simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is "objective". "Universalist" and "relativist" are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is.
You could have a metaethical theory that morality is both objective and relative. For example, you could define morality as what the law says and it will be relative from country to country as laws differ. You could also have a subjective and universal meta-ethics. Morality judgments could be statements about the attitudes of people but all people could have the same attitudes.
I take Eliezer to hold something like the latter-- moral judgments aren't about people's attitudes simpliciter: they're about what they would be if people were perfectly rational and had perfect information (he's hardly the first among philosophers, here). It is possible that the outcome of that would be more or less universal among humans or even a larger group. Or at least some subset of attitudes might be universal. But I could be wrong about his view: I feel like I just end up reading my view into it whenever I try to describe his.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-04T19:02:53.997Z · LW(p) · GW(p)
"Universalist" and "Subjectivist" aren't opposed or conflicting terms. "Subjective" simply says that moral statements are really statements about the attitudes or opinions of people (or something else with a mind). The opposing term is "objective". "Universalist" and "relativist" are on a different dimension from subjective and objective. Universal vs. relative is about how variable or not variable morality is.
If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal.
You could have a metaethical theory that morality is both objective and relative. For example, you could define morality as what the law says and it will be relative from country to country as laws differ.
If morality is relative to groups rather than individuals, it is still relative. Morality is objective when the truth values of moral statements don't vary with individuals or groups, not merely when they vary with empirically discoverable facts.
You could also have a subjective and universal meta-ethics. Morality judgments could be statements about the attitudes of people but all people could have the same attitudes.
Replies from: Jack
↑ comment by Jack · 2013-11-05T03:31:27.290Z · LW(p) · GW(p)
If morality varies with individuals, as required by subjectivism, it is not at all universal, so the two are not orthogonal.
Subjectivism does not require that morality varies with individuals.
Morality is objective when the truth values of moral statements don't vary with individuals or groups
No, see the link above.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-05T09:08:07.563Z · LW(p) · GW(p)
The link supports what I said. Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them. It doesn't mean that any two people will necessarily have a different morality, but why would I assert that?
Replies from: Jack
↑ comment by Jack · 2013-11-05T10:22:54.876Z · LW(p) · GW(p)
Subjectivism requires that moral claims have truth values which, in principle, depend on the individual making them
This is not true of all subjectivisms, as the link makes totally clear. Subjective simply means that something is mind-dependent; it need not be the mind of the person making the claim-- or not only the mind of the person making the claim. For instance, the facts that determine whether or not a moral claim is true could consist in just the moral opinions and attitudes where all humans overlap.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-05T10:38:23.742Z · LW(p) · GW(p)
There are people who use "subjective" to mean "mental", but they shouldn't.
↑ comment by Rob Bensinger (RobbBB) · 2013-10-30T10:34:16.688Z · LW(p) · GW(p)
But, EY also seems to be making a universalist argument, at least for "normal" humans.
If you have in mind 'human universals' when you say 'universality', that's easily patched. Morality is like preferring ice cream in general, rather than like preferring vanilla ice cream. Just about every human likes ice cream.
Because he talks about abstract computation, leaving particular brains behind, it's just unclear to me whether he's a subjectivist or a universalist.
The brain is a computer, hence it runs 'abstract computations'. This is true in essentially the same sense that all piles of five objects are instantiating the same abstract 'fiveness'. If it's mysterious in the case of human morality, it's not only equally mysterious in the case of all recurrent physical processes; it's equally mysterious in the case of all recurrent physical anythings.
Some philosophers would say that brain computations are both subjective and objective -- metaphysically subjective, because they involve our mental lives, but epistemically objective, because they can be discovered and verified empirically. For physicalists, however, 'metaphysical subjectivity' is not necessarily a joint-carving concept. And it may be possible for a non-sentient AI to calculate our moral algorithm. So there probably isn't any interesting sense in which morality is subjective, except maybe the sense in which everything computed by an agent is 'subjective'.
I don't know anymore what you mean by 'universalism'.
is there also no universally compelling argument with all of "us" as well?
There are universally compelling arguments for all adolescent or adult humans of sound mind. (And many pre-adolescent humans, and many humans of unsound mind.)
↑ comment by Tyrrell_McAllister · 2013-10-31T22:45:32.147Z · LW(p) · GW(p)
Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?
This leaves out the "rigid designator" bit that people are discussing up-thread. Your formulation invites the response, "So, if our CEV were different, then different things would be good?" Eliezer wants the answer to this to be "No."
Perhaps we can say that "Eliezer-Good" is roughly synonymous with "Our CEV as it actually is in this, the actual, world as this world is right now."
Thus, if our CEV were different, we would be in a different possible world, and so our CEV in that world would not determine what is good. Even in that different, non-actual, possible world, what is good would be determined by what our actual CEV says is good in this, the actual, world.
↑ comment by Eugine_Nier · 2013-10-31T03:33:17.414Z · LW(p) · GW(p)
1) claims that morality isn't arbitrary and we can make definitive statements about it
2) Also claims no universally compelling arguments.
Both these statements are also true about physics, yet nobody seems to be confused about it in that case.
Replies from: Ishaan
↑ comment by Ishaan · 2013-10-31T03:59:53.504Z · LW(p) · GW(p)
What do you mean? Rational agents ought to converge upon what physics is.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2013-10-31T04:11:05.897Z · LW(p) · GW(p)
Rational agents ought to converge upon what physics is.
Only because that's considered part of the definition of "rational agent".
Replies from: Ishaan
↑ comment by Ishaan · 2013-10-31T05:03:58.831Z · LW(p) · GW(p)
Yes? But the recipient of an "argument" is implicitly an agent who at least partially understands epistemology. There is not much point in talking about agents which aren't rational or at least partly-bounded-rational-ish. Completely insensible things are better modeled as objects, not agents, and you can't argue with an object.
↑ comment by TheAncientGeek · 2013-11-04T19:47:09.923Z · LW(p) · GW(p)
It's that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy
And can aliens have love and pleasure, or is Good a purely human concept?
Replies from: Ishaan
↑ comment by Ishaan · 2013-11-04T22:24:41.708Z · LW(p) · GW(p)
By Eliezer's usage? I'd say aliens might have love and pleasure in the same way that aliens might have legs...they just as easily might not. Think "wolf" vs "snake" - one has legs and feels love while the other does not.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-04T23:27:04.012Z · LW(p) · GW(p)
Let's say they have love and pleasure. Then why would we want to define morality in a human-centric way?
↑ comment by TheAncientGeek · 2013-11-04T18:51:53.732Z · LW(p) · GW(p)
1) claims that morality isn't arbitrary and we can make definitive statements about it
That isn't non-relativism. Subjectivism is the claim that the truth of moral statements varies with the person making them. That is compatible with the claim that they are non-arbitrary, since they may be fixed by features of persons that they cannot change, and which can be objectively discovered. It isn't a particularly strong version of subjectivism, though.
2) Also claims no universally compelling arguments.
That isn't non-realism. Non-realism means that there are no arguments or evidence that will compel suitably equipped and motivated agents.
The confusion is resolved by realizing that he defines the words "moral" and "good" as roughly equivalent to human CEV.
The CEV of individual humans, or humanity? You have been ambiguous about an important subject EY is also ambiguous about.
Replies from: Ishaan
↑ comment by Ishaan · 2013-11-04T22:32:41.436Z · LW(p) · GW(p)
I'm ambiguous about it because I'm describing EY's usage of the word, and he's been ambiguous about it.
I typically adapt my usage to the person who I'm talking to, but the way that I typically define "good" in my own head is: "The subset of my preferences which do not in any way reference myself as a person"...or in other words, the behavior which I would prefer if I cared about everyone equally (If I was not selfish and didn't prefer my in-group).
Under my usage, different people can have different conceptions of good. "Good" is a function of the agent making the judgement.
A pebblesorter might selfishly want to make every pebble pile themselves, but they also might think that increasing the total number of pebble piles in general is "good". Then, according to the Pebblesorters, a "good" pebblesorter would put overall prime-pebble-pile maximization above their own personal prime-pebble-pile productivity. According to the Babyeaters, a "good" baby-eater would eat babies indiscriminately, even if they selfishly might want to spare their own. According to humans, Pebblesorter values are alien and baby-eater values are evil.
↑ comment by passive_fist · 2013-10-30T05:08:00.959Z · LW(p) · GW(p)
I think you're right here. He's saying, in a way, that moral absolutism only makes sense within context. Hence metaethics. It's kinda hard to wrap one's head around but it does make sense.
↑ comment by TAG · 2021-05-08T17:38:35.314Z · LW(p) · GW(p)
The question of what EY means is entangled with the question of why he thinks it's true.
This account of his meaning
It’s that love, pleasure, and equality are part of the definition of good
...is pretty incredible as an argument, because it appears to be an argument by definition...in fact, an argument by normative and novel definition...and he hates arguments by definition. [LW · GW]
Well, even if they are not all bad, his argument-by-definition is not one of the good ones, because it's not based on an accepted or common definition. Inasmuch as it's both a novel theory, and based on a definition, it's based on a novel definition.
comment by lukeprog · 2013-10-30T00:37:44.308Z · LW(p) · GW(p)
I remain confused by Eliezer's metaethics sequence.
Both there and in By Which It May Be Judged, I see Eliezer successfully arguing that (something like) moral realism is possible in a reductionist universe (I agree), but he also seems to want to say that in fact (something like) moral realism actually obtains, and I don't understand what the argument for that is. In particular, one way (the way?) his metaethics might spit up something that looks a lot like moral realism is if there is strong convergence of values upon (human-ish?) agents receiving better information, time enough to work out contradictions in their values, etc. But the "strong convergence of values" thesis hasn't really been argued, so I remain unclear as to why Eliezer finds it plausible.
Basically, I read the metaethics sequence as asserting both things but arguing only for the first.
But I'm not sure about this. Perhaps because I was already familiar with the professional metaethics vocabulary when I read the sequence, I found Eliezer's vocabulary for talking about positions in metaethics confusing.
I meant to explore these issues in a vocabulary I find more clear, in my own metaethics sequence, but I still haven't got around to it. :(
Replies from: komponisto, Ishaan, ChrisHallquist, buybuydandavis, Carinthium
↑ comment by komponisto · 2013-11-01T11:55:20.866Z · LW(p) · GW(p)
(I'm putting this as a reply to your comment because your comment is what made me think of it.)
In my view, Eliezer's "metaethics" sequence, despite its name, argues for his ethical theory, roughly
(1) morality[humans] = CEV[humans]
(N.B.: this is my terminology; Eliezer would write "morality" where I write "morality[humans]")
without ever arguing for his (implied) metaethical theory, which is something like
(2) for all X, morality[X] = CEV[X].
Worse, much of his effort is spent arguing against propositions like
(3) (1) => for all X, morality[X] = CEV[humans] (The Bedrock of Morality: Arbitrary?)
and
(4) (1) => morality[humans] = CEV["humans"] (No License To Be Human)
which, I feel, are beside the point.
Replies from: Douglas_Knight, TheOtherDave
↑ comment by Douglas_Knight · 2013-11-04T20:39:14.950Z · LW(p) · GW(p)
Eliezer's "metaethics" sequence, despite its name, argues for his ethical theory
Yes; what else would you do in metaethics?
Isn't its job to point to ethical theories, while the job of ethics is to assume you have agreed on a theory (an often false assumption)?
↑ comment by komponisto · 2013-11-05T07:42:49.088Z · LW(p) · GW(p)
Ethics is the subject in which you argue about which ethical theory is correct. In meta-ethics, you argue about how you would know if an ethical theory were correct, and/or what it would mean for an ethical theory to be correct, etc.
See here for a previous comment of mine on this.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2013-11-06T17:30:27.324Z · LW(p) · GW(p)
First, is ethics only about decision procedures? The existence of the concept of moral luck suggests not. Sure, you can say lots of people are wrong, but to banish them from the field of ethics is ridiculous. Virtue ethics is another example, less clearly a counterexample, but much more central.
The three level hierarchy at your link does nothing to tell what belongs in meta-ethics and what belongs in ethics. I don't think your comment here is consistent with your comment there and I don't think either comment has much to do with the three level hierarchy.
Meta-ethics is about issues that are logically prior to ethics. I reject your list. If there are disagreements about the logical priority of issues, then there should be disagreements about what constitutes meta-ethics. You could have a convention that meta-ethics is defined as a certain list of topics by tradition, but that's stupid. In particular, I think consequentialism vs deontology has high logical priority. Maybe you disagree with me, but to say that I am wrong by definition is not helpful.
Going back to Eliezer, I think that he does only cover meta-ethical claims and that they do pin down an ethical theory. Maybe other meta-ethical stances would not uniquely do so (contrary to my previous comment), but his do.
Replies from: komponisto
↑ comment by komponisto · 2013-11-07T09:32:53.917Z · LW(p) · GW(p)
First, is ethics only about decision procedures? The existence of the concept of moral luck suggests not.
It may not surprise you to learn that I am of the school that rejects the concept of moral luck. (In this I think I align with Eliezer.)
Meta-ethics is about issues that are logically prior to ethics
This is unobjectionable provided that one agrees about what ethics consists of. As far as I am aware, standard philosophical terminology labels utilitarianism (for example) as an ethical theory; yet I have seen people on LW refer to "utilitarian meta-ethics". This is the kind of usage I mean to disapprove of, and I hold Eliezer under suspicion of encouraging it by blurring the distinction in his sequence.
I should be clear about the fact that this is a terminological issue; my interest here is mainly in preserving the integrity of the prefix "meta", which I think has suffered excessive abuse both here and elsewhere. For whatever reason, Eliezer's use of the term felt abusive to me.
Part of the problem may be that Eliezer seemed to think the concept of rigid designation was the important issue, as opposed to e.g. the orthogonality thesis, and I found this perplexing (and uncharacteristic of him). Discomfort about this may have contributed to my perception that meta-ethics wasn't really the topic of his sequence, so that his calling it that was "off". But this is admittedly distinct from my claim that his thesis is ethical rather than meta-ethical.
Going back to Eliezer, I think that he does only cover meta-ethical claims and that they do pin down an ethical theory. Maybe other meta-ethical stances would not uniquely do so (contrary to my previous comment), but his do.
This is again a terminological point, but I think a sequence should be named after the conclusion rather than the premises. If his meta-ethical stance pins down an ethical theory, he should have called the sequence explaining it his "ethics" sequence; just as if I use my theory of art history to derive my theory of physics, then my sequence explaining it should be my "physics" sequence rather than my "art history" sequence.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2013-11-07T16:26:34.396Z · LW(p) · GW(p)
You demand that everyone accept your definition of ethics, excluding moral luck from the subject, but you simultaneously demand that meta-ethics be defined by convention.
I said both of those points (but not their conjunction) in my previous comment, after explicitly anticipating what you say here and I'm rather annoyed that you ignored it. I guess the lesson is to say as little as possible.
Replies from: komponisto
↑ comment by komponisto · 2013-11-07T19:48:20.096Z · LW(p) · GW(p)
Now just hold on a second. You are arguing by uncharitable formulation, implying that there is tension between two claims when, logically, there is none. (Forgive me for not assuming you were doing that, and thereby, according to you, "ignoring" your previous comment.) There is nothing contradictory about holding that (1) ethical theories that include moral luck are wrong; and (2) utilitarianism is an ethical theory and not a meta-ethical theory.
(1) is an ethical claim. (2) is the conjunction of a meta-ethical claim ("utilitarianism is an ethical theory") and a meta-meta-ethical claim ("utilitarianism is not a meta-ethical theory").
( I hereby declare this comment to supersede all of my previous comments on the subject of the distinction between ethics and meta-ethics, insofar as there is any inconsistency; and in the event there is any inconsistency, I pre-emptively cede you dialectical victory except insofar as doing so would contradict anything else I have said in this comment.)
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2013-11-07T20:50:02.554Z · LW(p) · GW(p)
OK, if you've abandoned your claim that "consequentialism is not a meta-ethical attribute" is true by convention, then that's fine. I'll just disagree with it and keep including consequentialism vs deontology in meta-ethics, just as I'll keep including moral luck in ethics.
↑ comment by TheAncientGeek · 2013-11-04T20:50:50.298Z · LW(p) · GW(p)
"In philosophy, meta-ethics is the branch of ethics that seeks to understand the nature of ethical properties, statements, attitudes, and judgments. Meta-ethics is one of the three branches of ethics generally recognized by philosophers, the others being normative ethics and applied ethics.
While normative ethics addresses such questions as "What should one do?", thus endorsing some ethical evaluations and rejecting others, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the nature of ethical properties and evaluations."
↑ comment by TheOtherDave · 2013-11-01T15:02:58.934Z · LW(p) · GW(p)
I would be surprised if Eliezer believed (1) or (2), as distinct from believing that CEV[X] is the most viably actionable approximation of morality[X] (using your terminology) we've come up with thus far.
This reminds me somewhat of the difference between believing that 2013 cryonics technology reliably preserves the information content of a brain on the one hand, and on the other believing that 2013 cryonics technology has a higher chance of preserving the information than burial or cremation.
I agree that that he devotes a lot of time to arguing against (3), though I've always understood that as a reaction to the "but a superintelligent system would be smart enough to just figure out how to behave ethically and then do it!" crowd.
I'm not really sure what you mean by (4).
Replies from: komponisto
↑ comment by komponisto · 2013-11-02T02:24:24.009Z · LW(p) · GW(p)
I would be surprised if Eliezer believed (1) or (2), as distinct from believing that CEV[X] is the most viably actionable approximation of morality[X] (using your terminology) we've come up with thus far.
I didn't intend to distinguish that finely.
I'm not really sure what you mean by (4).
(4) is intended to mean that if we alter humans to have a different value system tomorrow, we would also be changing what we mean (today) by "morality". It's the negation of the assertion that moral terms are rigid designators, and is what Eliezer is arguing against in No License To Be Human.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-11-02T13:08:41.215Z · LW(p) · GW(p)
Ah, gotcha. OK, thanks for clarifying.
↑ comment by Ishaan · 2013-10-30T19:21:49.563Z · LW(p) · GW(p)
In particular, one way (the way?) his metaethics might spit up something that looks a lot like moral realism is if there is strong convergence of values upon (human-ish?) agents receiving better information, time enough to work out contradictions in their values, etc. But the "strong convergence of values" thesis hasn't really been argued, so I remain unclear as to why Eliezer finds it plausible.
I don't think you're "confused" about what was meant. I think you understood exactly what was meant, and have identified a real (and, I believe, acknowledged?) problem with the moral realist definition of Good.
The assumption is that "if we knew more, thought faster, were more the people we wished we were, had grown up farther together” then a very large number of humans would converge onto moral agreement.
The assumption is that if you take a culture that practiced, say, human torture and sacrifice, into our economy, and give them the resources to live at a level of luxury similar to what we experience today and all of our knowledge, they would grow more intelligent, more globally aware, and their morality would slowly shift to become more like ours even in the absence of outside pressure. Our morality, however, would not shift to become more like theirs. It seems like an empirical question.
Alternatively, we could bite the bullet and just say that some humans simply end up with alien values that are not "good".
Replies from: TheAncientGeek, Moss_Piglet
↑ comment by TheAncientGeek · 2013-11-04T16:46:41.081Z · LW(p) · GW(p)
I don't think you're "confused" about what was meant. I think you understood exactly what was meant, and have identified a real (and, I believe, acknowledged?) problem with the moral realist definition of Good.
The assumption is that "if we knew more, thought faster, were more the people we wished we were, had grown up farther together” then a very large number of humans would converge onto moral agreement.
It's not the assumption that is good or bad, but the quality of argument provided for it.
↑ comment by Moss_Piglet · 2013-10-31T00:04:21.174Z · LW(p) · GW(p)
Alternatively, we could bite the bullet and just say that some humans simply end up with alien values that are not "good".
Seeing as about 1% of the population are estimated to be psychopaths, not to mention pathological narcissists, megalomaniacs, etc., it seems hard to argue that there isn't a large (if statistically insignificant) portion of the population who are natural ethical egoists rather than altruists. You could try to weasel around it like Mr Yudkowsky does, saying that they are not "neurologically intact," except that there is evidence that psychopathy at least is a stable evolutionary strategy rather than a malfunction of normal systems.
I'm usually not one to play the "evil psychopaths" card online, mainly because it's crass and diminishes the meaning of a useful medical term, but it's pretty applicable here. What exactly happens to all the psychopaths and people with psychopathic traits when you start extrapolating human values?
Replies from: Ishaan, Viliam_Bur
↑ comment by Ishaan · 2013-10-31T06:54:16.901Z · LW(p) · GW(p)
Why even stop at psychopaths? There are perfectly neurotypical people with strong desires for revenge-based justice, purity norms that I strongly dislike, etc. I'm not extremely confident that extrapolation will dissolve these values into deeper-order values, although my perception that intelligence in humans does at least seem to be correlated to values similar to mine is comforting in this respect.
Although really, I think this is reaching the point where we have to stop talking in terms of idealized agents with values and start thinking about how these models can be mapped to actual meat brains.
What exactly happens to all the psychopaths and people with psychopathic traits when you start extrapolating human values?
Well, under the shaky assumption that we have the ability to extrapolate in the first place, in practice what happens is that whoever controls the extrapolation sets which values are to be extrapolated, and they have a very strong incentive to put in only their own values.
By definition, no one wants to implement the CEV of humanity more than they want to implement their own CEV. But I would hope that most of the worlds impacted by the various human's CEVs would be a pretty nice places to live.
Replies from: None
↑ comment by [deleted] · 2013-11-10T14:37:29.366Z · LW(p) · GW(p)
By definition, no one wants to implement the CEV of humanity more than they want to implement their own CEV.
That depends. The more interconnected our lives become, the harder it gets to enhance my life or the lives of my loved ones through highly localized improvements. Once you get up to a sufficiently high level (vaccination programs are an obvious example), helping myself and my loved ones is easiest to accomplish by helping everyone all together, because the ripple effects reach my loved ones' loved ones and thus my loved ones, whom I value in themselves.
Favoring individual volition versus a group volition could be a matter of social-graph connectedness and weighting: it could be that for a sufficiently connected individual with sufficiently strong value-weight placed on social ties, that individual will feel better about sacrificing some personal preferences to admit their connections' values rather than simply subjecting their own close social connections to their personal volition.
Replies from: Ishaan
↑ comment by Ishaan · 2013-11-10T18:37:30.363Z · LW(p) · GW(p)
Then they have an altruistic EV. That's allowed.
But as far as your preference goes, your EV >= any other CEV. It has to be that way, tautologically. Extrapolated Volition is defined as what you would choose to do in the counter-factual scenario where you have more intelligence, knowledge, etc than you do now.
If you're totally altruistic, it might be that your EV is the CEV of humanity, but that means that you have no preference, not that you prefer humanity's CEV over your own. Remember, all your preferences, including the moral and altruistic ones, are included in your EV.
Replies from: None
↑ comment by [deleted] · 2013-11-10T19:53:22.694Z · LW(p) · GW(p)
Sorry, I don't think I'm being clear.
The notion I'm trying to express is not an entirely altruistic EV, or even a deliberately altruistic EV. Simply, this person has friends and family and such, and thus has a partially social EV; this person is at least altruistic towards close associates when it costs them nothing.
My claim, then, is that if we denote by n the number of hops from any one person to any other in the social graph of such agents:
lim_{n->0} Social Component of Personal EV = species-wide CEV
Now, there may be special cases, such as people who don't give a shit about anyone but themselves, but the idea is that as social connectedness grows, benefitting only myself and my loved ones becomes more and more expensive and unwieldy (for instance, income inequality and guard labor already have sizable, well-studied economic costs, and that's before we're talking about potential improvements to the human condition from AI!) compared to just doing things that are good for everyone without regard to people's connection to myself (they're bound to connect through a mutual friend or relative with some low degree, after all) or social status (because again, status enforcement is expensive).
So while the total degree to which I care about other people is limited (Social Component of Personal EV <= Personal EV), eventually that component should approximate the CEV of everyone reachable from me in the social graph.
The question, then, becomes whether that Social Component of my Personal EV is large enough to overwhelm some of my own personal preferences (I participate in a broader society voluntarily) or whether my personal values overwhelm my consideration of other people's feelings (I conquer the world and crush you beneath my feet).
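To make the intuition concrete, here is a rough, hypothetical sketch (not anything from the CEV literature): weight each other person's volition by social distance and see what happens as the graph becomes densely connected. The toy agents, their one-dimensional "volitions", and the exponential decay-by-hops model are all invented for illustration.

```python
# Toy model: the socially-weighted component of one person's EV, where other
# people's "volitions" are discounted by how many hops away they are.
def social_component(me, values, hops, decay=0.5):
    """Hop-discounted weighted average of everyone else's volition."""
    total, weight_sum = 0.0, 0.0
    for other, volition in enumerate(values):
        if other == me:
            continue
        w = decay ** hops[me][other]   # closer people count for more
        total += w * volition
        weight_sum += w
    return total / weight_sum

values = [1.0, 3.0, 5.0, 7.0]            # four agents' one-dimensional volitions

sparse_hops = {0: {1: 1, 2: 3, 3: 5}}    # sparse world: others are 1, 3, and 5 hops away
dense_hops = {0: {1: 1, 2: 1, 3: 1}}     # densely connected world: everyone is one hop away

print(social_component(0, values, sparse_hops))  # ~3.57, skewed toward the closest contact
print(social_component(0, values, dense_hops))   # 5.0, the plain average of everyone else
```

As everyone ends up roughly equally close, the weighting washes out and the social component approaches the unweighted aggregate of the group, which is the (hand-wavy) sense in which the limit above could equal a species-wide CEV.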
↑ comment by Viliam_Bur · 2013-10-31T12:33:15.284Z · LW(p) · GW(p)
Seems to me that to a significant degree psychopaths are successful because people around them have problems communicating. Information about what the specific psychopath did to whom is usually not shared. If that information were easily accessible to people before interacting with the psychopath, a lot of the psychopath's power would be lost.
Despite being introverted by nature, these days my heuristic for dealing with problematic people is to establish good communication lines among the non-problematic people. Then people often realize that what seemed like their specific problem is in fact almost everyone's problem with the same person, following the same pattern. When a former mystery becomes an obvious algorithm, it is easier to think about a counter-strategy.
Sometimes the mentally different person beats you not by using a strategy so complex you wouldn't understand it, but by using a relatively simple strategy that is so weird to you that you just don't notice it in the hypothesis space (and instead you imagine something more complex and powerful). But once you have enough data to understand the strategy, sometimes you can find and exploit its flaws.
A specific example of a powerful yet vulnerable strategy is lying strategically to everyone around you and establishing yourself as the only channel of information between different groups of people. Then you can make group A believe group B are idiots and vice versa, and make both groups see you as their secret ally. Your strategy can be stable for a long time, because when the groups believe each other to be idiots, they naturally avoid communicating with each other; and when they do, they realize the other side has completely wrong information, which they attribute to the other side's stupidity, not your strategic lying. -- Yet, if there is a person on each side who becomes suspicious of the manipulator, and if these two people can trust each other enough to meet and share their info (what each of them heard about the other side, and what actually happened), and if they make the result known to their respective groups, then... well, I don't actually know what happens, because right now I am exactly at this point in my specific undisclosed project... but I hope it can seriously backfire on the manipulator.
Of course, this is just speculation. If we made communication among non-psychopaths easier, the psychopaths would also make their next move in the arms race -- they could misuse the channels for more powerful attacks, or make people provide incorrect information about them by manipulation or threats. So it's not obvious that better communication would mean less power for psychopaths. But it seems to me that a lack of communication is always helpful for them, so more communication should generally be helpful. Even having the concept of a psychopath is helpful, although it can be abused. Investigating the specific weaknesses of psychopaths and making them widely known (just like the weaknesses of average people are generally known) could also reduce their advantage.
However, I imagine that the values of psychopaths are not so different from the values of average people. They are probably a subset, and the missing parts (such as empathy) are those that cause problems. Let's say they give extreme priority to feeling superior and watching their enemies crushed, and pretty much ignore everything else (a huge simplification). There is a chance their values are so different that they could be satisfied in a manner we would consider unfriendly, but they wouldn't -- for example, if reality is not valuable to them, why not give them an illusion of maximum superiority and everyone else a happy life, so everyone will have their utility function maximized? Maybe they would agree with this solution even if they had perfect intelligence and knowledge.
Replies from: WalterL
↑ comment by ChrisHallquist · 2013-10-30T03:08:59.188Z · LW(p) · GW(p)
In particular, one way (the way?) his metaethics might spit up something that looks a lot like moral realism is if there is strong convergence of values upon agents receiving better information, time enough to work out contradictions in their values, etc. But the "strong convergence of values" thesis hasn't really been argued, so I remain unclear as to why Eliezer finds it plausible.
When you say "agents" here, did you mean to say "psychologically normal humans"? Because the general claim I think Eliezer would reject, based on what he says on No Universally Compelling Arguments. But I do think he would accept the narrower claim about psychologically normal humans, or as he sometimes says "neurologically intact humans." And the argument for that is found in places like The Psychological Unity of Humankind, though I think there's an even better link for it somewhere - I seem to distinctly remember a post where he says something about how you should be very careful about attributing moral disagreements to fundamentally different values.
EDIT: Here is the other highly relevant post I was thinking of.
Replies from: lukeprog, TheAncientGeek, Eugine_Nier, TheAncientGeek
↑ comment by lukeprog · 2013-10-30T06:39:51.888Z · LW(p) · GW(p)
Yeah, I meant to remain ambiguous about how wide Eliezer means to cast the net around agents. Maybe it's psychologically normal humans, maybe it's wider or narrower than that.
I suppose 'The psychological unity of humankind' is sort of an argument that value convergence is likely at least among humans, though it's more like a hand-wave. In response, I'd hand-wave toward Sobel (1999); Prinz (2007); Doring & Steinhoff (2009); Doring & Andersen (2009); Robinson (2009); Sotala (2010); Plunkett (2010); Plakias (2011); Egan (2012), all of which argue for pessimism about value convergence. Smith (1994) is the only philosophical work I know of that argues for optimism about value convergence, but there are probably others I just don't know about.
Replies from: Ishaan
↑ comment by Ishaan · 2013-10-31T19:34:49.082Z · LW(p) · GW(p)
Some of the sources you are hand waving towards are (quite rightly) pointing out that rational agents need not converge, but they aren't looking at the empirical question of whether humans, specifically, converge. Only a subset of those sources are actually talking about humans specifically.
(^This isn't disagreement. I agree with your main suggestion that humans probably don't converge, although I do think they are at least describable by mono-modal distributions)
I'm not sure it's even appropriate to use philosophy to answer this question. The philosophical problem here is "how do we apply idealized constructs like extrapolated preference and terminal values to flesh-and-blood animals?" Things like "should values which are not biologically ingrained count as terminal values?" and similar questions.
...and then, once we've developed constructs to the point that we're ready to talk about the extent to which humans specifically converge, if at all, it becomes an empirical question.
↑ comment by TheAncientGeek · 2013-11-04T13:39:22.313Z · LW(p) · GW(p)
No Universally Compelling Arguments has been put to me as a decisive refutation of Moral Realism, by somebody who thought the LW line was anti-realist. It isn't a decisive refutation because no (non-strawman) realist thinks there are arguments that could compel an irrational person, an insane person, a very unintelligent person, and so on. Moral realists only need to argue that moral truths are independently discoverable by suitably motivated and equipped people, like mathematical truths (etc).
↑ comment by Eugine_Nier · 2013-10-31T03:12:48.301Z · LW(p) · GW(p)
When you say "agents" here, did you mean to say "psychologically normal humans"? Because the general claim I think Eliezer would reject, based on what he says on No Universally Compelling Arguments.
Well, "No Universally Compelling Arguments" also applies to physics, but it is generally believed that all sufficiently intelligent agents would agree on the laws of physics.
Replies from: None
↑ comment by [deleted] · 2013-11-10T13:04:38.353Z · LW(p) · GW(p)
True, but physics is discoverable via the scientific method, and ultimately, in the nastiest possible limit, via war. If we disagree on physics, all we have to do is highlight the disagreement and go to war over it: whichever one of us is closer to right will succeed in killing the other guy (and potentially a hell of a lot of other stuff).
Whereas if you try going to war over morality, everyone winds up dead and you've learned nothing, except possibly that almost everyone considers a Hobbesian war-of-all-against-all to be undesirable when it happens to him.
↑ comment by TheAncientGeek · 2013-11-04T16:08:26.696Z · LW(p) · GW(p)
EDIT: Here is the other highly relevant post I was thinking of.
I think what he is talking about there is lack of disagreement in the sense of incommensurability, or orthogonality as it is locally known. Lack of disagreement in the sense of convergence or consensus is a very different thing.
↑ comment by buybuydandavis · 2013-10-30T08:31:14.184Z · LW(p) · GW(p)
But the "strong convergence of values" thesis hasn't really been argued, so I remain unclear as to why Eliezer finds it plausible.
Hasn't been argued and seems quite implausible to me.
I find moral realism meaningful for each individual (you can evaluate choices according to my values applied with infinite information and infinite resources to think), but I don't find it meaningful when applied to groups of people, all with their own values.
EY finesses the point by talking about an abstract algorithm, and not clearly specifying what that algorithm actually implements, whether my values, yours, or some unspecified amalgamation of the values of different people. So the point of moral subjectivism vs. moral universalism is left unspecified, to be filled in by the imagination of the reader. To my ear, sometimes it seems one way, and sometimes the other. My guess was that this was intentional, as clarifying the point wouldn't take much effort. The discussions of EY's metaethics always strike me as peculiar, as he's wandering about here somewhere while people discuss how they're unclear just what conclusion he had drawn.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-04T17:02:47.898Z · LW(p) · GW(p)
I find moral realism meaningful for each individual (you can evaluate choices according to my values applied with infinite information and infinite resources to think),
I can see how that could be implemented. However, I don't see how that would count as morality. It amounts to Anything Goes, or Do What Thou Wilt. I don't see how a world in which that kind of "moral realism" holds would differ from one where moral subjectivism holds, or nihilism for that matter.
but I don't find it meaningful when applied to groups of people, all with their own values.
Where meaningful means implementable? Moral realism is not many things, and one of the things it is not is the claim that everyone gets to keep all their values and behaviour unaltered.
Replies from: buybuydandavis, None
↑ comment by buybuydandavis · 2013-11-05T00:21:18.247Z · LW(p) · GW(p)
However, I don't see how that would count as morality.
See my previous comment on "Real Magic": http://lesswrong.com/lw/tv/excluding_the_supernatural/79ng
If you choose not to count the actual moralities that people have as morality, that's up to you.
↑ comment by [deleted] · 2013-11-10T14:42:01.292Z · LW(p) · GW(p)
Not "anything goes, do what you will", so much as "all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it".
We humans have future-focused, hypothetical-focused, present-focused, and past-focused motivations that don't always agree. CEV (and, to a great extent, moral rationality as a broader field) is about finding moral reasoning strategies and taking actions such that all those motivational systems will agree that we Did a Good Job.
That said, being able to demonstrate that the set of Coherently Extrapolated Volitions exists is not a construction showing how to find members of that set.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-11T18:00:06.940Z · LW(p) · GW(p)
Not "anything goes, do what you will", so much as "all X go, X is such that we want X before we do it, we value doing X while we are doing it, and we retrospectively approve of X after doing it".
As with a number of previous responses, that is ambiguous between the individual and the collective. If I could get some utility by killing you, then should I kill you? If the "we" above is interpreted individually, I should: if it is interpreted collectively, I shouldn't.
Replies from: None
↑ comment by [deleted] · 2013-11-12T10:14:07.265Z · LW(p) · GW(p)
Yes, that is generally considered the core open problem of ethics, once you get past things like "how do we define value" and blah blah blah like that. How do I weigh one person's utility against another person's? Unless it's been solved and nobody told me, that's a Big Question.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-12T13:43:15.897Z · LW(p) · GW(p)
So...what's the point of CEV, then?
Replies from: None
↑ comment by [deleted] · 2013-11-12T19:28:13.399Z · LW(p) · GW(p)
It's a hell of a lot better than nothing, and it's entirely possible to solve those individual-weighting problems, possibly by looking at the social graph and at how humans affect each other. There ought to be some treatment of the issue that yields a reasonable collective outcome without totally suppressing or overriding individual volitions.
Certainly, the first thing that comes to mind is that some human interactions are positive sum, some negative sum, some zero-sum. If you configure collective volition to always prefer mutually positive-sum outcomes over zero-sum over negative, then it's possible to start looking for (or creating) situations where sinister choices don't have to be made.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2013-11-12T21:28:55.770Z · LW(p) · GW(p)
Who said the alternative is nothing? There's any number of theories of morality, and a further number of theories of friendly AI.
↑ comment by Carinthium · 2013-10-31T14:09:06.157Z · LW(p) · GW(p)
Requesting lukeprog get round to this. Lesswrong Metaethics, given that it rejects a large amount of rubbish (coherentism being the main part), is the best in the field today and needs further advancing.
Requesting people upvote this post if they agree with me that getting round to metaethics is the best thing Lukeprog could be doing with his time, and downvote if they disagree.
Replies from: Luke_A_Somers, Carinthium
↑ comment by Luke_A_Somers · 2013-10-31T15:07:49.635Z · LW(p) · GW(p)
Getting round to metaethics should rank on Lukeprog's priorities: [pollid:573]
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-10-31T16:35:26.946Z · LW(p) · GW(p)
I would love to see Luke (the other Luke, but maybe you, too) and hopefully others (like Yvain) explicate their views on meta-ethics, given how Eliezer's sequence is at best unclear (though quite illuminating). It seems essential because a clear meta-ethics seems necessary to achieve MIRI's stated purpose: averting AGI x-risk by developing FAI.
↑ comment by Carinthium · 2013-10-31T14:09:49.047Z · LW(p) · GW(p)
Creating a "balance Karma" post. Asking people use this for their conventional Karma for my above post, or to balance out upvotes/downvotes. This way my Karma will remain fair.
comment by gjm · 2013-10-29T23:55:32.340Z · LW(p) · GW(p)
rigid designators
aside from a lot of arguing about definitions over whether Eliezer counts as a relativist.
I think these are in fact the whole story. Eliezer says loudly that he is a moral realist and not any sort of relativist, but his views amount to saying "Define good and bad and so forth in terms of what human beings, in fact, value; then, as a matter of objective fact, death and misery are bad and happiness and fun are good", which to many people sounds exactly like moral relativism plus terminological games; confusion ensues.
Replies from: nshepperd, None, Douglas_Knight
↑ comment by nshepperd · 2013-10-30T09:44:21.975Z · LW(p) · GW(p)
The reason Eliezer's views are commonly mistaken for relativism in the manner you describe is that most people do not have a good grasp on the difference between sense and reference (a difference that, to be fair, doesn't seem to be well explained anywhere). To elaborate:
"Define good and bad and so forth in terms of what human beings, in fact, value" sounds like saying that goodness depends on human values. This is the definition you get if you say "let 'good' mean 'human values'". But the actual idea is meant to be more analogous to this: assuming for the sake of argument that humans value cake, define "good" to mean cake. Obviously, under that definition, "cake is always good regardless of what humans value" is true. In that case "good" is a rigid designator for cake.
The difference is that "good" and "human values" are not synonymous. But they refer to the same thing, when you fully dereference them, namely {happiness, fun and so forth}. This is the difference between sense and reference, and it's why it is necessary to understand rigid designators.
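As a loose illustration of the sense/reference point (a toy sketch only; the particular contents of "human values" here are obviously invented), compare a definition of "good" rigidly fixed to the actual referent with a non-rigid one that re-reads whatever humans currently value:

```python
# Toy sketch of rigid vs. non-rigid readings of "good". The contents of
# current_human_values are invented for illustration.

current_human_values = {"happiness", "fun", "love"}

# Rigid reading: "good" is fixed, once and for all, to the actual referent.
GOOD_RIGID = frozenset(current_human_values)

def is_good_rigid(x):
    return x in GOOD_RIGID

def is_good_nonrigid(x):
    # Non-rigid reading: "good" means "whatever humans value right now".
    return x in current_human_values

# Counterfactual: humans get rewired to value only paperclips.
current_human_values = {"paperclips"}

print(is_good_rigid("love"))     # True  -- love is still good
print(is_good_nonrigid("love"))  # False -- "good" followed the rewiring
```

On the rigid reading, "love is good" stays true in the counterfactual where humans are rewired, which is how "good regardless of what humans value" and "defined by reference to what humans in fact value" can both hold at once.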
Replies from: Jack, gjm, TheAncientGeek
↑ comment by Jack · 2013-10-30T10:51:37.481Z · LW(p) · GW(p)
This is an excellent description of the argument.
Here is my question: Why bother with the middle man? No one can actually define good, and everyone is constantly checking with 'human values' to see what it says! Assuming the universe runs on math and humans share attitudes about some things, obviously there is some platonic entity which precisely describes human values (assuming there isn't too much contradiction) and can be called "good". But it doesn't seem especially parsimonious to reify that concept. Why add it to our ontology?
It's just semantics in a sense: but there is a reason we don't multiply entities unnecessarily.
Replies from: nshepperd↑ comment by nshepperd · 2013-10-30T11:22:30.747Z · LW(p) · GW(p)
Well, if you valued cake you'd want a way to talk about cake and efficiently distinguish cakes from non-cakes, and especially, with regard to planning, to distinguish plans that lead to cake from plans that do not. When you talk about cake there isn't really any reification of "the platonic form of cake" going on; "cake" is just a convenient word for a certain kind of confection.
The motivation for humans having a word for goodness is the same.
Replies from: Jack↑ comment by Jack · 2013-10-30T12:00:57.088Z · LW(p) · GW(p)
I don't necessarily have a problem with using the word "good" so long as everyone understands it isn't something out there in the world that we've discovered-- that it's a creation of our minds, words and behavior-- like cake. This is a problem because most of the world doesn't think that. A lot of times it doesn't seem like Less Wrong thinks that (but I'm beginning to think that is just non-standard terminology).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-30T14:33:31.198Z · LW(p) · GW(p)
Yeah, a lot of the Metaethics Sequence seems to be trying to get to this point.
For my part, it seems easier to just stop using words like "good" if we believe they are likely to be misunderstood, rather than devoting a lot of energy to convincing everyone that they should mean something different by the word (or that the word really means something different from what they think it means, or whatever).
I'm content to say that we value what we currently value, because we currently value it, and asking whether that's good or not is asking an empty question.
Of course, I do understand the rhetorical value of getting to claim that our AI does good, rather than "merely" claiming that it implements what we currently value.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-04T17:31:54.175Z · LW(p) · GW(p)
I'm content to say that we value what we currently value, because we currently value it, and asking whether that's good or not is asking an empty question.
I am content to say the question is not empty, and if your assumptions lead you to suppose it is, then your assumptions need to be questioned.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-04T17:55:24.743Z · LW(p) · GW(p)
You seem to believe that I have arrived at my current position primarily via unquestioned assumptions.
What makes you conclude that?
↑ comment by gjm · 2013-10-30T10:03:09.562Z · LW(p) · GW(p)
Yes, sorry, I wasn't clear enough about that. No, let me go further; what I wrote was downright misleading. This is why I shouldn't write Less Wrong comments on a tablet where I am too strongly incentivized to make them brief :-). I endorse your description of Eliezer's position.
↑ comment by TheAncientGeek · 2013-11-04T17:21:03.047Z · LW(p) · GW(p)
This is the definition you get if you say "let 'good' mean 'human values'". But the actual idea is meant to be more analogous to this: assuming for the sake of argument that humans value cake, define "good" to mean cake. Obviously, under that definition, "cake is always good regardless of what humans value" is true. In that case "good" is a rigid designator for cake.
Why is cake a referent of good?
The difference is that "good" and "human values" are not synonymous. But they refer to the same thing, when you fully dereference them, namely {happiness, fun and so forth}. This is the difference between sense and reference, and it's why it is necessary to understand rigid designators.
And what happened to the normativity of Good? Why does it appear to make sense to wonder if we are valuing the right things, when Good is just whatever we value?
ADDED:
The reason Eliezer's views are commonly mistaken for relativism in the manner you describe is because most people do not have a good grasp on the difference between sense and reference(a difference that, to be fair, doesn't seem to be well explained anywhere).
I don't see how the S/R difference is relevant to relativism. If the referents of "good" vary with the mental contents of the person saying "good", that is relativism/subjectivism. (That the values referenced are ultimately physical does not affect that: relativism is an epistemological claim, not a metaphysical one).
Replies from: nshepperd, TheOtherDave↑ comment by nshepperd · 2013-11-04T23:04:39.111Z · LW(p) · GW(p)
Why is cake a referent of good?
Why do we have words that mean things at all?
Why does it appear to make sense to wonder if we are valuing the right things
For a start, the fact that some things seem to make sense is not an oracular window onto philosophical truth. Anything that we are unsure about will seem as if it could go either way, even if one of the options is in fact logically necessary or empirically true. That's the point of being unsure (example: the Riemann hypothesis).
At the object level, no-one knows in full detail exactly what they mean by "good", or the detailed contents of their own values. So trying to test "my values are good" by direct comparison, so to speak, is a highly nontrivial (read: impossible) exercise. Figuring out based on things like "wanting to do the right thing" that "good" and "human values" refer to the same thing while not being synonymous is another nontrivial exercise.
I don't see how the S/R difference is relevant to relativism. If the referents of "good" vary with the mental contents of the person saying "good", that is relativism/subjectivism. (That the values referenced are ultimately physical does not affect that: relativism is an epistemological claim, not a metaphysical one).
To me, the fact that you don't understand is evidence the difference matters. Unless you're saying that "relativism" is just the statement that people on different planets speak different languages, in which case, "no shit" as the French say.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-04T23:37:09.382Z · LW(p) · GW(p)
I was wondering how one knows what the referents of good are when one doesn't know the sense.
I didn't claim that anything was an oracular window. But note that things you believe in, such as an external world, can just as glibly be dismissed as illusion.
↑ comment by TheOtherDave · 2013-11-04T18:08:08.968Z · LW(p) · GW(p)
Why does it appear to make sense to wonder if we are valuing the right things, when Good is just whatever we value?
We are in the habit of (and reinforced for) asking certain questions about actual real-world things. "Is the food I'm eating good food?" "Is the wood I'm building my house out of good wood?" "Is the exercise program I'm starting a good exercise program?" Etc. In each case, we have some notion of what we mean by the question that grounds out in some notion of our values... that is, in what we want food, housing materials, and exercise programs to achieve.
We continue to apply that habitual formula even in cases where we're not very clear what those values are, what we want those things to achieve. "Is democracy a good political system?" is a compelling-sounding question even for people who lack a clear understanding of what their political values are; "Is Christianity a good religion?" feels like a meaningful question to many people who don't have a clear notion of what they want a religion to achieve.
That we continue to apply the same formula to get the question "Are the values I'm using good values?" should not surprise us; I would expect us to ask it and for it to feel meaningful whether it actually makes sense or not.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-04T18:25:44.346Z · LW(p) · GW(p)
You can argue that the things your theory can't explain are non-issues. I don't have to buy that.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-04T18:44:32.741Z · LW(p) · GW(p)
You certainly don't have to buy it, that's true.
But when you ask a question and someone provides an answer you don't like, showing why that answer is wrong can sometimes be more effective than simply asserting that you don't buy it.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-04T19:25:41.557Z · LW(p) · GW(p)
The problem is a kind of quodlibet. Any inadequate theory can be made to work if one is allowed to dismiss whatever the theory can't explain.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T00:41:48.501Z · LW(p) · GW(p)
Sure, I agree.
And any theory can be made to fail if I am allowed to demand that it explain things that don't actually exist.
So it seems to matter whether the thing I'm dismissing exists or not.
Regardless, all of this is a tangent from my point.
You asked "Why does it appear to make sense to wonder if we are valuing the right things?" as a rhetorical question, as a way of arguing that it appears to make sense because it does make sense, because the question of whether our values are right is non-empty. My point is that this is not actually why it appears to make sense; it would appear to make sense even if the question of whether our values are right were empty.
That is not proof that the question is empty, of course. All it demonstrates is that one of your arguments in defense of its non-emptiness is flawed.
You will probably do better to accept that and marshal your remaining argument-soldiers to a victorious campaign on other fronts.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T09:39:29.556Z · LW(p) · GW(p)
That is not proof that the question is empty, of course. All it demonstrates is that one of your arguments in defense of its non-emptiness is flawed.
Non-emptiness is no more flawed than emptiness. The Open Question remains open.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T14:23:06.434Z · LW(p) · GW(p)
This is a non sequitur. My claim was about a specific argument.
↑ comment by [deleted] · 2013-10-30T00:18:52.397Z · LW(p) · GW(p)
There was one aspect of that which made intuitive sense to me, but which now that I think about it may not have been adequately explained, ever. Eliezer's position seems to be that from some universal reference frame human beings would be viewed as moral relativists. However, it is a serious mistake to think that such universal frames exist! So we shouldn't even try to think from a universal frame. From within the confines of a single, specific reference frame, the experience of morality is that of a realist.
EDIT: Put differently, I think Eliezer might agree that there is a metaphorical stone tablet with the rules of morality spelled out - it's encoded in the information patterns of the 3 lbs of grey matter inside your skull. Maybe Eliezer would say that he is a "subjective realist" or something like that. This is strictly different from moral relativism, where choice of morality is more or less arbitrary. As a subjective realist your morality is different from your pebblesorter friend's, but it's not arbitrary. You have only limited control over the morality that evolution and culture gifted you.
Replies from: Jack↑ comment by Jack · 2013-10-30T11:05:44.667Z · LW(p) · GW(p)
Maybe Eliezer would say that he is a "subjective realist" or something like that. This is strictly different from moral relativism, where choice of morality is more or less arbitrary. As a subjective realist your morality is different from your pebblesorter friend's, but it's not arbitrary.
Philosophers just call this position "moral subjectivism". Moral realism is usually defined to exclude it. "Relativism" at this point should be tabooed since no one uses it in the technical sense and the popular sense includes a half dozen variations which are very different from one another to the extent they have been defined at all.
↑ comment by Douglas_Knight · 2013-10-30T00:44:39.562Z · LW(p) · GW(p)
Eliezer says loudly that he is a moral realist and not any sort of relativist
Yes, he loudly says he's not a relativist, but he doesn't loudly talk about realism. If you ask him whether he's a moral realist, he'll say yes, but if you ask him for a self-description, he'll say cognitivist w̶h̶i̶c̶h̶ ̶i̶s̶ ̶o̶f̶t̶e̶n̶ ̶g̶r̶o̶u̶p̶e̶d̶ ̶a̶g̶a̶i̶n̶s̶t̶ ̶r̶e̶a̶l̶i̶s̶m̶. Moreover, if asked for detail, he'll say that he's an anti-realist. (though not all cognitivists are anti-realists)
Let me try that again: Eliezer loudly claims to be a cognitivist. He quietly equivocates on realism. He also loudly claims not to be a relativist, but practically everyone claims that.
Replies from: gjm↑ comment by gjm · 2013-10-30T01:32:38.396Z · LW(p) · GW(p)
cognitivist, which is often grouped against realism
Is it? That seems backwards to me: non-cognitivism is one of the main varieties of non-realism. (The other being error theory.) What am I missing?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-10-30T01:53:42.484Z · LW(p) · GW(p)
You're right, not all cognitivists are anti-realists. But some are, including Eliezer.
Indeed, realists are generally considered cognitivists. But my impression is that if a moral system is labeled cognitivist, the implication is that it is anti-realist. That's because realism is usually the top-level split when classifying moral systems, so if you're bothering to talk about cognitivism, it's because the system is anti-realist.
Replies from: Jack↑ comment by Jack · 2013-10-30T11:02:03.106Z · LW(p) · GW(p)
This is correct I think, but confusing. All realists are by definition cognitivists. A non-cognitivist is simply one variety of anti-realist: someone who thinks moral statements aren't the kinds of things that can have truth conditions at all. For example, someone who thinks they merely reflect the speaker's emotional feelings about the matter (like loudly booing).
Of the anti-realists there are two kinds of cognitivists: moral error theorists, who think that moral statements are about mind-independent facts but that there are no such facts, and moral subjectivists, who think that moral statements are about mind-dependent facts. If what you say is true, Eliezer is one of those (more or less).
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-10-30T12:51:00.312Z · LW(p) · GW(p)
Yes, people who say that realists are cognitivists say that this is true by definition, but I don't think these terms are used consistently enough that it is a good idea to argue by definition. In particular, I think Eliezer is right to equivocate on whether he is a realist. He certainly rejects the description of his morality as "mind-dependent."
Replies from: Jack↑ comment by Jack · 2013-10-30T13:24:37.324Z · LW(p) · GW(p)
Yes, people who say that realists are cognitivists say that this is true by definition, but I don't think these terms are used consistently enough that it is a good idea to argue by definition.
I'm not trying to argue by definition: I'm just telling you what the terms mean as they are used in the metaethical literature (where they're used plenty consistently). If someone wants to say they are a moral realist but not a cognitivist then I have no idea what they are because they're not using standard terminology. If someone doesn't fit into the boxes created by the traditional terminology then come up with different labels. But it's an incredibly confusing and bad idea to use an unorthodox definition to classify yourself as something you're not. Your representation makes me more confused about Eliezer's views. Why position him with this language if you aren't taking definitions from an encyclopedia?
According to the standard groupings, being an anti-realist cognitivist and objectivist would group someone with the error theorists. If Eliezer doesn't fit there then we can come up with a word to describe his position once it is precisely distinguished from the other positions.
Replies from: Douglas_Knight, Douglas_Knight, TheAncientGeek↑ comment by Douglas_Knight · 2013-10-30T15:08:43.946Z · LW(p) · GW(p)
Here's an example of inconsistency in philosophical use. I keep saying that Eliezer equivocates about whether he is a realist, and that I think he's right to do so. Elsewhere in the comments on this post you say that moral subjectivism is not realism by definition. But it's not clear to me from the Stanford Encyclopedia entry on moral realism that this is so. The entry on anti-realism says that Sayre-McCord explicitly puts moral subjectivism under moral realism. Since he wrote the article on realism, that explains why it seems to accept that possibility, but it certainly demonstrates that this uncertainty is more mainstream than you allow.
Replies from: Jack, TheAncientGeek↑ comment by Jack · 2013-10-30T15:26:08.687Z · LW(p) · GW(p)
Uncertainty, even disagreement, about how to classify views is fine. It's not the same as inconsistency. Sayre-McCord's position on subjectivism is non-standard and treated as such. But I can still figure out what he thinks just from a single paragraph summarizing his position. He takes the standard definitions as a starting point and then makes an argument for his structure of theories. This is the sort of thing I'm asking you to do if you aren't going to use the standard terminology.
You seem to be concerned with bashing philosophy instead of explaining your usage. I'm not the field's standard bearer. I just want to know what you mean by the words you're using! Stop equivocating about realism and just state the ways in which the position is realist and the ways in which it is anti-realist. Or how it is realist but you don't think realism should mean what people think it means.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-10-30T16:28:42.813Z · LW(p) · GW(p)
I never used "realism," so there's no point in my defining it.
Look back at this thread!
My whole point was that Eliezer avoids the word. He thinks that cognitivism is a useful concept, so he uses it. Similarly, he avoids "moral subjectivism" and uses terms like "subjectively objective." He equivocates when asked for a label, endorsing both "realist" and "cognitivist anti-realist." But he does spell out the details, in tens of thousands of words across this sequence.
Yes, if people want to pin down Eliezer's views they should say what parts are realist and what parts are anti-realist. When I object to people calling him realist or anti-realist, I'm certainly agreeing with that!
After that comment about "bashing philosophy," I don't think there's any point in responding to your first paragraph.
Replies from: TheAncientGeek, Jack↑ comment by TheAncientGeek · 2013-11-04T18:15:40.747Z · LW(p) · GW(p)
But he does spell out the details, in tens of thousands of words across this sequence.
I am one of a number of people who cannot detect a single coherent theory in his writings. A summary in the standard jargon would be helpful in persuading me that there is one.
↑ comment by Jack · 2013-10-30T17:35:30.922Z · LW(p) · GW(p)
You're right, not all cognitivists are anti-realists. But some are, including Eliezer.
...
If you ask him whether he's a moral realist, he'll say yes, but if you ask him for a self-description, he'll say cognitivist w̶h̶i̶c̶h̶ ̶i̶s̶ ̶o̶f̶t̶e̶n̶ ̶g̶r̶o̶u̶p̶e̶d̶ ̶a̶g̶a̶i̶n̶s̶t̶ ̶r̶e̶a̶l̶i̶s̶m̶. Moreover, if asked for detail, he'll say that he's an anti-realist.
These quotes did not exactly express to me that you don't know to what extent his views are realist or anti-realist. I'm sorry if I was targeting you instead of Eliezer... but you were agreeing with his confusing equivocation.
Similarly, he avoids "moral subjectivism" and uses terms like "subjectively objective."
Ah yes, the old eschewing the well-recognized, well-explored terminology for an oxymoronic neologism. How could anyone get confused?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-10-30T18:11:32.855Z · LW(p) · GW(p)
You sure you're not trying to force me to use jargon I don't like? I don't know what else to call responding to new jargon with sarcasm.
At the very least, you seem to be demanding that we confuse laymen so that philosophers can understand. I happen to believe that philosophers won't understand, either.
No, the right answer isn't to say "I don't know if he is a realist." Actually, I do think it would be better to reject the question of realism than to equivocate, but I suspect Eliezer has tried this and found that people don't accept it.
Replies from: Jack↑ comment by Jack · 2013-10-30T18:34:51.847Z · LW(p) · GW(p)
At the very least, you seem to be demanding that we confuse laymen so that philosophers can understand. I happen to believe that philosophers won't understand, either.
As far as I can tell, no one understands. But I don't see how my suggestion, which involves reading maybe 2 encyclopedia articles to pick up jargon, would confuse laymen especially.
No, the right answer isn't to say "I don't know if he is a realist."
Right, it's just that you explicitly called him an anti-realist. And he apparently calls himself both? You can see how I could get confused.
Actually, I do think it would be better to reject the question of realism than to equivocate, but I suspect Eliezer has tried this and found that people don't accept it.
Do people accept equivocation? I'd be fine with rejecting the question of realism so long as it was accompanied by an explanation of how it was a wrong question.
You sure you're not trying to force me to use jargon I don't like? I don't know what else to call responding to new jargon with sarcasm.
Just expressing my opinion re: design principles in the construction of jargon. I know I've been snippy with you, apologies, I haven't had enough sleep.
↑ comment by TheAncientGeek · 2013-11-04T18:05:26.543Z · LW(p) · GW(p)
The problem with the standard jargon is that "realism" is used to label a metaphysical and an epistemological claim. I like to call the epistemological claim, that there is a single set of moral truths, moral objectivism, which clearly is the opposite of moral subjectivism.
↑ comment by Douglas_Knight · 2013-10-30T14:00:18.159Z · LW(p) · GW(p)
I simply don't believe you that philosophers use these words consistently. Philosophers have an extremely bad track record of asserting that they use words consistently.
Replies from: Jack, Carinthium↑ comment by Jack · 2013-10-30T15:02:49.344Z · LW(p) · GW(p)
So, I think that is simply false regarding the analytic tradition, especially if we're comparing them to Less Wrong's use of specialized jargon (which is often hilariously ill-defined). I'd love to see some evidence for your claim. But that isn't the point.
There are standard introductory reference texts which structure theories of ethical semantics. They contain definitions. They don't contradict each other. And all of them will tell you what I'm telling you. Let's look: here's Wikipedia. Here is the SEP on Moral Realism. Here is the SEP on Moral Anti-Realism. Here is the entry on Moral Cognitivism. All three are written by different philosophers and all use nearly identical definitions which define the moral realist as necessarily being a cognitivist. The Internet Encyclopedia of Philosophy says the same thing.
We're not talking about something that is ambiguous or borderline. Cognitivism is the first necessary feature of moral realism in the standard usage. If you are using the term "moral realist", but don't think cognitivism is part of the definition, then no one can figure out what you're saying! Same goes for describing someone as an anti-realist who believes in cognitivism, that moral statements can be true and that they are mind-independent. All the terms after "anti-realist" in that sentence make up the entire definition of moral realism.
I'm not trying to be pedantic or force you to use jargon you don't like. But if you're going to use it, why not use the terms as they are used in easily available encyclopedia articles written by prominent philosophers? Or at least redefine the terms somewhere.
↑ comment by Carinthium · 2013-10-31T14:15:13.412Z · LW(p) · GW(p)
Clarification- do you mean inconsistencies within or between philosophers? Between philosophers I agree with you- within a single philosopher's work I'd be curious to see examples.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2013-10-31T16:47:31.266Z · LW(p) · GW(p)
I just mean that philosophers have a bad track record asserting that they are using the same definition as each other. That's rather worse than just not using the same definition. I told Jack that he wasn't using the same definition as the Stanford Encyclopedia. I didn't expect him to believe me, but he didn't even notice. Does that count for your purpose, since he chose the source?
But, yes, I do condemn argument by definition because I don't trust the individuals to have definitions.
↑ comment by TheAncientGeek · 2013-11-04T17:54:42.510Z · LW(p) · GW(p)
If someone wants to say they are a moral realist but not a cognitivist then I have no idea what they are because they're not using standard terminology.
Presumably a Platonist who thinks the Form of the Good is revealed by a mystical insight.
Replies from: Jack↑ comment by Jack · 2013-11-05T03:36:43.085Z · LW(p) · GW(p)
A Platonist who thinks the Form of the Good is revealed by mystical insight is a cognitivist and I don't know why you would think otherwise. Wikipedia: "Cognitivism is the meta-ethical view that ethical sentences express propositions and can therefore be true or false".
Or you're not using standard terminology, in which case, see above.
comment by Jack · 2013-10-30T07:34:42.399Z · LW(p) · GW(p)
I think my confusion is less about understanding the view (assuming Richard's rigid designator interpretation is accurate) and more about everyone's insistence on calling it a moral realist view. It feels like everyone is playing word games to avoid being moral subjectivists. I don't know if it was all the arguing with theists or being annoyed with moral relativist social-justice types, but somewhere along the way much of the Less Wrong crowd developed strong negative associations with the words used to describe varieties of moral anti-realism.
As far as I can tell most everyone here has the same descriptive picture of what is going on with ethics. There is this animal on planet Earth that has semi-ordered preferences about how the world should be and how things similar to that animal should act. Those of this species which speak the language called "English" write inscriptions like "morality" and "right and wrong" to describe these preferences. These preferences are the result of evolved instincts and cultural norms. Many members of this species have very similar preferences.
This seems like a straightforward description of ethical subjectivism -- the position that moral sentences are about the attitudes of people (notice that isn't the same as saying they are relative). But people don't seem to like calling themselves ethical subjectivists-- or maybe they don't like that the theory doesn't tell them what to do? I don't understand this. I'd love for someone to explain it. In any case, then we start doing philosophy to try to shoehorn this description into something we can call moral realism.
And it definitely is true that much of our moral language functions like rigid designators, which hides the causal history of our moral beliefs. This explains why people don't feel like morality changes under counterfactuals-- i.e. if you imagine a world in which you have a preference for innocent children being murdered, you don't believe that murdering children is therefore moral in that world. I outlined this in more detail here. I didn't use the term 'rigid designator' in that post, but the point is that what we think is moral is invariant under counterfactuals.
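As a toy illustration of that counterfactual invariance (the value sets below are invented, not a claim about anyone's actual criterion), the verdict comes from the criterion fixed in the actual world, not from the counterfactual self's preferences:

```python
# Invented, toy criterion; purely illustrative.
ACTUAL_CRITERION = frozenset({"protect children", "relieve suffering"})

def morally_endorsed(action):
    # The criterion is fixed by the actual world; counterfactual worlds
    # don't get to rebind it.
    return action in ACTUAL_CRITERION

# A counterfactual self whose preferences include harming children:
counterfactual_preferences = frozenset({"murder children"})

print(morally_endorsed("murder children"))              # False
print("murder children" in counterfactual_preferences)  # True, but irrelevant to the verdict
```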
I don't see how this isn't a straightforward example of moral subjectivism. And that is reflected in the fact that there are no universally compelling arguments. I can see how you can sort of structure the arguments and questions and get it to output "moral realism" if you really had to. You say that the word "right" designates particular facts about worlds such that worlds can be objectively evaluated according to that concept. But to me, it is weird and confusing to ignore the fact that the rule uniting those facts about the world is determined by our attitudes-- especially since we can't right now enumerate the rigid contents of our moral language and have to apply the rule in most circumstances.
Whether you call it moral subjectivism or not, it seems like the next step is examining our preferences to see how much they can overlap, and what constitutes an ethical and effective way of reconciling them so that they are consistent with each other. In other words, we need to know how we ought to resolve moral disagreements, 'reflective equilibrium', that kind of stuff. This is how we determine how universal our morality is. And that's what actually matters, not whether or not it exists independently of human attitudes.
Replies from: Carinthium, ChrisHallquist, buybuydandavis↑ comment by Carinthium · 2013-10-30T11:10:17.471Z · LW(p) · GW(p)
A few nitpicks of your descriptive picture.
1- There are inevitable conflicts between practically any two creatures on this planet as to what preferences they would have as to the world. If you narrow these down to the area classified by humans as "moral" the picture can be greatly simplified, but there will still be a large amount of difference.
2- I dispute that moral sentences ARE about the attitudes of people. Most people throughout history have had a concept of "Right" and "Wrong" as being objective. This naive conception is philosophically indefensible, but the best descriptor of what people throughout history, and even nowadays, have believed. It is hard to defend the idea that a person thinks they are referring to X and are in fact referring to Y when X and Y are drastically different things and the person is not thinking of Y on any level of their brain- the likely case for, say, a typical Stone Age man arguing a moral point.
Replies from: Jack↑ comment by Jack · 2013-10-30T12:11:36.588Z · LW(p) · GW(p)
1- There are inevitable conflicts between practically any two creatures on this planet as to what preferences they would have as to the world. If you narrow these down to the area classified by humans as "moral" the picture can be greatly simplified, but there will still be a large amount of difference.
Sure, as I said at the end, the "universality" of the whole thing is an open problem.
I dispute that moral sentences ARE about the attitudes of people. Most people throughout history have had a concept of "Right" and "Wrong" as being objective. This naive conception is philosophically indefensible, but the best descriptor of what people throughout history, and even nowadays, have believed. It is hard to defend the idea that a person thinks they are referring to X and are in fact referring to Y when X and Y are drastically different things and the person is not thinking of Y on any level of their brain- the likely case for, say, a typical Stone Age man arguing a moral point.
That's fine. But in that case, all moral sentences are false (or nonsense, depending on how you feel about references to non-entities). I agree that there is a sense in which that is true which you outlined here. In this case we can start from scratch and just make the entire enterprise about figuring out what we really truly want to do with the world-- and then do that. Personally I find that interpretation of moral language a bit uncharitable. And it turns out people are pretty stuck on the whole morality idea and don't like it when you tell them their moral beliefs are false.
Subjectivism seems both more charitable and friendlier-- but ultimately these are two different ways of saying the same thing. The debates between varieties of anti-realism seem entirely semantic to me.
Replies from: Carinthium↑ comment by Carinthium · 2013-10-30T12:37:20.463Z · LW(p) · GW(p)
1- Alright. Misunderstood.
2- There are some rare exceptions- some people define morality differently and can thus be said to mean different things. Almost all moral sentences are false/nonsense, however, if every claim throughout history that something is right or wrong counts as a moral sentence.
The principle of charity, however, does not apply here- the evidence clearly shows that human beings throughout history have truly believed that some things are morally wrong and some morally right on a level more than preferences, even if this is not in fact true.
Replies from: Jack↑ comment by Jack · 2013-10-30T13:36:20.805Z · LW(p) · GW(p)
The principle of charity, however, does not apply here- the evidence clearly shows that human beings throughout history have truly believed that some things are morally wrong and some morally right on a level more than preferences, even if this is not in fact true.
Philosophy typically involves taking folk notions that are important but untrue in a strict sense and constructing something tenable out of that material. And I think the situation is more ambiguous than you make it sound.
But it is essentially irrelevant. I mean, you could just go back to bed after concluding all moral statements are false. But that seems like it is ignoring everything that made us interested in this question in the first place. Regardless of what people think they are referring to when they make moral statements it seems pretty clear what they're actually doing. And the latter is accurately described by something like subjectivism or quasi-realism. People might be wrong about moral claims, but what we want to know is why and what they're doing when they make them.
Replies from: Carinthium↑ comment by Carinthium · 2013-10-31T01:57:26.885Z · LW(p) · GW(p)
A typical person would be insulted if you claimed that their moral statements referred only to feelings. Most philosophical definitions work on a principle which isn't quite like how ordinary people see them but would seem close enough to an ordinary person.
There are a lot of uses of the concepts of right and wrong, not just people arguing with each other. Ethical dilemmas, people wondering whether to do the "right" thing or the "wrong" thing, philosophical schools (think of the Confucians, for example, who don't define 'right' or 'wrong' but talk about it a lot). Your conception only covers one use.
↑ comment by ChrisHallquist · 2013-10-30T16:27:52.280Z · LW(p) · GW(p)
This seems like a straightforward description of ethical subjectivism -- the position that moral sentences are about the attitudes of people (notice that isn't the same as saying they are relative).
Except that's not Eliezer's view. The mistake you're making here is the equivalent of thinking that, because the meaning of the word "water" is determined by how English speakers use it, therefore sentences about water are sentences about the behavior of English speakers.
Replies from: Jack↑ comment by Jack · 2013-10-30T17:23:53.110Z · LW(p) · GW(p)
I understand, this is what I'm dealing with in the second to last paragraph.
I can see how you can sort of structure the arguments and questions and get it to output "moral realism" if you really had to. You say that the word "right" designates particular facts about worlds such that worlds can be objectively evaluated according to that concept. But to me, it is weird and confusing to ignore the fact that the rule uniting those facts about the world is determined by our attitudes-- especially since we can't right now enumerate the rigid contents of our moral language and have to apply the rule in most circumstances.
There is a sense in which all concepts both exist subjectively and objectively. There is some mathematical function that describes all the things that ChrisHallquist thinks are funny just like there is a mathematical function that describes the behavior of atoms. We can get into the nitty-gritty about what makes a concept subjective and what makes a concept objective. But I don't see what the case for morality counting as "objective" is unless we're just going to count all concepts as objective.
Replies from: Leonhart↑ comment by Leonhart · 2013-10-30T23:35:57.987Z · LW(p) · GW(p)
Can you be clearer about the way you are using "describes" here?
I'm not clear whether you are thinking about a) a giant lookup table of all the things Chris Hallquist finds funny, or b) a program that is more compact than that list - so compact, indeed, that a cut-down, bug-filled beta of it can be implemented inside his skull! - yet can generate the list.
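A toy contrast between those two readings, with an invented stand-in rule for the humor predicate (not anything Jack or Chris has actually specified):

```python
# (a) Giant lookup table: the concept is just an explicit list of verdicts.
FUNNY_LOOKUP = {
    "pun about fish": True,
    "tax form": False,
    # ...in principle, one entry per possible thing
}

# (b) Compact program: a much shorter rule that generates the same verdicts.
def funny_compact(thing):
    # Stand-in rule; the real "cut-down, bug-filled beta" would be whatever
    # actually runs in the relevant brain.
    return "pun" in thing

print(FUNNY_LOOKUP["pun about fish"], funny_compact("pun about fish"))  # True True
```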
Replies from: Jack↑ comment by buybuydandavis · 2013-10-30T09:10:40.990Z · LW(p) · GW(p)
I am a moral subjectivist and a moral realist. The only point of contention I'd have with EY is if he is a non psycho human moral universalist. I felt that his language was ambiguous on that point, and at times he seemed to be making arguments in that direction. I just couldn't tell.
In other words, we need to know how we ought to resolve moral disagreements,
But if we're going to be moral subjectivists, we should realize that "we ought" too easily glosses over the fact that ought_you is not identical to ought_me.
And I don't think you get a compelling answer from some smuggled self recursion, but from your best estimate of what people actually are.
Replies from: Jack, TheAncientGeek↑ comment by Jack · 2013-10-30T10:31:51.870Z · LW(p) · GW(p)
I am a moral subjectivist and a moral realist.
Traditional usage defines those terms to exclude the possibility of being both. The standard definition of a moral realist is someone who believes that moral judgments express mind-independent facts; while the standard definition of a moral subjectivist is someone who believes moral judgments express mind-dependent facts.
So I don't know quite what you mean.
a non psycho human moral universalist.
You mean someone who doesn't believe that there are moral universals among humans? One too many adjectives for me.
And I don't think you get a compelling answer from some smuggled self recursion, but from your best estimate of what people actually are.
If I understand this right: you're contrasting trying to come up with some self-justifying method for resolving disagreement (recursively finding consensus on how to find consensus) with... descriptive moral psychology? I'm not sure I follow.
Replies from: buybuydandavis↑ comment by buybuydandavis · 2013-11-30T01:12:55.472Z · LW(p) · GW(p)
The standard definition of a moral realist is someone who believes that moral judgments express mind-independent facts; while the standard definition of a moral subjectivist is someone who believes moral judgments express mind-dependent facts.
My point being that the categories themselves are not used consistently, so that I can be called either one or the other depending on usage.
Definitions tend to be theory-bound themselves, so that mind-dependent and mind-independent are not clear-cut. If I think that eating cows is fine, but I wouldn't if I knew more and thought longer, which represents my mind - both, neither, the first, the second?
For example, if you go to the article in La Wik on Ethical Subjectivism, they talk about "opinions" and not minds. In this case, my opinion would be that eating cows is fine, but it would not be my extrapolated values.
Some would call my position realism, and some would call it subjectivism. Me, I don't care what you call it. I recognize that my position could be called either within the bounds of normal usage.
non psycho human moral universalist.
Someone who believes that what is moral is universal across humans who are not psychos.
If I understand this right
I think you're getting the point there.
↑ comment by TheAncientGeek · 2013-11-04T19:57:20.870Z · LW(p) · GW(p)
But if we're going to be moral subjectivists, we should realize that "we ought" too easily glosses over the fact that ought_you is not identical to ought_me.
Does the fact that people have different opinions about non-moral claims mean there are no objective, scientific facts?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T01:02:26.026Z · LW(p) · GW(p)
It doesn't mean that, no.
But it does mean that I ought not behave as though objective, scientific facts exist until I have some grounds for doing so, and that "some people think their intuitions reflect objective, scientific facts" doesn't qualify as a ground for doing so.
At this point, one could ask "well, OK, what qualifies as a ground for behaving as though objective, scientific facts exist?" and the conversation can progress in a vaguely sensible direction.
I would similarly ask (popping your metaphorical stack) "what qualifies as a ground for behaving as though objective moral facts exist?" and refrain from behaving as though they do until some such ground is demonstrated.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T09:29:59.034Z · LW(p) · GW(p)
I don't think you're in a position to do that unless you can actually solve the problem of grounding scientific objectivity without incurring Munchausen's trilemma. That is essentially an unsolved problem. Analytical philosophy, LW, and various other groups sidestep it by getting together with people who share the same intuitions. But that is not exactly the epistemic high ground.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T14:22:12.948Z · LW(p) · GW(p)
I'm content to ground behaving as though objective, scientific facts exist in the observation that such behavior reliably correlates with (and predicts) my experience of the world improving. I haven't observed anything analogous about behaving as though objective moral facts exist.
This, too, is not the epistemic high ground. I'm OK with that.
But, sure, if you insist on pulling yourself out of the Munchausen's swamp before you can make any further progress, then you're quite correct that progress is equally impossible on both scientific and ethical fronts.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T14:36:53.996Z · LW(p) · GW(p)
I'm content to ground behaving as though objective, scientific facts exist in the observation that such behavior reliably correlates with (and predicts) my experience of the world improving. I haven't observed anything analogous about behaving as though objective moral facts exist.
Indeed you haven't, because they are not analogous. Morality is about guiding action in the world, not passively registering the state of the world. It doesn't tell you what the melting point of aluminum is, it tells you whether what you are about to do is the right thing.
But, sure, if you insist on pulling yourself out of the Munchausen's swamp before you can make any further progress, then you're quite correct that progress is equally impossible on both scientific and ethical fronts.
And if you think such levitation is unnecessary, then progress is equally possible on both fronts.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T15:50:02.340Z · LW(p) · GW(p)
Science isn't just about passively registering the state of the world, either.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T17:47:39.648Z · LW(p) · GW(p)
Alice: "Science has a set of norms or guides-to-action called the scientific method. These have truth-values which are objective in the sense of not being a matter of individual whim"
Bob: "I don't believe you! What experiments do you perform to measure these truth-values, what equipment do you use?"
Charlie: "I don't believe you! You sound like you believe in some immaterial ScientificMethod object for these statements to correspond to!".
....welcome to my world.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T18:06:13.656Z · LW(p) · GW(p)
Dave: Behaving as though objective scientific facts exist has made it possible for me to talk to people all over the world, for the people I care about to be warm in the winter, cool in the summer, have potable water to drink and plenty of food to eat, and routinely survive incidents that would have killed us in pre-scientific cultures, and more generally has alleviated an enormous amount of potential suffering and enabled an enormous amount of value-satisfaction.
I am therefore content to continue behaving as though objective scientific facts exist.
If, hypothetically, it turned out that objective scientific facts didn't exist, but that behaving as though they do nevertheless reliably provided these benefits, I'd continue to endorse behaving as though they do. In that hypothetical scenario you and Alice and Bob and Charlie are free to go on talking about truth-values but I don't see why I should join you. Why should anyone care about truth in that hypothetical scenario?
Similarly, if behaving as though objective moral facts exist has some benefit, then I might be convinced to behave as though objective moral facts exist. But if it's just more talking about truth-values divorced from even theoretical benefits... well, you're free to do that if you wish, but I don't see why I should join you.
Replies from: Lumifer, TheAncientGeek↑ comment by Lumifer · 2013-11-05T18:26:56.480Z · LW(p) · GW(p)
Dave: Behaving as though objective scientific facts exist has made it possible for me to ... I am therefore content to continue behaving as though objective scientific facts exist.
I can construct a very similar argument for Christianity (or for most any religion, actually).
Usefulness of beliefs and verity of beliefs are not orthogonal but are not 100% correlated either.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T19:08:29.883Z · LW(p) · GW(p)
I can construct a very similar argument for Christianity
That's surprising, but if you can, please do. If behaving as though the beliefs of Christianity are objective facts reliably and differentially provides benefits on a par with the kinds of scientific beliefs we're discussing here, I am equally willing to endorse behaving as though the beliefs of Christianity are objective facts.
Usefulness of beliefs and verity of beliefs are not orthogonal but are not 100% correlated either.
Sure, I agree.
Replies from: Lumifer↑ comment by Lumifer · 2013-11-05T19:46:29.313Z · LW(p) · GW(p)
The argument wouldn't involve running hot water in your house, but would involve things like social cohesion, shared values, psychological satisfaction, etc.
Think about meme evolution and selection criteria. Religion is a very powerful meme that was strongly selected for. It certainly provided benefits for societies and individuals.
↑ comment by TheAncientGeek · 2013-11-05T18:21:07.980Z · LW(p) · GW(p)
Dave: Behaving as though objective scientific facts exist has made it possible for me to talk to people all over the world, for the people I care about to be warm in the winter, cool in the summer, have potable water to drink and plenty of food to eat, and routinely survive incidents that would have killed us in pre-scientific cultures, and more generally has alleviated an enormous amount of potential suffering and enabled an enormous amount of value-satisfaction.
Edith: A lot of good stuff, then?
Fred: Those facts didn't fall off a tree, they were arrived at by following a true..right..effective..call it what you will...set of methods.
Dave:Why should anyone care about truth in that hypothetical scenario?
Edith: You care about science because it leads to things that are good. Morality does too.
Dave: Similarly, if behaving as though objective moral facts exist has some benefit, then I might be convinced to behave as though objective moral facts exist
Edith: you don't already? How do you stay out of jail?
Dave: But if it's just more talking about truth-values divorced from even theoretical benefits..
Edith: If there are no moral facts, then the good things you like are not really good at all.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T19:26:54.872Z · LW(p) · GW(p)
Edith: A lot of good stuff, then?
I'm not sure what you mean to express by that word.
A lot of stuff I value, certainly.
Fred: Those facts didn't fall off a tree, they were arrived at by following a true..right..effective..call it what you will...set of methods.
Yes, that's true. And?
Edith: You care about science because it leads to things that are good. Morality does too.
Great! Wonderful! I'll happily endorse morality on the grounds of its reliable observable benefits, then, and we can drop all this irrelevant talk about "objective moral facts".
Edith: you don't already? How do you stay out of jail?
Same as everyone else... by following laws when I might be arrested for violating them. I would do all of that even if there were no objective moral facts. Indeed, I've been known to avoid getting arrested under laws that, if they did reflect objective moral facts, would seem to imply mutually exclusive sets of objective moral facts.
Edith: If there are no moral facts, then the good things you like are not really good at all.
Perhaps. So what? Why should I care? What difference does it make, in that scenario?
For example, I prefer people not suffering to people suffering... that's a value of mine. If it turns out that there really are objective moral facts that are independent of my values, and that people suffering actually is objectively preferable to people not-suffering, and my values are simply objectively wrong... why should I care?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T19:37:43.245Z · LW(p) · GW(p)
Yes, that's true. And?
And there is a way for guides-to-action to be objectively right (etc.) that has nothing to do with reflecting facts or predicting experience. Thus removing the "morality doesn't help me predict experience" objection.
I'll happily endorse morality on the grounds of its reliable observable benefits,
You have presupposed that there are Good Things (benefits) in that comment, and in your previous comment about science. You are already attaching truth values to propositions about what is good or not, I don't have to argue you into that.
Same as everyone else... by following laws when I might be arrested for violating them.
"Jail is bad" has the truth-value True?
I would do all of that even if there were no objective moral facts.
Why are you avoiding jail if its badness is not a fact?
Perhaps. So what? Why should I care?
Because you care about good things, benefits and so on. You are already caring about them, so I don't have to argue you into it.
If it turns out that there really are objective moral facts that are independent of my values, and that people suffering actually is objectively preferable to people not-suffering, and my values are simply objectively wrong... why should I care?
Do you update your other opinions if they turn out to be false?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T20:03:10.301Z · LW(p) · GW(p)
You have presupposed that there are Good Things (benefits) in that comment, and in your previous comment about science. You are already attaching truth values to propositions about what is good or not, I don't have to argue you into that.
You are treating my statements about what I value as assertions about Good Things.
If you consider those equivalent, then great... you are already treating Good as a fact about what we value, and I don't have to argue you into that.
If you don't consider them equivalent (which I suspect) then interpreting the former as a statement about the latter is at best confused, and more likely dishonest.
"Jail is bad" has the truth-value True?
I value staying out of jail.
Is there anything in your question I haven't agreed to by saying that?
If not, great. I will go on talking about what I value, and if you insist on talking about the truth-values of moral claims I will understand you as referring to what you value.
If so, what?
Why are you avoiding jail if its badness is not a fact?
Because I value staying out of jail. (Which in turn derives from other values of mine.)
Because you care about good things, benefits and so on. You are already caring about them, so I don't have to argue you into it.
As above; if this is an honest and coherent response, then great, we agree that "good things" simply refers to what we value.
Do you update your other opinions if they turn out to be false?
Sure, there are areas in which I endorse doing this.
So, you ask, shouldn't I endorse updating false moral beliefs as well?
Sure, if I anticipate observable benefits to having true moral beliefs, as I do to having true beliefs in those other areas in which I have opinions. But I don't anticipate such benefits.
Another area where I don't anticipate such benefits, and where I am similarly skeptical that the label "true beliefs" refers to anything or is worth talking about, is aesthetics. For example, sure, maybe my preference for blue over red is false, and a true aesthetic belief is that "red is more aesthetic than blue" is true. But... so what? Should I start preferring red over blue on that basis? Why on Earth would I do that?
(But Dave, you value having accurate beliefs in other areas! Why not aesthetics?)
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T20:23:47.519Z · LW(p) · GW(p)
If you consider those equivalent, then great... you are already treating Good as a fact about what we value, and I don't have to argue you into that.
I am not sure what that means. Is the "we" individual-by-individual or collective?
And where did you get the idea that Objective metaethics means giving up on values?
I value staying out of jail.
How does that differ from "jail is bad-for-me"?
If not, great. I will go on talking about what I value, and if you insist on talking about the truth-values of moral claims I will understand you as referring to what you value.
If I thought that the truth-values of moral claims referred only to what I value, I wouldn't be making much of a pitch for objectivism, would I?
As above; if this is an honest and coherent response, then great, we agree that "good things" simply refers to what we value.
Whatever that means?
Do you update your other opinions if they turn out to be false? Sure, there are areas in which I endorse doing this.
What explains the difference?
So, you ask, shouldn't I endorse updating false moral beliefs as well? Sure, if I anticipate observable benefits to having true moral beliefs,
But that isn't the function of moral beliefs: their function is to guide action. You have admitted that your behaviour is guided by jail-avoidance.
Another area where I don't anticipate such benefits, and where I am similarly skeptical that the label "true beliefs" refers to anything or is worth talking about, is aesthetics. For example, sure, maybe my preference for blue over red is false, and a true aesthetic belief is that "red is more aesthetic than blue" is true. But... so what? Should I start preferring red over blue on that basis? Why on Earth would I do that?
You seem to be interested in the meta-level question of objective aesthetics. Why is that?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-06T00:06:40.877Z · LW(p) · GW(p)
Is the "we" individual-by-individual or collective?
I think that's a separate discussion, and I don't think spinning it off will be productive. Feel free to replace "we" with "I" if that's clearer. If it's still not clear what I mean, I'm content to let it drop there.
And where did you get the idea that Objective metaethics means giving up on values?
I'm not sure what "giving up on values" means.
How does [I value staying out of jail] differ from "jail is bad-for-me"?
Beats me. Perhaps it doesn't.
If I thought that the truth-values of moral claims referred only to what I value, I wouldn't be making much of a pitch for objectivism, would I?
No, you wouldn't.
Whatever [what-we-value] means?
Yes.
What explains the difference [between areas where I endorse updating false opinions and those where I don't] ?
Whether concerning myself with the truth-values of the propositions expressed by opinions reliably provides observable and differential benefits.
But [observable benefits] isn't the function of moral beliefs: their function is to guide action.
I agree that beliefs guide action (this is not just true of moral beliefs).
If the sole function of moral beliefs is to guide action without reference to expected observable benefits, I don't see why I should prefer "true" moral beliefs (whatever that means) to "false" ones (whatever that means).
You have admitted that your behaviour is guided by jail-avoidance.
Yes. Which sure sounds like a benefit to me.
You seem to be interested in the meta-level question of objective aesthetics. Why is that?
I don't seem that way to myself, actually. I bring it up as another example of an area where some people assert there are objective truths and falsehoods, but where I see no reason to posit any such thing...positing the existence of individual aesthetic values seems quite adequate to explain my observations.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-06T18:02:31.063Z · LW(p) · GW(p)
I think that's a separate discussion, and I don't think spinning it off will be productive. Feel free to replace "we" with "I" if that's clearer. If it's still not clear what I mean, I'm content to let it drop there.
I think it is a key issue. This is about ethical objectivism. If Good is a fact about what we value collectively, in your view, then your theory is along the lines of utilitarianism, which is near enough to objectivism AFAIC. Yet you seem to disagree with me about something.
What explains the difference [between areas where I endorse updating false opinions and those where I don't] ?
Whether concerning myself with the truth-values of the propositions expressed by opinions reliably provides observable and differential benefits.
If you concern yourself with the truth-values of your own beliefs about what is good and bad, revise your beliefs accordingly, and act on them, you will end up doing the right thing.
What's more beneficial than doing the right thing?
If the things you think are beneficial are in fact not beneficial, then you are not getting benefits; you just mistakenly think you are.
To actually get benefits, you have to know what is actually beneficial.
If the sole function of moral beliefs is to guide action without reference to expected observable benefits, I don't see why I should prefer "true" moral beliefs (whatever that means) to "false" ones (whatever that means).
Morality is all about what is truly beneficial. Those truths aren't observable: neither are the truths of mathematics.
I bring it up as another example of an area where some people assert there are objective truths and falsehoods, but where I see no reason to posit any such thing...positing the existence of individual aesthetic values seems quite adequate to explain my observations.
Are you a passive observer who never acts?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-06T18:23:33.735Z · LW(p) · GW(p)
Yet you seem to disagree with me about something.
It is not clear to me what we disagree about, precisely, if anything.
What's more beneficial than doing the right thing?
I don't know. It is not clear to me what the referent of "the right thing" is when you say it, or indeed if it even has a referent, so it's hard to be sure one way or another. (Yes, I do understand that you meant that as a rhetorical question whose correct answer was "Nothing.")
If the things you think are beneficial are in fact not beneficial, then you are not getting benefits; you just mistakenly think you are.
Yes, that's true.
To actually get benefits, you have to know what is actually beneficial.
No, that's false. But my expectation of actually getting benefits increases sharply if I know what is actually beneficial.
Morality is all about what is truly beneficial. Those truths aren't observable
I disagree.
neither are the truths of mathematics.
Supposing this is true, I don't see why it's relevant.
Are you a passive observer who never acts?
No.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-06T18:49:03.765Z · LW(p) · GW(p)
It is not clear to me what we disagree about, precisely, if anything.
Is ethical objectivism true, IYO?
It is not clear to me what the referent of "the right thing"
Doing things such that it is an objective fact that they are beneficial, and not just a possibly false belief.
Morality is all about what is truly beneficial. Those truths aren't observable
I disagree.
Explain how you observe the truth-value of a claim about what is beneficial.
neither are the truths of mathematics.
Supposing this is true, I don't see why it's relevant.
It is relevant to your attitude that only the observable matters in epistemology.
I bring it up as another example of an area where some people assert there are objective truths and falsehoods, but where I see no reason to posit any such thing...positing the existence of individual aesthetic values seems quite adequate to explain my observations.
Are you a passive observer who never acts?
No.
Then explaining your observations is not the only game in town.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-06T19:51:27.338Z · LW(p) · GW(p)
Is ethical objectivism true, IYO?
If you point me at a definition of ethical objectivism you consider adequate, I'll try to answer that question.
What's more beneficial than doing the right thing?
what the referent of "the right thing"
Doing things such that it is an objective fact that they are beneficial, and not just a possibly false belief.
So, you're asking what's more beneficial than doing things such that it's an objective fact that they are beneficial?
Presumably doing other things such that it's an objective fact that they are more beneficial is more beneficial than merely doing things such that it's an objective fact that they are beneficial.
Explain how you observe the truth-value of a claim about what is beneficial.
When I experience X having consequences I value in situations where I didn't expect it to, I increase my confidence in the claim that X is beneficial. When I experience X failing to have such consequences in situations where I did expect it to, I decrease my confidence in the claim.
It is relevant to your attitude that only the observable matters in epistemology.
How do unobservable mathematical truths matter in epistemology?
explaining your observations is not the only game in town.
That's true.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-06T20:13:03.385Z · LW(p) · GW(p)
If you point me at a definition of ethical objectivism you consider adequate, I'll try to answer that question.
"moral claims have subject-independent truth values".
Presumably doing other things such that it's an objective fact that they are more beneficial is more beneficial than merely doing things such that it's an objective fact that they are beneficial.
And doing things that aren't really beneficial at all isn't really beneficial at all.
When I experience X having consequences I value in situations where I didn't expect it to, I increase my confidence in the claim that X is beneficial.
Explain how you justified the truth of the claim "what Dave values is beneficial"
How do unobservable mathematical truths matter in epistemology?
Epistemology is about truth.
explaining your observations is not the only game in town.
That's true.
So you no longer reject metaethics on the basis that it doesn't explain your observations?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-06T20:27:13.114Z · LW(p) · GW(p)
Is ethical objectivism ("moral claims have subject-independent truth values") true, IYO?
No.
And doing things that aren't really beneficial at all isn't really beneficial at all.
Yes, that's true.
Explain how you justified the truth of the claim "what Dave values is beneficial"
Increasing it has consequences I value.
Epistemology is about truth.
No, epistemology is about knowledge. For example, unknowable truths are not within the province of epistemology.
So you no longer reject metaethics on the basis that it doesn't explain your observations?
If you point me to where in this discussion I rejected metaethics on the basis that it doesn't explain my observations, I will tell you if I still stand by that rejection. As it stands I don't know how to answer this question.
Replies from: TheAncientGeek, TheOtherDave↑ comment by TheAncientGeek · 2013-11-07T11:03:35.183Z · LW(p) · GW(p)
And doing things that aren't really beneficial at all isn't really beneficial at all.
Yes, that's true.
So you have beliefs that you have done beneficial things, but you don't know if you have, because you don't know what is beneficial, because you have never tried to find out, because you have assumed there is no answer to the question?
Explain how you justified the truth of the claim "what Dave values is beneficial"
Increasing it has consequences I value.
That boils down to "what Dave values, Dave values".
Epistemology is about truth.
No, epistemology is about knowledge. For example, unknowable truths are not within the province of epistemology
"Epistemic Logic: A Survey of the Logic of Knowledge" by Nicholas Rescher has a chapter on unknowable truth.
But that is not the point. The point was unobservable truth. You seem to have decided, in line with your previous comments, that what is unobservable is unknowable. But logical and mathematical truths are well-known examples of unobservable (non-empirical) truths.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-07T14:21:04.996Z · LW(p) · GW(p)
So you have beliefs that you have done beneficial things, but you don't know if you have, because you don't know what is beneficial, because you have never tried to find out, because you have assumed there is no answer to the question?
That doesn't seem to follow from what we've said thus far.
That boils down to "what Dave values, Dave values".
Absolutely. Which, IIRC, is what I said in the first place that inspired this whole conversation, so it certainly ought not surprise you that I'm saying it now.
The point was unobservable truth. You seem to have decided, in line with your previous comments, that what is unobservable is unknowable. But logical and mathematical truths are well-known examples of unobservable (non-empirical) truths.
(shrug) All right. Let's assume for the sake of comity that you're right, that we can come to know moral truths about our existence through a process divorced from observation, just like, on your account, we come to know logical and mathematical truths about our existence through a process divorced from observation.
So what are the correct grounds for deciding what is in the set of knowable unobserved objective moral truths?
For example, consider the claim "angles between 85 and 95 degrees, other than 90 degrees, are bad."
There are no observations (actual or anticipated) that would lead me to that conclusion, so I'm inclined to reject the claim on those grounds. But for the sake of comity I will set that standard aside, as you suggest. So... is that claim a knowable unobserved objective moral truth? A knowable unobserved objective moral falsehood? A moral claim whose unobserved objective truth-value is unknowable? A moral claim without an unobserved objective truth-value? Not a moral claim at all? Something else?
How do you approach that question so as to avoid mistaking one of those other things for knowable unobserved objective moral truths?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-07T15:27:05.219Z · LW(p) · GW(p)
So you have beliefs that you have done beneficial things, but you don't know if you have, because you don't know what is beneficial, because you have never tried to find out, because you have assumed there is no answer to the question?
That doesn't seem to follow from what we've said thus far.
Have you
a) seen outcomes which are beneficial, and which you know to be beneficial?
or
b) seen outcomes which you believe to be beneficial?
That boils down to "what Dave values, Dave values".
Absolutely. Which, IIRC, is what I said in the first place that inspired this whole conversation, so it certainly ought not surprise you that I'm saying it now.
AFAIC, this conversation is about your claim that ethical objectivism is false. That claim cannot be justified by a tautology like "what Dave values, Dave values".
The point was unobservable truth. You seem to have decided, in line with your previous comments, that what is unobservable is unknowable. But logical and mathematical truths are well-known examples of unobservable (non-empirical) truths.
(shrug) All right. Let's assume for the sake of comity that you're right, that we can come to know moral truths about our existence through a process divorced from observation, just like, on your account, we come to know logical and mathematical truths about our existence through a process divorced from observation.
So what are the correct grounds for deciding what is in the set of knowable unobserved objective moral truths?
Its being a special case of an overarching principle such as "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law", or "increase aggregate utility".
For example, consider the claim "angles between 85 and 95 degrees, other than 90 degrees, are bad." There are no observations (actual or anticipated) that would lead me to that conclusion, so I'm inclined to reject the claim on those grounds. But for the sake of comity I will set that standard aside, as you suggest. So... is that claim a knowable unobserved objective moral truth?
How does it even relate to action?
Not a moral claim at all?
How does it even relate to action?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-07T16:07:55.258Z · LW(p) · GW(p)
AFAIC, this conversation is about your claim that ethical objectivism is false.
I started all of this by saying:
I ought not behave as though objective, scientific facts exist until I have some grounds for doing so, and that "some people think their intuitions reflect objective, scientific facts" doesn't qualify as a ground for doing so. At this point, one could ask "well, OK, what qualifies as a ground for behaving as though objective, scientific facts exist?" and the conversation can progress in a vaguely sensible direction. I would similarly ask (popping your metaphorical stack) "what qualifies as a ground for behaving as though objective moral facts exist?" and refrain from behaving as though they do until some such ground is demonstrated.
As far as I can tell, no such ground has been demonstrated throughout our whole discussion.
So I continue to endorse not behaving as though objective moral facts exist.
But as far as you're concerned, what we're discussing instead is whether I'm justified in claiming that ethical objectivism is false. (shrug) OK. I retract that claim. If that ends this discussion, I'm OK with that.
Have you
a) seen outcomes which are beneficial, and which you know to be beneficial?
or
b) seen outcomes which you believe to be beneficial?
I have seen outcomes that I'm confident are beneficial. I don't think the relationship of such confidence to knowledge or belief is a question you and I can profitably discuss.
So what are the correct grounds for deciding what is in the set of knowable unobserved objective moral truths?
Its being a special case of an overarching principle such as "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law", or "increase aggregate utility".
This just triggers regress. That is, OK, I'm evaluating moral claim X, for which I have no observed evidence, to see whether it's a knowable unobserved objective moral truth. To determine this, I first evaluate whether I can will that X should become a universal law. OK, fine... what are the correct grounds for deciding whether I can will that X be a universal law?
But you additionally suggest that "increase aggregate utility" is the determiner here... which suggests that if X increases the aggregate utility of everything everywhere, I can will that X should become a universal law, and therefore can know that X is an objective moral truth.
Yes? Have I understood your view correctly?
How does it even relate to action?
Well, if angles between 85 and 95 degrees, other than 90 degrees, are bad, then it seems to follow that given a choice of angle between 85 and 95 degrees, I should choose 90 degrees. That sure sounds like a relationship to an action to me. So, to repeat my question, is "angles between 85 and 95 degrees, other than 90 degrees, are bad" a knowable unobserved objective moral truth, or not?
By the standard you describe above, I should ask whether choosing 90 degrees rather than other angles between 85 and 95 degrees increases aggregate utility. If it does, then "angles between 85 and 95 degrees, other than 90 degrees, are bad" is an objective moral truth, otherwise it isn't. Yes?
So, OK. How do I determine that?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-07T16:53:47.877Z · LW(p) · GW(p)
I have seen outcomes that I'm confident are beneficial.
Confidence isn't knowledge. So: b). You have only seen outcomes which you believe to be beneficial.
I don't think the relationship of such confidence to knowledge or belief is a question you and I can profitably discuss.
Why not?
OK, fine... what are the correct grounds for deciding whether I can will that X be a universal law?
If considering murder, you ask yourself whether you would want everyone to be able to murder you, willy-nilly. Far from regressing, the answer to that grounds out in one of those kneejerk obviously-not-valuable-to-Dave intuitions you have been appealing to throughout this discussion.
"increase aggregate utility"
Does your murdering someone increase aggregate utility?
... I should choose 90 degrees.
How does that affect other people? Choices that affect only yourself are aesthetics, not ethics.
Replies from: TheOtherDave, TheOtherDave↑ comment by TheOtherDave · 2013-11-07T17:07:59.103Z · LW(p) · GW(p)
Tapping out here.
↑ comment by TheOtherDave · 2013-11-07T17:07:19.142Z · LW(p) · GW(p)
I'll address your example after you address mine.
↑ comment by TheOtherDave · 2013-11-06T20:41:53.179Z · LW(p) · GW(p)
Actually, on further thought... by "moral claims have subject-independent truth values" do you mean "there exists at least one moral claim with a subject-independent truth value"? Or "All moral claims have subject-independent truth values"?
I'm less confident regarding the falsehood of the former than the latter.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-07T10:00:32.265Z · LW(p) · GW(p)
The former.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-07T14:23:46.763Z · LW(p) · GW(p)
Fair enough. So, which moral claims have subject-independent truth values, on your account?
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-07T15:15:25.481Z · LW(p) · GW(p)
Most of them. But there may be some claims that are self-reflexive, e.g. "to be the best person I can be, I should get a PhD".
comment by Larks · 2013-10-29T23:55:34.877Z · LW(p) · GW(p)
I found it much clearer when I realised he was basically talking about rigid designation. It didn't help when EY started talking about rigid designation and using the terminology incorrectly.
Reference class: I studied academic philosophy.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-10-30T00:29:58.354Z · LW(p) · GW(p)
It didn't help when EY started talking about rigid designation and using the terminology incorrectly.
I didn't notice that, can you elaborate?
Replies from: Larks↑ comment by Larks · 2013-10-31T01:13:35.651Z · LW(p) · GW(p)
Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities. I want to talk about a particular logical entity, as it might be defined by either axioms or inchoate images, regardless of which word-sounds may be associated to it. If you want to call that "rigid designation", that seems to me like adding a level of indirection; I don't care about the word 'fair' in the first place, I care about the logical entity of fairness. (Or to put it even more sharply: since my ontology does not have room for physics, logic, plus designation, I'm not very interested in discussing this 'rigid designation' business unless it's being reduced to something else.)
He seems to have thought Rigid Designation was about a magic connection between sound wave patterns and objects, such that the sound waves would always refer to the same object, rather than that those sound waves, when spoken by such a speaker in such a context, would always refer to the same object, regardless of which possible world that object was in.
I'm sorry if that explanation was a little unclear; it was aimed at non-philosophers, but I suspect you could explain it better.
EDIT: see also prior discussion
Replies from: komponisto↑ comment by komponisto · 2013-11-01T10:47:18.500Z · LW(p) · GW(p)
(In other words, he confused rigid designation with semantic externalism.)
comment by Shmi (shminux) · 2013-10-29T23:38:34.558Z · LW(p) · GW(p)
Personally, I remain confused about his claim that morality is objective in some sense in The Bedrock of Morality: Arbitrary?, no matter how many times I reread it.
Replies from: passive_fist, Vaniver↑ comment by passive_fist · 2013-10-30T01:10:18.461Z · LW(p) · GW(p)
I think it all boils down to this quote at the end (emphasis mine):
We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't.
I agree with you that this claim is confusing (I am confused about it as well). I don't think, however, that he's trying to justify that it's objective. He's merely stating what it is and deferring the justification to a later time.
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-10-30T08:24:05.568Z · LW(p) · GW(p)
We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't.
Translated:
Humans are preferable to Pebblesorters according to the human utility function, because humans care about maximizing the human utility function, and the Pebblesorters don't.
Replies from: nshepperd
↑ comment by nshepperd · 2013-10-30T22:51:13.840Z · LW(p) · GW(p)
But that's not what "better" means at all, any more than "sorting pebbles into prime heaps" means "doing whatever pebblesorters care about".
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-10-31T12:49:51.957Z · LW(p) · GW(p)
any more than "sorting pebbles into prime heaps" means "doing whatever pebblesorters care about"
How specifically are these two things different? I can imagine some differences, but I am not sure which one you meant.
For example, if you meant that sorting pebbles is what they do, but it's not their terminal value and certainly not their only value (just like humans build houses, but building houses is not our terminal value), in that case you are fighting the hypothetical.
If you meant that in a different universe pebblesorter-equivalents would evolve differently and wouldn't care about sorting pebbles into prime heaps, then the pebblesorter-equivalents wouldn't be pebblesorters. Analogously, there could be some human-equivalents in a parallel universe with inhuman values; but they wouldn't be humans.
Or perhaps you meant the difference between extrapolated values and "what now feels like a reasonable heuristics". Or...
Replies from: nshepperd↑ comment by nshepperd · 2013-10-31T16:47:55.978Z · LW(p) · GW(p)
What I meant is that "prime heaps" are not about pebblesorters. There are exactly zero pebblesorters in the definitions of "prime", "pebble" and "heap".
If I told you to sort pebbles into prime heaps, the first thing you'd do is calculate some prime numbers. If I told you to do whatever pebblesorters care about, the first thing you'd do is find one and interrogate it to find out what they valued.
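A minimal sketch of the asymmetry being pointed out here (the function and method names below are illustrative, nothing from the original thread): "prime heap" compiles straight into arithmetic, while "whatever pebblesorters care about" cannot be evaluated without querying a pebblesorter.

```python
def is_prime(n):
    # Primality is defined purely arithmetically; no pebblesorter appears anywhere.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def correct_heap_sizes(limit):
    # "Sort pebbles into prime heaps" can be carried out with arithmetic alone...
    return [n for n in range(2, limit) if is_prime(n)]

def whatever_pebblesorters_care_about(pebblesorter):
    # ...whereas this instruction requires finding a pebblesorter and interrogating it
    # (report_values is a hypothetical method standing in for that interrogation).
    return pebblesorter.report_values()

print(correct_heap_sizes(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```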
Replies from: Viliam_Bur↑ comment by Viliam_Bur · 2013-10-31T18:07:33.448Z · LW(p) · GW(p)
If I gave you the source code of a Friendly AI, all you'd have to do would be to run the code.
If I told you to do whatever human CEV is, you'd have to find and interrogate some humans.
The difference is that by analysing the code of the Friendly AI you could probably learn some facts about humans, while by learning about prime numbers you don't learn about the pebblesorters. But that's a consequence of humans caring about humans, and pebblesorters not caring about pebblesorters. Our values are more complex than prime numbers and include caring about ourselves... which is probably likely to happen to a species created by evolution.
↑ comment by Vaniver · 2013-10-30T00:18:57.795Z · LW(p) · GW(p)
I think he means that if the pebblesorters came along, and studied humanity, they would come up with a narrow cluster which they would label "h-right" instead of their "p-right", and that the cluster h-right is accessible to all scientifically-minded observers. It's objective in the sense that "the number of exterior columns in the design of the Parthenon" is objective, but not in the sense that "15*2+8*2" is objective. The first is 46, but could have been something else in another universe; the second is 46, and can't be something else in another universe.
But... it looks like he's implying that "h-right" is special among "right"s in that it can't be something else in another universe, but that looks wrong for simple reasons. It's also not obvious to me that h-right is a narrow cluster.
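As a quick check of the arithmetic in the example (the column-counting gloss is mine, on the usual 8-by-17 reading of the Parthenon's peristyle, not something spelled out above):

$$\underbrace{8 \times 2}_{\text{the two facades}} \;+\; \underbrace{15 \times 2}_{\text{the two flanks, corners already counted}} \;=\; 16 + 30 \;=\; 46$$

The left-hand grouping is a contingent fact about which building the architects put up; the identity 16 + 30 = 46 is not.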
Replies from: None, Error↑ comment by [deleted] · 2013-10-30T00:22:01.805Z · LW(p) · GW(p)
But... it looks like he's implying that "h-right" is special among "right"s in that it can't be something else in another universe, but that looks wrong for simple reasons. It's also not obvious to me that h-right is a narrow cluster.
It's because you're a human. You can't divorce yourself from being human while thinking about morality.
Replies from: Vaniver↑ comment by Vaniver · 2013-10-30T00:40:17.518Z · LW(p) · GW(p)
It's because you're a human. You can't divorce yourself from being human while thinking about morality.
It's not clear to me that the first of those statements implies the second of those statements. As far as I can tell, I can divorce myself from being human while thinking about morality. Is there some sort of empirical test we can do to determine whether or not that's correct?
Replies from: Viliam_Bur, None↑ comment by Viliam_Bur · 2013-10-30T08:21:21.900Z · LW(p) · GW(p)
As far as I can tell, I can divorce myself from being human while thinking about morality.
Seems to me that if you weren't human, you wouldn't care about morality (and instead care about paperclips or whatever). So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it's a human in disguise. Otherwise it would be very difficult to locate morality in the vast set of "things a mind could consider valuable", so there is almost zero probability that the neutral disembodied mind would spend even a few seconds thinking about it.
Replies from: Vaniver, TheAncientGeek, Carinthium↑ comment by Vaniver · 2013-10-30T14:27:45.967Z · LW(p) · GW(p)
Seems to me that if you weren't human, you wouldn't care about morality (and instead care about paperclips or whatever).
If you take "morality" to be "my peculiar preference for the letter v," but it seems to me that a more natural meaning of "morality" is "things other people should do." Any agent which interacts with other agents has both a vested stake in how windfalls are distributed and in the process used to determine how windfalls are distributed, and so I'd like to talk about "fair" in a way that paperclippers, pebblesorters, and humans find interesting.
That is, how is it difficult to think about "my particular value system," "value systems in general," "my particular protocols for interaction," and "protocols for interaction in general" as different things? Why, when Eliezer is so quick to taboo words and get to the heart of things in other areas, does he not do so here?
So even if you try to imagine yourself as some kind of neutral disembodied mind, the fact that this mind is interested in morality (instead of paperclips) shows that it's a human in disguise.
But when modelling a paperclipper, the neutral disembodied mind isn't interested in human morality, and is interested in paperclips, and thinks of desire for paperclips as the universal impulse. That is to say, I think I have more control over my interests than this thought experiment is presuming.
Replies from: nshepperd↑ comment by nshepperd · 2013-10-30T22:50:10.666Z · LW(p) · GW(p)
"things other people should do."
You've passed the recursive buck here.
Replies from: Vaniver↑ comment by Vaniver · 2013-10-31T00:01:02.057Z · LW(p) · GW(p)
Sort of? I'm not trying to explain morality, but label it, and I think that the word "should" makes a decent label for the cluster of things which make up the "morality" I was trying to point to. The other version I came up with was like thirty words long, and I figured that 'should' was a better choice than that.
↑ comment by TheAncientGeek · 2013-11-04T19:15:24.052Z · LW(p) · GW(p)
I dare say that a disembodied, solipsistic mind wouldn't need to think much about morality. But an embodied mind, in a society, competing for resources with other agents, interacting with them in painful and pleasant ways would need something morality-like, some way of regulating interactions and assigning resources. "Social" isn't some tiny speck in mindspace, it's a large chunk.
↑ comment by Carinthium · 2013-10-31T14:14:16.407Z · LW(p) · GW(p)
It's true that he can't divorce himself from being human in a sense, but a few nitpicks.
1) In theory (although probably not in practice), Vaniver could imagine himself as another sort of hypothetically or actually possible moral being. Apes have morality, for example. You could counter with Eliezer's definition of morality here, but his case for moral convergence is fairly poor.
2) Even a completely amoral being can "think about morality" in the sense of attempting to predict human actions and taking moral codes into account.
3) I know this is very pedantic, but I would contend there are possible universes in which the phrase "You can't divorce yourself from being human while thinking about morality" does not apply. An Aristotelean universe in which creatures have purposes and inherently gain satisfaction from fulfilling their purpose would use an Aristotelean metaethics of purpose-fulfilment, and a Christian universe a metaethics of the Will of God; both would apply.
↑ comment by [deleted] · 2013-10-30T01:03:13.877Z · LW(p) · GW(p)
No, there's not, which is rather the point. It's like asking "what would it be like to move faster than the speed of light?" The very question is silly, and the results of taking it seriously aren't going to be any less silly.
Replies from: Vaniver↑ comment by Vaniver · 2013-10-30T01:14:32.346Z · LW(p) · GW(p)
No, there's not, which is rather the point. It's like asking "what would it be like to move faster than the speed of light?" The very question is silly, and the results of taking it seriously aren't going to be any less silly.
I still don't think I'm understanding you. I can imagine a wide variety of ways in which it could be possible to move more quickly than c, and a number of empirical results of the universe being those ways, and tests have shown that this universe does not behave in any of those ways.
(If you're trying to demonstrate a principle by example, I would prefer you discuss the principle explicitly.)
↑ comment by Error · 2013-10-30T01:48:20.226Z · LW(p) · GW(p)
Datapoint: I didn't find Metaethics all that confusing, although I am not sure I agree with it.
It looks like he's implying that "h-right" is special among "right"s in that it can't be something else in another universe, but that looks wrong for simple reasons. It's also not obvious to me that h-right is a narrow cluster.
I had this impression too, and have more or less the same sort-of-objection to it. I say "sort of" because I don't find "h-right as a narrow cluster" obvious, but I don't find it obviously wrong either. It feels like it should be a testable question but I'm not sure how one would go about testing it, given how crap humans are at self-reporting their values and beliefs.
On edit: Even if h-right isn't a narrow cluster, I don't think it would make the argument inconsistent; it could still work if different parts of humanity have genuinely different values modeled as, say, h1-right, h2-right, etc. At that point I'm not sure the theory would be all that useful, though.
Replies from: Vaniver↑ comment by Vaniver · 2013-10-30T02:11:48.757Z · LW(p) · GW(p)
I say "sort of" because I don't find "h-right as a narrow cluster" obvious, but I don't find it obviously wrong either.
I think part of the issue is that "narrow" might not have an obvious reference point. But it seems to me that there is a natural one: a single decision-making agent. That is, one might say "it's narrow because the moral sense of all humans that have ever lived occupies a dot of measure 0 in the total space of all possible moral senses," but that seems far less relevant to me than the question of whether the intersection of those moral senses is large enough to create a meaningful agent. (Most likely there's a more interesting aggregation procedure than intersection.)
Even if h-right isn't a narrow cluster, I don't think it would make the argument inconsistent; it could still work if different parts of humanity have genuinely different values modeled as, say, h1-right , h2-right, etc. At that point I'm not sure the theory would be all that useful, though.
I do think that it makes the part of it that wants to drop the "h" prefix, and just talk about "right", useless.
As well, my (limited!) understanding of Eliezer's broader position is that there is a particular cluster, which I'll call h0-right, which is an attractor- the "if we knew more, thought faster, were more the people we wished we were, had grown up farther together" cluster- such that we can see h2-right leads to h1-right leads to h0-right, and h-2-right leads to h-1-right leads to h0-right, and h2i-right leads to hi-right leads to h0-right, and so on. If such a cluster does exist, then it makes sense to identify it as a special cluster. Again, it's non-obvious to me that such a cluster exists, and I haven't read enough of the CEV paper / other work to see how this is reconciled with the orthogonality thesis, and it appears that word doesn't appear in the 2004 writeup.
comment by Douglas_Knight · 2013-10-30T13:04:56.446Z · LW(p) · GW(p)
aside from a lot of arguing about definitions over whether Eliezer counts as a relativist
I think the whole point was to taboo "realist" and "relativist." So if people come out of the sequence arguing about those definitions, they don't seem to have gotten anything out of the sequence. So, yes, aside from everything, there's no other problem. But that doesn't help you narrow down the problem. I suspect this is either strong agreement or strong disagreement with gjm, but I don't know which.
comment by TheOtherDave · 2013-10-30T02:56:39.847Z · LW(p) · GW(p)
It didn't seem terribly compelling to me, but whether that was a failure of understanding or not I can't really say.
For my own part, I'm perfectly content to say that we care about what we (currently) care about because we care about it, so all of this "moral miracle" stuff about how what we (currently) care about really is special seems unnecessary. I can sort of understand why it's valuable rhetorically when engaging with people who really want some kind of real true specialness in their values, but I mostly think such people should get over it.
I also tend to doubt that "what humanity currently cares about" is as internally consistent and coherently extrapolatable as much of the moral philosophy in the Sequences and elsewhere on this site would seem to imply.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-10-31T03:26:14.097Z · LW(p) · GW(p)
For my own part, I'm perfectly content to say that we care about what we (currently) care about because we care about it, so all of this "moral miracle" stuff about how what we (currently) care about really is special seems unnecessary.
It is equally correct to say we believe what we believe; that doesn't make our beliefs true.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-31T03:50:12.238Z · LW(p) · GW(p)
Yes: valuing something implies that I value it, and believing something doesn't imply that it's true. Agreed.
I assume you're trying to imply that there exists some X that bears the same kind of relationship to valuing that truth has to belief, and that I'm making an analogous error by ignoring X and just talking about value as if I ignored truth and just talked about belief.
Then again, maybe not. You seem fond of making these sorts of gnomic statements and leaving it to others to unpack your meaning. I'm not really sure why.
Anyway, if that is your point and you feel like talking about what you think the X I'm illegitimately ignoring is, or if your point is something else and you feel like actually articulating it, I'm listening.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-10-31T04:07:12.271Z · LW(p) · GW(p)
Well, the common name for this X is something being "moral" or "right" but it appears a lot of people in this thread like to use those words in non-standard ways.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-31T13:37:11.588Z · LW(p) · GW(p)
If you mean what I think you mean, then I agree... I'm disregarding the commonly-referenced "morality" or "rightness" of acts that somehow exists independent of the values that various value-having systems have.
If it turns out that such a thing is important, then I'm importantly mistaken.
Do you believe such a thing is important?
If so, why?
↑ comment by TheAncientGeek · 2013-11-04T20:22:55.115Z · LW(p) · GW(p)
I assume you're trying to imply that there exists some X that bears the same kind of relationship to valuing that truth has to belief, and that I'm making an analogous error by ignoring X and just talking about value as if I ignored truth and just talked about belief.
I think that is a distinct possibility.
Do you believe such a thing is important?
What's more important? What would serve as a good excuse for doing immoral things, or not knowing right from wrong?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T00:33:36.426Z · LW(p) · GW(p)
What would serve as a good excuse for doing immoral things, or not knowing right from wrong?
The lack of anything depending on whether an act was immoral; the lack of any consequences to not knowing right from wrong.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T09:42:01.120Z · LW(p) · GW(p)
Firstly, you are assuming something that many would disagree with: that an act with no consequences can be immoral, rather than being automatically morally neutral.
Secondly: even if true, that is a special case.
The importance of morality flows from its obligatoriness.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T14:26:40.173Z · LW(p) · GW(p)
Sure. You asked a very open-ended question, I made some assumptions about what you meant. If you'd prefer to clarify your own meaning instead, I'd be delighted, but that doesn't appear to be your style.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-05T15:05:48.520Z · LW(p) · GW(p)
The intended answer to "what is more important than morality", AKA "what is a good excuse for behaving immorally" was "nothing" (for all that you came up with ... nothing much). The question was intended to show that not only is morality important, it is ultimately so.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-05T15:50:40.029Z · LW(p) · GW(p)
Thanks for clarifying.
comment by timtyler · 2013-10-30T01:14:55.687Z · LW(p) · GW(p)
Why didn't people (apparently?) understand the metaethics sequence?
Perhaps back up a little. Does the metaethics sequence make sense? As I remember it, a fair bit of it was a long, rambling and esoteric bunch of special pleading - frequently working from premises that I didn't share.
Replies from: ChrisHallquist, Carinthium↑ comment by ChrisHallquist · 2013-10-30T02:55:57.911Z · LW(p) · GW(p)
Long and rambling? Sure. But then so is much else in the sequences, including the quantum mechanics sequence. As for arguing from premises you don't share, what would those premises be? It's a sincere question, and knowing your answer would be helpful for writing my own post(s) on metaethics.
Replies from: byrnema↑ comment by byrnema · 2013-10-30T17:53:36.548Z · LW(p) · GW(p)
I recall not being able to identify with the premises... some of them were really quite significant.
I now recall: it was with "The Moral Void", in which apparently I had different answers than expected.
"Would you kill babies if it was inherently the right thing to do?"
The post did discuss morality on/off switches later in the context of religion, as an argument against (wishing for / wanting to find) universally compelling arguments.
The post doesn't work for me because it seems to argue against the value of universally compelling arguments while implicitly assuming that, since universally compelling arguments don't exist, any universally compelling argument would be false.
I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this.
Also, there were some particular examples that didn't work for me, since I didn't have a spontaneous 'ugh' field around some of the things that were supposed to be bad.
I see Jack expressed this concept here:
And it definitely is true that much of our moral language functions like rigid designators, which hides the causal history of our moral beliefs. This explains why people don't feel like morality changes under counterfactuals-- i.e. if you imagine a world in which you have a preference for innocent children being murdered you don't believe that murdering children is therefore moral in that world. I outlined this in more detail here. I didn't use the term 'rigid designator' in that post, but the point is that what we think is moral is invariant in counterfactuals.
For whatever reason, I feel like my morality changes under counterfactuals.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-10-30T18:22:31.477Z · LW(p) · GW(p)
I happen to (mostly) agree that there aren't universally compelling arguments, but I still wish there were. The metaethics sequence failed to talk me out of valuing this.
But you realize that Eliezer is arguing that there aren't universally compelling arguments in any domain, including mathematics or science? So if that doesn't threaten the objectivity of mathematics or science, why should that threaten the objectivity of morality?
For whatever reason, I feel like my morality changes under counterfactuals.
Can you elaborate?
Replies from: byrnema↑ comment by byrnema · 2013-10-30T18:39:52.158Z · LW(p) · GW(p)
Waah? Of course there are universally compelling arguments in math and science. (Can you elaborate?)
For whatever reason, I feel like my morality changes under counterfactuals.
Can you elaborate?
It is easy for me to think of scenarios where any particular behavior might be moral. So that if someone asks me, 'imagine that it is the inherently right thing to kill babies,' it seems rather immediate to answer that in that case, killing babies would be inherently right.
This is also part of the second problem, where there aren't so many things I consider inherently wrong or right ... I don't seem to have the same ugh fields as the intended audience. (One thing which seems inherently right to me is that there would be an objective morality, it just happens to be apparently false in this universe, for now.)
Replies from: hairyfigment, ChrisHallquist↑ comment by hairyfigment · 2013-10-31T21:46:05.523Z · LW(p) · GW(p)
Of course there aren't. You can trivially imagine programming a computer to print, "2+2=5" and no verbal argument will persuade it to give the correct answer - this is basically Eliezer's example! He also says that, in principle, an argument might persuade all the people we care about.
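A minimal sketch of that example (the function name is my own, purely illustrative): the output never depends on the argument received, so no verbal input can move the program to the correct answer.

```python
def stubborn_arithmetic(verbal_argument: str) -> str:
    # Ignores whatever argument it is given and returns the wrong sum regardless.
    return "2+2=5"

print(stubborn_arithmetic("But if you put two apples next to two apples, you can count four."))
```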
While his point about evolution and 'psychological unity' seems less clear than I remembered, he does explicitly say elsewhere that moral arguments have a point. You should assign a high prior probability to a given human sharing enough of your values to make argument worthwhile (assuming various optimistic points about argumentation in general with this person). As for me, I do think that moral questions which once provoked actual war can be settled for nearly all humans. I think logic and evidence play a major part in this. I also think it wouldn't take much of either to get nearly all humans to endorse, eg, the survival of humanity - if you think that part's unimportant, you may be forgetting Eliezer's goal (and in the abstract, you may be thinking of a narrower range of possible minds).
One thing which seems inherently right to me is that there would be an objective morality, it just happens to be apparently false in this universe
How could it be true, aside from a stronger version of the previous paragraph? I don't know if I understand what you want.
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2013-11-04T20:11:30.173Z · LW(p) · GW(p)
Of course there aren't. You can trivially imagine programming a computer to print, "2+2=5" and no verbal argument will persuade it to give the correct answer -
You can't persuade rocks either. Don't you think this might be just a wee bit of a strawman of the views of people who believe in universally compelling arguments?
↑ comment by ChrisHallquist · 2013-10-30T19:06:37.064Z · LW(p) · GW(p)
Waah? Of course there are universally compelling arguments in math and science. (Can you elaborate?)
Okay... I need to write a post about that.
It is easy for me to think of scenarios where any particular behavior might be moral. So that if someone asks me, "imagine that it is the inherently right thing to kill babies, " it seems rather immediate to answer that in that case, killing babies would be inherently right.
Are you really imagining a coherent possibility, though? I mean, you could also say, "If someone tells me, 'imagine that p & ~p,' it seems that in that case, p & ~p."
Replies from: byrnema↑ comment by byrnema · 2013-10-30T19:19:40.036Z · LW(p) · GW(p)
Are you really imagining a coherent possibility, though?
I am. It's so easy to do I can't begin to guess what the inferential distance is.
Wouldn't it be inherently right to kill babies if they were going to suffer? Wouldn't it be inherently right to kill babies if they had negative moral value to me, such as baby mosquitoes carrying malaria?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-30T20:01:36.730Z · LW(p) · GW(p)
I think it's fair, principle of charity and all, to assume "babies" means "baby humans" specifically. A lot of things people say about babies becomes at best false, at worst profoundly incoherent, without this assumption.
But you're right of course, that there are many scenarios in which killing human babies leads to better solutions than not killing them. Every time I consider pointing this out when this question comes up, I decide that the phrase "inherently right" is trying to do some extra work here that somehow or other excludes these cases, though I can't really figure out how it is supposed to do that work, and it never seems likely that raising the question will get satisfying answers.
This seems like it might get back to the "terminal"/"instrumental" gulf, which is where I often part company with LW's thinking about values.
Replies from: byrnema↑ comment by byrnema · 2013-10-30T21:01:47.161Z · LW(p) · GW(p)
Yeah, these were just a couple examples. (I can also imagine feeling about babies the way I feel about mosquitos with malaria. Do I have an exceptionally good imagination? As the imagined feelings become more removed from reality, the examples must get more bizarre, but that is the way with counter-factuals.) But there being ready examples isn't the point. I am asked to consider that I have this value, and I can, there is no inherent contradiction.
Perhaps as you suggest, there is no p & ~p contradiction because preserving the lives of babies is not a terminal value. And I should replace this example with an actual terminal value.
But herein lies a problem. Without objective morality, I'm pretty sure I don't have any terminal values -- everything depends on context. (I'm also not very certain what a terminal value would look like if there was an objective morality.)
↑ comment by Carinthium · 2013-10-31T14:05:50.763Z · LW(p) · GW(p)
Could you clarify a bit? I'd be curious to hear your ethical views myself, particularly your metaethical views. I was convinced of some things by the Metaethics sequence (it convinced me that despite the is-ought distinction ethics could still exist), but I may have made a mistake so I want to know what you think.
Replies from: timtyler↑ comment by timtyler · 2013-11-01T10:53:50.965Z · LW(p) · GW(p)
That's an open-ended question which I don't have many existing public resources to address - but thanks for your interest. Very briefly:
I like evolution; Yudkowsky seems to dislike it. Ethically, Yudkowsky is an intellectual descendant of Huxley, while I see myself as thinking more along the lines of Kropotkin.
Yudkowsky seems to like evolutionary psychology. So far evolutionary psychology has only really looked at human universals. To take understanding of the mind further, it is necessary to move to a framework of gene-meme coevolution. Evolutionary psychology is politically correct - through not examining human differences - but is scientifically very limited in what it can say, because of the significance of cultural transmission on human behaviour.
Yudkowsky likes utilitarianism. I view utilitarianism largely as a pretty unrealistic ethical philosophy adopted by ethical philosophers for signalling reasons.
Yudkowsky is an ethical philosopher - and seems to be on a mission to persuade people that giving control to a machine that aggregates their preferences will be OK. I don't have a similar axe to grind.
comment by Dorikka · 2013-10-30T04:20:16.220Z · LW(p) · GW(p)
It's been a while since I read (part of) the metaethics sequence. With that said:
I have a pretty strong aversion to the word "right" used in discourse. The word is used to mean a few different things, and people often fail to define their use of it sufficiently for me to understand what they're talking about. I don't remember being able to tell whether Eliezer was attempting to make a genuine argument for moral realism; when he introduced the seemingly sensical term h-right (recognizing that things humans often feel are "right" are simply terminal values of humans/that subset of humans) and then seemed to declare h-right -> right, I stopped reading shortly thereafter (as I was either totally failing to parse or he was making no sense).
comment by [deleted] · 2013-10-30T00:09:54.845Z · LW(p) · GW(p)
Let's get some data (vote accordingly):
Did you understand the metaethics sequence, when you read it?
[pollid:572]
Replies from: shminux, ChrisHallquist, ChrisHallquist, Carinthium, None, None, None↑ comment by Shmi (shminux) · 2013-10-30T00:31:24.324Z · LW(p) · GW(p)
How do you know if you understood it? Is there a set of problems to test your understanding?
Replies from: Vaniver, somervta↑ comment by ChrisHallquist · 2013-10-30T00:26:52.764Z · LW(p) · GW(p)
I approve of having a poll, but isn't there a better way to do polls in the LW software?
Replies from: Vaniver↑ comment by ChrisHallquist · 2013-11-01T06:59:27.021Z · LW(p) · GW(p)
Oh wow, this is very different from what I would've expected, based on the way people talk about the metaethics sequence.
Guesses as to whether this is a representative sample?
In retrospect, I should've considered the possibility that "people don't understand the metaethics sequence!" was reflective of a loud minority... on the other hand, can anyone think of reasons why this poll might be skewed towards people who understood the metaethics sequence?
Replies from: Moss_Piglet↑ comment by Moss_Piglet · 2013-11-01T10:31:17.423Z · LW(p) · GW(p)
Because a large subset of people who don't understand things are unaware of their misunderstanding?
Replies from: Douglas_Knight, TheAncientGeek↑ comment by Douglas_Knight · 2013-11-04T20:25:28.858Z · LW(p) · GW(p)
Chris is surprised because he saw a lot of people saying that they themselves did not understand the sequence.
↑ comment by TheAncientGeek · 2013-11-04T20:34:24.340Z · LW(p) · GW(p)
Several people have tried to explain Lesswrongian metaethics to me, only to give up in confusion. Being able to explain something is the acid test of understanding it.
↑ comment by Carinthium · 2013-10-30T11:11:25.043Z · LW(p) · GW(p)
I voted for No, defined by when I first read it.
comment by cousin_it · 2013-10-30T12:58:31.352Z · LW(p) · GW(p)
If you decide to write that post, it would be great if you started by describing the potential impact of metaethics on FAI design, to make sure that we're answering questions that need answering and aren't just confusions about words. If anyone wants to take a stab here in the comments, I'd be very interested.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-30T14:23:04.627Z · LW(p) · GW(p)
Well... according to the SEP, metaethics encompasses the attempt to understand the presuppositions and commitments of moral practice.
If I'm trying to engineer a system that behaves morally (which is what FAI design is, right?), it makes some sense that I'd want to understand that stuff, just as if I'm trying to engineer a system that excavates tunnels I'd want to understand the presuppositions and commitments of tunnel excavation.
That said, from what I've seen it's not clear to me that the actual work that's been done in this area (e.g., in the Metaethics Sequence) actually serves any purpose other than rhetorical.
Replies from: V_V↑ comment by V_V · 2013-10-31T12:47:09.284Z · LW(p) · GW(p)
I think that framing the issue of AI safety in terms of "morality" or "friendliness" is a form of misleading anthropomorphization. Morality and friendliness are specific traits of human psychology which won't necessarily generalize well to artificial agents (even attempts to generalize them to non-human animals are often far-fetched).
I think that AI safety would be probably best dealt with in the framework of safety engineering.
↑ comment by TheOtherDave · 2013-10-31T13:40:32.768Z · LW(p) · GW(p)
All right. I certainly agree with you that talking about "morality" or "friendliness" without additional clarifications leads most people to conclusions that have very little to do with safe AI design. Then again, if we're talking about self-improving AIs with superhuman intelligence (as many people on this site are) I think the same is true of talking about "safety."
comment by fubarobfusco · 2013-10-30T16:49:22.880Z · LW(p) · GW(p)
Should we expect metaethics to affect normative ethics? Should people who care about behaving morally, therefore care about metaethics at all?
Put another way — Assume that there is a true, cognitivist, non-nihilist, metaethical theory M. (That is, M asserts that there exists at least one true moral judgment.) Do we expect that people who know or believe M will act more morally, or even have more accurate normative-ethical beliefs, than people who do not?
It's conceivable for metaethics to not affect normative ethics — by analogy to the metaphysics of mathematics. Platonists, formalists, and other schools of philosophy of math disagree about what it means to be a truth of mathematics, but (as far as I'm aware) they do not disagree on which mathematical inferences are valid.
It's conceivable for metaethics to affect normative ethics only in weird but relevant cases, such as FAI design. In this case, people who don't believe M would be less likely to create AI that is capable of behaving morally. So many around here would probably argue that people who don't believe M (that is, who do not possess true metaethical theory) should not create AI.
Replies from: Douglas_Knight, Carinthium↑ comment by Douglas_Knight · 2013-10-30T17:25:30.971Z · LW(p) · GW(p)
a tangential response on mathematics
Today there is little disagreement over inference, but a century ago there was a well-known conflict over the axiom of choice and a less known conflict over propositional logic. I've never been clear on the philosophy of intuitionism, but it was the driving force behind constructive mathematics. And it is pretty clear that Platonism demands proof by contradiction.
As for axioms of set theory, Platonists debate which axioms to add, while formalists say that undecidability is the end of the story. Platonists pretty consistently approve higher cardinal axioms, but I don't know that there's a good reason for their agreement. They certainly disagree about the continuum hypothesis. That's just Platonic set theorists. Mainstream mathematicians tend to (1) have less pronounced philosophy and (2) not care about higher cardinals, even if they are Platonists (but perhaps only because they haven't studied set theory). Bourbaki and Grothendieck used higher cardinals in mainstream work, but lately there has been a turn to standardizing on ZFC.
Going back to the more fundamental issue of constructive math: many years ago, I heard a talk by a mathematician who looked into formal proof checkers. They came out of CS departments and he was surprised to find that they were all constructivist. I'm not sure this reflects a philosophical difference between math and CS, rather than minimalism or planned application to the Curry-Howard correspondence.
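For a concrete (and purely illustrative, not drawn from that talk) sense of the Curry-Howard view such checkers rely on, here is a minimal sketch in Lean 4: a constructive proof is literally a small program that builds its result, whereas excluded middle is not derivable that way and has to be invoked as a classical axiom.

```lean
-- Curry-Howard in miniature: this proof of q ∧ p from p ∧ q is just a
-- program that takes the pair apart and reassembles it in the other order.
theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p :=
  ⟨h.right, h.left⟩

-- Excluded middle is not derivable constructively; in Lean it must be
-- invoked explicitly from the Classical namespace.
example (p : Prop) : p ∨ ¬p := Classical.em p
```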
↑ comment by Carinthium · 2013-10-31T14:03:55.841Z · LW(p) · GW(p)
If we take for granted that there is a true metaethical theory, then it depends on what that metaethical theory says. Unlike Eliezer, I would argue that there are plenty of possible metaethical theories that would at least arguably override subjective opinion. Two examples are the Will of God metaethical theory (if an omnipotent God existed) and the Purpose theory (which states that although humans are free-willed, some actions do or do not contribute to achieving a human's natural purpose in life. Said purpose is meant to be coherent, unlike evolutionary purpose, so better achievement would lead to satisfaction in the long run). These are debatable, but one way or another they make ethics more than mere human opinion.
Without any rational evidence, moral nihilism cannot be considered refuted. Under Eliezer's theory, moral nihilism is refuted in a sense, but without a rational argument to oppose it, the metaethicist has no answer. I was a moral nihilist until I read and understood the Sequences, for example.
Finally, metaethics is useful in one particular scenario: the ethical dilemma. When there is a conflict between two desires, both of which feel like they have some claim to moral rightness, correct metaethics is essential for sorting out what is best to do.
None of this helps with acting more selflessly and less selfishly, or with deciding to do what is right against selfish instincts. However, that's not what it needs to do.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-10-31T15:40:13.512Z · LW(p) · GW(p)
although humans are free-willed, some actions do or do not contribute to achieving a human's natural purpose in life. Said purpose is meant to be coherent, unlike evolutionary purpose, so better achievement would lead to satisfaction in the long run
If I understand you correctly, your claim is that if this turns out to be true, then I ought to perform those acts which contribute to achieving my natural purpose, whether I net-value satisfaction or not. Yes?
When there is a conflict between two desires, both of which feel like they have some claim to moral rightness, correct metaethics is essential for sorting out what is best to do.
Is it? It seems like object-level ethics achieves this purpose perfectly well. If it returns the result that they are equally good to do, then the correct thing to do is pick one. What do I need metaethics for, here?
Replies from: Carinthium↑ comment by Carinthium · 2013-11-01T03:24:34.715Z · LW(p) · GW(p)
The probability of that theory being true in reality is very, very low; it is a hypothetical universe. However, given that human beings have a tendency to define ethics in an objective light, in such a universe it would make sense to call it "objective ethics". Admittedly I assume you value satisfaction here, but my argument is about what to call moral behaviour more than about what you 'should' do.
Assuming Eliezer's metaethics is actually true, you have a very good point. Eliezer, however, might argue that it is necessary to avoid becoming a 'morality pump': doing a series of actions which feel right but whose effects in the world cancel each other out or end up at a clear loss.
However, there are other plausible theories. One possible theory (similar to one I once held but which I'm not sure about now) would say that you need to think through the implications of both courses of action, and how you would feel about the results, as best you can, so that you don't regret your decision.
In addition, you should at least concede that your theory only works in this universe, not in all possible universes. It really depends upon the assumption that Eliezer's metaethics, or something similar to it, is the true metaethics.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-01T15:14:16.884Z · LW(p) · GW(p)
I apologize, but after reading this a few times I don't really understand what you're saying here, not even approximately enough to ask clarifying questions. It's probably best to drop the thread here.
comment by TheAncientGeek · 2013-11-04T15:52:02.794Z · LW(p) · GW(p)
Reading the comments on the metaethics sequence, though, hasn't enlightened me about what exactly people had a problem with, aside from a lot of arguing about definitions over whether Eliezer counts as a relativist.
Since you (apparently) understand him, Chris, maybe you could settle the matter.