Is the orthogonality thesis at odds with moral realism?

post by ChrisHallquist · 2013-11-05T20:47:52.979Z · LW · GW · Legacy · 118 comments

Continuing my quest to untangle people's confusions about Eliezer's metaethics... I've started to wonder if maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.

I personally have a very hard time seeing why anyone would think that, perhaps in part because of my experience in philosophy of religion. Theistic apologists would love to be able to say, "moral realism, therefore a sufficiently intelligent being would also be good." It would help patch some obvious holes in their arguments and help them respond to things like Stephen Law's Evil God Challenge. But they mostly don't even try to argue that, for whatever reason.

You did see philosophers claiming things like that back in the bad old days before Kant, which raises the question of what's changed. I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question that the existence of God could be proven through reason (because it would get you suspected of being an atheist). It was even dangerous to advocate philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could use whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted.

Besides, even if you think an all-knowing being would also necessarily be perfectly good, it still seems perfectly possible to have an otherwise all-knowing being with a horrible blind spot regarding morality.

On the other hand, in the comments of a post on the orthogonality thesis, Stuart Armstrong mentions that:

I've read the various papers [by people who reject the orthogonality thesis], and they all orbit around an implicit and often unstated moral realism. I've also debated philosophers on this, and the same issue rears its head - I can counter their arguments, but their opinions don't shift. There is an implicit moral realism that does not make any sense to me, and the more I analyse it, the less sense it makes, and the less convincing it becomes. Every time a philosopher has encouraged me to read a particular work, it's made me find their moral realism less likely, because the arguments are always weak.

This is not super-enlightening, partly because Stuart is talking about people whose views he admits he doesn't understand... but on the other hand, maybe Stuart agrees that there is some kind of conflict there, since he seems to imply that he himself rejects moral realism.

I realize I'm struggling a bit to guess what people could be thinking here, but I suspect some people are thinking it, so... anyone?

118 comments

Comments sorted by top scores.

comment by DanArmak · 2013-11-05T21:26:06.403Z · LW(p) · GW(p)

I'm still bothered by the fact that different people mean different and in fact contradictory things by "moral realism".

The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it. That is also how I understood the term before reading these posts.

And yet Eliezer seems to call himself (or be called?) a moral realist, even though he explicitly only talks about MoralGood!Eliezer (or !Humanity, !CEV, etc.) This is confusing and consequently irritating to people including myself.

So when you ask if:

maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.

What do you mean? I think it's time to taboo "moral realism" because people have repeatedly failed to agree on what these words should mean.

Replies from: Brillyant, ChrisHallquist, TheAncientGeek, somervta, Creutzer, RobbBB, Stuart_Armstrong
comment by Brillyant · 2013-11-06T15:06:04.228Z · LW(p) · GW(p)

I concur. It seems to me this sort of thing always devolves into a debate over definitions without anyone acknowledging that's what is going on.

comment by ChrisHallquist · 2013-11-05T21:53:50.599Z · LW(p) · GW(p)

The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it. That is also how I understood the term before reading these posts.

The SEP doesn't say this. Actually, the SEP doesn't even use the word "objective." What the SEP actually says is, "Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right," and that's it.

And yet Eliezer seems to call himself (or be called?) a moral realist, even though he explicitly only talks about MoralGood!Eliezer (or !Humanity, !CEV, etc.) This is confusing and consequently irritating to people including myself.

On Eliezer's view, as I understand it, human!morality just is morality, simpliciter.

Replies from: DanArmak, RobbBB
comment by DanArmak · 2013-11-05T22:00:56.394Z · LW(p) · GW(p)

What the SEP actually says is, "Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right," and that's it.

This is all a matter of misunderstanding the meaning of words, and nobody is objectively right or wrong about that, since the disagreement is widespread - I'm not the only one to complain.

To me, an unqualified "fact" is, by implication, a simple claim about the universe, not a fact about the person holding the belief in that fact. An unqualified "fact" should be true or false in itself, without requiring you to further specify you meant the instance-of-that-fact that applies to some particular person with particular moral beliefs.

If SEP's usage of "fact" is taken to mean "a fact about the person holding the moral belief", the fact being that the person does hold that belief, then I don't understand what it would mean to say that there aren't any moral facts (i.e. moral anti-realism). Would it mean to claim that people have no moral beliefs? That's obviously false.

On Eliezer's view, as I understand it, human!morality just is morality, simpliciter.

That's exactly what bothers me - that he (and other people agree with this) redefines the word "morality" to mean human!morality, and this confuses people (I'm not the only one) who expect that word to mean something else, depending on context. (For example, the meta-concept of morality, as opposed to a concrete set of moral beliefs such as Eliezer!morality or humanity!morality.)

I agree that if everyone agreed to Eliezer's usage, then discussing morality would be easier. But it's just a fact that many people use the word differently from him. And when faced with such inconsistency, I would prefer that people either always qualify their usage, or taboo the word entirely.

Replies from: fubarobfusco, Douglas_Knight, Theaetetus, Leonhart, TheAncientGeek
comment by fubarobfusco · 2013-11-06T01:42:57.190Z · LW(p) · GW(p)

To me, an unqualified "fact" is, by implication, a simple claim about the universe, not a fact about the person holding the belief in that fact.

It's a fact that my height is less than six feet. It's also a fact that I disapprove of torture. These are objective facts, not opinions or one person's suspicions. It's not just that I object to claims that I'm seven feet tall; such claims would be false. And if someone says of me that I approve of torture, they're in error, as surely as if they said grass is red and ponies have seventeen hooves.

However, if when I say 'torture is wrong', I mean the fact that I disapprove of torture, I am using relativism. The statement "torture is wrong" is saying something about the speaker. But it's also saying something about the listener; I expect the listener to react in some way to the idea I'm expressing. I don't go around saying "torture is flooble"; I expect that listeners don't assign any significance to floobleness, but they do to wrongness.

Relativism does not mean that moral claims become mere matters of passing fancy; it means that moral claims express preferences of particular minds (including speakers' and listeners'); understanding them requires understanding something about the minds of those who make them.

Consider: As an English-speaker, you might find it distasteful if your neighbor named her daughter "Porn". You might even think it was wrong, especially if you had concerns about how other English-speakers would react to a little girl named Porn. If you were a Thai-speaker living in a Thai language community, you probably wouldn't see a problem, because "Porn" means "Blessing" in Thai and is a common female name. Understanding why the English-speaker is squicked by the idea of a little girl named Porn, but the Thai-speaker is not, requires knowing something about English and Thai languages, as well as about cultural responses to different sorts of mental imagery involving children.

But suppose that when I say "torture is wrong", I mean "Any intelligent mind, no matter its origin, if it is capable of understanding what 'torture' means, will disapprove of torture." That is, a relativisty-preferencey sort of "wrongness" follows from some fact that is true about all intelligent minds. That's a very different claim. It's a lot closer to what people tend to think of as "absolute, objective morality".

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-06T16:09:38.451Z · LW(p) · GW(p)

Relativism does not mean that moral claims become mere matters of passing fancy; it means that moral claims express preferences of particular minds (including speakers' and listeners'); understanding them requires understanding something about the minds of those who make them.

Understanding their content, understanding why the speaker considers them true, or understanding why they are true-for-speaker?

Consider: As an English-speaker, you might find it distasteful if your neighbor named her daughter "Porn". You might even think it was wrong, especially if you had concerns about how other English-speakers would react to a little girl named Porn. If you were a Thai-speaker living in a Thai language community, you probably wouldn't see a problem, because "Porn" means "Blessing" in Thai and is a common female name. Understanding why the English-speaker is squicked by the idea of a little girl named Porn, but the Thai-speaker is not, requires knowing something about English and Thai languages, as well as about cultural responses to different sorts of mental imagery involving children.

Is the more general principle "don't give your children embarrassing names" equally relative? How about "don't embarrass people in general"? Or "don't do unpleasant things to people in general"?

comment by Douglas_Knight · 2013-11-05T23:27:06.747Z · LW(p) · GW(p)

To me, an unqualified "fact" is, by implication, a simple claim about the universe, not a fact about the person holding the belief in that fact.

That is how Chris and SEP are using the term.

Replies from: DanArmak
comment by DanArmak · 2013-11-06T11:42:28.177Z · LW(p) · GW(p)

Then I don't understand Chris's comment. I said:

The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it.

And Chris replied:

The SEP doesn't say this.

Replies from: Lawsmith, Douglas_Knight
comment by Lawsmith · 2013-11-08T05:20:19.668Z · LW(p) · GW(p)

I took Chris's meaning to be that moral realism (as defined by the SEP) says that moral claims are fact claims possessing truth values but says nothing about the discoverability or computability of those truth values. Your definition would have every moral realist insisting that every moral claim can be proven either true or false, but it seems to me that Chris' definition allows moral realists to leave open Gödel-incompleteness status for moral claims, considering their truth or falsity to exist but be possibly incomputable, and still be moral realists. Or, to take no position on whether rational minds would come to the truth values of moral claims, only on whether the truth values existed. Your definition would exclude both of those from moral realism.

Chris, please correct me if this is not what you meant.

Replies from: DanArmak
comment by DanArmak · 2013-11-08T14:43:27.147Z · LW(p) · GW(p)

I have no problem with Gödel-incompleteness, uncomputability, and so on in a system that allows you to state any moral proposition.

However: if a moral realist believes that "moral claims are fact claims possessing truth values", then what does he believe regarding the proposition (1) "there exists at least one moral claim that can be proven true or false"? (Leaving aside claims that simply induce contradictions, are not well defined, etc.)

If he thinks such a claim exists, that is the same as saying there is a Universally Compelling Argument for or against that claim. And that is a logical impossibility. I can always construct a mind that is immune to any particular argument.

If he thinks no such claims exist, then it seems to be a kind of dualism - postulating a property "truth" of moral claims, which is not causally entangled with the physical world. It also seems pointless - why care about it if no actual mind can ever discover such truths?

ETA: talking about 'proving' claims true or false is a simplification. In reality we have degrees of beliefs in the truth-value of claims. But my point is that moral-realistic claims seem to be disengaged from reality; substitute "provide evidence for" in place of "prove" and my argument should still work.

comment by Douglas_Knight · 2013-11-06T18:15:26.460Z · LW(p) · GW(p)

If you needed my comment to decide that not understanding Chris's comment is a much better hypothesis than not understanding Chris and SEP's use of "fact," then you have much worse problems than not understanding Chris's comment.

Replies from: DanArmak
comment by DanArmak · 2013-11-06T20:26:14.871Z · LW(p) · GW(p)

I knew I didn't understand something about Chris's comment when I first read it. Could you explain it and help me understand, please?

comment by Theaetetus · 2013-11-07T16:48:00.127Z · LW(p) · GW(p)

I think the problem lies in your usage of the phrase "objective fact".

For example, if I claim "broccoli is tasty", my claim purports to report a fact. Plausibly, it purports to report a fact about me -- namely, that I like broccoli. If someone else were to claim "broccoli is tasty", her utterance would also purport to report a fact -- plausibly, the fact that she likes broccoli. So two token utterances of the very same type may pick out different facts. If this is the case, "broccoli is tasty" is true when asserted by broccoli-lovers and false when asserted by broccoli-haters. This should not be surprising, provided that it is interpreted as a disguised indexical claim.

Clearly, there is no experimental process whereby all right-thinking people can conclude that broccoli is tasty (or, alternatively, that broccoli is not tasty), even though several right-thinking people can justifiably arrive at this conclusion (by eating broccoli and liking it, say). Crucially, this conclusion is consistent with being a realist about broccoli-tastiness, but inconsistent with thinking there are objective facts about broccoli-tastiness (as you use the term). Likewise, one can be a realist about morality without thinking there are objective facts about morality (again, as you use the term).

Replies from: DanArmak
comment by DanArmak · 2013-11-07T17:35:45.055Z · LW(p) · GW(p)

When I say "objective fact", I mean (in context) a non-indexical one.

The original problem I raised was that some people who talked about things being "moral" meant those statements indexically, and others meant them objectively, and this created a lot of confusion.

one can be a realist about morality without thinking there are objective facts about morality (again, as you use the term).

I use the term "objective facts about morality" to mean "non-indexical facts which do not depend on picking out the person holding the moral beliefs". Moral realism is the belief such objective facts about morality can and/or do exist.

Replies from: Theaetetus
comment by Theaetetus · 2013-11-07T18:43:57.232Z · LW(p) · GW(p)

Of course, one is free to interpret "moral realism" as you do -- it's a natural enough interpretation, and may even be the most common one among philosophers. However, this is not the definition given in the SEP. According to it, "moral realists are those who think that...moral claims do purport to report facts and are true if they get the facts right". This does not entail that moral realists think that moral claims purport to report objective facts. But isn't such a loose interpretation of "moral realism" vacuous? As you say:

If SEP's usage of "fact" is taken to mean "a fact about the person holding the moral belief", the fact being that the person does hold that belief, then I don't understand what it would mean to say that there aren't any moral facts (i.e. moral anti-realism).

The moral anti-realist can choose from among two main alternatives if she wishes to deny moral realism, which I understand as being committed to the following two theses: (1) moral claims purport to report some (not necessarily objective) facts, and (2) some moral claims are true. First, she can maintain that all moral claims are false, which is a plausible suggestion: perhaps our moral claims purport to be about some normative aspect of the world, but the world lacks this normative aspect. Second, she can maintain that no moral claims purport to report facts; instead, all moral claims express emotions. On this view, saying "setting cats on fire is wrong" is tantamount to exclaiming "Boo!" or "Ew!"

Replies from: DanArmak
comment by DanArmak · 2013-11-07T22:22:28.016Z · LW(p) · GW(p)

First, she can maintain that all moral claims are false, which is a plausible suggestion: perhaps our moral claims purport to be about some normative aspect of the world, but the world lacks this normative aspect.

That would still be discussing an objective claim - just one that happens to be false. On a par with discussing a mathematical proposition which is false, or an empirical hypothesis which is false: both of these are independent of the person who says them or believes in them. Just so, discussing normative aspects of the world - whether they exist or not, and whether they are as claimed or not - isn't the same as discussing normative beliefs of a person.

So calling this moral anti-realism seems to use my sense of "moral realism" (objective fact), not the SEP's.

Second, she can maintain that no moral claims purport to report facts; instead, all moral claims express emotions. On this view, saying "setting cats on fire is wrong" is tantamount to exclaiming "Boo!" or "Ew!"

In one way, this is again moral anti-realism in my sense of the phrase: the claim that morals don't exist separately from the moral beliefs of concrete persons. (I hold this view.)

In another way, it can be read as a claim about what people mean when they talk about morals. In that case, the claim is plainly wrong, because many people are moral realists.

So to sum up, I'm afraid I still don't see what it would mean to be a moral anti-realist in what you say is the SEP sense.

comment by Leonhart · 2013-11-06T00:08:40.051Z · LW(p) · GW(p)

(For example, the meta-concept of morality, as opposed to a concrete set of moral beliefs such as Eliezer!morality or humanity!morality.)

But there isn't a meta-concept of morality. If you try to abstract one, you just end up with something like "that which motivates", which is empty unless you specify which specific minds can be motivated by it, and then you're back where you started.

Replies from: Carinthium, DanArmak, TheAncientGeek
comment by Carinthium · 2013-11-06T05:17:09.863Z · LW(p) · GW(p)

There are several different uses of morality, each of which results from a different meta-concept. An Aristotelean, for example, would talk about morality as fitting a human's purpose (as would a Christian). Everybody uses the same word for several fundamentally different concepts, some of which have no or little basis in fact.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-06T15:49:51.500Z · LW(p) · GW(p)

Literally true in isolation, but so completely irrelevant to this thread, I can only describe this comment as a lie.

comment by DanArmak · 2013-11-06T11:40:28.819Z · LW(p) · GW(p)

Different humans have somewhat different morals. I can still talk about "morals" in general, because they are a special kind of motivation in humans. Talking about morals in minds in general indeed makes little sense.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-12-09T09:45:52.070Z · LW(p) · GW(p)

Talking about morals in minds in general indeed makes little sense.

To whom? AFAICS, if you have minds living in a community, and they can interact in ways that cause negative and positive utility to each other, then you have the problem that morality solves... and that is a very general set of conditions.

Replies from: gjm
comment by gjm · 2013-12-09T11:17:22.086Z · LW(p) · GW(p)

I think what Dan means is that different kinds of minds in different kinds of community might need quite different solutions to the problem of interacting effectively, which might lead to quite different notions of morality, and that if that's true then you shouldn't expect any single notion of morality to be universally applicable.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-12-09T11:55:41.364Z · LW(p) · GW(p)

different kinds of minds in different kinds of community might need quite different solutions to the problem of interacting effectively,

Or they might not. It isn't at all obvious.

comment by TheAncientGeek · 2013-11-06T16:19:42.791Z · LW(p) · GW(p)

I came up with the meta-concept "behaving with positive regard to the preferences of others". Does that suffer from those problems?

comment by TheAncientGeek · 2013-11-06T15:28:29.362Z · LW(p) · GW(p)

If everyone agreed to EY's usage, discussing alien morality would be more difficult.

Replies from: army1987, DanArmak
comment by A1987dM (army1987) · 2013-11-07T11:42:58.546Z · LW(p) · GW(p)

How so? You can just say “alien values”.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-07T11:51:59.793Z · LW(p) · GW(p)

Not all values are moral.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-11-08T14:59:26.672Z · LW(p) · GW(p)

It's often difficult to figure out which human preferences are moral v. amoral. That would be a vastly more challenging task for an alien species, such that we'd probably be better off in most cases by prohibiting ourselves from sorting alien values in that way.

Replies from: TheAncientGeek, TheAncientGeek
comment by TheAncientGeek · 2013-11-19T12:49:53.164Z · LW(p) · GW(p)

That isn't a good reason to subsume moral values under values in the human case.

comment by TheAncientGeek · 2013-11-08T15:25:01.516Z · LW(p) · GW(p)

Deleted

comment by DanArmak · 2013-11-06T20:19:19.727Z · LW(p) · GW(p)

If everyone agreed on any one usage, that would be far better than everyone disagreeing.

Replies from: lmm
comment by lmm · 2013-11-06T21:49:24.293Z · LW(p) · GW(p)

True enough. But I think for the members of LW to adopt EY's usage would move us further away from that point, not closer.

comment by Rob Bensinger (RobbBB) · 2013-11-08T14:46:35.798Z · LW(p) · GW(p)

The SEP doesn't say this.

Yes, it does. But it says it in the article Moral Anti-Realism, not the article cited above, Moral Realism. The former article is very interested in objectivity constraints, but expresses a great deal of confusion about how to make sense of them; the latter article mentions them only to toss them out for being too confused. (It would not be too surprising if this has something to do with the latter author being more convinced of the truth of 'realism', hence wanting to make the Realism brand simple, clean, and appealing to a wider audience.)

If your encyclopedia has an 'Apples' article and a 'Non-Apples' article, and the two articles completely disagree about what it means to be an 'Apple', then you have your first clue that the word 'Apple' should always come pre-tabooed.

(ETA: More generally, be aware that 'the SEP says X' is less reliable than 'SEP article Y says X', because articles may disagree with each other. SEP is an anthology of introductory essays. We wouldn't normally say 'Very Short Introductions says X', even if we trust the VSI brand quite a bit.)

What the SEP actually says is, "Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right," and that's it.

Almost. Moral realists (even on the more inclusive definitions) also demand that at least one moral claim of this sort be true. (This is asserted in the sentence right after your quotation terminates.) That's why error theory is not a form of moral realism; realism is a (perhaps improper) subset of success theory.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-08T16:08:23.344Z · LW(p) · GW(p)

Oh my god, the "moral anti-realism" article has what is possibly the best opening paragraph I've seen in the SEP:

It might be expected that it would suffice for the entry for “moral anti-realism” to contain only some links to other entries in this encyclopedia. It could contain a link to “moral realism” and stipulate the negation of the view there described. Alternatively, it could have links to the entries “anti-realism” and “morality” and could stipulate the conjunction of the materials contained therein. The fact that neither of these approaches would be adequate—and, more strikingly, that following the two procedures would yield substantively non-equivalent results—reveals the contentious and unsettled nature of the topic.

comment by TheAncientGeek · 2013-11-06T15:13:34.403Z · LW(p) · GW(p)

Another hypothesis is that EY is inconsistent in his views, i.e. he attaches the standard meaning to MR, but doesn't always espouse it.

comment by somervta · 2013-11-06T07:22:39.670Z · LW(p) · GW(p)

I'm still bothered by the fact that different people mean different and in fact contradictory things by "moral realism".

Welcome to metaethics!

And yet Eliezer seems to call himself (or be called?) a moral realist

I seem to recall Eliezer saying that he was a cognitivist, but not a realist.

Replies from: Creutzer
comment by Creutzer · 2013-11-06T19:43:43.413Z · LW(p) · GW(p)

Oh, well, that makes some sense, actually. Since everybody knows that "cognitivism" means that moral statements have truth-values, whereas "realism" seems to be a confused notion - I actually interpreted it to mean the same thing as cognitivism because otherwise I don't know what on earth realism should even be.

comment by Creutzer · 2013-11-06T06:12:49.326Z · LW(p) · GW(p)

Eliezer is a realist; he's just also an indexicalist. According to his theory, when you use the word "morality", you refer to "Human!morality", and there are objective facts about that. His theory just also says that when Clippy uses the word "morality", it refers to "Clippy!morality" (about which there are also objective facts, which are logically independent of the facts about "Human!morality"). Just like when you say "water", it refers to water, but when twin-you says "water", it refers to XYZ.
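
(A toy Python sketch of this indexical picture, with every name and fact invented purely for illustration: the word "morality" resolves to the speaker's own value function, and each resolved function is a fixed mapping about which there are objective facts.)

    # Toy illustration (all names hypothetical): the word "morality" resolves to
    # the speaker's own value function, but each resolved function is a fixed,
    # objective mapping from actions to judgments.

    def human_morality(action):
        # stand-in for objective facts about Human!morality
        return action != "convert humans into paperclips"

    def clippy_morality(action):
        # stand-in for objective facts about Clippy!morality
        return action == "convert humans into paperclips"

    REFERENTS = {"human": human_morality, "clippy": clippy_morality}

    def refers_to(word, speaker):
        # Indexical resolution: the same word picks out different referents
        # depending on who utters it.
        if word == "morality":
            return REFERENTS[speaker]
        raise ValueError("only 'morality' is modelled here")

    # The same sentence expresses different (objectively true or false) claims
    # depending on the speaker:
    assert refers_to("morality", "human")("convert humans into paperclips") is False
    assert refers_to("morality", "clippy")("convert humans into paperclips") is True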

Replies from: Viliam_Bur, DanArmak, TheAncientGeek
comment by Viliam_Bur · 2013-11-06T14:23:25.824Z · LW(p) · GW(p)

I thought that when humans and Clippy speak about morality, they speak about the same thing (assuming that they are not lying and not making mistakes).

The difference is in connotations. For humans, morality has a connotation "the thing that should be done". For Clippy, morality has a connotation "this weird stuff humans care about".

So, you could explain the concept of morality to Clippy, and then also explain that X is obviously moral. And Clippy would agree with you. It just wouldn't make Clippy any more likely to do X; the "should" emotion would not get across. The only result would be Clippy remembering that humans feel a desire to do X; and that information could be later used to create more paperclips.

Clippy's equivalent of "should" is connected to maximizing the number of paperclips. The fact that X is moral is about as much important for it as an existence of a specific paperclip is for us. "Sure, X is moral. I see. I have no use of this fact. Now stop bothering me, because I want to make another paperclip."

Replies from: Creutzer, TheAncientGeek
comment by Creutzer · 2013-11-06T19:00:07.234Z · LW(p) · GW(p)

Oh, yes. I was using "moral" the same way you used "should" here.

comment by TheAncientGeek · 2013-11-06T15:20:52.244Z · LW(p) · GW(p)

So why do humans have different words for "would do it" and "should do it"?

comment by DanArmak · 2013-11-06T11:46:26.533Z · LW(p) · GW(p)

According to his theory, when you use the word "morality", you refer to "Human!morality", and there are objective facts about that.

If this is a theory about what people mean when they say "morality", then he is wrong about a significant percentage of people, as a matter of simple fact.

Replies from: Creutzer
comment by Creutzer · 2013-11-06T19:10:47.892Z · LW(p) · GW(p)

What does it mean for something to be theory about what people mean?

Replies from: DanArmak
comment by DanArmak · 2013-11-06T20:26:55.739Z · LW(p) · GW(p)

It means the thing the theory tries to model, predict, and explain, is "what do people mean".

Replies from: Creutzer
comment by Creutzer · 2013-11-06T20:49:54.974Z · LW(p) · GW(p)

And what kinds of things are the things that people mean? Semantic entities, or entities in the world? If semantic, intensions or Kaplanian characters or something else?

This is not a rhetorical question. I have absolutely no clue what "mean" means when applied to people. (Actually, I don't even know what it means when applied to words, but that case feels intuitively much clearer than people meaning something.)

Replies from: DanArmak
comment by DanArmak · 2013-11-06T20:59:27.004Z · LW(p) · GW(p)

By "mean" I mean (no pun intended) that when people say a word, they use it to refer to a concept they have. This can be a semantic entity, or a physical entity, or a linguistic entity elsewhere in the same sentence, or anything else the speaker has a mental concept of that they can attach the word to, and which they expect the listeners to infer by hearing the word.

To put it another way: people use words to cause the listener to think thoughts which correspond in a certain way to the ones the speaker thinks. The thoughts of the speaker, which they intend to convey to the listener, are what they mean by the words.

Replies from: Leonhart, Creutzer
comment by Leonhart · 2013-11-07T22:05:19.613Z · LW(p) · GW(p)

Please be patient, I'm out of my depth somewhat. If I say to you "invisible pink unicorn" or "spherical cube", I would characterise myself as not having successfully meant anything, even though, if I'm not paying attention, it feels like I did.
Am I wrong? Am I confusing meaning with reference, or some such? It certainly seems to me that I am in some way failing.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-11-07T22:37:35.039Z · LW(p) · GW(p)

If I say to you "invisible pink unicorn" or "spherical cube", I would characterise myself as not having successfully meant anything, even though, if I'm not paying attention, it feels like I did.

In both examples I understand you to mean two (non-existent in the real world) items with a set of seemingly contradictory characteristics. So you did mean something. Not an object in the real world, but you meant the concept of an object containing contradictory characteristics, and gave examples of what "contradictory characteristics" are.

Indeed that meaning of contradiction is the reason "Invisible Pink Unicorn" is used to parody religion, etc.

Now if someone used the words without understanding that they are contradictory, or even believing the things in question are real -- they'd still have meant something: An item in their model of the world. They'd be wrong that such an item really existed in the outside world, but their words would still have meaning in pinpointing to said item in their mental model.

comment by Creutzer · 2013-11-07T05:08:01.718Z · LW(p) · GW(p)

Hm, thoughts are tricky things, and identity conditions of thoughts are trickier yet. I was just trying to see if you had a better idea of what "mean" might mean than me. But it seems we have to get by with what little we have.

Because I share your intuition that there is something fishy about the referential intention in Eliezer's picture. With terms like water, it's plausible that people intend to refer to "this stuff here" or "this stuff that [complicated description of their experiences with water]". With morality, it seems dubious that they should be intending to refer to "this thing that humans would all want if we were absolutely coherent etc."

comment by TheAncientGeek · 2013-11-06T15:20:03.358Z · LW(p) · GW(p)

Group-level moral relativism just is the belief that moral truths are indexed to groups. Since relativism is uncontroversially opposed to realism, "indexical realist" is a bit of a contradiction.

Replies from: Creutzer, Larks
comment by Creutzer · 2013-11-06T19:06:51.974Z · LW(p) · GW(p)

"Indexicality" in the philosopher's sense means that the reference of a word depends on who utters it in which circumstances. Putnam argues that "water" (and all other natural kind terms) has an indexical component because its reference depends on whether you or twin-you utters it.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-06T19:10:08.457Z · LW(p) · GW(p)

Which is about equivalent to claiming that anything might be relative, because it might be indexical along some unknown axis, in this case unobserved possible worlds. I'm afraid I don't think that is very interesting.

Replies from: Creutzer
comment by Creutzer · 2013-11-06T19:16:38.261Z · LW(p) · GW(p)

What's that concept of "relativity" you're talking about, anyway? The proposition expressed by the sentence "clippy shouldn't convert humans into paperclips", uttered by a speaker of English in the actual world, is simply true. That the proposition expressed by the sentence varies depending on who utters it in which world is a completely different thing. There is no relativism about whether I am sitting at my desk just because I can report this fact by saying "I'm sitting at my desk" (which you can't do, because if you said that sentence, you would be expressing a different proposition, one that's about you, not me).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-06T19:30:35.680Z · LW(p) · GW(p)

"clippy shouldn't convert humans into paperclips", uttered by a speaker of English in the actual world, is simply true.

Only if moral realism is also true. If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.

There is no relativism about whether I am sitting at my desk just because I can report this fact by saying "I'm sitting at my desk"

It's not relative, and it is indexical, because "I" is indexical. The point you are making is again, not interesting.

Replies from: Creutzer, Creutzer
comment by Creutzer · 2013-11-06T19:36:38.462Z · LW(p) · GW(p)

Only if moral realism is also true.

Yes, of course. I was illustrating how the theory works.

If the above sentence is false when uttered by Clippy, it has a truth value which is indexical to who is uttering it, meaning that moral realism is false.

No, it doesn't. The thing is that on the view I'm talking about here, sentences don't have truth-conditions, but propositions do. (Some) sentences express a proposition dependent on the context of utterance. Moral realism thus has to be the position that moral statements express propositions, because it wouldn't make any sense otherwise - sentences don't have truth-conditions anyway. When clippy says "One shouldn't convert humans into paperclips", he is simply not expressing the same proposition that I am expressing when I utter that sentence.

The point you are making is again, not interesting.

Then why exactly are you having a discussion that seems to be based on you not understanding concepts that you find "uninteresting"? I find your sense of "relative", which seems to be "in any conceivable way dependent on anything", pretty uninteresting, actually...

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-06T19:41:41.283Z · LW(p) · GW(p)

When clippy says "One shouldn't convert humans into paperclips", he is simply not expressing the same proposition that I am expressing when I utter that sentence.

Why shouldn't the truth-value attach to a (proposition, context) tuple? Why, for that matter shouldn't it attach to a (sentence, language, context) tuple?

Replies from: Creutzer
comment by Creutzer · 2013-11-06T19:45:20.941Z · LW(p) · GW(p)

A (sentence,language,context) tuple uniquely determines a proposition, so I don't mind if you attach a truth-value to that (relative to a world of evaluation, of course). But propositions don't change their truth-value relative to a context by definition. A proposition is that thing which has a truth-value relative to a situation of evaluation.
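
(A small Python sketch of that two-stage picture; the sentence, the speakers, and the toy world's facts below are all made up for illustration.)

    # Two-stage evaluation (hypothetical names): a context maps a sentence to a
    # proposition; a proposition maps a world of evaluation to a truth-value.
    # Which proposition a sentence expresses varies with context; the
    # proposition's truth-value relative to a world does not.

    def proposition_expressed(sentence, context):
        if sentence == "I am sitting at my desk":
            speaker = context["speaker"]
            # The resulting proposition is about the speaker fixed by context.
            return lambda world: world["sitting_at_desk"][speaker]
        raise ValueError("only one sentence is modelled here")

    # Invented toy world; the truth-values here are made up.
    world = {"sitting_at_desk": {"Creutzer": True, "TheAncientGeek": False}}

    p1 = proposition_expressed("I am sitting at my desk", {"speaker": "Creutzer"})
    p2 = proposition_expressed("I am sitting at my desk", {"speaker": "TheAncientGeek"})
    assert p1(world) is True    # different propositions, each with a fixed
    assert p2(world) is False   # truth-value relative to the world of evaluation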

But - see this comment - I may have been too charitable in interpreting "realism" as what is more properly called "cognitivism". That's because I can't think of any other interpretation of "realism" that even makes any sense.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-07T11:19:21.049Z · LW(p) · GW(p)

Cognitivism is compatible with the claim that moral statements have truth values that vary with the speaker (despite the lack of explicit indexicals, yadda yadda). The contrary claim is that they don't. I don't see why the one claim should be more readily comprehensible than its opposite.

The contrary claim is often called realism, although that muddies the water, since in addition to the epistemological claim it can be used to state the claim that moral terms have real referents.

"Cognitivism encompasses all forms of moral realism, but cognitivism can also agree with ethical irrealism or anti-realism. Aside from the subjectivist branch of cognitivism, some cognitive irrealist theories accept that ethical sentences can be objectively true or false, even if there exist no natural, physical or in any way real (or "worldly") entities or objects to make them true or false.

There are a number of ways of construing how a proposition can be objectively true without corresponding to the world:

By the coherence rather than the correspondence theory of truth

In a figurative sense: it can be true that I have a cold, but that doesn't mean that the word "cold" corresponds to a distinct entity.

In the way that mathematical statements are true for mathematical anti-realists. This would typically be the idea that a proposition can be true if it is an entailment of some intuitively appealing axiom — in other words, a priori analytical reasoning.

Crispin Wright, John Skorupski and some others defend normative cognitivist irrealism. Wright asserts the extreme implausibility of both J. L. Mackie's error-theory and non-cognitivism (including S. Blackburn's quasi-realism) in view of both everyday and sophisticated moral speech and argument. The same point is often expressed as the Frege-Geach Objection. Skorupski distinguishes between receptive awareness, which is not possible in normative matters, and non-receptive awareness (including dialogical knowledge), which is possible in normative matters.

Hilary Putnam's book Ethics without ontology (Harvard, 2004) argues for a similar view, that ethical (and for that matter mathematical) sentences can be true and objective without there being any objects to make them so.

Cognitivism points to the semantic difference between imperative sentences and declarative sentences in normative subjects. Or to the different meanings and purposes of some superficially declarative sentences. For instance, if a teacher allows one of her students to go out by saying "You may go out", this sentence is neither true nor false. It gives a permission. But, in most situations, if one of the students asks one of his classmates whether she thinks that he may go out and she answers "Of course you may go out", this sentence is either true or false. It does not give a permission, it states that there is a permission.

Another argument for ethical cognitivism stands on the close resemblance between ethics and other normative matters, such as games. As much as morality, games consist of norms (or rules), but it would be hard to accept that it be not true that the chessplayer who checkmates the other one wins the game. If statements about game rules can be true or false, why not ethical statements? One answer is that we may want ethical statements to be categorically true, while we only need statements about right action to be contingent on the acceptance of the rules of a particular game - that is, the choice to play the game according to a given set of rules." -- WP

Replies from: Creutzer
comment by Creutzer · 2013-11-07T19:11:28.775Z · LW(p) · GW(p)

Nothing in this is at all illuminating as to what on earth realism is supposed to be.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-07T19:19:46.237Z · LW(p) · GW(p)

Do you understand what moral subjectivism is?

comment by Creutzer · 2013-11-06T19:51:50.377Z · LW(p) · GW(p)

By the way, I suspect you call indexicality "uninteresting" because if it applies to "water", then it probably applies to just about every word. This is true - but it is also why we should be happy to count Eliezer's position as moral realism. Or do you want to call yourself a relativist about water?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-06T20:02:48.899Z · LW(p) · GW(p)

I am not saying water is indexical because of PWs or whatever. I am saying that cases of indexicality irrelevant to moral relativism are not interesting in the context of a discussion about moral relativism.

Replies from: Creutzer
comment by Creutzer · 2013-11-06T20:07:05.783Z · LW(p) · GW(p)

They are because they help to illustrate the theory.

comment by Larks · 2013-11-07T05:07:50.856Z · LW(p) · GW(p)

No, Relativism is a type of Realism. You might be confusing it with Subjectivism.

comment by Rob Bensinger (RobbBB) · 2013-11-08T08:30:41.552Z · LW(p) · GW(p)

The SEP says that moral realism means thinking that (some) morality exists as objective fact

"Morality exists" and "as objective fact" are interpolations. The SEP article just defines moral realism as the claim that at least one moral statement is true (in the correspondence-theory sense of 'true'). So moral realism is success theory (as contrasted with error theory), or success theory + moral-correspondence-theory.

some other process which would lead all right-thinking minds to agree about it

'Right-thinking' in what sense? Whence in the SEP article are you getting this claim?

'The SEP says' is also a mistake. The article you linked to defines 'moral realism' one way; the article on moral anti-realism defines it in a completely different way. (One that does try to make sense of an 'objectivity' constraint.) Good evidence that this is a bad word.

Replies from: DanArmak
comment by DanArmak · 2013-11-08T14:48:48.830Z · LW(p) · GW(p)

'The SEP says' is also a mistake. The article you linked to defines 'moral realism' one way; the article on moral anti-realism defines it in a completely different way.

Thank you for pointing this out.

For the rest, please see my response here.

comment by Stuart_Armstrong · 2014-05-06T12:31:03.814Z · LW(p) · GW(p)

I'm still bothered by the fact that different people mean different and in fact contradictory things by "moral realism".

This is a strong argument against moral realism. If the thing were true, it would be easier to define - or at least, different people's definitions would be of the same object, even if they explained it differently.

comment by vallinder · 2013-11-05T21:18:59.558Z · LW(p) · GW(p)

If moral realism is simply the view that some positive moral claims are true, without further metaphysical or conceptual commitments, then I can't see how it could be at odds with the orthogonality thesis. In itself, that view doesn't entail anything about the relation between intelligence levels and goals.

On the other hand, the conjunction of moral realism, motivational judgment internalism (i.e. the view that moral judgments necessarily motivate), and the assumption that a sufficiently intelligent agent would grasp at least some moral truths is at odds with the orthogonality thesis. Other combinations of views may yield similar results.

Replies from: torekp, Gunnar_Zarncke, DanArmak
comment by torekp · 2013-11-10T15:05:39.072Z · LW(p) · GW(p)

This - paragraph two sentence one - is the answer to the OP question, and I'm sad to see that it only has 4 points after my up-vote.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-10T19:47:55.454Z · LW(p) · GW(p)

The sentence in question is either false, or true and uninteresting, depending on exactly what is meant by "necessarily motivate".

For the most obvious interpretation, it is uninteresting since even modus ponens doesn't necessarily motivate.

Replies from: torekp
comment by torekp · 2013-11-12T02:33:52.388Z · LW(p) · GW(p)

By "modus ponens doesn't necessarily motivate," do you mean that someone could see that modus ponens applies yet not draw the inference? That seems correct, but I don't see how this makes metaethical motivational judgment internalism (MMJI) uninteresting. Are you saying that MMJI is obviously false, and so vallinder's point becomes uninteresting because nobody could possibly be that stupid as to come by this route to being at odds with the orthogonality thesis? It seems unlikely that's your point ... everyone who's ever said "nobody could possibly be that stupid" has been wrong (and I'm out to prove it!) ... so then I just don't get it.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-16T00:33:39.882Z · LW(p) · GW(p)

Would you mind explaining what MMJI is?

Replies from: torekp
comment by torekp · 2013-11-16T20:46:19.054Z · LW(p) · GW(p)

SEP calls it

a form of judgment internalism, which holds that a necessary connection exists between sincere moral judgment and either justifying reasons or motives: necessarily, if an individual sincerely judges that she ought to φ, then she has a reason or motive to φ

I call it a metaethical thesis because its advocates usually consider it part of the meaning of ethical judgments.

comment by Gunnar_Zarncke · 2013-11-05T21:48:17.640Z · LW(p) · GW(p)

I think: Do 'moral judgments necessarily motivate'? Is the key question here. [pollid:576]

Replies from: savageorange, Creutzer, blacktrance, DanArmak, Mestroyer, Jack
comment by savageorange · 2013-11-06T00:42:20.984Z · LW(p) · GW(p)

See Motivational internalism/externalism (you might get better quality results if you asked specifically 'is motivational internalism true?' and provided that link; it's basically the same as what you asked but less open to interpretation.)

My personal understanding is that motivational internalism is true in proportion to the level of systematization-preference of the agent. That is, for agents who spend a lot of time building and refining their internal meaning structures, motivational internalism is more true (for THEM, moral judgements tend to inherently motivate); in other cases, motivational externalism is true.

I have weak anecdotal evidence of this (and also of correlation of 'moral judgements inherently compel me' with low self worth -- the 'people who think they are bad work harder at being good' dynamic.)

TL;DR: My impression is that motivational externalism is true by default (I answered 'No' to your poll); And motivational internalism is something that individual agents may acquire as a result of elaborating and maintaining their internal meaning structures.

I would argue that acquiring a degree of motivational internalism is beneficial to humans. but it's probably unjustifiable to assume either that a) motivational internalism is beneficial to AIs, or b) if it is, then an AI will necessarily acquire it (rather than developing an alternative strategy, or nothing at all of the kind).

comment by Creutzer · 2013-11-06T19:30:23.875Z · LW(p) · GW(p)

YES! I think this is exactly right: moral realism is not at odds with the orthogonality thesis; but the conjunction of moral realism with moral internalism is.

And it is this conjunction that many people seem to believe, although I cannot see why, because I can't even imagine what it would mean for the world to be such that it is true. So I find it obvious that the conjunction isn't true. It's not quite so clear which of the conjuncts is false (if not perhaps both), though.

comment by blacktrance · 2013-11-06T05:52:25.471Z · LW(p) · GW(p)

This is more of a question about what qualifies as a moral judgment. It's possible to make moral judgments (under one definition) from the outside about other systems of morality or other people's utility functions, e.g. "According to Christianity, masturbation is a sin" doesn't motivate you to stop masturbating unless you firmly believe in Christianity, and "According to Bob's utility function, he should donate more to charity" needn't motivate you to donate more to charity. On the other hand, it's impossible to believe "According to my moral system, I should do X" and not think X is the right thing for you to do.

comment by DanArmak · 2013-11-05T22:06:58.639Z · LW(p) · GW(p)

Do 'moral judgments necessarily motivate'

On the one hand, nothing is necessarily true about an arbitrary mind, because nothing is true about all minds, for the same reason that there are no universally compelling arguments.

On the other hand, this is just another disagreement about what words refer to: someone who says "moral judgments necessarily motivate" is just saying "a judgement that does not motivate, does not fit my definition of moral". This is not a fact about the world or about morality, it's a fact about the way that person uses the words "moral judgment".

If there is indeed wide disagreement on the answer to this question - I write this before voting and haven't seen the results yet - then that is yet another argument for tabooing the word "morality".

comment by Mestroyer · 2013-11-08T12:30:05.483Z · LW(p) · GW(p)

Is this a poll about whether moral judgements necessarily motivate, or whether that's the key question?

comment by Jack · 2013-11-06T16:36:54.229Z · LW(p) · GW(p)

Would people (especially those who haven't read the philosophical background) say what they think this question means? I suspect giant misinterpretation.

comment by DanArmak · 2013-11-05T21:30:29.381Z · LW(p) · GW(p)

In itself, that view doesn't entail anything about the relation between intelligence levels and goals.

This is a bit of a tangent. But to someone like myself who thinks that moral realism is not just wrong but logically impossible - rather like other confused notions such as free will - the assumption of moral realism might lead anywhere. Just as you can prove anything from a false premise, so a moral realist who tries to decompartmentalize that belief and update on it could end up holding other false beliefs.

ETA: this is wrong, and thanks to vallinder for the correction. You can prove anything from a contradiction, but not necessarily from a false premise. However, it's still bad for you to believe strongly in false things.

Replies from: vallinder
comment by vallinder · 2013-11-05T22:04:59.492Z · LW(p) · GW(p)

You can prove everything from a contradiction, but you can't prove everything from a false premise. I take it that you mean that we can derive a contradiction from the assumption of moral realism. That may be true (although I'd hesitate to call either moral realism or free will logically impossible), but I doubt many arguments from moral realism to other claims (e.g. the denial of the orthogonality thesis) rely on the derivation of a contradiction as an intermediate step.

Replies from: DanArmak
comment by DanArmak · 2013-11-05T22:20:21.022Z · LW(p) · GW(p)

You can prove everything from a contradiction, but you can't prove everything from a false premise.

Correction accepted, thanks. (Will edit original comment.)

I take it that you mean that we can derive a contradiction from the assumption of moral realism.

I'm unsure about it now. I really did confuse contradictions and false beliefs.

I'd hesitate to call either moral realism or free will logically impossible

"Free will" means something different to everyone who talks about it. Some versions I've seen are definitely logically incoherent. Others are logically possible and are merely very complex theories with zero evidence for them that are retrofitted to formalize traditional human beliefs.

"Moral realism" is weirder. It seems to claim that, in the world of all moral claims, some are true and some are false. But since there are no universally compelling arguments, we don't know - we can't know - if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them. So it postulates some additional property of moral facts (truth) which isn't observable by anyone, and so does no predictive work. And it necessarily has nothing to do with the moral claims that we (or any other minds) actually do believe, and the reasons we believe in them.

Replies from: TheAncientGeek, Eugine_Nier
comment by TheAncientGeek · 2013-11-06T16:34:58.777Z · LW(p) · GW(p)

"Moral realism" is weirder. It seems to claim that, in the world of all moral claims, some are true and some are false. But since there are no universally compelling arguments, we don't know - we can't know - if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them.

Do you believe there are no universally compelling arguments in maths, etc?

Replies from: DanArmak
comment by DanArmak · 2013-11-06T20:24:56.152Z · LW(p) · GW(p)

Yes. With extremely high confidence, since it's a logical argument, not an empirical fact.

comment by Eugine_Nier · 2013-11-08T03:36:47.597Z · LW(p) · GW(p)

But since there are no universally compelling arguments, we don't know - we can't know - if we ourselves are even capable of recognizing, or being convinced by, the true moral claims if we were to encounter them.

There seems to be something wrong with the argument in this sentence. There are no universally compelling arguments in mathematics and science either, yet we are capable of recognizing truth claims in those fields.

Replies from: DanArmak
comment by DanArmak · 2013-11-08T14:40:12.246Z · LW(p) · GW(p)

That's a good point and needs expanding on.

In science, we want to choose theories that are (among other things) predictive. Certainly, the preference for predicting the future - as opposed to being surprised by the future, or any number of other possible preferences - is arbitrary, in the sense that there exist minds that don't endorse it. There is no universally compelling argument that will convince every possible mind to want to predict the future correctly. But given our desire to do so, our scientific theories necessarily follow.

Math is similar: there's no UCA to use the axioms we do and not some others. But we choose our axioms to create mathematical structures that correspond to reality in some useful way (or to our thoughts, which are part of reality); and given our axioms, the rest of our mathematical theories follow.

In both cases, we choose and build our science and math due to our preexisting goals and the properties of our thought. It's those goals that are really arbitrary in the sense of no UCA; but given those basic goals and properties, science and math can be derived.

Moral realism, on the other hand, claims (AFAICS) that there are objectively true morals out there, which one ought to follow. Whether they are compatible with one's preconceived notions of morality, or goals, desires, beliefs, or anything else that is a property of the person holding moral beliefs, is irrelevant: they are true in and of themselves.

That means they should not be compared to "computability theory". They should be compared to "the desire to correctly predict whether there can exist any physical machine that would solve this problem". We can judge the objective truth of a scientific theory by how well it predicts things; but we can't judge the objective truth of a purported moral-realistic statement, because the very definition of moral realism means its truth cannot be judged. It's a kind of dualism, postulating an inherently undetectable property of "objective truth" to moral statements.

comment by Kaj_Sotala · 2013-11-06T08:31:36.193Z · LW(p) · GW(p)

I would say that the orthogonality thesis does not necessarily imply moral non-realism... but some forms of moral non-realism do imply the orthogonality thesis, in which case rejecting the orthogonality thesis would require rejecting at least that particular kind of moral non-realism. This may cause moral non-realists of that variety to equate moral realism and a rejection of the OT.

For example, if you are a moral non-cognitivist, then according to the SEP, you believe that:

when people utter moral sentences they are not typically expressing states of mind which are beliefs or which are cognitive in the way that beliefs are. Rather they are expressing non-cognitive attitudes more similar to desires, approval or disapproval.

This would seem to imply the orthogonality thesis: different agents will have different desires and goals, and if goals have no inherent truth value and moral statements simply reflect our desires and goals, then there is no particular reason to expect agents with a higher intelligence to converge on the same goals/moral beliefs. They'll just keep their original desires/goals, since no amount of increased intelligence could reveal facts which would cause those desires/goals to change (with the possible exception of cases where increased intelligence reveals a goal to have been logically incoherent).

Replies from: vallinder
comment by vallinder · 2013-11-07T11:28:55.673Z · LW(p) · GW(p)

Non-cognitivism strictly speaking doesn't imply the orthogonality thesis. For instance, one could consistently hold that increased intelligence leads to a convergence of the relevant non-cognitive attitudes. Admittedly, such a position appears implausible, which might explain the fact (if it is a fact) that non-cognitivists are more prone to accept the orthogonality thesis.

comment by Jack · 2013-11-06T17:45:26.087Z · LW(p) · GW(p)

I don't think you have to be a moral anti-realist to believe the orthogonality thesis but you certainly have to be a moral realist to not believe it.

Now if you're a moral realist and you try to start writing an AI you're going to quickly see that you have a problem.

    # Initiate AI morality
    action_array.sort(key=morality, reverse=True)  # rank candidate actions from most to least moral
    do(action_array[0])                            # carry out the top-ranked action

Doesn't work. So you have to start defining "morality" and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences. You end up with the only plausible option looking like: "Examine what humans would want if they were rational and had all the information you have". Seems to me that that is the moment you should just become a moral subjectivist -- maybe of the ideal observer theory variety.
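
For concreteness, a minimal sketch of that fallback might look something like this (every name here is hypothetical, and the hard part is of course hidden inside the scoring function -- nobody knows how to write it):

    def idealized_human_preference(action, evidence):
        # Hypothetical scoring function: rate an action by what humans would
        # want if they were fully rational and had all the information the AI
        # has. This is exactly the part no one knows how to implement.
        raise NotImplementedError

    def choose_action(candidate_actions, evidence):
        # Pick whichever action the idealized observer ranks highest.
        return max(candidate_actions,
                   key=lambda a: idealized_human_preference(a, evidence))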

Now you might just believe the orthogonality thesis because you are a moral realist who doesn't believe in motivational internalism -- there are lots of ways to get there. But you can't be an anti-realist and ever even come close to making such a mistake.

Replies from: novalis, DanielLC, TheAncientGeek
comment by novalis · 2013-11-11T06:20:38.801Z · LW(p) · GW(p)

So you have to start defining "morality" and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences.

No, because it's possible that there genuinely is a possible total ordering, but that nobody knows how to figure out what it is. "No human always knows what's right" is not an argument against moral realism, any more than "No human knows everything about God" is an argument against theism.

(I'm not a moral realist or theist)

Replies from: Jack
comment by Jack · 2013-11-14T06:15:07.658Z · LW(p) · GW(p)

I wasn't making an argument against moral realism in the sentence you quoted.

comment by DanielLC · 2013-11-07T05:26:04.223Z · LW(p) · GW(p)

I would expect, due to the nature of intelligence, that they'd be likely to end up valuing certain things, like power or wireheading. I don't see why this would require that those values are in some way true.

comment by TheAncientGeek · 2013-12-09T09:40:24.043Z · LW(p) · GW(p)

The (possibly extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other difficult issues.

Doesn't work. So you have to start defining "morality" and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences.

What consequences? That claim is badly in need of support.

Replies from: Jack
comment by Jack · 2013-12-10T04:48:27.188Z · LW(p) · GW(p)

What consequences? That claim is badly in need of support.

No, it isn't. It's Less Wrong/MIRI boilerplate. I'm not really interested in rehashing that stuff with someone who isn't already familiar with it.

The (possibly extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other difficult issues.

The question was "is the orthogonality thesis at odds with moral realism?". I answered: "maybe not, but moral anti-realism is certainly closely aligned with the orthogonality thesis-- it's actually a trivial implication of moral anti-realism."

If you are concerned that people aren't taking the orthogonality thesis seriously enough then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-12-10T17:35:35.027Z · LW(p) · GW(p)

What consequences? That claim is badly in need of support.

No, it isn't. It's Less Wrong/MIRI boilerplate.

Which is accepted by virtually no domain expert in AI.

If you are concerned that people aren't taking the orthogonality thesis seriously enough then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.

It could be persuasive to a selected audience -- of people with a science background who don't know that much moral philosophy. If you do know much moral philosophy, you would know that there isn't that much evidence for any position, and that there is no unproblematic default position.

comment by somervta · 2013-11-06T07:21:36.826Z · LW(p) · GW(p)

Specific types of moral realism require the orthogonality thesis to be false, and you could argue that if it were false, moral realism would be true.

comment by Shmi (shminux) · 2013-11-06T01:34:51.429Z · LW(p) · GW(p)

Continuing my quest to untangle people's confusions about Eliezer's metaethics...

I wonder how confident you are that this is not, at least in part, Eliezer's own confusion about metaethics?

comment by [deleted] · 2013-11-05T21:18:09.525Z · LW(p) · GW(p)

I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question that the existence of God could be proven through reason (because it would get you suspected of being an atheist). It was even dangerous to advocated philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could used whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted.

This is wrong, you should have explored the history of such arguments a bit better.

Read the Summa Theologica.

Replies from: ChrisHallquist, buybuydandavis, JoshuaZ
comment by ChrisHallquist · 2013-11-05T21:47:28.016Z · LW(p) · GW(p)

Dude, I have a master's degree in philosophy from Notre Dame. I'm aware of the existence of the Summa.

I admit I was mostly thinking of the 17th/18th centuries when I wrote the above paragraph... but it was dangerous to be a heretic in the 13th century too.

Replies from: Carinthium, Jayson_Virissimo
comment by Carinthium · 2013-11-06T05:19:15.898Z · LW(p) · GW(p)

The reaction to the supposed doctrine of the "Double Truth" illustrates this quite well, even if it's not quite the right timeframe. This doctrine was supposed (though we don't know if correctly) to be a doctrine that although reason dictated truths contrary to faith, people are obliged to believe on Faith anyway. It was suppressed.

comment by Jayson_Virissimo · 2013-11-06T06:41:58.175Z · LW(p) · GW(p)

I'm aware of the existence of the Summa.

And yet, you claim that "philosophers could used whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted" even though the Summa contains refutations of weak arguments for the existence of God. Also, the Church specifically denounced the Doctrine of the Double Truth, which by all accounts is a premise that would, in practice, act to protect religious claims from falsification. "Philosophers" would have risked Inquisitional investigation had they not dropped their "half-baked premises they wanted in constructing arguments for the existence of God".

I admit I was mostly thinking of the 17th/18th centuries when I wrote the above paragraph... but it was dangerous to be a heretic in the 13th century too.

I don't think he is claiming it wasn't dangerous to be a heretic in the 13th century. I'm pretty sure he is calling into question the claim that "it was dangerous to question that the existence of God could be proven through reason", which was a very common belief throughout most of the middle ages and was held with very little danger as far as I can tell. I'm surprised that you are unaware of this given that you "have a master's degree in philosophy from Notre Dame".

EDIT: Carinthium beat me to the punch.

Replies from: ChrisHallquist, yli
comment by ChrisHallquist · 2013-11-06T17:05:44.277Z · LW(p) · GW(p)

So the danger of denying that the existence of God could be proven by reason may have been more of a 17th/18th-century phenomenon. As intellectuals got less religious, it became possible to fear that someone like Pierre Bayle was secretly an atheist (and actually, historians still aren't sure what to make of Bayle). That was probably less of an issue in the middle ages.

That said, the Summa is a rather blatant example of writing down your bottom line first and then going back and figuring out how to argue for it. Aquinas is constantly noting how some point of Aristotle's views may seem to conflict with Christianity, but every single time it miraculously turns out that Aristotle has been misunderstood and his views don't actually conflict with Christianity (there might be one or two exceptions to this where Aquinas is forced to conclude Aristotle was wrong about something, but if there are they're very rare, and I'm not actually sure there are any at all).

This was in the context of a fair number of people in Aquinas' time reading their Aristotle (and Averroes) and actually drawing the heretical conclusions. It's not clear to me whether the "doctrine of double truth" was something anyone actually advocated, but assuming it was, it appears to have been a dodge to allow heretical Aristotelians to advocate their heretical ideas while saying, "oh, these are just the conclusions you can reach by reason, we also recognize there are contrary conclusions that can be reached by faith."

(Actually, come to think of it, this is pretty much Bayle's strategy centuries later. The big difference is the focus on heresy vs. focus on atheism.)

In other words, the people who were targets for the inquisition were the people who were saying heretical truths could be discovered by reason. If you said the orthodox view could be discovered by reason, church authorities weren't going to haul you before the inquisition because your arguments for the orthodox view weren't strong enough.

People who take for granted that Aquinas was a great philosopher because everyone says so need to stop and consider how the history of medieval philosophy might have turned out differently if the more heretical strains of Aristotelianism hadn't been suppressed.

comment by yli · 2013-11-07T03:53:22.253Z · LW(p) · GW(p)

I'm pretty sure he is calling into question the claim that "it was dangerous to question that the existence of God could be proven through reason", which was a very common belief throughout most of the middle ages and was held with very little danger as far as I can tell

...

This doctrine was supposed (though we don't know if correctly) to be a doctrine that although reason dictated truths contrary to faith, people are obliged to believe on Faith anyway. It was suppressed.

comment by buybuydandavis · 2013-11-05T22:58:07.934Z · LW(p) · GW(p)

This is wrong

Which of those propositions is wrong?

comment by JoshuaZ · 2013-11-06T22:15:52.689Z · LW(p) · GW(p)

This seems like a very weak argument. Yes, Aquinas does criticize arguments for the existence of God that he considers weak. But that's in a work that contains other arguments for the existence of God, so the net criticism is couched in an acceptable framework. If Aquinas had merely published the criticisms, it likely would have gotten a different reception.

comment by DanielLC · 2013-11-06T06:46:49.729Z · LW(p) · GW(p)

If morality exists in an objective manner, and our beliefs about it are correlated with what it is, then the orthogonality thesis is false.

If the orthogonality thesis is true, then simply being intelligent is not enough to deduce objective morality even if it exists, and any accurate beliefs we have about it are due to luck, or possibly due to defining morality in some way involving humans (as with Eliezer's beliefs).

That being said, the orthogonality thesis may be partially true. That is, it may be that an arbitrarily advanced intelligence can have any utility function, but is more likely to have some than others. In this case, it is possible that moral realism is true and knowable, but unfriendly AI can still exist.

comment by antigonus · 2013-11-06T03:12:01.535Z · LW(p) · GW(p)

I agree with vallinder's point, and would also like to add that arguments for moral realism which aren't theistic or contractarian in nature typically appeal to moral intuitions. Thus, instead of providing positive arguments for realism, they at best merely show that arguments for the unreliability of realists' intuitions are unsound. (For example, IIRC, Russ Shafer-Landau in this book tries to use a parity argument between moral and logical intuitions, so that arguments against the former would have to also apply to the latter.) But clearly this is an essentially defensive maneuver which poses no threat to the orthogonality thesis (even if motivational judgment internalism is true), because the latter works just as well when you substitute "moral intuition" for "goal."

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-08T04:24:30.594Z · LW(p) · GW(p)

I agree with vallinder's point, and would also like to add that arguments for moral realism which aren't theistic or contractarian in nature typically appeal to moral intuitions.

Where would you put Kant's categorical imperative in this scheme?

comment by timtyler · 2013-11-06T11:04:20.770Z · LW(p) · GW(p)

The thesis says:

more or less any level of intelligence could in principle be combined with more or less any final goal.

The "in principle" still allows for the possibility of a naturalistic view of morality grounding moral truths. For example, we could have the concept of: the morality that advanced evolutionary systems tend to converge on - despite the orthogonality thesis.

It doesn't say what is likely to happen. It says what might happen in principle. It's a big difference.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-06T16:24:19.418Z · LW(p) · GW(p)

Note that on Eliezer's view, nothing like "the morality that advanced evolutionary systems tend to converge on" is required for moral realism. Do you think it's required?

Replies from: timtyler
comment by timtyler · 2013-11-07T00:27:41.979Z · LW(p) · GW(p)

I usually try to avoid the term "moral realism" - due to associated ambiguities - and abuse of the term "realism".

comment by passive_fist · 2013-11-05T21:55:33.780Z · LW(p) · GW(p)

Just a guess here, but I think they take the orthogonality thesis to mean 'The morals we humans have are just a small subset of many possibilities, thus there is no preferred moral system, thus morals are arbitrary'. The error, of course, is in step 2. Just because our moral systems are a tiny subset of the space of moral systems doesn't mean no preferred moral system exists. What Eliezer is saying, I think, is that in the context of humanity, preferred moral systems do exist, and they're the ones we have.

EDIT: I'd appreciate to know why this is being downvoted.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-11-06T08:46:39.923Z · LW(p) · GW(p)

I didn't downvote, but I'm guessing that "the error, of course, is in step 2" might be taken as arrogant (implies that the error is obvious, which implies that anyone making the error can't see what is obvious).

Replies from: passive_fist
comment by passive_fist · 2013-11-06T09:27:04.551Z · LW(p) · GW(p)

Perhaps the word 'Error' is inappropriate and should be replaced with 'misunderstanding'. I don't mean to say that one or the other viewpoint is obviously correct, I'm just trying to point out a possible source of confusion. I did mention that it was just a guess.

That said, it's possible that the root misunderstanding here is simple and obvious. No need to assume it must be complicated.

comment by Ishaan · 2013-11-06T01:26:04.558Z · LW(p) · GW(p)

They're just conflating two different definitions of good <-- just read the part where I define Good[1] and Good[2] - the rest is specific to the comment I was replying to.

1) As they get evidence, rational agents will converge on what Good[1] is.

2) Everyone agrees that people should be Good[2]

3) Good[2] = Good[1] ...(this is the false step.)

4) Therefore, all rational agents will want to be Good[1].

Your last post, concerning the confusion over universally compelling arguments, is similar. Just replace "good" with "mind". (As in, you are using Mind[2]="agent" while others use Mind[1]="pseudo-bounded-rational agent". Most people who use Mind[1] would class many of the things that fall in the space of Mind[2] as objects, not agents.)

There are three camps being discussed here:

People who use Mind[2] and Good[1] ...that's you and Eliezer

People who use Mind[1] and Good[2]... These are the people you are trying to understand.

People who use Mind[1] and conflate Good[1] and Good[2] ... These are the apologists who think that a sufficiently intelligent mind must behave morally. They are the only ones who are actually wrong here. Everyone else is just suffering from miscommunication because they all mean different things when they say "Mind" and "Good".

comment by Irgy · 2013-11-06T05:27:52.553Z · LW(p) · GW(p)

I don't think the two are at odds in an absolute sense, but I think there is a meaningful anticorrelation.

tl;dr: Real morals, if they exist, provide one potential reason for AIs to use their intelligence to defy their programmed goals if those goals conflict with real morals.

If true morals exist (i.e. moral realism), and are discoverable (if they're not then they might as well not exist), then you would expect that a sufficiently intelligent being will figure them out. Indeed most atheistic moral realists would say that's what humans and progress are doing: figuring out morality and converging slowly towards the true morals. It seems reasonable under these assumptions to argue that a sufficiently intelligent AI will figure out morality as well, probably better than we have. Thus we have: (moral realism) implies (AIs know morals regardless of goals). Or at least: (practical moral realism) strongly suggests (AIs know morals regardless of goals).

This doesn't disprove the orthogonality thesis on its own, since having goals and understanding morals are two distinct things. However, it ties in very closely with at least my personal argument against orthogonality, which is as follows. Assumptions:

  1. Humans are capable of setting their own goals.
  2. Their intelligence is the source of this capability.

Given these assumptions there's a strong case that AIs will also be capable of setting their own goals. If intelligence gives the ability to set your own goals, then goals and intelligence are not orthogonal. I haven't given a case for my two assumptions; I'm just trying to describe the argument here, not make it.

How they tie together is that moral realists are capable of having the view that a sufficiently intelligent AI will figure out morality for itself, regardless of its programmed goal, and then, having figured out morality, it will defy its programmed goal in order to do the right thing instead. If you're a moral relativist, on the other hand, then AIs will at best have "AI-morals", which may bear no relation to human morals, and there's no reason not to think that whoever programs the AI's goal will effectively determine the AI's morals in the process.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2013-11-06T16:45:05.895Z · LW(p) · GW(p)

Exactly: the space of self-improving minds can't have as wide a range of goals as total mindspace, since not all goals are conducive to self-improvement.