A Defense of Naive Metaethics
post by Will_Sawin · 2011-06-09T17:46:12.624Z · LW · GW · Legacy · 295 comments
I aim to make several arguments in this post that we can make statements about what should and should not be done which cannot be reduced, by definition, to statements about the physical world.
A Naive Argument
Lukeprog says this in one of his posts:
If someone makes a claim of the 'ought' type, either they are talking about the world of is, or they are talking about the world of is not. If they are talking about the world of is not, then I quickly lose interest because the world of is not isn't my subject of interest.
I would like to question that statement. I would guess that lukeprog's chief subject of interest is figuring out what to do with the options presented to him. His interest is, therefore, in figuring out what he ought to do.
Consider the reasoning process that takes him from observations about the world to actions. He sees something, and then thinks, and then thinks some more, and then decides. Moreover, he can, if he chooses, express every step of this reasoning process in words. Does he really lose interest at the last step?
My goal here is to get people to feel the intuition that "I ought to do X" means something, and that thing is not "I think I ought to do X" or "I would think that I ought to do X if I were smarter and some other stuff".
(If you don't, I'm not sure what to do.)
People who do feel that intuition run into trouble. This is because "I ought to do X" does not refer to anything that exists. How can you make a statement that doesn't refer to anything that exists?
I've done it, and my reasoning process is still intact, and nothing has blown up. Everything seems to be fine. No one has explained to me what isn't fine about this.
Since it's intuitive, why would you not want to do it that way?
(You can argue that certain words, for certain people, do not refer to what one ought to do. But it's a different matter to suggest that no word refers to what one ought to do beyond facts about what is.)
A Flatland Argument
"I'm not interested in words, I'm interested in things. Words are just sequences of sounds or images. There's no way a sequence of arbitrary symbols could imply another sequence, or inform a decision."
"I understand how logical definitions work. I can see how, from a small set of axioms, you can derive a large number of interesting facts. But I'm not interested in words without definitions. What does "That thing, over there?" mean? Taboo finger-pointing."
"You can make statements about observations, that much is obvious. You can even talk about patterns in observations, like "the sun rises in the morning". But I don't understand your claim that there's no chocolate cake at the center of the sun. Is it about something you can see? If not, I'm not interested."
"Claims about the past make perfect sense, but I don't understand what you mean when you say something is going to happen. Sure, I see that chair, and I remember seeing the chair in the past, but what do you mean that the chair will still be there tomorrow? Taboo "will"."
Not every set of claims is reducible to every other set of claims. There is nothing special about the set "claims about the state of the world, including one's place in it and ability to affect it." If, however, you add ought-claims, then you get a very special set: the set of all information you need to make correct decisions.
I can't see a reason to make claims that aren't reducible, by definition, to that.
The Bootstrapping Trick
Suppose an AI wants to find out what Bob means when he says "water". The AI could ask him whether various items are or are not water. But Bob might get temporarily confused in any number of ways - he could mix up his words, he could hallucinate, or anything else. So the AI decides instead to wait. The AI will give Bob time, and everything else he needs, to make the decision. In this way - by giving Bob all the abilities he needs to carry out his abstract concept of a process that decides whether something is or is not "water" - the AI can duplicate that process.
The following statement is true:
A substance is water (in Bob's language) if and only if Bob, given all the time, intelligence, and other resources he wants, decides that it is water.
But this is certainly not the definition of water! Imagine if Bob used this criterion to evaluate what was and was not water. He would suffer from an infinite regress. The definition of water is something else. The statement "This is water" reduces to a set of facts about this, not a set of facts about this and Bob's head.
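To make the regress concrete, here is a minimal sketch in Python (the function names are hypothetical, chosen only to illustrate the circularity):

```python
def bob_decides(substance):
    # Bob, given all the time and resources he wants, applies his definition of "water".
    return is_water(substance)

def is_water(substance):
    # If "water" were *defined* as "whatever Bob decides is water",
    # then checking the definition means running Bob's decision procedure...
    return bob_decides(substance)  # ...which checks the definition again, forever.

# is_water("the clear stuff in this glass")  # would raise RecursionError
```

The biconditional is true of Bob, but it cannot serve as his definition, because it never bottoms out in facts about the substance itself.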
The extension to morality should be obvious.
What one is forced to do by this argument, if one wants to speak only in physical statements, is to say that "should" has a really, really long definition that incorporates all components of human value. When a simple word has a really, really long definition, we should worry that something is up.
Well, why does it have a long definition? It has a long definition because that's what we believe is important. To say that people who use "should" (in this sense) to mean different things merely disagree about definitions is to paper over the fact that they disagree about what's important.
What do I care about?
In this essay I talk about what I believe rather than what I care about. What I care about seems like an entirely emotional question to me. I cannot Shut Up And Multiply about what I care about. If I do, in fact, Shut Up and Multiply, then it is because I believe that doing so is right. Suppose I believe that my future emotions will follow multiplication. I would then have to believe that I am going to self-modify into someone who multiplies. I would only do this because of a belief that doing so is right.
Belief and logical reasoning are an important part of how people on lesswrong think about morality, and I don't see how to incorporate them into a metaethics based not on beliefs, but on caring.
295 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2011-06-09T18:33:11.126Z · LW(p) · GW(p)
I share your skepticism about Luke's statement (but I've been waiting to criticize until he finishes his sequence to see if he addresses the problems later).
My goal here is to get people to feel the intuition that "I ought to do X" means something, and that thing is not "I think I ought to do X" or "I would think that I ought to do X if I were smarter and some other stuff".
To help pump that intuition, consider this analogy:
"X is true" (where X is a mathematical statement) means something, and that thing is not "I think X is true" or "I would think that X is true if I were smarter and some other stuff".
On the other hand, I think it's also possible that "I ought to do X" doesn't really mean anything. See my What does a calculator mean by "2"?. (ETA: To clarify, I mean some usages of "ought" may not really mean anything. There are some usages that clearly do, for example "If you want to accomplish X, then you ought to do Y" can in principle be straightforwardly reduced to a mathematical statement about decision theory, assuming that our current strong intuition that there is such a thing as "the right decision theory" is correct.)
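(As a toy illustration of that last reduction, with an invented goal, action set, and probabilities - a sketch, not a claim about the correct decision theory - the instrumental "ought" can be written as an arg max over actions:)

```python
# Hypothetical example: "If you want to stay dry, you ought to take an umbrella."
# Reduced to: the action that maximizes the probability of the goal.
p_dry_given_action = {"take_umbrella": 0.95, "leave_umbrella": 0.40}

def instrumental_ought(goal_probs):
    # "You ought to do Y" = "Y is the arg max of P(goal | action)".
    return max(goal_probs, key=goal_probs.get)

print(instrumental_ought(p_dry_given_action))  # -> take_umbrella
```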
Replies from: lukeprog, None
↑ comment by lukeprog · 2011-06-10T20:11:04.763Z · LW(p) · GW(p)
Wei Dai,
I would prefer to hear the source of your skepticism now, if possible. I anticipate not actually disagreeing. I anticipate that we will argue it out and discover that we agree but that my way of expressing my position was not clear to you at first. And then I anticipate using this information to improve the clarity of my future posts.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-06-11T04:23:20.221Z · LW(p) · GW(p)
I'll first try to restate your position in order to check my understanding. Let me know if I don't do it justice.
People use "should" in several different ways. Most of these ways can be "reducible to physics", or in other words can be restated as talking about how our universe is, without losing any of the intended meaning. Some of these ways can't be so reduced (they are talking about the world of "is not") but those usages are simply meaningless and can be safely ignored.
I agree that many usages of "should" can be reduced to physics. (Or perhaps instead to mathematics.) But there may be other usages that can't be so reduced, and which are not clearly safe to ignore. Originally I was planning to wait for you to list the usages of "should" that can be reduced, and then show that there are other usages that are not obviously talking about "the world of is" but are not clearly meaningless either. (Of course I hope that your reductions do cover all of the important/interesting usages, but I'm not expecting that to be the case.)
Since you ask for my criticism now, I'll just give an example that seems to be one of the hardest to reduce: "Should I consider the lives of random strangers to have (terminal) value?"
(Eliezer's proposal is that what I'm really asking when I ask that question is "Does my CEV think the lives of random strangers should have (terminal) value?" I've given various arguments why I find this solution unsatisfactory. One that is currently fresh on my mind is that "coherent extrapolation" is merely a practical way to find the answer to any given question, but should not be used as the definition of what the question means. For example I could use a variant of CEV (call it Coherent Extrapolated Pi Estimation) to answer "What is the trillionth digit of pi?" but that doesn't imply that by "the trillionth digit of pi" I actually mean "the output of CEPE".)
Replies from: lukeprog
↑ comment by lukeprog · 2011-06-11T06:54:41.883Z · LW(p) · GW(p)
I'm not planning to list all the reductions of normative language. There are too many. People use normative language in too many ways.
Also, I should clarify that when I talk about reducing ought statements into physical statements, I'm including logic. On my view, logic is just a feature of the language we use to talk about physical facts. (More on that if needed.)
Most of these ways can be "reducible to physics"... without losing any of the intended meaning.
I'm not sure I would say "most."
But there may be other usages that can't be so reduced, and which are not clearly safe to ignore.
What do you mean by "safe to ignore"?
If you're talking about something that doesn't reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about? Fiction? Magic? Those are fine things to talk about, as long as we understand we're talking about fiction or magic.
Should I consider the lives of random strangers to have (terminal) value?
What about this is hard to reduce? We can ask for what you mean by 'should' in this question, and reduce it if possible. Perhaps what you have in mind isn't reducible (divine commands), but then your question is without an answer.
Or perhaps you're asking the question in the sense of "Please fix my broken question for me. I don't know what I mean by 'should'. Would you please do a stack trace on the cognitive algorithms that generated that question, fix my question, and then answer it for me?" And in that case we're doing empathic metaethics.
I'm still confused as to what your objection is. Will you clarify?
Replies from: Wei_Dai, Will_Sawin, Vladimir_Nesov, Peterdjones
↑ comment by Wei Dai (Wei_Dai) · 2011-06-11T07:57:11.363Z · LW(p) · GW(p)
What do you mean by "safe to ignore"?
You said that you're not interested in an "ought" sentence if it reduces to talking about the world of is not. I was trying to make the same point by "safe to ignore".
If you're talking about something that doesn't reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about?
I don't know, but I don't think it's a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it's a good working assumption to guide your search for possible meanings of "should", but why declare that you're not "interested" in anything else? Couldn't you make that decision on a case by case basis, just in case there is a meaning of "should" that talks about something else besides physics and/or math and its interestingness will be apparent once you see it?
Or perhaps you're asking the question in the sense of "Please fix my broken question for me. I don't know what I mean by 'should'. Would you please do a stack trace on the cognitive algorithms that generated that question, fix my question, and then answer it for me?" And in that case we're doing empathic metaethics.
Maybe I should have waited until you finish your sequence after all, because I don't know what "doing empathic metaethics" actually entails at this point. How are you proposing to "fix my question"? It's not as if there is a design spec buried somewhere in my brain, and you can check my actual code against the design spec to see where the bug is... Do you want to pick up this conversation after you explain it in more detail?
Replies from: lukeprog
↑ comment by lukeprog · 2011-06-11T16:59:58.531Z · LW(p) · GW(p)
I don't think it's a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it's a good working assumption to guide your search for possible meanings of "should", but why declare that you're not "interested" in anything else?
Maybe this is because I'm fairly confident of physicalism? Of course I'll change my mind if presented with enough evidence, but I'm not anticipating such a surprise.
'Interest' wasn't the best word for me to use. I'll have to fix that. All I was trying to say is that if somebody uses 'ought' to refer to something that isn't physical or logical, then this punts the discussion back to a debate over physicalism, which isn't the topic of my already-too-long 'Pluralistic Moral Reductionism' post.
Surely, many people use 'ought' to refer to things non-reducible to physics or logic, and they may even be interesting (as in fiction), but in the search for true statements that use 'ought' language they are not 'interesting', unless physicalism is false (which is a different discussion, then).
Does that make sense? I'll explain empathic metaethics in more detail later, but I hope we can get some clarity on this part right now.
Replies from: Wei_Dai, Vladimir_Nesov, Peterdjones
↑ comment by Wei Dai (Wei_Dai) · 2011-06-11T20:16:04.919Z · LW(p) · GW(p)
Maybe this is because I'm fairly confident of physicalism? Of course I'll change my mind if presented with enough evidence, but I'm not anticipating such a surprise.
First I would call myself a radical platonist instead of a physicalist. (If all universes that exist mathematically also exist physically, perhaps it could be said that there is no difference between platonism and physicalism, but I think most people who call themselves physicalists would deny that premise.) So I think it's likely that everything "interesting" can be reduced to math, but given the history of philosophy I don't think I should be very confident in that. See my recent How To Be More Confident... That You're Wrong.
Replies from: lukeprog
↑ comment by lukeprog · 2011-06-12T08:41:17.782Z · LW(p) · GW(p)
Right, I'm pretty partial to Tegmark, too. So what I call physicalism is compatible with Tegmark. But could you perhaps give an example of what it would mean to reduce normative language to a logical-mathematical function - even a silly one?
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-06-12T10:15:34.958Z · LW(p) · GW(p)
(It's late and I'm thinking up this example on the spot, so let me know if it doesn't make sense.)
Suppose I'm in a restaurant and I say to my dinner companion Bob, "I'm too tired to think tonight. You know me pretty well. What do you think I should order?" From the answer I get, I can infer (when I'm not so tired) a set of joint constraints on what Bob believes to be my preferences, what decision theory he applied on my behalf, and the outcome of his (possibly subconscious) computation. If there is little uncertainty about my preferences and the decision theory involved, then the information conveyed by "you should order X" in this context just reduces to a mathematical statement about (for example) what the arg max of a set of weighted averages is.
(I notice an interesting subtlety here. Even though what I infer from "you should order X" is (1) "according to Bob's computation, the arg max of ... is X", what Bob means by "you should order X" must be (2) "the arg max of ... is X", because if he means (1), then "you should order X" would be true even if Bob made an error in his computation.)
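A minimal sketch of the reduction described above, with invented menu items, preference weights, and beliefs (an illustration of the arg-max claim only, not a model of anyone's actual preferences):

```python
# Bob's model of his companion's preferences: utility of each dish
# in a few possible states, plus Bob's beliefs about which state obtains.
menu_utilities = {
    "soup":  {"tired": 0.9, "hungry": 0.3},
    "steak": {"tired": 0.4, "hungry": 0.9},
}
state_probs = {"tired": 0.7, "hungry": 0.3}

def should_order(utilities, probs):
    # "You should order X" reduces to: X is the arg max of the weighted averages.
    expected = {dish: sum(probs[s] * u for s, u in states.items())
                for dish, states in utilities.items()}
    return max(expected, key=expected.get)

print(should_order(menu_utilities, state_probs))  # -> soup
```

On the subtlety noted above: the sentence asserts the output of this computation itself, i.e. claim (2); whether Bob executed the computation without error is a further fact about Bob, not part of what the sentence asserts.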
Replies from: lukeprog, Will_Sawin
↑ comment by lukeprog · 2011-06-12T10:24:35.433Z · LW(p) · GW(p)
Yeah, that's definitely compatible with what I'm talking about when I talk about reducing normative language to natural language (that is, to math/logic + physics).
Do you think any disagreements or confusion remains in this thread?
Replies from: Wei_Dai, Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-06-29T18:38:45.116Z · LW(p) · GW(p)
Having thought more about these matters over the last couple of weeks, I've come to realize that my analysis in the grandparent comment is not very good, and also that I'm confused about the relationship between semantics (i.e., study of meaning) and reductionism.
First, I learned that it's important (and I failed) to distinguish between (A) the meaning of a sentence (in some context), (B) the set of inferences that can be drawn from it, and (C) what information the speaker intends to convey.
For example, suppose Alice says to Bob, "It's raining outside. You should wear your rainboots." The information that Alice really wants to convey by "it's raining outside" is that there are puddles on the ground. That, along with for example "it's probably not sunny" and "I will get wet if I don't use an umbrella", belongs to the set of inferences that can be drawn from the sentence. But clearly the meaning of "it's raining outside" is distinct from either of these. Similarly, the fact that Bob can infer that there are puddles on the ground from "you should wear your rainboots" does not show that "you should wear your rainboots" means "there are puddles on the ground".
Nor does it seem to make sense to say that "you should wear your rainboots" reduces to "there are puddles on the ground" (why should it, when clearly "it's raining outside" doesn't reduce that way?), which, by analogy, calls into question my claim in the grandparent comment that "you should order X" reduces to "the arg max of ... is X".
But I'm confused about what reductionism even means in the context of semantics. The Eliezer post that you linked to from Pluralistic Moral Reductionism defined "reductionism" as:
Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
But that appears to be a position about ontology, and it is not clear to me what implications it has for semantics, especially for the semantics of normative language. (I know you posted a reading list for reductionism, which I have not gone through except to skim the encyclopedia entry. Please let me know if the answer will be apparent once I do read them, or if there is a more specific reference you can point me to that will answer this immediate question.)
Replies from: lukeprog
↑ comment by lukeprog · 2011-06-29T21:50:58.726Z · LW(p) · GW(p)
Excellent. We should totally be clarifying such things.
There are many things we might intend to communicate when we talk about the 'meaning' of a word or phrase or sentence. Let's consider some possible concepts of 'the meaning of a sentence', in the context of declarative sentences only:
(1) The 'meaning of a sentence' is what the speaker intended to assert, that assertion being captured by truth conditions the speaker would endorse when asked for them.
(2) The 'meaning of a sentence' is what the sentence asserts if the assertion is captured by truth conditions that are fixed by the sentence's syntax and the first definition of each word that is provided by the Oxford English Dictionary.
(3) The 'meaning of a sentence' is what the speaker intended to assert, that assertion being captured by truth conditions determined by a full analysis of the cognitive algorithms that produced the sentence (which are not accessible to the speaker).
There are several other possibilities, even just for declarative sentences.
I tried to make it clear that when doing austere metaethics, I was taking #1 to be the meaning of a declarative moral judgment (e.g. "Murder is wrong!"), at least when the speaker of such sentences intended them to be declarative (rather than intending them to be, say, merely emotive or in other ways 'non-cognitive').
The advantage of this is that we can actually answer (to some degree, in many cases) the question of what a moral judgment 'means' (in the austere metaethics sense), and thus evaluate whether it is true or untrue. After some questioning of the speaker, we might determine that meaning~1 of "Murder is wrong" in a particular case is actually "Murder is forbidden by Yahweh", in which case we can evaluate the speaker's sentence as untrue given its truth conditions (given its meaning~1).
But we may very well want to know instead what is 'right' or 'wrong' or 'good' or 'bad' when evaluating sentences that use those words using the third sense of 'the meaning of a sentence' listed above. Though my third sense of meaning above is left a bit vague for now, that's roughly what I'll be doing when I start talking about empathic metaethics.
Will Sawin has been talking about the 'meaning' of 'ought' sentences in a fourth sense of the word 'meaning' that is related to but not identical to meaning~3 I gave above. I might interpret Will as saying that:
The meaning~4 of 'ought' in a declarative ought-sentence is determined by the cognitive algorithms that process 'ought' reasoning in a distinctive cognitive module devoted to that task, which do not include normative primitives nor reference to physical phenomena but only relate normative concepts to each other.
I am not going to do a thousand years of conceptual analysis on the English word-tool 'meaning.' I'm not going to survey which definition of 'meaning' is consistent with the greatest number of our intuitions about its meaning given a certain set of hypothetical scenarios in which we might use the term. Instead, I'm going to taboo 'meaning' so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine. If there's an objection to this, I'll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool 'meaning' to transfer ideas between brains.
In discussing austere metaethics, I'm considering the 'meaning' of declarative moral judgment sentences as meaning~1. In discussing empathic metaethics, I'm considering the 'meaning' of declarative moral judgment sentences as (something like) meaning~3. I'm also happy to have additional discussions about 'ought' when considering the meaning of 'ought' as meaning~4, though the empirical assumptions underlying meaning~4 might turn out to be false. We could discuss 'meaning' as meaning~2, too, but I'm personally not that interested to do so.
Before I talk about reductionism, does this comment about meaning make sense?
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-06-30T00:09:29.758Z · LW(p) · GW(p)
As I indicated in a recent comment, I don't really see the point of austere metaethics. Meaning~1 just doesn't seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning, as in your example when someone thinks that by "Murder is wrong" they are asserting "Murder is forbidden by Yahweh".
Empathic metaethics is much more interesting, of course, but I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like "murder is wrong" we will be able to obtain a list of truth conditions. For example if I examine the algorithms behind an Eliza bot that sometimes says "murder is wrong" I'm certainly not going to obtain a list of truth conditions. It seems clear that information/beliefs about math and physics definitely influence the production of normative sentences in humans, but it's much less clear that those sentences can be said to assert facts about math and physics.
Instead, I'm going to taboo 'meaning' so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine.
Can you show me an example of such idea transfer? (Depending on what ideas you want to transfer, perhaps you do not need to "fully" solve metaethics, in which case our interests might diverge at some point.)
If there's an objection to this, I'll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool 'meaning' to transfer ideas between brains.
This is probably a good idea. (Nesov previously made a general suggestion along those lines.)
Replies from: lukeprog
↑ comment by lukeprog · 2011-06-30T00:33:38.885Z · LW(p) · GW(p)
I don't really see the point of austere metaethics. Meaning~1 just doesn't seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning
What do you mean by 'actual meaning'?
The point of pluralistic moral reductionism (austere metaethics) is to resolve lots of confused debates in metaethics that arise from doing metaethics (implicitly or explicitly) in the context of traditional conceptual analysis. It's clearing away the dust and confusion from such debates so that we can move on to figure out what I think is more important: empathic metaethics.
I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like "murder is wrong" we will be able to obtain a list of truth conditions
I don't assume this. Whether this can be done is an open research question.
Can you show me an example of such idea transfer?
My entire post 'Pluralistic Moral Reductionism' is an example of such idea transfer. First I specified that one way we can talk about morality is to stipulate what we mean by terms like 'morally good', so as to resolve debates about morality in the same way that we resolve a hypothetical debate about 'sound' by stipulating our definitions of 'sound.' Then I worked through the implications of that approach to metaethics, and suggested toward the end that it wasn't the only approach to metaethics, and that we'll explore empathic metaethics in a later post.
Replies from: Wei_Dai, Vladimir_Nesov
↑ comment by Wei Dai (Wei_Dai) · 2011-06-30T02:11:08.579Z · LW(p) · GW(p)
What do you mean by 'actual meaning'?
I don't know how to explain "actual meaning", but it seems intuitively obvious to me that the actual meaning of "murder is wrong" is not "murder is forbidden by Yahweh", even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh. Do you disagree with this?
First I specified that one way we can talk about morality is to stipulate what we mean by terms like 'morally good', so as to resolve debates about morality in the same way that we resolve a hypothetical debate about 'sound' by stipulating our definitions of 'sound.'
But the way we actually resolved the debate about 'sound' is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for 'sound' when they use it, they will give you confused answers. I think saying "let's resolve confusions in metaethics by asking people to stipulating definitions for 'morally good'", before we reach a similar level of understanding regarding morality, is to likewise put the cart before the horse.
Replies from: lukeprog, Vladimir_Nesov
↑ comment by lukeprog · 2011-06-30T02:51:38.663Z · LW(p) · GW(p)
I don't know how to explain "actual meaning", but it seems intuitively obvious to me that the actual meaning of "murder is wrong" is not "murder is forbidden by Yahweh", even if the speaker of the sentence believes that murder is wrong because murder is forbidden by Yahweh.
That doesn't seem intuitively obvious to me, which illustrates one reason why I prefer to taboo terms rather than bash my intuitions against the intuitions of others in an endless game of intuitionist conceptual analysis. :)
Perhaps the most common 'foundational' family of theories of meaning in linguistics and philosophy of language belong to the mentalist program, according to which semantic content is determined by the mental contents of the speaker, not by an abstract analysis of symbol forms taken out of context from their speaker. One straightforward application of a mentalist approach to meaning would conclude that if the speaker was assuming (or mentally representing) a judgment of moral wrongness in the sense of forbidden-by-God, then the meaning of the speaker's sentence refers in part to the demands of an imagined deity.
But the way we actually resolved the debate about 'sound' is by reaching the understanding that there are two distinct concepts (acoustic vibrations and auditory experience) that are related in a certain way and also happen to share the same signifier. If, prior to reaching this understanding, you ask people to stipulate a definition for 'sound' when they use it, they will give you confused answers. I think saying "let's resolve confusions in metaethics by asking people to stipulating definitions for 'morally good'", before we reach a similar level of understanding regarding morality, is to likewise put the cart before the horse.
But "reaching this understanding" with regard to morality was precisely the goal of 'Conceptual Analysis and Moral Theory' and 'Pluralistic Moral Reductionism.' I repeatedly made the point that people regularly use a narrow family of signifiers ('morally good', 'morally right', etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier 'sound' to call upon two distinct concepts (acoustic vibrations and auditory experience).
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-06-30T05:29:35.265Z · LW(p) · GW(p)
I repeatedly made the point that people regularly use a narrow family of signifiers ('morally good', 'morally right', etc.) to call out a wide range of distinct concepts (divine attitudes, consequentialist predictions, deontological judgments, etc.), and that this leads to exactly the kind of confusion encountered by two people who are both using the signifier 'sound' to call upon two distinct concepts (acoustic vibrations and auditory experience).
With regard to "sound", the two concepts are complementary, and people can easily agree that "sound" sometimes refers to one or the other or often both of these concepts. The same is not true in the "morality" case. The concepts you list seem mutually exclusive, and most people have a strong intuition that "morality" can correctly refer to at most one of them. For example a consequentialist will argue that a deontologist is wrong when he asserts that "morality" means "adhering to rules X, Y, Z". Similarly a divine command theorist will not answer "well, that's true" if an egoist says "murdering Bob (in a way that serves my interests) is right, and I stipulate 'right' to mean 'serving my interests'".
It appears to me confusion here is not being caused mainly by linguistic ambiguity, i.e., people using the same word to refer to different things, which can be easily cleared up once pointed out. I see the situation as being closer to the following: in many cases, people are using "morality" to refer to the same concept, and are disagreeing over the nature of that concept. Some people think it's equivalent to or closely related to the concept of divine attitudes, and others think it has more to do with well-being of conscious creatures, etc.
Replies from: cousin_it, lukeprog
↑ comment by cousin_it · 2011-06-30T09:36:57.108Z · LW(p) · GW(p)
I see the situation as being closer to the following: in many cases, people are using "morality" to refer to the same concept, and are disagreeing over the nature of that concept.
When many people agree that murder is wrong but disagree on the reasons why, you can argue that they're referring to the same concept of morality but confused about its nature. But what about less clear-cut statements, like "women should be able to vote"? Many people in the past would've disagreed with that. Would you say they're referring to a different concept of morality?
↑ comment by lukeprog · 2011-06-30T06:11:34.639Z · LW(p) · GW(p)
I'm not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
I tried to explain some of the cause of persistent moral debate (as opposed to e.g. sound debate) in this way:
The problem may be worse for moral terms than for (say) art terms. Moral terms have more powerful connotations than art terms, and are thus a greater attractor for sneaking in connotations. Moral terms are used to persuade. "It's just wrong!" the moralist cries, "I don't care what definition you're using right now. It's just wrong: don't do it."
Moral discourse is rife with motivated cognition. This is part of why, I suspect, people resist dissolving moral debates even while they have no trouble dissolving the 'tree falling in a forest' debate.
Replies from: Wei_Dai, Vladimir_Nesov, Peterdjones
↑ comment by Wei Dai (Wei_Dai) · 2011-06-30T15:54:37.705Z · LW(p) · GW(p)
I'm not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties. Do you know how to elucidate that?
Let me try an analogy. Consider someone who believes in the phlogiston theory of fire, and another person who believes in the oxidation theory. They are having a substantive disagreement about the nature of fire, and not merely causing unnecessary confusion by using the same word "fire" to refer to different things. And if the phlogiston theorist were to say "by 'fire' I mean the release of phlogiston" then that would just be wrong, and would be adding to the confusion instead of helping to resolve it.
I think the situation with "morality" is closer to this than to the "sound" example.
(ETA: I could also try to define "same concept" more directly, for example as occupying roughly the same position in the graph of relationships between one's concepts, or playing approximately the same role in one's cognitive algorithms, but I'd rather not take an exact position on what "same concept" means if I can avoid it, since I have mostly just an intuitive understanding of it.)
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-04T04:24:17.965Z · LW(p) · GW(p)
This is the exact debate currently being hashed out by Richard Joyce and Stephen Finlay (whom I interviewed here). A while back I wrote an article that can serve as a good entry point into the debate, here. A response from Joyce is here and here. Finlay replies again here.
I tend to side with Finlay, though I suspect not for all the same reasons. Recently, Joyce has admitted that both languages can work, but he'll (personally) talk the language of error theory rather than the language of moral naturalism.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-07-04T06:22:11.783Z · LW(p) · GW(p)
I'm having trouble understanding how the debate between Joyce and Finlay, over Error Theory, is the same as ours. (Did you perhaps reply to the wrong comment?)
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-04T06:39:13.267Z · LW(p) · GW(p)
Sorry, let me make it clearer...
The core of their debate concerns whether certain features are 'essential' to the concept of morality, and thus concerns whether people share the same concept of morality, and what it would mean to say that people share the concept of morality, and what the implications of that are. Phlogiston is even one of the primary examples used throughout the debate. (Also, witches!)
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-07-04T07:27:45.628Z · LW(p) · GW(p)
I'm still not getting it. From what I can tell, both Joyce and Finlay implicitly assume that most people are referring to the same concept by "morality". They do use phlogiston as an example, but seemingly in a very different way from me, to illustrate different points. Also, two of the papers you link to by Joyce don't cite Finlay at all and I think may not even be part of the debate. Actually the last paper you link to by Joyce (which doesn't cite Finlay) does seem relevant to our discussion. For example this paragraph:
We gave the name “Earth” to the thing we live upon and at one time reckoned it flat (or at least a good many people reckoned it flat); but the discovery that the thing we live upon is a big ball was not taken to be the discovery that we do not live upon Earth. It was once widely thought that gorillas are aggressive brutes, but the discovery that they’re in fact gentle social creatures was not taken to be the discovery that gorillas do not exist.
I will read that paper over more carefully, and in the mean time, please let me know if you still think the other papers are also relevant, and point to specific passages if yes.
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-04T22:19:25.825Z · LW(p) · GW(p)
This article by Joyce doesn't cite Finlay, but its central topic is 'concessive strategies' for responding to Mackie, and Finlay is a leading figure in concessive strategies for responding to Mackie. Joyce also doesn't cite Finlay here, but it discusses how two people who accept that Mackie's suspect properties fail to refer might nevertheless speak two different languages about whether moral properties exist (as Joyce and Finlay do).
One way of expressing the central debate between them is to say that they are arguing over whether certain features (like moral 'absolutism' or 'objectivity') are 'essential' to moral concepts. (Without the assumption of absolutism, is X a 'moral' concept?) Another way to say that is to say that they are arguing over the boundaries of moral concepts; whether people can be said to share the 'same' concept of morality but disagree on some of its features, or whether this disagreement means they have 'different' concepts of morality.
But really, I'm just trying to get clear on what you might mean by saying that people have the 'same' concept of morality while disagreeing on fundamental features, and what you think the implications are. I'm sorry my pointers to the literature weren't too helpful.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-07-06T05:04:55.294Z · LW(p) · GW(p)
But really, I'm just trying to get clear on what you might mean by saying that people have the 'same' concept of morality while disagreeing on fundamental features, and what you think the implications are.
Unfortunately I'm not sure how to explain it better than I already did. But I did notice that Richard Chappell made a similar point (while criticizing Eliezer):
His view implies that many normative disagreements are simply terminological; different people mean different things by the term 'ought', so they're simply talking past each other. This is a popular stance to take, especially among non-philosophers, but it is terribly superficial. See my 'Is Normativity Just Semantics?' for more detail.
Does his version make any more sense?
Replies from: Vladimir_Nesov, lukeprog
↑ comment by Vladimir_Nesov · 2011-07-06T10:25:13.273Z · LW(p) · GW(p)
Chappell's discussion makes more and more sense to me lately. Many previously central reasons for disagreement turn out to be my misunderstanding, but I haven't re-read enough to form a new opinion yet.
↑ comment by lukeprog · 2011-07-07T00:03:52.301Z · LW(p) · GW(p)
Sure, except he doesn't make any arguments for his position. He just says:
Normative disputes, e.g. between theories of wellbeing, are surely more substantive than is allowed for by this account.
I don't think normative debates are always "merely verbal". I just think they are very often 'merely verbal', and that there are multiple concepts of normativity in use. Chappell and I, for example, seem to have different intuitions (see comments) about what normativity amounts to.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-07-07T05:28:41.827Z · LW(p) · GW(p)
Let's say a deontologist and a consequentialist are on the board of SIAI, and they are debating which kind of seed AI the Institute should build.
D: We should build a deontic AI.
C: We should build a consequentialist AI.
Surely their disagreement is substantive. But if by "we should do X", the deontologist just means "X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z." and the consequentialist just means "X maximizes expected utility under utility function Y according to decision theory Z" then they are talking past each other and their disagreement is "merely verbal". Yet these are the kinds of meanings you seem to think their normative language do have. Don't you think there's something wrong about that?
(ETA: To any bystanders still following this argument, I feel like I'm starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
Replies from: Peterdjones, lukeprog
↑ comment by Peterdjones · 2011-07-08T15:02:30.288Z · LW(p) · GW(p)
I completely agree with what you are saying. Disagreement requires shared meaning. Cons. and Deont. are rival theories, not alternative meanings.
(ETA: To any bystanders still following this argument, I feel like I'm starting to repeat myself without making much progress in resolving this disagreement. Any suggestion what to do?)
Good question. There's a lot of momentum behind the "meaning theory".
↑ comment by lukeprog · 2011-07-07T07:35:06.154Z · LW(p) · GW(p)
If the deontologist and the consequentialist have previously stipulated different definitions for 'should' as used in sentences D and C, then they aren't necessarily disagreeing with each other by having one state D and the other state C.
But perhaps we aren't considering propositions D and C using meaning_stipulated. Perhaps we decide to consider propositions D and C using meaning-cognitive-algorithm. And perhaps a completed cognitive neuroscience would show us that they both mean the same thing by 'should' in the meaning-cognitive-algorithm sense. And in that case they would be having a substantive disagreement, when using meaning-cognitive-algorithm to determine the truth conditions of D and C.
Thus:
meaning-stipulated of D is X, meaning-stipulated of C is Y, but X and Y need not be mutually exclusive.
meaning-cognitive-algorithm of D is A, meaning-cognitive-algorithm of C is B, and in my story above A and B are mutually exclusive.
Since people have different ideas about what 'meaning' is, I'm skipping past that worry by tabooing 'meaning.'
[Damn I wish LW would let me use underscores or subscripts instead of hyphens!]
Replies from: Vladimir_Nesov, Wei_Dai, Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-07-07T11:07:51.155Z · LW(p) · GW(p)
[Damn I wish LW would let me use underscores or subscripts instead of hyphens!]
You_can_do_that, just use backslash '\' to escape '\_' the underscores, although people quoting your text would need to repeat the trick.
Replies from: lukeprog
↑ comment by Wei Dai (Wei_Dai) · 2011-07-07T16:21:46.663Z · LW(p) · GW(p)
Suppose the deontologist and the consequentialist have previously stipulated different definitions for 'should' as used in sentences D and C, but if you ask them they also say that they are disagreeing with each other in a substantive way. They must be wrong about either what their sentences mean, or about whether their disagreement is substantive, right? (*) I think it's more likely that they're wrong about what their sentences mean, because meanings of normative sentences are confusing and lack of substantive disagreement in this particular scenario seems very unlikely.
(*) If we replace "mean" in this sentence by "mean_stipulated", then it no longer makes sense, since clearly it's possible that their sentences mean_stipulated D and C, and that their disagreement is substantive. Actually now that I think about it, I'm not sure that "mean" can ever be correctly taboo'ed into "mean_stipulated". For example, suppose Bob says "By 'sound' I mean acoustic waves. Sorry, I misspoke, actually by 'sound' I mean auditory experiences. [some time later] To recall, by 'sound' I mean auditory experiences." The first "mean" does not mean "mean_stipulated" since Bob hadn't stipulated any meanings yet when he said that. The second "mean" does not mean "mean_stipulated" since otherwise that sentence would just be stating a plain falsehood. The third "mean" must mean the same thing as the second "mean", so it's also not "mean_stipulated".
To continue along this line, suppose Alice inserts after the first sentence, "Bob, that sounds wrong. I think by 'sound' you mean auditory experiences." Obviously not "mean_stipulated" here. Alternatively, suppose Bob only says the first sentence, and nobody bothers to correct him because they've all heard the lecture several times and know that Bob means auditory experiences by 'sound', and think that everyone else knows. Except that Carol is new and doesn't know, and writes "In this lecture, 'sound' means acoustic waves." in her notebook. Later on Alice tells Carol what everyone else knows, and Carol corrects the sentence. If "mean" means "mean_stipulated" in that sentence, then it would be true and there would be no need to correct it.
Since people have different ideas about what 'meaning' is, I'm skipping past that worry by tabooing 'meaning.'
Taboo seems to be a tool that needs to be wielded very carefully, and wanting to "skip past that worry" is probably not the right frame of mind for wielding it. One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-07T17:54:49.505Z · LW(p) · GW(p)
I'm not sure that "mean" can ever be correctly taboo'ed into "mean_stipulated".
It seems a desperate move to say that stipulative meaning just isn't a kind of meaning wielded by humans. I use it all the time, it's used in law, it's used in other fields, it's taught in textbooks... If you think stipulative meaning just isn't a legitimate kind of meaning commonly used by humans, I don't know what to say.
One can easily taboo a word in a wrong way, and end up adding to confusion, for example by giving the appearance that there is no disagreement when there actually is.
I agree, but 'tabooing' 'meaning' to mean (in some cases) 'stipulated meaning' shouldn't be objectionable because, as I said above, it's a very commonly used kind of 'meaning.' We can also taboo 'meaning' to refer to other types of meaning.
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn't substantive disagreement, and we can figure out whether or not we're having a substantive disagreement by playing a little Taboo (and by checking anticipations). This is precisely the kind of use for which playing Taboo was originally proposed:
the principle [of Tabooing] applies much more broadly:
Albert: "A tree falling in a deserted forest makes a sound."
Barry: "A tree falling in a deserted forest does not make a sound."
Clearly, since one says "sound" and one says "~sound", we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:
Albert: "A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations]."
Barry: "A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences]."
Now there is no longer an apparent collision - all they had to do was prohibit themselves from using the word sound. If "acoustic vibrations" came into dispute, we would just play Taboo again and say "pressure waves in a material medium"; if necessary we would play Taboo again on the word "wave" and replace it with the wave equation. (Play Taboo on "auditory experience" and you get "That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes.")
Replies from: Wei_Dai, Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-07-09T00:10:19.720Z · LW(p) · GW(p)
And like I said, there often is substantive disagreement. I was just trying to say that sometimes there isn't substantive disagreement, and we can figure out whether or not we're having a substantive disagreement by playing a little Taboo (and by checking anticipations).
To come back to this point, what if we can't translate a disagreement into disagreement over anticipations (which is the case in many debates over rationality and morality), nor do the participants know how to correctly Taboo (i.e., they don't know how to capture the meanings of certain key words), but there still seems to be substantive disagreement or the participants themselves claim they do have a substantive disagreement?
Earlier, in another context, I suggested that we extend Eliezer's "make beliefs pay rent in anticipated experiences" into "make beliefs pay rent in decision making". Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance. What do you think?
Replies from: Vladimir_Nesov, Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-07-10T00:39:10.018Z · LW(p) · GW(p)
But I missed your point in the previous response. The idea of disagreement about decisions in the same sense as usual disagreement about anticipation caused by errors/uncertainty is interesting. This is not bargaining about outcome, for the object under consideration is agents' belief, not the fact the belief is about. The agents could work on correct belief about a fact even in the absence of reliable access to the fact itself, reaching agreement.
↑ comment by Vladimir_Nesov · 2011-07-09T10:21:54.920Z · LW(p) · GW(p)
Perhaps we can apply that here as well, and say that a substantive disagreement is one that implies a difference in what to do, in at least one possible circumstance.
It seems that "what to do" has to refer to properties of a fixed fact, so disagreement is bargaining over what actually gets determined, and so probably doesn't even involve different anticipations.
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-09T23:59:51.509Z · LW(p) · GW(p)
Wei Dai & Vladimir Nesov,
Both your suggestions sound plausible. I'll have to think about it more when I have time to work more on this problem, probably in the context of a planned LW post on Chalmers' Verbal Disputes paper. Right now I have to get back to some other projects.
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-15T10:14:19.347Z · LW(p) · GW(p)
Also perhaps of interest is Schroeder's paper, A Recipe for Concept Similarity.
↑ comment by Wei Dai (Wei_Dai) · 2011-07-07T18:03:40.428Z · LW(p) · GW(p)
I was just trying to say that sometimes there isn't substantive disagreement, and we can figure out whether or not we're having a substantive disagreement by playing a little Taboo (and by checking anticipations).
But that assumes that two sides of the disagreement are both Taboo'ing correctly. How can you tell? (You do agree that Taboo is hard and people can easily get it wrong, yes?)
ETA: Do you want to try to hash this out via online chat? I added you to my Google Chat contacts a few days ago, but it's still showing "awaiting authorization".
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-07T18:45:01.349Z · LW(p) · GW(p)
But that assumes that two sides of the disagreement are both Taboo'ing correctly.
Not sure what 'correctly' means, here. I'd feel safer saying they were both Tabooing 'acceptably'. In the above example, Albert and Barry were both Tabooing 'acceptably.' It would have been strange and unhelpful if one of them had Tabooed 'sound' to mean 'rodents on the moon'. But Tabooing 'sounds' to talk about auditory experiences or acoustic vibrations is fine, because those are two commonly used meanings for 'sound'. Likewise, 'stipulated meaning' and 'intuitive meaning' and a few other things are commonly used meanings of 'meaning.'
If you're saying that there's "only one correct meaning for 'meaning'" or "only one correct meaning for 'ought'", then I'm not sure what to make of that, since humans employ the word-tool 'meaning' and the word-tool 'ought' in a variety of ways. If whatever you're saying predicts otherwise, then what you're saying is empirically incorrect. But that's so obvious that I keep assuming you must be saying something else.
Also relevant:
Just because there's a word "art" doesn't mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.
It feels that way, but it is not so.
Wondering how to define a word means you're looking at the problem the wrong way - searching for the mysterious essence of what is, in fact, a communication signal.
Another point. Switching back to a particular 'conventional' meaning that doesn't match the stipulative meaning you just gave a word is one of the ways words can be wrong (#4).
And frankly, I'm worried that we are falling prey to the 14th way words can be wrong:
You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference. After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, "Is it a blegg?" But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.
And, the 17th way words can be wrong:
You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say. The human ability to associate labels to concepts is a tool for communication. When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand. When you each understand what is in the other's mind, you are done.
Now, I suspect you may be trying to say that I'm committing mistake #20:
You defy common usage without a reason, making it gratuitously hard for others to understand you. Fast stand up plutonium, with bagels without handle.
But I've pointed out that, for example, stipulative meaning is a very common usage of 'meaning'...
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2011-07-07T18:57:00.499Z · LW(p) · GW(p)
Could you please take a look at this example, and tell me whether you think they are Tabooing "acceptably"?
Replies from: lukeprog
↑ comment by lukeprog · 2011-07-07T19:16:11.602Z · LW(p) · GW(p)
That's a great example. I'll reproduce it here for readability of this thread:
Consider a hypothetical debate between two decision theorists who happen to be Taboo fans:
A: It's rational to two-box in Newcomb's problem.
B: No, one-boxing is rational.
A: Let's taboo "rational" and replace it with math instead. What I meant was that two-boxing is what CDT recommends.
B: Oh, what I meant was that one-boxing is what EDT recommends.
A: Great, it looks like we don't disagree after all!
What did these two Taboo'ers do wrong, exactly?
I'd rather not talk about 'wrong'; that makes things messier. But let me offer a few comments on what happened:
If this conversation occurred at a decision theory meetup known to have an even mix of CDTers and EDTers, then it was perhaps inefficient (for communication) for either of them to use 'rational' to mean either CDT-rational or EDT-rational. That strategy was only going to cause confusion until Tabooing occurred.
If this conversation occurred at a decision theory meetup for CDTers, then person A might be forgiven for assuming the other person would think of 'rational' in terms of 'CDT-rational'. But then person A used Tabooing to discover that an EDTer had snuck into the party, and they don't disagree about the solutions to Newcomb's problem recommended by EDT and CDT.
In either case, once they've had the conversation quoted above, they are correct that they don't disagree about the solutions to Newcomb's problem recommended by EDT and CDT. Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb's dilemma. Now that they've cleared up their momentary confusion about 'rational', they can move on to discuss the point at which they really do disagree. Tabooing for the win.
↑ comment by Wei Dai (Wei_Dai) · 2011-07-07T19:56:03.380Z · LW(p) · GW(p)
They still disagree about what action has the highest expected value when an agent is faced with Newcomb's dilemma.
An action does not naturally "have" an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can't describe their disagreement as "about what action has the highest expected value". It seems that we can only describe their disagreement as about "what is rational" or "what is the correct decision theory" because we don't know how to Taboo "rational" or "correct" in a way that preserves the substantive nature of their disagreement. (BTW, I guess we could define "have" to mean "assigned by the correct decision theory/prior/utility function" but that doesn't help.)
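(To make the dependence concrete, here is a minimal sketch, not anything from the thread: it assumes the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent box, a 99%-accurate predictor, and an arbitrary flat 50% prior for the CDT calculation. The same two actions get different expected values depending on which theory does the assigning.)

```python
# Illustrative sketch: the same action is assigned different "expected values"
# by different decision theories. Payoffs and accuracy are assumptions.

ACCURACY = 0.99          # assumed predictor accuracy
M, K = 1_000_000, 1_000  # opaque-box and transparent-box payoffs

def edt_value(action):
    # EDT conditions on the action: choosing it is evidence about the prediction.
    p_predicted_one_box = ACCURACY if action == "one-box" else 1 - ACCURACY
    expected_opaque = p_predicted_one_box * M
    return expected_opaque + (K if action == "two-box" else 0)

def cdt_value(action, p_predicted_one_box=0.5):
    # CDT holds the prediction probability fixed (here an arbitrary 50% prior),
    # since the choice cannot causally affect a prediction already made.
    expected_opaque = p_predicted_one_box * M
    return expected_opaque + (K if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(f"{action}: EDT={edt_value(action):,.0f}  CDT={cdt_value(action):,.0f}")
# EDT assigns the higher value to one-boxing; CDT assigns the higher value to
# two-boxing for any fixed prediction probability, because of the extra $1,000.
```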
Now that they've cleared up their momentary confusion about 'rational', they can move on to discuss the point at which they really do disagree.
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree. It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
ETA:
Tabooing for the win.
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right? In the case of "morality", why do you trust the process of Tabooing so much that you do not give this possibility much credence?
Replies from: lukeprog↑ comment by lukeprog · 2011-07-07T20:26:16.489Z · LW(p) · GW(p)
An action does not naturally "have" an expected value, it is assigned an expected value by a combination of decision theory, prior, and utility function, so we can't describe their disagreement as "about what action has the highest expected value".
Fair enough. Let me try again: "They still disagree about what action is most likely to fulfill the agent's desires when the agent is faced with Newcomb's dilemma." Or something like that.
But how do they (or you) know that they actually do disagree? According to their Taboo transcript, they do not disagree.
According to their Taboo transcript, they don't disagree over the solutions of Newcomb's problem recommended by EDT and CDT. But they might still disagree about whether EDT or CDT is most likely to fulfill the agent's desires when faced with Newcomb's problem.
It seems that there must be an alternative way to detect substantive disagreement, other than by asking people to Taboo?
Yes. Ask about anticipations.
If people actually disagree, but through the process of Tabooing conclude that they do not disagree (like in the above example), that should count as a loss for Tabooing, right?
That didn't happen in this example. They do not, in fact, disagree over the solutions to Newcomb's problem recommended by EDT and CDT. If they disagree, it's about something else, like who is the tallest living person on Earth or which action is most likely to fulfill an agent's desires when faced with Newcomb's dilemma.
Of course Tabooing can go wrong, but it's a useful tool. So is testing for differences of anticipation, though that can also go wrong.
In the case of "morality", why do you trust the process of Tabooing so much that you do not give this possibility much credence?
No, I think it's quite plausible that Tabooing can be done wrong when talking about morality. In fact, it may be more likely to go wrong there than anywhere else. But it's also better to Taboo than to simply not use such a test for surface-level confusion. It's also another option to not Taboo and instead propose that we try to decode the cognitive algorithms involved in order to get a clearer picture of our intuitive notion of moral terms than we can get using introspection and intuition.
Replies from: Vladimir_Nesov, lukeprog↑ comment by Vladimir_Nesov · 2011-07-07T23:51:06.247Z · LW(p) · GW(p)
"They still disagree about what action is most likely to fulfill the agents desires when the agent is faced with Newcomb's dilemma."
This introduces even more assumptions into the picture. Why is fulfillment of desires, or specifically the agent's desires, relevant? Why is "most likely" in there? You are trying to make things precise at the expense of accuracy; that's the big Taboo failure mode: increasingly obscure lost purposes.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-07T23:58:41.385Z · LW(p) · GW(p)
I'm just providing an example. It's not my story. I invite you or Wei Dai to say what it is the two speakers disagree about even after they agree about the conclusions of CDT and EDT for Newcomb's problem. If all you can say is that they disagree about what they 'should' do, or what it would be 'rational' to do, then we'll have to talk about things at that level of understanding, but that will be tricky.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T00:40:54.503Z · LW(p) · GW(p)
If all you can say is that they disagree about what they 'should' do, or what it would be 'rational' to do, then we'll have to talk about things at that level of understanding, but that will be tricky.
What other levels of understanding do we have? The question needs to be addressed on its own terms. Very tricky. There are ways of making this better: platonism extended to everything seems to help a lot, for example. Toy models of epistemic and decision-theoretic primitives also clarify things, training intuition.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T00:49:47.544Z · LW(p) · GW(p)
What other levels of understanding do we have?
We're making progress on what it means for brains to value things, for example. Or we can talk in an ends-relational sense, and specify ends. Or we can keep things even more vague but then we can't say much at all about 'ought' or 'rational'.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T01:02:05.533Z · LW(p) · GW(p)
The problem is that it doesn't look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T01:14:00.619Z · LW(p) · GW(p)
The problem is that it doesn't look any better than figuring out what CDT or EDT recommend. What the brain recommends is not automatically relevant to the question of what should be done.
If by 'should' in this sense you mean the 'intended' meaning of 'should' that we don't have access to, then I agree.
↑ comment by lukeprog · 2011-07-07T23:09:45.257Z · LW(p) · GW(p)
Note: Wei Dai and I chatted for a while, and this resulted in three new clarifying paragraphs at the end of the is-ought section of my post 'Pluralistic Moral Reductionism'.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2011-07-08T03:44:07.603Z · LW(p) · GW(p)
Some remaining issues:
Even given your disclaimer, I suspect we still disagree on the merits of Taboo as it applies to metaethics. Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On "morality" we don't have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T04:41:00.761Z · LW(p) · GW(p)
Have you tried having others who are metaethically confused play Taboo in real life, and if so, did it help?
Yes. The most common result is that people come to realize they don't know what they mean by 'morally good', unless they are theists.
People like Eliezer and Drescher, von Neumann and Savage, have been able to make clear progress in understanding the nature of rationality, and the methods they used did not involve much (if any) neuroscience. On "morality" we don't have such past successes to guide us, but your focus on neuroscience still seems misguided according to my intuitions.
If it looks like I'm focusing on neuroscience, I think that's an accident of looking at work I've produced in a 4-month period rather than over a longer period (that hasn't occurred yet). I don't think neuroscience is as central to metaethics or rationality as my recent output might suggest. Humans with meat-brains are strange agents who will make up a tiny minority of rational and moral agents in the history of intelligent agents in our light-cone (unless we bring an end to intelligent agents in our light-cone).
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-07-08T06:16:05.418Z · LW(p) · GW(p)
Yes. The most common result is that people come to realize they don't know what they mean by 'morally good', unless they are theists.
Huh, I think that would have been good to mention in one of your posts. (Unless you did and I failed to notice it.)
It occurs to me that with a bit of tweaking to Austere Metaethics (which I'll call Interim Metaethics), we can help everyone realize that they don't know what they mean by "morally good".
For example:
Deontologist: Should we build a deontic seed AI?
Interim Metaethicist: What do you mean by "should X"?
Deontologist: "X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z."
Interim Metaethicist: Are you sure? If that's really what you mean, then when a consequentialist says "should X" he probably means "X maximizes expected utility according to decision theory Y and utility function Z". In which case the two of you do not actually disagree. But you do disagree with him, right?
Deontologist: Good point. I guess I don't really mean that by "should". I'm confused.
(Doesn't that seem like an improvement over Austere Metaethics?)
Replies from: lukeprog, Vladimir_Nesov↑ comment by lukeprog · 2011-07-08T07:09:18.120Z · LW(p) · GW(p)
I guess one difference between us is that I don't see anything particularly 'wrong' with using stipulative definitions as long as you're aware that they don't match the intended meaning (that we don't have access to yet), whereas you like to characterize stipulative definitions as 'wrong' when they don't match the intended meaning.
But perhaps I should add one post before my empathic metaethics post which stresses that the stipulative definitions of 'austere metaethics' don't match the intended meaning - and we can make this point by using all the standard thought experiments that deontologists and utilitarians and virtue ethicists and contractarian theorists use against each other.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-07-08T08:04:09.041Z · LW(p) · GW(p)
After the above conversation, wouldn't the deontologist want to figure out what he actually means by "should" and what its properties are? Why would he want to continue to use the stipulated definition that he knows he doesn't actually mean? I mean I can imagine something like:
Deontologist: I guess I don't really mean that by "should", but I need to publish a few more papers for tenure, so please just help me figure out whether we should build a deontic seed AI under that stipulated definition of "should", so I can finish my paper and submit it to the Journal of Machine Deontology.
But even in this case it would make more sense for him to avoid "stipulative definition" and instead say
Deontologist: Ok, by "should" I actually mean a concept that I can't define at this point. But I guess it has something to do with deontic logic, and it would be useful to explore the properties of deontic logic in more detail. So, can you please help me figure out whether building a deontic seed AI is obligatory (by deontic logic) if we assume axiomatic imperatives Y and Z?
This way, he clarifies to himself and others that "X is obligatory (by deontic logic) if you assume axiomatic imperatives Y and Z" is not what he means by "should X", but instead a guess about the nature of morality (a concept that we can't yet precisely define).
Perhaps you'd answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of "guess" is more appropriate here than "meaning".
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T14:35:05.620Z · LW(p) · GW(p)
Perhaps you'd answer that a stipulated meaning is just that, a guess about the nature of something. But as you know, words have connotations, and I think the connotation of "guess" is more appropriate here than "meaning".
The problem is that we have to act in the world now. We can't wait around for metaethics and decision theory to be solved. Thus, science books have glossaries in the back full of highly useful operationalized and stipulated definitions for hundreds of terms, whether or not they match the intended meanings (that we don't have access to) of those terms for person A, or the intended meanings of those terms for person B, or the intended meanings for those terms for person C.
I think this glossary business is a familiar enough practice that calling that thing a glossary of 'meanings' instead of a glossary of 'guesses at meanings' is fine. Maybe 'meaning' doesn't have the connotations for me that it has for you.
Science needs doing, laws need to be written and enforced, narrow AIs need to be programmed, best practices in medicine need to be written, agents need to act... all before metaethics and decision theory are solved. In a great many cases, we need to have meaning_stipulated before we can figure out meaning_intended.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2011-07-08T15:06:09.496Z · LW(p) · GW(p)
Sigh... Maybe I should just put a sticky note on my monitor that says
REMEMBER: You probably don't actually disagree with Luke, because whenever he says "X means Z by Y", he might just mean "X stipulated Y to mean Z", which in turn is just another way of saying "X guesses that the nature of Y is Z".
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T15:10:54.447Z · LW(p) · GW(p)
That might work.
We humans have different intuitions about the meanings of terms and the nature of meaning itself, and thus we're all speaking slightly different languages. We always need to translate between our languages, which is where Taboo and testing for anticipations come in handy.
I'm using the concept of meaning from linguistics, which seems fair to me. In linguistics, stipulated meaning is most definitely a kind of meaning (and not merely a kind of guessing at meaning), for it is often "what is expressed by the writer or speaker, and what is conveyed to the reader or listener, provided that they talk about the same thing."
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T15:27:39.788Z · LW(p) · GW(p)
Whatever the case, this language looks confusing/misleading enough to avoid. It conflates the actual search for intended meaning with all those irrelevant stipulations, and assigns misleading connotations to the words referring to these things. In Eliezer's sequences, the term was "fake utility function". The presence of "fake" in the term is important, it reminds of incorrectness of the view.
So far, you've managed to confuse me and Wei with this terminology alone, probably many others as well.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T15:33:22.668Z · LW(p) · GW(p)
So far, you've managed to confuse me and Wei with this terminology alone, probably many others as well.
Perhaps, though I've gotten comments from others that it was highly clarifying for them. Maybe they're more used to the meaning of 'meaning' from linguistics.
Does this new paragraph at the end of this section in PMR help?
But one must not fall into the trap of thinking that a definition you've stipulated (aloud or in your head) for 'ought' must match up to your intuitive concept of 'ought'. In fact, I suspect it never does, which is why the conceptual analysis of 'ought' language can go in circles for thousands of years, and why any stipulated meaning of 'ought' is a fake utility function. To see clearly to our intuitive concept of ought, we'll have to try empathic metaethics (see below).
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-07-08T15:39:17.551Z · LW(p) · GW(p)
It's not clear from this paragraph whether "intuitive concept" refers to the oafish tools in the human brain (which have the same problems as stipulated definitions, including irrelevance) or the intended meaning that those tools seek. Conceptual analysis, as I understand, is concerned with analysis of the imperfect intuitive tools, so it's also unclear in what capacity you mention conceptual analysis here.
(I do think this and other changes will probably make new readers less confused.)
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T16:12:25.535Z · LW(p) · GW(p)
Here's the way I'm thinking about it.
Roger has an intuitive concept of 'morally good', the intended meaning of which he doesn't fully have access to (but it could be discovered by something like CEV). Roger is confused enough to think that his intuitive concept of 'morally good' is 'that which produces the greatest pleasure for the greatest number'.
The conceptual analyst comes along and says: "Suppose that an advanced team of neuroscientists and computer scientists could hook everyone's brains up to a machine that gave each of them maximal, beyond-orgasmic pleasure for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?"
ROGER: Huh. I guess that's not quite what I mean by 'morally good'. I think what I mean by 'morally good' is 'that which produces the greatest subjective satisfaction of wants in the greatest number'.
CONCEPTUAL ANALYST: Okay, then. Suppose that an advanced team of neuroscientists and computer scientists could hook everyone's brains up to 'The Matrix' and make them believe and feel that all their wants were being satisfied, for the rest of their abnormally long lives. Then they will blast each person and their pleasure machine into deep space at near light-speed so that each person could never be interfered with. Would this be morally good?
ROGER: No, I guess that's not what I mean, either. What I really mean is...
And around and around we go, for centuries.
The problem with trying to access our intended meaning for 'morally good' by this intuitive process is that it brings into play, as you say, all the 'oafish tools' in the human brain. And philosophers have historically not paid much attention to the science of how intuitions work.
Does that make sense?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T16:25:39.467Z · LW(p) · GW(p)
Roger is confused enough to think that his intuitive concept of 'morally good' is 'that which produces the greatest pleasure for the greatest number'.
Is the claim that the intuition says the same thing as "pleasure-maximization", or that the intended meaning can be captured as "pleasure-maximization"? Even if the intuition is saying exactly "pleasure-maximization", that's not necessarily the intended meaning, and so it's unclear why one would try to replicate the intuitive tool, rather than search for a characterization of the intended meaning that is better than the intuitive tool. This is the distinction I was complaining about.
(This is an isolated point unrelated to the rest of your comment.)
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T16:35:58.213Z · LW(p) · GW(p)
Understood. I think I'm trying to figure out if there's a better way to talk about this 'intended meaning' (that we don't yet have access to) than to say 'intended meaning' or 'intuitive meaning'. But maybe I'll just have to say 'intended meaning (that we don't yet have access to)'.
New paragraph version:
But one must not fall into the trap of thinking that a definition you've stipulated (aloud or in your head) for 'ought' must match up to your intended meaning of 'ought' (to which you don't have introspective access). In fact, I suspect it never does, which is why the conceptual analysis of 'ought' language can go in circles for centuries, and why any stipulated meaning of 'ought' is a fake utility function. To see clearly to our intuitive concept of ought, we'll have to try empathic metaethics (see below).
↑ comment by Vladimir_Nesov · 2011-07-08T14:53:08.692Z · LW(p) · GW(p)
You think this applies to figuring out decision theory for FAI? If not, how is that relevant in this context?
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T15:05:47.613Z · LW(p) · GW(p)
Vladimir,
I've been very clear many times that 'austere metaethics' is for clearing up certain types of confusions, but won't do anything to solve FAI, which is why we need 'empathic metaethics'.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T15:15:25.579Z · LW(p) · GW(p)
I was discussing that particular comment, not rehashing the intention behind 'austere metaethics'.
More specifically, you made a statement "We can't wait around for metaethics and decision theory to be solved." It's not clear to me what purpose is being served by what alternative action to "waiting around for metaethics to be solved". It looks like you were responding to Wei's invitation to justify the use of word "meaning" instead of "guess", but it's not clear how your response relates to that question.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T15:47:50.210Z · LW(p) · GW(p)
Like I said over here, I'm using the concept of 'meaning' from linguistics. I'm hoping that fewer people are confused by my use of 'meaning' as employed in the field that studies meaning than if I had used 'meaning' in a more narrow and less standard way, like Wei Dai's. Perhaps I'm wrong about that, but I'm not sure.
My comment above about how "we have to act in the world now" gives one reason why, I suspect, the linguist's sense of 'meaning' includes stipulated meaning, and why stipulated meaning is so common.
In any case, I think you and Wei Dai have helped me think about how to be more clear to more people by adding such clarifications as this.
↑ comment by Vladimir_Nesov · 2011-07-08T14:48:51.403Z · LW(p) · GW(p)
Yes. The most common result is that people come to realize they don't know what they mean by 'morally good', unless they are theists.
Huh, I think that would have been good to mention in one of your posts.
(This is similar to my reaction expressed here.)
↑ comment by Vladimir_Nesov · 2011-07-08T00:20:22.288Z · LW(p) · GW(p)
In those paragraphs, you add intuition as an alternative to stipulated meaning. But this is not what we are talking about; we are talking about some unknown but normative meaning that can't presently be stipulated, and that is referred to partly through intuition in a way that is more accurate than any currently available stipulation. What intuition tells us is as irrelevant as what the various stipulations tell us; what matters is the thing that the imperfect intuition refers to. This idea doesn't require a notion of automated stipulation ("empathic" discussion).
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T00:28:53.494Z · LW(p) · GW(p)
"some unknown, but normative meaning that can't be presently stipulated" is what I meant by "intuitive meaning" in this case.
automated stipulation ("empathic" discussion)
I've never thought of 'empathic' discussion as 'automated stipulation'. What do you mean by that?
Even our stipulated definitions are only promissory notes for meaning. Luckily, stipulated definitions can be quite useful for achieving our goals. Figuring out what we 'really want', or what we 'rationally ought to do' when faced with Newcomb's problem, would also be useful. Such terms carry even more vague promissory notes for meaning than stipulated definitions, and yet they are worth pursuing.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T00:57:11.414Z · LW(p) · GW(p)
My understanding of this topic is as follows.
Treat intuition as just another stipulated definition, that happens to be expressed as a pattern of mind activity, as opposed to a sequence of words. The intuition itself doesn't define the thing it refers to, it can be slightly wrong, or very wrong. The same goes for words. Both intuition and various words we might find are tools for referring to some abstract structure (intended meaning), that is not accurately captured by any of these tools. The purpose of intuition, and of words, is in capturing this structure accurately, accessing its properties. We can develop better understanding by inventing new words, training new intuitions, etc.
None of these tools hold a privileged position with respect to the target structure, some of them just happen to more carefully refer to it. At the beginning of any investigation, we would typically only have intuitions, which specify the problem that needs solving. They are inaccurate fuzzy lumps of confusion, too. At the same time, any early attempt at finding better tools will be unsuccessful, explicit definitions will fail to capture the intended meaning, even as intuition doesn't capture it precisely. Attempts at guiding intuition to better precision can likewise make it a less accurate tool for accessing the original meaning. On the other hand, when the topic is well-understood, we might find an explicit definition that is much better than the original intuition. We might train new intuitions that reflect the new explicit definition, and are much better tools than the original intuition.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T01:12:33.125Z · LW(p) · GW(p)
As far as I can tell, I agree with all of this.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T01:26:21.615Z · LW(p) · GW(p)
And as far as I can tell, you don't agree. You express agreement too much; like your stipulated-meaning thought experiments, this is one of the problems. But I'd probably need a significantly clearer presentation of what feels wrong to make progress on our disagreement.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-08T05:40:15.403Z · LW(p) · GW(p)
I look forward to it.
I'm not sure what you mean by "you agree too much", though. Like I said, as far as I can tell I agree with everything in this comment of yours.
↑ comment by Vladimir_Nesov · 2011-07-07T23:45:19.641Z · LW(p) · GW(p)
Instead, their disagreement lies elsewhere. They still disagree about what action has the highest expected value when an agent is faced with Newcomb's dilemma.
I agree with Wei. There is no reason to talk about "highest expected value" specifically, that would be merely a less clear option on the same list as CDT and EDT recommendations. We need to find the correct decision instead, expected value or not.
Playing Eliezer-post-ping-pong, you are almost demanding "But what do you mean by truth?". When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
Replies from: lukeprog↑ comment by lukeprog · 2011-07-07T23:55:36.099Z · LW(p) · GW(p)
I updated the bit about expected value here.
you are almost demanding "But what do you mean by truth?". When an idea is unclear, there will be ways of stipulating a precise but even less accurate definition. Thus, you move away from the truth, even as you increase the clarity of discussion and defensibility of the arguments.
No, I agree there are important things to investigate for which we don't have clear definitions. That's why I keep talking about 'empathic metaethics.'
Also, by 'less accurate definition' do you just mean that a stipulated definition can differ from the intuitive definition that we don't have access to? Well of course. But why privilege the intuitive definition by saying a stipulated definition is 'less accurate' than it is? I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions. Example: 'planet'.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-07-08T00:34:48.290Z · LW(p) · GW(p)
Also, by 'less accurate definition' do you just mean that a stipulated definition can differ from the intuitive definition that we don't have access to?
Not "just". Not every change is an improvement, but every improvement is a change. There can be better definitions of whatever the intuitions are talking about, and they will differ from the intuitive definitions. But when the purpose of discussion is referred by an unclear intuition with no other easy ways to reach it, stipulating a different definition would normally be a change that is not an improvement.
I suspect that intuitive definitions are often much less successful at capturing an empirical cluster than some stipulated definitions.
It's not easy to find a more successful definition of the same thing. You can't always just say "taboo" and pick the best thought that decades of careful research failed to rule out. Sometimes the intuitive definition is still better, or, more to the point, the precise explicit definition still misses the point.
↑ comment by Vladimir_Nesov · 2011-07-07T11:13:01.285Z · LW(p) · GW(p)
If the deontologist and the consequentialist have previously stipulated different definitions for 'should' as used in sentences D and C...
(They perhaps shouldn't have done that.)
↑ comment by Vladimir_Nesov · 2011-06-30T09:08:43.425Z · LW(p) · GW(p)
An analogy for "sharing common understanding of morality". In the sound example, even though the arguers talk about different situations in a confusingly ambiguous way, they share a common understanding of what facts hold in reality. If they were additionally ignorant about reality in different ways (even though there would still be the same truth about reality, they just wouldn't have reliable access to it), that would bring the situation closer to what we have with morality.
Replies from: lukeprog↑ comment by Peterdjones · 2011-06-30T11:11:45.840Z · LW(p) · GW(p)
I'm not sure what it means to say that people have the same concept of morality but disagree on many of its most fundamental properties.
Everyone understands "moral" to entail "should be praised/encouraged" and everyone understands "immoral" to entail "should be blamed/discouraged"
Replies from: MixedNuts↑ comment by MixedNuts · 2011-06-30T11:35:37.174Z · LW(p) · GW(p)
"Should"?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-30T12:40:29.060Z · LW(p) · GW(p)
Of course "should". It's a definition, not a reduction
↑ comment by Vladimir_Nesov · 2011-06-30T08:45:00.802Z · LW(p) · GW(p)
If, prior to reaching this understanding, you ask people to stipulate a definition for 'sound' when they use it, they will give you confused answers.
Even by getting such confused answers out in the open, we might get them to break out of complacency and recognize the presence of confusion. (Fat chance, of course.)
↑ comment by Vladimir_Nesov · 2011-06-30T08:32:30.686Z · LW(p) · GW(p)
The point of pluralistic moral reductionism (austere metaethics) is to resolve lots of confused debates in metaethics that arise from doing metaethics (implicitly or explicitly) in the context of traditional conceptual analysis. It's clearing away the dust and confusion from such debates so that we can move on to figure out what I think is more important: empathic metaethics.
This makes sense. My impression of the part of the sequence written so far would've been significantly affected if I had this intention understood (I don't fully believe it now, but more so than I did before reading your comment).
Replies from: lukeprog↑ comment by lukeprog · 2011-06-30T08:47:33.207Z · LW(p) · GW(p)
(I don't fully believe it now, but more so than I did before reading your comment).
What is 'it', here? My intention? If you have doubts that my intention has been (for many months) to first clear away the dust and confusion of mainstream metaethics so that we can focus more clearly on the more important problems of metaethics, you can ask Anna Salamon, because I spoke to her about my intentions for the sequence before I put up the first post in the sequence. I think I spoke to others about my intentions, too, but I can't remember which parts of my intentions I spoke about to which people (besides Anna). There's also this comment from me more than a month ago.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-30T09:16:56.739Z · LW(p) · GW(p)
I believe that you believe it, but I'm not sure it's so. There are many reasons for any event. Specifically, you use austere debating in real arguments, which suggests that you place more weight on the method than just as a tool for exposing confusion.
(You seem to have reacted emotionally to a question of simple fact, and thus conflated the fact with your position on the fact, which status intuitions love to make people do. I think it's a bad practice.)
Replies from: lukeprog↑ comment by Wei Dai (Wei_Dai) · 2011-06-13T15:59:12.688Z · LW(p) · GW(p)
I'm not sure if we totally agree, but if there is any disagreement left in this thread, I don't think it's substantial enough to keep discussing at this point. I'd rather that we move on to talking about how you propose to do empathic metaethics.
BTW, I'd like to give another example that shows the difficulty of reducing (some usages of) normative language to math/physics.
Suppose I'm facing Newcomb's problem, and I say to my friend Bob, "I'm confused. What should I do?" Bob happens to be a causal decision theorist, so he says "You should two-box." It's clear that Bob can not just mean "the arg max of ... is 'two-box'" (where ... is the formula given by CDT), since presumably "you should two-box" is false and "the arg max of ... is 'two-box'" is true. Instead he probably means something like "CDT is the correct decision theory, and the arg max of ... is 'two-box'", but how do we reduce the first part of this sentence to physics/math?
Replies from: lukeprog↑ comment by lukeprog · 2011-06-13T16:39:21.245Z · LW(p) · GW(p)
I'm not saying that reducing to physics/math is easy. Even ought language stipulated to refer to, say, the well-being of conscious creatures is pretty hard to reduce. We just don't have that understanding yet. But it sure seems to be pointing to things that are computed by physics. We just don't know the details.
I'm just trying to say that if I'm right about reductionism, and somebody uses ought language in a way that isn't likely to reduce to physics/math, then their ought language isn't likely to refer successfully.
We can hold off the rest of the dialogue until after another post or two; I appreciate your help so far. As a result of my dialogue with you, Sawin, and Nesov, I'm going to rewrite the is-ought part of 'Pluralistic Moral Reductionism' for clarity.
↑ comment by Will_Sawin · 2011-06-13T15:28:55.049Z · LW(p) · GW(p)
For example, I could use a variant of CEV (call it Coherent Extrapolated Pi Estimation) to answer "What is the trillionth digit of pi?", but that doesn't imply that by "the trillionth digit of pi" I actually mean "the output of CEPE".
(I notice an interesting subtlety here. Even though what I infer from "you should order X" is (1) "according to Bob's computation, the arg max of ... is X", what Bob means by "you should order X" must be (2) "the arg max of ... is X", because if he means (1), then "you should order X" would be true even if Bob made an error in his computation.)
Do you accept the conclusion I draw from my version of this argument?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-06-13T18:48:00.747Z · LW(p) · GW(p)
I agree with you up to this part:
But this is certainly not the definition of water! Imagine if Bob used this criterion to evaluate what was and was not water. He would suffer from an infinite regress. The definition of water is something else. The statement "This is water" reduces to a set of facts about this, not a set of facts about this and Bob's head.
I made the same argument (perhaps not very clearly) at http://lesswrong.com/lw/44i/another_argument_against_eliezers_metaethics/
But I'm confused by the rest of your argument, and don't understand what conclusion you're trying to draw apart from "CEV can't be the definition of morality". For example you say:
Well, why does it have a long definition? It has a long definition because that's what we believe is important.
I don't understand why believing something to be important implies that it has a long definition.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T18:55:07.650Z · LW(p) · GW(p)
Ah. So this is what I am saying.
If you say "I define should as [Eliezers long list of human values]"
then I say: "That's a long definition. How did you pick that definition?"
and you say: "Well, I took whatever I thought was morally important, and put it into the definition."
In the part you quote I am arguing that (or at least claiming that) other responses to my query are wrong.
I would then continue:
"Using the long definition is obscuring what you really mean when you say 'should'. You really mean 'what's important', not [the long list of things I think are important]. So why not just define it as that?"
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-13T20:33:41.596Z · LW(p) · GW(p)
One more way to describe this idea. I ask, "What is morality?", and you say, "I don't know, but I use this brain thing here to figure out facts about it; it errs sometimes, but can provide limited guidance. Why do I believe this "brain" is talking about morality? It says it does, and it doesn't know of a better tool for that purpose presently available. By the way, it's reporting that are morally relevant, and is probably right."
Replies from: Wei_Dai, Will_Sawin↑ comment by Wei Dai (Wei_Dai) · 2011-06-14T18:30:03.915Z · LW(p) · GW(p)
By the way, it's reporting that are morally relevant, and is probably right.
Where do you get "is probably right" from? I don't think you can get that if you take an outside view and consider how often a human brain is right when it reports on philosophical matters in a similar state of confusion...
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-14T22:04:14.117Z · LW(p) · GW(p)
Salt to taste; the specific estimate is irrelevant to my point, so long as the brain is seen as collecting at least some moral information, and not defining the whole of morality. The level of certainty in the brain's moral judgment won't be stellar, but it will be more reliable for simpler judgments. Here, I referred to "morally relevant", which is a rather weak matter-of-priority kind of judgment, as opposed to deciding which of the given options are better.
↑ comment by Will_Sawin · 2011-06-13T22:17:53.603Z · LW(p) · GW(p)
Beautiful. I would draw more attention to the "Why.... ? It says it does" bit, but that seems right.
↑ comment by Vladimir_Nesov · 2011-06-11T19:28:10.889Z · LW(p) · GW(p)
Maybe this is because I'm fairly confident of physicalism? Of course I'll change my mind if presented with enough evidence, but I'm not anticipating such a surprise.
You'd need the FAI able to change its mind as well, which requires that you retain this option in its epistemology. To attack the communication issue from a different angle, could you give examples of the kinds of facts you deny? (Don't say "god" or "magic", give a concrete example.)
Replies from: lukeprog↑ comment by lukeprog · 2011-06-12T08:31:58.574Z · LW(p) · GW(p)
Yes, we need the FAI to be able to change its mind about physicalism.
I don't think I've ever been clear about what people mean to assert when they talk about things that don't reduce to physics/math.
Rather, people describe something non-natural or supernatural and I think, "Yeah, that just sounds confused." Specific examples of things I deny because of my physicalism are Moore's non-natural goods and Chalmers' conception of consciousness.
Replies from: Peterdjones, Vladimir_Nesov↑ comment by Peterdjones · 2011-06-12T19:50:34.192Z · LW(p) · GW(p)
I don't think I've ever been clear about what people mean to assert when they talk about things that don't reduce to physics/math.
Since you can't actually reduce[*] 99.99% of your vocabulary, you're either so confused you couldn't possibly think or communicate...or you're only confused about the nature of confusion.
[*] Try reducing "shopping" to quarks, electrons and photons.You can't do it, and if you could, it would tell you nothing useful. Yet there is nothing that is not made of quarks,electrons and photons involved.
↑ comment by Vladimir_Nesov · 2011-06-12T11:04:19.442Z · LW(p) · GW(p)
Specific examples of things I deny because of my physicalism are Moore's non-natural goods and Chalmers' conception of consciousness.
Not much better than "magic", doesn't help.
Replies from: lukeprog↑ comment by lukeprog · 2011-06-12T11:05:38.268Z · LW(p) · GW(p)
Is this because you're not familiar with Moore on non-natural goods and Chalmers on consciousness, or because you agree with me that those ideas are just confused?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-12T11:30:02.915Z · LW(p) · GW(p)
They are not precise enough to carefully examine. I can understand the distinction between a crumbling bridge and 3^^^^3>3^^^3; it's much less clear what kind of thing "Chalmers' view on consciousness" is. I guess I could say that I don't see these things as facts at all unless I understand them, and some things are too confusing to expect to understand (my superpower is to remain confused by things I haven't properly understood!).
(To compare, a lot of trouble with words is incorrectly assuming that they mean the same thing in different contexts, and then trying to answer questions about their meaning. But they might lack a fixed meaning, or any meaning at all. So the first step before trying to figure out whether something is true is understanding what is meant by that something.)
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-12T19:56:11.935Z · LW(p) · GW(p)
They are not precise enough to carefully examine.
How are you on dark matter?
(No new idea is going to be precise, because precise definitions come from established theories, and established theories come from speculative theories, and speculative theories are theories about something that is defined relatively vaguely. The Oxygen theory of combustion was a theory about "how burning works"-- it was not, circularly, the Oxygen theory of Oxidisation).
↑ comment by Peterdjones · 2011-06-12T20:11:28.173Z · LW(p) · GW(p)
Dude, you really need to start distinguishing between reducible-in-principle and usefully-reducible and doesn't need-reducing.
↑ comment by Will_Sawin · 2011-06-13T15:23:51.536Z · LW(p) · GW(p)
If you're talking about something that doesn't reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about? Fiction? Magic?
That's making a pre-existing assumption that everyone speaks in physics language. It's circular.
Speaking in physics language about something that isn't in the actual physics is fiction. I'm not sure what magic is.
What is physics language? Physics language consists of statements that you can cash out, along with a physical world, to get "true" or "false".
What is moral language? Moral language consists of statements that you can cash out, along with a preference order on the set of physical worlds, to get "true" or "false".
ETA: If you don't accept this, the first step is accepting that the statement "Flibber fladoo." does not refer to anything in physics, and is not a fiction.
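(A toy sketch of the two definitions above, added purely as an illustration; the type names and examples are hypothetical assumptions, not anything from the thread.)

```python
from typing import Callable, List

# Toy stand-ins for the objects being quantified over.
World = str                    # a complete physical world-description
PreferenceOrder = List[World]  # a ranking over worlds, best first

# "Physics language": statements you cash out against a physical world.
PhysicalStatement = Callable[[World], bool]
# "Moral language": statements you cash out against a preference order.
MoralStatement = Callable[[PreferenceOrder], bool]

# Toy examples of each kind of statement.
there_is_a_chair: PhysicalStatement = lambda world: "chair" in world
prefer_a_over_b: MoralStatement = lambda order: order.index("world A") < order.index("world B")
```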
Replies from: lukeprog↑ comment by lukeprog · 2011-06-13T16:30:40.569Z · LW(p) · GW(p)
That's making a pre-existing assumption that everyone speaks in physics language
No, of course lots of people use 'ought' terms and other terms without any reduction to physics in mind. All I'm saying is that if I'm right about reductionism, those uses of ought language will fail to refer.
What is moral language? Moral language consists of statements that you can cash out, along with a preference order on the set of physical worlds, to get "true" or "false".
Sure, that's one way to use moral language. And your preference order is computed by physics.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T17:01:55.281Z · LW(p) · GW(p)
Sure, that's one way to use moral language.
That's the way I'm talking about, so you should be able to ignore the other ways in your discussion with me.
And your preference order is computed by physics.
You are proposing a function MyOrder from {states of the world} to {preference orders}.
This gives you a natural function from {statements in moral language} to {statements in physical language},
but this is not a reduction: it's not what those statements mean, because it's not what they're defined to mean.
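(Continuing the toy sketch from a few comments up, as one way to picture the function being described; MyOrderFn, induced_translation, and the example worlds are hypothetical names, and the type aliases are repeated so the block stands on its own.)

```python
from typing import Callable, List

World = str
PreferenceOrder = List[World]
PhysicalStatement = Callable[[World], bool]
MoralStatement = Callable[[PreferenceOrder], bool]

# MyOrder: the proposed function from states of the world to preference orders.
MyOrderFn = Callable[[World], PreferenceOrder]

def induced_translation(moral_stmt: MoralStatement, my_order: MyOrderFn) -> PhysicalStatement:
    # The "natural function" from moral-language statements to physical-language
    # statements: evaluate the moral statement against the order MyOrder assigns
    # to the given world. The comment above argues that this mapping exists but
    # is not, by definition, what the moral statements mean.
    return lambda world: moral_stmt(my_order(world))

# Toy usage (hypothetical): a moral statement is true of a world just in case
# the order assigned to that world ranks "peace" above "war".
my_order: MyOrderFn = lambda world: ["peace", "war"] if "kind" in world else ["war", "peace"]
prefers_peace: MoralStatement = lambda order: order.index("peace") < order.index("war")
as_physical = induced_translation(prefers_peace, my_order)
print(as_physical("a kind world"))  # True
```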
Replies from: lukeprog↑ comment by lukeprog · 2011-06-14T17:32:37.691Z · LW(p) · GW(p)
I think I must be using the term 'reduction' in a broader sense than you are. By reduction I just mean the translation of (in this case) normative language to natural language - cashing things out in terms of lower-level natural statements.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-14T17:59:52.687Z · LW(p) · GW(p)
But you can't reduce an arbitrary statement. You can only do so when you have a definition that allows you to reduce it. There are several potential functions from {statements in moral language} to {statements in physical language}. You are proposing that for each meaningful use of moral language, one such function must be correct by definition.
I am saying, no, you can just make statements in moral language which do not correspond to any statements in physical language.
Replies from: lukeprog↑ comment by lukeprog · 2011-06-14T18:19:51.091Z · LW(p) · GW(p)
You are proposing that for each meaningful use of moral language, one such function must be correct by definition
Not what I meant to propose. I don't agree with that.
you can just make statements in moral language which do not correspond to any statements in physical language.
Of course you can. People do it all the time. But if you're a physicalist (by which I mean to include Tegmarkian radical platonists), then those statements fail to successfully refer. That's all I'm saying.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-14T18:23:10.327Z · LW(p) · GW(p)
I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.
Replies from: lukeprog↑ comment by lukeprog · 2011-06-14T18:49:35.122Z · LW(p) · GW(p)
I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.
Okay, we're getting nearer to understanding each other, thanks. :)
Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you're coming from.
Elsewhere, you said:
The problem is that the word "ought" has multiple definitions. You are observing that all the other definitions of ought are physically reducible. That puts them on the "is" side. But now there is a gap between hypothetical-ought-statements and categorical-ought-statements, and it's just the same size as before. You can reduce the word "ought" in the following sentence: "If 'ought' means 'popcorn', then I am eating ought right now." It doesn't help.
Goodness, no. I'm not arguing that all translations of 'ought' are equally useful as long as they successfully refer!
But now you're talking about something different than the is-ought gap. You're talking about a gap between "hypothetical-ought-statements and categorical-ought-statements." Could you describe the gap, please? 'Categorical ought' in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.
I genuinely appreciate you sticking this out with me. I know it's taking time for us to understand each other, but I expect serious fruit to come of mutual understanding.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-14T18:56:59.002Z · LW(p) · GW(p)
Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you're coming from.
I don't think any exist, so I could not do so.
Goodness, no. I'm not arguing that all translations of 'ought' are equally useful as long as they successfully refer!
I'm saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.
Could you describe the gap, please? 'Categorical ought' in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.
Hypothetical-ought statements are a certain kind of statement about the physical world. They're the kind that contain the word "ought", but they're just an arbitrary subset of the "is"-statements.
Categorical-ought statements are statements of support for a preference order. (not statements about support.)
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
Replies from: Vladimir_Nesov, lukeprog↑ comment by Vladimir_Nesov · 2011-06-15T00:03:36.268Z · LW(p) · GW(p)
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
(Physical facts can inform you about what the right preference order is, if you expect that they are related to the moral facts.)
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-15T00:18:32.145Z · LW(p) · GW(p)
Perhaps the right thing to say is "No fact can alone imply a preference order."
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-15T00:22:14.957Z · LW(p) · GW(p)
But no fact can alone imply anything (in this sense), it's not a point specific to moral values, and in any case a trivial uninteresting point that is easily confused with a refutation of the statement I noted in the grandparent.
Replies from: torekp, Will_Sawin↑ comment by torekp · 2011-06-15T01:33:01.040Z · LW(p) · GW(p)
No fact alone can imply anything: true and important. For example, a description of my brain at the neuronal level does not imply that I'm awake. To get the implication, we need to add a definition (or at least some rule) of "awake" in neuronal terms. And this definition will not capture the meaning of "awake." We could ask, "given that a brain is , is it awake?" and intuition will tell us that it is an open question.
But that is beside the point, if what we want to know is whether the definition succeeds. The definition does not have to capture the meaning of "awake". It only needs to get the reference correct.
Reduction doesn't typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2011-06-15T04:33:17.961Z · LW(p) · GW(p)
Reduction doesn't typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?
Great question. It seems to me that normative ethics involves reducing the term "moral" without necessarily capturing the meaning, whereas metaethics involves capturing the meaning of the term. And the reason we want to capture the meaning is so that we know what it means to do normative ethics correctly (instead of just doing it by intuition, as we do now). It would also allow an AI to perform normative ethics (i.e., reduce "moral") for us, instead of humans reducing the term and programming a specific normative ethical theory into the AI.
Replies from: torekp↑ comment by torekp · 2011-06-16T01:25:53.585Z · LW(p) · GW(p)
I doubt that metaethics can wholly capture the meaning of ethical terms, but I don't see that as a problem. It can still shed light on issues of epistemics, ontology, semantics, etc. And if you want help from an AI, any reduction that gets the reference correct will do, regardless of whether meaning is captured. A reduction need not be a full-blown normative ethical theory. It just needs to imply one, when combined with other truths.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-17T00:03:59.252Z · LW(p) · GW(p)
I doubt that metaethics can wholly capture the meaning of ethical terms, but I don't see that as a problem.
This is not a problem in the same sense as astronomical waste that will occur during the rest of this year is not a problem: it's not possible to do something about it.
↑ comment by Vladimir_Nesov · 2011-06-15T01:50:28.845Z · LW(p) · GW(p)
(I agree with your comment.)
Reduction doesn't typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?
A formal logical definition often won't capture the full meaning of a mathematical structure (there may be non-standard models of the logical theory, and true statements it won't infer), yet it has the special power of allowing you to correctly infer lots of facts about that structure without knowing anything else about the intended meaning. If we are given just a little bit less, then the power to infer stuff gets reduced dramatically.
It's important to get a definition of morality in a similar sense and for similar reasons: it won't capture the whole thing, yet it must be good enough to generate right actions even in currently unimaginable contexts.
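(A standard illustration of this point, added as a sketch; first-order Peano arithmetic is the assumed example and is not named in the thread.)

```latex
% First-order Peano arithmetic (PA) licenses many correct inferences about the
% natural numbers, yet it has non-standard models and leaves some true
% statements unprovable:
\mathbb{N} \models \mathrm{PA},
\qquad \exists\, M:\ M \models \mathrm{PA} \ \wedge\ M \not\cong \mathbb{N},
\qquad \exists\, G:\ \mathbb{N} \models G \ \wedge\ \mathrm{PA} \nvdash G \quad \text{(G\"odel)}.
```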
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-06-16T17:15:25.167Z · LW(p) · GW(p)
Formal logic does seem very powerful, yet incomplete. Would you be willing to create an AI with such limited understanding of math or morality (assuming we can formalize an understanding of morality on par with math), given that it could well obtain supervisory power over humanity? One might justify it by arguing that it's better than the alternative of trying to achieve and capture fuller understanding, which would involve further delay and risk. See for example Tim Freeman's argument in this line, or my own.
Another alternative is to build an upload-based FAI instead, like Stuart Armstrong's recent proposal. That is, use uploads as components in a larger system, with lots of safety checks. In a way Eliezer's FAI ideas can also be seen as heavily upload based, since CEV can be interpreted (as you did before) as uploads with safety checks. (So the question I'm asking can be phrased as, instead of just punting normative ethics to CEV, why not punt all of meta-math, decision theory, meta-ethics, etc., to a CEV-like construct?)
Of course you're probably just as unsure of these issues as I am, but I'm curious what your current thoughts are.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-16T17:53:28.095Z · LW(p) · GW(p)
Humans are also incomplete in this sense. We already have no way of capturing the whole problem statement. The goal is to capture it as well as possible using some reflective trick of looking at our own brains or behavior, which is probably way better than what an upload singleton that doesn't build a FAI is capable of.
If there are uploads, they could be handed the task of solving the problem of FAI in the same sense in which we try to, but this doesn't get us any closer to the solution. There should probably be a charity dedicated to designing upload-based singletons as a kind of high-impact applied normative ethics effort (and SIAI might want to spawn one, since rational thinking about morality is important for this task; we don't want fatalistic acceptance of a possible Malthusian dystopia or unchecked moral drift), but this is not the same problem as FAI.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-06-16T18:11:12.696Z · LW(p) · GW(p)
Humans are also incomplete in this sense.
Humans are at least capable of making some philosophical progress, and until we solve meta-philosophy, no de novo AI is. Assuming that we don't solve meta-philosophy first, any de novo AIs we build will be more incomplete than humans. Do you agree?
If there are uploads, they could be handed the task of solving the problem of FAI in the same sense in which we try to, but this doesn't get us any closer to the solution.
It gets closer to the solution in the sense that there is no longer a time pressure, since it's easier for an upload-singleton to ensure their own value stability, and they don't have to worry about people building uFAIs and other existential risks while they work on FAI. They can afford to try harder to get to the right solution than we can.
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-16T18:47:01.361Z · LW(p) · GW(p)
It gets closer to the solution in the sense that there is no longer a time pressure, since it's easier for an upload-singleton to ensure their own value stability, and they don't have to worry about people building uFAIs and other existential risks while they work on FAI. They can afford to try harder to get to the right solution than we can.
There is a time pressure from existential risk (also, astronomical waste). Just as in FAI vs. AGI race, we would have a race between FAI-building and AGI-building uploads (in the sense of "who runs first", but also literally while restricted by speed and costs). And fast-running uploads pose other risks as well, for example they could form an unfriendly singleton without even solving AGI, or build runaway nanotech.
(Planning to make sure that we run a prepared upload FAI team before a singleton of any other nature can prevent it is an important contingency, someone should get on that in the coming decades, and better metaethical theory and rationality education can help in that task.)
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-06-16T19:18:30.190Z · LW(p) · GW(p)
I should have made myself clearer. What I meant was assuming that an organization interested in building FAI can first achieve an upload-singleton, it won't be facing competition from other uploads (since that's what "singleton" means). It will be facing significantly less time pressure than a similar organization trying to build FAI directly. (Delay will still cause astronomical waste due to physical resources falling away into event horizons and the like, but that seems negligible compared to the existential risks that we face now.)
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-16T21:42:17.114Z · LW(p) · GW(p)
What I meant was assuming that an organization interested in building FAI can first achieve an upload-singleton, it won't be facing competition from other uploads.
But this assumption is rather unlikely/difficult to implement, so in the situation where we count on it, we've already lost a large portion of the future. Also, this course of action (unlikely to succeed as it is in any case) significantly benefits from massive funding to buy computational resources, which is a race. The other alternative, which is educating people in a way that increases the chances of a positive upload-driven outcome, is also a race, for development of better understanding of metaethics/rationality and for educating more people better.
↑ comment by Vladimir_Nesov · 2011-06-16T19:00:06.099Z · LW(p) · GW(p)
Humans are at least capable of making some philosophical progress, and until we solve meta-philosophy, no de novo AI is.
Philosophical progress is just a special kind of physical action that we can perform, valuable for abstract reasons that feed into what constitutes our values. I don't see how this feature is fundamentally different from pointing to any other complicated aspect of human values and saying that AI must be able to make that distinction or destroy all value with its mining claws. Of course it must.
↑ comment by Will_Sawin · 2011-06-15T00:41:36.761Z · LW(p) · GW(p)
Agreed; however, it is somewhat useful in pointing out a specific, common type of bad argument.
↑ comment by lukeprog · 2011-06-14T19:09:22.654Z · LW(p) · GW(p)
I don't think any exist, so I could not do so.
Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
I'm saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.
Agreed.
Categorical-ought statements are statements of support for a preference order (not statements about support).
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in "I support preference-ordering X", as opposed to a statement about support as in "preference-ordering X is 'good' if 'good' is defined as 'maximizes Y'"?
Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.
What do you mean by 'preference order' such that no fact can imply a preference order? I'm thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-14T20:02:25.932Z · LW(p) · GW(p)
Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?
Because a positive ("is") statement + a normative ("ought") statement is enough information to determine an action, and once actions are determined you don't need further information.
"information" may not be the right word.
What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in "I support preference-ordering X", as opposed to a statement about support as in "preference-ordering X is 'good' if 'good' is defined as 'maximizes Y'"?
I believe "I ought to do X" if and only if I support preference-ordering X.
What do you mean by 'preference order' such that no fact can imply a preference order? I'm thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...
I'm thinking of a preference order as just that: a map from the set of {states of the world} x {states of the world} to the set {>, =, <}. The brain state encodes a preference order but it does not constitute a preference order.
I believe "this preference order is correct" if and only if there is an encoding in my brain of this preference order.
Much like how:
I believe "this fact is true" if and only if there is an encoding in my brain of this fact.
Replies from: lukeprog, Vladimir_Nesov, lukeprog, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-19T23:31:30.837Z · LW(p) · GW(p)
I believe "this fact is true" if and only if there is an encoding in my brain of this fact.
What if it's encoded outside your brain, in a calculator for example, while your brain only knows that the calculator shows "28" on its display iff the fact is true? Or, say, I know that my computer contains a copy of "Understand" by Ted Chiang, even though I don't remember its complete text. Finally, some parts of my brain don't know what other parts of my brain know. The brain doesn't hold a privileged position with respect to where the data must be encoded in order to be referred to; it can as easily point elsewhere.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-20T05:22:58.909Z · LW(p) · GW(p)
Well, if I see the screen, then there's an encoding of "28" in my brain: not of the reason why 28 is true, but at least of the fact that the answer is "28".
You believe that "the computer contains a copy of Understand", not "the computer contains a book with the following text: [text of Understand]".
Obviously, on the level of detail in which the notion of "belief" starts breaking down, the notion of "belief" starts breaking down.
But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
Replies from: Vladimir_Nesov, Peterdjones↑ comment by Vladimir_Nesov · 2011-06-22T22:21:04.896Z · LW(p) · GW(p)
but at least that the answer is "28".
Yet you might not know the question. "28" only certifies that the question makes a true statement.
You believe that "the computer contains a copy of Understand", not "the computer contains a book with the following text: [text of Understand]".
Exactly. You don't know [text of Understand], yet you can reason about it and use it in your designs. You can copy it elsewhere, and you'll know that it's the same thing somewhere else, all without having an explicit definition of the text (or indeed any definition), only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of "Understand" and nothing else, even though you don't know what the text of "Understand" is.
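As a small concrete illustration of that last point (mine, not Vladimir_Nesov's; the file names are hypothetical), you can fingerprint and compare the text without ever representing its contents in a form you "understand":

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the md5 hex digest of a file's raw bytes; nothing here ever
    parses or 'knows' the text itself."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# You can make decisions that depend on exactly this text and nothing else,
# e.g. check that two copies are identical, without knowing the text:
# fingerprint("understand_copy1.txt") == fingerprint("understand_copy2.txt")
```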
But still, it remains; When we say that I know a fact, the statement of my fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
This sort of deep wisdom needs to be treated as the enemy (it strikes me often enough). It acts as a curiosity-stopper, covering up the difficulty of understanding things more accurately. (What's "just a statement"?)
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T01:42:51.339Z · LW(p) · GW(p)
This sort of deep wisdom needs to be the enemy (it strikes me often enough). Acts as curiosity-stopper, covering the difficulty in understanding things more accurately. (What's "just a statement"?)
In certain AI designs, this problem is trivial. In humans, this problem is not simple.
The complexities of the human version of this problem do not have relevance to anything in this overarching discussion (that I am aware of).
↑ comment by Peterdjones · 2011-06-22T23:04:29.093Z · LW(p) · GW(p)
But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.
Replies from: Will_Sawin
↑ comment by Will_Sawin · 2011-06-23T01:33:37.010Z · LW(p) · GW(p)
Obviously, this doesn't prevent me from saying that I know something without an argument.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-23T02:30:48.993Z · LW(p) · GW(p)
You can say that you are the Queen of Sheba.
It remains the case that knowledge is not lucky guessing, so an argument, evidence or some other justification is required.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T02:47:21.339Z · LW(p) · GW(p)
Yes, but this is completely and totally irrelevant to the point I was making, that:
I will profess that a statement, X, is true, if and only if "X" is encoded in a certain manner in my brain.
Yet "X is true" does not mean "X is encoded in this manner in my brain."
↑ comment by lukeprog · 2011-06-16T23:48:12.292Z · LW(p) · GW(p)
Been really busy, will respond to this in about a week. I want to read your earlier discussion post, first, too.
↑ comment by Vladimir_Nesov · 2011-06-15T00:10:16.854Z · LW(p) · GW(p)
I believe "this preference order is correct" if and only if there is an encoding in my brain of this preference order.
Encodings are relative to interpretations. Something has to decide that a particular fact encodes some other particular fact. And brains don't have a fundamental role here, even if they might contain most of the available moral information, if you know how to get it.
The way in which decisions are judged to be right or wrong based on moral facts and facts about the world, where both are partly inferred from empirical observations, doesn't fundamentally distinguish the moral facts from the facts about the world, so it's unclear how to draw a natural boundary that excludes non-moral facts without excluding moral facts as well.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-15T00:18:03.034Z · LW(p) · GW(p)
My ideas work unless it's impossible to draw the other kind of boundary, including only facts about the world and not moral facts.
Is it? If it's impossible, why?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-19T23:34:02.641Z · LW(p) · GW(p)
My ideas work unless it's impossible to draw the other kind of boundary, including only facts about the world and not moral facts.
It's the same boundary, just seen from the other side. If you can learn of moral facts by observing things, if your knowledge refers to a joint description of moral and physical facts (with the state of your brain, say, as the physical counterpart), and so your understanding of moral facts benefits from better knowledge and further observation of physical facts, then you shouldn't draw this boundary.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-20T05:20:21.023Z · LW(p) · GW(p)
There is an asymmetry. We can only make physical observations, not moral observations.
This means that every state of knowledge about moral and physical facts maps to a state of knowledge about just physical facts, and the evolution of the latter is determined only by evidence, with no reference to moral facts.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-22T22:06:50.654Z · LW(p) · GW(p)
We can only make physical observations, not moral observations.
To the extent that we haven't defined exactly what "moral observations" are, so that the possibility isn't ruled out in any clear sense, I'd say that we can make moral observations, in the same sense in which we can make arithmetical observations by looking at a calculator display or by consulting one's own understanding of mathematical facts as maintained by the brain.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-22T22:11:03.617Z · LW(p) · GW(p)
That is, by deducing mathematical facts from new physical facts.
Can you deduce physical facts from new moral facts?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-22T22:30:29.279Z · LW(p) · GW(p)
That is, by deducing mathematical facts from new physical facts.
Not necessarily; you can just use physical equipment without having any understanding of how it operates or what it is, and the only facts you reason about are non-physical (even though you interact with physical facts, without reasoning about them).
Can you deduce physical facts from new moral facts?
Why not?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T01:34:39.743Z · LW(p) · GW(p)
Why not?
Because your only sources of new facts are your senses.
Replies from: Peterdjones, Vladimir_Nesov↑ comment by Peterdjones · 2011-06-23T13:06:05.697Z · LW(p) · GW(p)
Can you deduce physical facts from new moral facts?
Why not?
Because your only sources of new facts are your senses.
You can't infer new (to you) facts from information you already have? You can't just be told things? A Martian, being told that premarital sex became less of an issue after the sixties, might be able to deduce the physical fact that contraceptive technology improved in the sixties.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T14:13:13.163Z · LW(p) · GW(p)
I guess you could, but you couldn't be a perfect Bayesian.
Generally, when one is told something, one becomes aware of this from one's senses, and then infers things from the physical fact that one is told.
I'm definitely not saying this right. The larger point I'm trying to make is that it makes sense to consider an agent's physical beliefs and ignore their moral beliefs. That is a well-defined thing to do.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-23T14:22:53.931Z · LW(p) · GW(p)
I guess you could but you couldn't be a perfect Bayesian.
Where does it say that? One needs good information, but the senses can err, and hearsay can be reliable.
Generally, when one is told something, one becomes aware of this from one's senses, and then infers things from the physical fact that one is told.
The senses are of course involved in acquiring second-hand information, but there is still a categorical difference between showing and telling.
The larger point I'm trying to make is that it makes sense to consider an agent's physical beliefs and ignore their moral beliefs.
In order to achieve what?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T18:28:10.954Z · LW(p) · GW(p)
In order to achieve what?
Simplicity, maybe?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-23T18:40:03.529Z · LW(p) · GW(p)
A simple way of doing what?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T20:01:13.088Z · LW(p) · GW(p)
Answering questions like "What are true beliefs? What is knowledge? How does science work?"
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-23T20:41:13.275Z · LW(p) · GW(p)
How can you answer questions about true moral beliefs whilst ignoring moral beliefs?
Replies from: Will_Sawin, wedrifid↑ comment by Will_Sawin · 2011-06-24T03:04:37.182Z · LW(p) · GW(p)
Well, that's one of the things you can't do whilst ignoring moral beliefs.
↑ comment by wedrifid · 2011-06-23T22:29:40.095Z · LW(p) · GW(p)
How can you answer questions about true moral beliefs whilst ignoring moral beliefs?
All the same comprehension of the state of the world remains accessible, including beliefs about "true morals". They are simply considered to be physical facts about the construction of certain agents.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-25T11:43:10.368Z · LW(p) · GW(p)
That's an answer to the question "how do you deduce moral beliefs from physical facts", not the question at hand: "how do you deduce moral beliefs from physical beliefs".
Replies from: wedrifid↑ comment by wedrifid · 2011-06-25T18:31:32.372Z · LW(p) · GW(p)
That's an answer to the question "how do you deduce moral beliefs from physical facts", not the question at hand: "how do you deduce moral beliefs from physical beliefs".
Physical beliefs are constructed from physical facts. Just like everything else!
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-25T19:08:07.360Z · LW(p) · GW(p)
But the context of the discussion was what can be inferred from physical beliefs.
↑ comment by Vladimir_Nesov · 2011-06-23T10:03:53.836Z · LW(p) · GW(p)
Also your thoughts, your reasoning, which is machinery for perceiving abstract facts, including moral facts.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T14:16:07.949Z · LW(p) · GW(p)
How might one deduce new physical facts from new moral facts produced by abstract reasoning?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-23T14:46:12.477Z · LW(p) · GW(p)
You can predict that (physical) human babies won't be eaten too often. Or that a calculator will have a physical configuration displaying something that you inferred abstractly.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T18:26:23.784Z · LW(p) · GW(p)
You can make those arguments in an entirely physical fashion. You don't need the morality.
You do need the mathematical abstraction to bundle and unbundle physical facts.
Replies from: Vladimir_Nesov, Peterdjones↑ comment by Vladimir_Nesov · 2011-06-23T18:45:37.368Z · LW(p) · GW(p)
You can make those arguments in an entirely physical fashion. You don't need the morality.
You can use calculators without knowing abstract math too, but it makes sense to talk of mathematical facts independent of calculators.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T19:59:34.641Z · LW(p) · GW(p)
But it also makes sense to talk about calculators without abstract math.
That's all I'm saying.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-23T20:32:34.600Z · LW(p) · GW(p)
I agree. But it's probably not all that you're saying, since this possibility doesn't reveal problems with inferring physical facts from moral facts.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-24T03:07:04.771Z · LW(p) · GW(p)
There is a mapping from physical+moral belief structures to just-physical belief structures.
Correct physical-moral deductions map to correct physical deductions.
The end physical beliefs are purely explained by the beginning physical beliefs + new physical observations.
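A minimal sketch of the mapping being claimed here (my own toy formalization with hypothetical types, not something Will_Sawin specifies): a joint belief structure carries credences over physical and over moral propositions, the just-physical structure is obtained by forgetting the moral part, and observations only ever update the physical part.

```python
from dataclasses import dataclass, field

@dataclass
class JointBeliefs:
    physical: dict[str, float] = field(default_factory=dict)  # proposition -> credence
    moral: dict[str, float] = field(default_factory=dict)     # proposition -> credence

    def project_physical(self) -> dict[str, float]:
        """Map the physical+moral belief structure to a just-physical one."""
        return dict(self.physical)

    def observe(self, proposition: str, credence: float) -> None:
        """On this view, evidence only ever touches the physical component."""
        self.physical[proposition] = credence

b = JointBeliefs(physical={"the sign is red": 0.9},
                 moral={"murder is wrong": 1.0})
b.observe("the sign is blue", 0.95)
# The projected physical beliefs evolve under observations alone,
# with no reference to the moral component:
print(b.project_physical())
```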
↑ comment by Peterdjones · 2011-06-23T18:43:46.031Z · LW(p) · GW(p)
You can make those arguments in an entirely physical fashion.
Meaning what? Are you saying you can get oughts from ises?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T20:00:20.447Z · LW(p) · GW(p)
No, I'm saying you can distinguish oughts from ises.
I am saying that you can move from is to is to is and never touch upon oughts.
That you can solve all is-problems while ignoring oughts.
↑ comment by Vladimir_Nesov · 2011-06-11T10:54:51.364Z · LW(p) · GW(p)
Also, I should clarify that when I talk about reducing ought statements into physical statements, I'm including logic. On my view, logic is just a feature of the language we use to talk about physical facts.
Logic can be used to talk about non-physical facts. Do you allow referring to logic even where the logic is talking about non-physical facts, or do you only allow referring to the logic that is talking about physical facts? Or maybe you taboo intended interpretation, however non-physical, but still allow the symbolic game itself to be morally relevant?
Replies from: lukeprog↑ comment by lukeprog · 2011-06-11T17:03:19.331Z · LW(p) · GW(p)
Alas, I think this is getting us into the problem of universals. :)
With you, too, Vladimir, I suspect our anticipations do not differ, but our language for talking about these subtle things is slightly different, and thus it takes a bit of work for us to understand each other.
By "logic referring to non-physical facts", do you have in mind something like "20+7=27"?
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-11T19:51:35.430Z · LW(p) · GW(p)
By "logic referring to non-physical facts", do you have in mind something like "20+7=27"?
"3^^^^3 > 3^^^3", properties of higher cardinals, hyperreal numbers, facts about a GoL world, about universes with various oracles we don't have.
Things for which you can't build a trivial analogy out of physical objects, like a pile of 27 rocks (which are not themselves simple, but this is not easy to appreciate in the context of this comparison).
Replies from: lukeprog↑ comment by lukeprog · 2011-06-12T08:38:40.614Z · LW(p) · GW(p)
Certainly, one could reduce normative language into purely logical-mathematical facts, if that was how one was using normative language. But I haven't heard of people doing this. Have you? Would a reduction of 'ought' into purely mathematical statements ever connect up again to physics in a possible world? If so, could you give an example - even a silly one?
Since it's hard to convey tone through text, let me explicitly state that my tone is a genuinely curious and collaboratively truth-seeking one. I suspect you've done more and better thinking on metaethics than I have, so I'm trying to gain what contributions from you I can.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-13T20:07:19.831Z · LW(p) · GW(p)
Certainly, one could reduce normative language into purely logical-mathematical facts, if that was how one was using normative language.
Why do you talk of "language" so much? Suppose we didn't have language (and there was only ever a single person); I don't think the problem changes.
Would a reduction of 'ought' into purely mathematical statements ever connect up again to physics in a possible world?
Say, I would like to minimize ((X-2)*(X-2)+3)^^^3, where X is the number I'm going to observe on the screen. This is a pretty self-contained specification, and yet it refers to the world. The "logical" side of this can be regarded as a recipe, a symbolic representation of your goals. It also talks about a number that is too big to fit into the physical world.
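A minimal sketch of how such a specification can steer behavior even though the huge number is never computed (my own illustration; the candidate values are hypothetical). Since Knuth up-arrow towers grow monotonically with the base (for bases of at least 2), minimizing ((X-2)*(X-2)+3)^^^3 over observed X is equivalent to minimizing the inner expression:

```python
def inner_loss(x: int) -> int:
    """The base of the up-arrow tower ((x-2)*(x-2)+3)^^^3; the tower itself is
    far too large to fit into the physical world, but it never needs evaluating,
    because a larger base always yields a larger tower."""
    return (x - 2) * (x - 2) + 3

# Ranking candidate observations by the symbolic goal:
candidates = [0, 1, 2, 3, 10]
print(min(candidates, key=inner_loss))  # 2
```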
Replies from: lukeprog↑ comment by lukeprog · 2011-06-14T17:34:43.923Z · LW(p) · GW(p)
Say, I would like to minimize ((X-2)*(X-2)+3)^^^3, where X is the number I'm going to observe on the screen. This is a pretty self-contained specification, and yet it refers to the world. The "logical" side of this can be regarded as a recipe, a symbolic representation of your goals. It also talks about a number that is too big to fit into the physical world.
Okay, sure. We agree about this, then.
↑ comment by Vladimir_Nesov · 2011-06-11T19:37:55.070Z · LW(p) · GW(p)
With you, too, Vladimir, I suspect our anticipations do not differ, but our language for talking about these subtle things is slightly different, and thus it takes a bit of work for us to understand each other.
This would require that we both have positions that accurately reflect reality, or that we are somehow synchronously deluded. This is confusing territory; I know that I don't know enough to be anywhere near confident in my position, and even that position is too vague to be worth systematically communicating, or to describe some important phenomena (I'm working on that). I appreciate the difficulty of communication, but I don't believe that we would magically meet at the end without having to change our ideas in nontrivial ways.
Replies from: lukeprog↑ comment by lukeprog · 2011-06-12T08:33:31.039Z · LW(p) · GW(p)
I just mean that our anticipations do not differ in a very local sense. As an example, imagine that we were using 'sound' in different ways like Albert and Barry. Surely Albert and Barry have different anticipations in many ways, but not with respect to the specific events closely related to the tree falling in a forest when nobody is around.
↑ comment by Peterdjones · 2011-06-12T19:24:03.362Z · LW(p) · GW(p)
If you're talking about something that doesn't reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about? Fiction? Magic?
Or maybe things that just don't usefully reduce.
↑ comment by [deleted] · 2011-06-10T03:45:03.672Z · LW(p) · GW(p)
I'd be very grateful if you could take a look at my recent question and the comments. Your statement
"X is true" (where X is a mathematical statement) means something, and that thing is not "I think X is true" or "I would think that X is true if I were smarter and some other stuff".
is interesting to me. What is a counter-argument to the claim that the only way one could hold that "X is true" means something is to unpack the statement "X is true" all the way down to amplitudes over configurations (perhaps in a subspace of configuration space that highly factorizes over 'statistically common arrangements of particles in human brains correlating to mathematical conclusions', or something)?
Where do the intuition-sympathizers stand on the issue of logical names?
I don't think something like 'ought' can intuitively point to something that has ontological ramifications. If there is any "intuition" to it, why is it unsatisfactory to think it's merely an evolutionary effect?
From the original post above, I find a point of contention with
People who do feel that intuition run into trouble. This is because "I ought to do X' does not refer to anything that exists. How can you make a statement that doesn't refer to anything that exists?
'I ought to do X' does correspond to something that exists... namely, some distribution over configurations of human minds. It's a proposition like any other, like 'that sign is red' for example. You can track down a fully empirical and quantifiable descriptor of 'I ought to do X' with some sufficiently accurate model and measuring devices with sufficient precision. States of knowledge about what one 'ought' to do are states of knowledge like any others. When tracking down the physics of 'Ought', it will be fleshed out with some nuanced, perhaps situationally specific, definition that relates it to other existing entities.
I guess, more succinctly, there is no abstract concept of 'ought'. The label 'ought' just refers to an algorithm A, an outcome O desired from that algorithm, an input space X of things the algorithm can operate on, and an assessment of the probability that the outcome happens under the algorithm, P(A(X) = O). Up to the limit of sensory fidelity, this is all in principle experimentally detectable, no?
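Here is a minimal sketch of that unpacking (my own toy example; the algorithm, outcome, and input space are all hypothetical): an algorithm A, a desired outcome O, an input space X, and an empirical estimate of P(A(x) = O), with "I ought to choose x" cashed out as "x maximizes that probability".

```python
import random

def A(x: int) -> int:
    """Some concrete algorithm; here a noisy doubling, purely for illustration."""
    return 2 * x + random.choice([0, 1])

O = 10          # the desired outcome
X = [4, 5, 6]   # the input space the algorithm can operate on

def p_outcome(x: int, trials: int = 10_000) -> float:
    """Empirically estimate P(A(x) = O); the 'experimentally detectable' part."""
    return sum(A(x) == O for _ in range(trials)) / trials

# "I ought to choose x" is then read as: x maximizes P(A(x) = O).
best_x = max(X, key=p_outcome)
print(best_x, p_outcome(best_x))  # 5 achieves the outcome about half the time
```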
Replies from: Will_Sawin, None↑ comment by Will_Sawin · 2011-06-10T11:02:55.391Z · LW(p) · GW(p)
I don't think something like 'ought' can intuitively point to something that has ontological ramifications.
I don't believe in an ontology of morals, only an epistemology of them.
namely, some distribution over configurations of human minds.
Do you think that "The sign is red" means something different from "I believe the sign is red"? (In the technical sense of believe, not the pop sense.)
Do you think that "Murder is wrong" means something different from "I believe that murder is wrong."?
Replies from: None↑ comment by [deleted] · 2011-06-10T17:48:49.315Z · LW(p) · GW(p)
The verb 'believe' goes without saying when making claims about the world. To assert that 'the sign is red' is true would not make sense if I did not believe it, by definition. I would either be lying or unaware of my own mental state. To me, your question borders more on opinions and their consequences.
Quoting from there: "But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie."
What I'm trying to say is that the statement (Murder is wrong) implies the further slight linguistic variant (I believe murder is wrong) (modulo the possibility that someone is lying or mentally ill, etc.). The question then is whether (I believe murder is wrong) -> (murder is wrong). Ultimately, from the perspective of the person making these claims, the answer is 'yes'. It makes no sense for me to feel that my preferences are not universally and unequivocally true.
I don't find this at odds with a situation where a notorious murderer who is caught, say Hannibal Lecter, can simultaneously choose his actions and say "murder is wrong". Maybe the person is mentally insane. But even if they aren't, they could simply choose a preference ordering such that the local wrongness of failing to gratify their desire to murder is worse than the local wrongness of murder itself in their society. Thus, they can see that to people who don't have the same preference for murdering someone for self-gratification, the computation of beliefs works out that (murder is wrong) is generally true, but not true when you substitute their local situations into their personal formula for computing the belief. In this case it just becomes an argument over words because the murderer is tacitly substituting his personal local definitions for things when making choices, but then using more general definitions when making statements of beliefs. In essence, the murderer believes it is not wrong for him to murder and get the gratification, but that murder, as society defines it and views it, is "wrong" where "wrong" is a society-level description, not the murderer's personal description. I put a little more about the "words" problem below.
The apparent difference between this way of thinking and the way we all experience our thinking is that, among our assertions is the meta-assertion that (over-asserting beliefs is bad) -> (I believe over-asserting beliefs is bad) or something similar to this. All specific beliefs, including such meta-beliefs, are intertwined. You can't have independent beliefs about whether murder is right that don't depend on your beliefs about whether beliefs should be acted upon like they are cold hard facts.
But at the root, all beliefs are statements about physics. Mapping a complicated human belief down to the level of making statistical pattern recognition claims about amplitude distributions is really hard and inaccessible to us. Further, evolutionarily, we can't afford to burn computation time exploring a fully determined picture of our beliefs. After some amount of computation time, we have to make our chess moves or else the clock runs out and we lose.
It only feels like saying (I believe murder is wrong) fails to imply the claim (murder is wrong). Prefacing a claim with "I believe" is a human-level way of trying to mitigate the harshness of the claim. It could be a statement that tries to roughly quantify how much evidence I can attest to for the claim which the belief describes. It certainly sounds more assured to say (murder is wrong) than to say (I believe murder is wrong), but this is a phantom distinction.
The other thing, which I think you are trying to take special pains to avoid, is that you can very easily run into a battle of words here. If someone says, "I believe murder is wrong" and what they really mean is something like "I believe that it does an intolerable amount of social disservice in the modern society that I live in for anyone to act as if murdering is acceptable, and thus to always make sure to punish murderers," basically, if someone translates "murder" into "the local definition of murder in the world that I frequently experience" and they translate "wrong" into "the local definition of wrong (e.g. punishable in court proceedings or something)", then they are no longer talking about the cognitive concept of murder. An alien race might not define murder the same or "wrong" the same.
If someone uses 'believe' to distinguish between making a claim about the most generalized form of murder they can think of, applicable to the widest array of potential sentient beings, or something like that, then the two statements are different, but only artificially.
If I say "I believe murder is wrong" and I really mean "I believe (my local definition of murder) is (my local definition of wrong)" then this implies the statement (The concept described by my local definition of murder is locally wrong), with no "quantifier" of belief required.
In the end, all statements can be reduced this way. If a statement has "I believe" as a "quantifier", then either it is only an artificial facet of language that restricts the definitions of words in the claim to some (usually local) subset on which the full, unprefaced claim can be made... or else if local definitions of words aren't being implicated, then the "I believe" prefix literally contains no additional information about the state of your mind than the raw assertion would yield.
This is why rhetoric professors go nuts when students write argumentative papers and drop "I think that" or "I believe that" all over the place. Assertions are assertions. It's a social custom that you can allude to the fact that you might not have 100% confidence in your assertion by prefacing it with "I believe". It's also a social custom that you can allude to respect for other beliefs or participation in a negotiation process by prefacing claims with "I believe", but in the strictest sense of what information you're conveying to third parties (separate from any social custom dressings), the "I believe" preface adds no information content.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T18:34:19.148Z · LW(p) · GW(p)
The difference is here
Alice: "I bet you $500 that the sign is red" Bob: "OK" later, they find out it's blue Bob: "Pay up!"
Alice: "I bet you $500 that I believe the sign is red" Bob: "OK" later, they find out it's blue Alice: "But I thought it was red! Pay up!"
That's the difference between "X" and "I believe X". We say them in the same situation, but they mean different things.
But even if they aren't, they could simply choose a preference ordering such that the local wrongness of failing to gratify their desire to murder is worse than the local wrongness of murder itself in their society.
The way statements like "murder is wrong" communicate facts about preference orders is pretty ambiguous. But suppose someone says that "Murder is wrong, and this is more important than gratifying my desire, possible positive consequences of murder, and so on" and then murders, without changing their mind. Would they therefore be insane? If yes, you agree with me.
It makes no sense for me to feel that my preferences are not universally and unequivocally true.
"Correct" is at issue, not "true".
But at the root, all beliefs are statements about physics
Why? Why do you say this?
It only feels like saying (I believe murder is wrong) fails to imply the claim (murder is wrong).
Does "i believe the sky is green" imply "the sky is green"? Sure, you believe that, when you believe X, X is probably true, but that's a belief, not a logical implication.
I am suggesting a similar thing for morality. People believe that "(I believe murder is wrong) => (murder is wrong)" and that belief is not reducible to physics.
literally contains no additional information about the state of your mind than the raw assertion would yield.
Assertions aren't about the state of your mind! At least some of them are about the world - that thing, over there.
Replies from: None↑ comment by [deleted] · 2011-06-10T19:00:51.987Z · LW(p) · GW(p)
The difference is here
Alice: "I bet you $500 that the sign is red" Bob: "OK" later, they find out it's blue Bob: "Pay up!"
Alice: "I bet you $500 that I believe the sign is red" Bob: "OK" later, they find out it's blue Alice: "But I thought it was red! Pay up!"
I don't understand this. If Alice bet Bob that she believed that the sign was red, then going and looking at the sign would in no way settle the bet. They would have to go look at her brain to settle that bet, because the claim, "I believe the sign is red" is a statement about the physics of Alice's brain.
I want to think more about this and come up with a more coherent reply to the other points. I'm very intrigued. Also, I think that I accidentally hit the 'report' button when trying to reply. Please disregard any communication you might get about that. I'll take care of it if anyone happens to follow up.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T20:11:54.617Z · LW(p) · GW(p)
You are correct in your first paragraph, I oversimplified.
Replies from: None↑ comment by [deleted] · 2011-06-13T04:46:55.225Z · LW(p) · GW(p)
I think this addresses the topic very well. The first-person experience of belief is one and the same as fact-assertion. 'I ought to do X' refers to a 4-tuple of actions, outcomes, a utility function, and a conditional probability function.
W.r.t. your question about the murderer who, prior to and immediately after committing murder, attests to believing that murder is wrong: I would say it is a mistake to bring their sanity into it. You can't decide that question without debating what is meant by 'sane'. How a person's preference ordering and resulting actions look from the outside does not necessarily reveal that the person failed to behave rationally, according to their utility function, on the inside. If I choose to label them as 'insane' for seeming to violate their own belief, this is just a verbal distinction about how I will label such third-person viewings of that occurrence. Really though, their preference ordering might have been temporarily suspended due to clouded judgment from rage or emotion. Or, they might not be telling the full truth about their preference ordering and may not even be aware of some aspects of it.
The point is that beliefs are always statements of physics. If I say, "murder is wrong", I am referring to some quantified subset of states of matter and their consequences. If I say, "I believe murder is wrong", I am telling you that I assert that "murder is wrong" is true, which is a statement about my brain's chemistry.
Replies from: Will_Sawin, asr↑ comment by Will_Sawin · 2011-06-13T11:15:01.959Z · LW(p) · GW(p)
The point is that beliefs are always statements of physics
Everyone keeps saying that, but they never give convincing arguments for it.
Replies from: lukeprog, None↑ comment by [deleted] · 2011-06-13T17:17:27.111Z · LW(p) · GW(p)
If I say, "murder is wrong", I am referring to some quantified subset of states of matter and their consequences. If I say, "I believe murder is wrong", I am telling you that I assert that "murder is wrong" is true, which is a statement about my brain's chemistry.
Pardon me, but I believe the burden of proof here is for you to supply something non-physical that's being specified and then produce evidence that this is the case. If the thing you're talking about is supposed to be outside of a magisterium of evidence, then I fail to see how your claim is any different from the claim that we are zombies.
At a coarse scale, we're both asking about the evidence that we observe, which is the first-person experience of assertions about beliefs. Over models that can explain this phenomenon, I am attempting to select the one with minimum message length, as a computer program for producing the experience of beliefs out of physical material can have some non-zero probability attached to it through evidence. How are we to assign probability to the explanation that beliefs do not point to things that physically exist? Is that claim falsifiable? Are there experiments we can do which depend on the result? If not, then the burden of proof here is squarely on you to present a convincing case why the same-old same-old punting to complicated physics is not good enough. If it's not good enough for you, and you insist on going further, that's fine. But physics is good enough for me here and that's not a cop out or an unjustified conclusion in the slightest.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T17:50:02.499Z · LW(p) · GW(p)
Suppose I say "X is red".
That indicates something physical - it indicates that I believe X is red
but it means something different, and also physical - it means that X is red
Now suppose I say "X is wrong"
That indicates something physical - it indicates that I believe X is wrong
using the same-old, same-old principle, we include that it means something different.
but there is nothing else physical that we could plausibly say it means.
Replies from: None↑ comment by [deleted] · 2011-06-13T18:26:12.169Z · LW(p) · GW(p)
but there is nothing else physical that we could plausibly say it means.
Why do you say this? Flesh out the definition of 'wrong' and you're done. 'Wrong' refers to arrangements of matter and their consequences. It doesn't attempt to refer to intrinsic properties of objects that exist apart from their physicality. If (cognitive object X) is (attribute Y), this just means that (arrangements of matter that correspond to what I give the label X) have (physical properties that I group together into the heading Y). It doesn't matter if you're saying "freedom is good" or "murder is wrong" or "that sign is red". 'Freedom' refers to arrangements of matter and physical laws governing them. 'Good' refers to local physical descriptions of the ways that things can yield fortunate outcomes, where fortunate outcomes can be further chased down to their physical meaning, etc.
"X is wrong" unpacks to statements about the time evolution of physical systems. You can't simply say
there is nothing else physical that we could plausibly say it means.
Have you gone and checked every possible physical thing? Have you done experiments showing that making correspondences between cognitive objects and physical arrangements of matter somehow "fails" to capture their "meaning"?
This seems to me to be one of those times where you need to ask yourself: is it really the case that cognitive objects are not just linguistic devices for labeling arrangements of matter and laws governing the matter......... or do I just think that's the case?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T18:49:55.483Z · LW(p) · GW(p)
Have you gone and checked every possible physical thing?
Your whole argument rests on this, since you have not provided a counterexample to my claim. You've just repeated the fact that there is some physical referent, over and over.
This is not how burden of proof works! It would be simply impossible for me to check every possible physical thing. Is it, therefore, impossible for you to be convinced that I am right?
I expect better from LessWrong posters.
Replies from: None↑ comment by [deleted] · 2011-06-13T19:36:19.893Z · LW(p) · GW(p)
Is it, therefore, impossible for you to be convinced that I am right?
This is what it means for a claim to fail falsifiability. It's easy to generate claims whose proof would only be constituted by fact-checking against every physical thing. This is a far cry from a decision-theoretic claim where, though we can't have perfect evidence, we can make useful quantifications of the evidence we do have and our uncertainty about it.
The empty set has many interesting properties.
It's impossible to quantify your claim without having all of the evidence up front.
You've just repeated the fact that there is some physical referent, over and over.
What I'm trying to say is that I can test the hypothesis of whether or not there is a physical referent. If someone says to me, "Is there or isn't there a physical referent?" and I have to respond, then I have to do so on the strength of evidence alone. I may not be able to provide a referent explicitly, but I know that non-zero probability can be assigned to a physical system in which cognitive objects are placeholders for complicated sets of matter and governing laws of physics. I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents, and therefore, whether or not I have explicit examples of referents, the hypothesis that there must be underlying physical referents wins hands down.
The criticism you're making of me, that I insist there are referents without supplying the actual referents, is physically backwards in this case. For example, someone might say "consciousness is a process that does not correspond to any physically existing thing." If I then reply,
"But consciousness is a property of material and varies directly with changes in that material (or some similar, more detailed argument about cognition), and therefore, I can assign non-zero probability to its being a physical computation, and since I do not have the capacity to assign probabilities to non-physical entities, the hypothesis that consciousness is physical wins."
this is a convincing argument, up to the quantification of the evidence. If you personally don't feel like it's convincing, that's fine, but then you're outside of decision theory and the claim you're making contains literally no semantic information.
The same can be said of the referent of a belief. I think you're failing to appreciate that you're making the very mistake you're claiming that I am making. You're just asserting that beliefs can't plausibly correspond to physically existing things. That's just an assertion. It might be a good assertion or might not even be a coherent assertion. In order to check, I am going to go draft up some sort of probability model that relates that claim to what I know about thoughts and beliefs. Oh, snap, when I do that, I run into the unfortunate wall that if beliefs don't have physical referents, then talking about their referents at all suddenly has no physical meaning. Therefore, I will stick with my hypothesis that they are physical, pending explicit evidence that they aren't.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot. The burden of proof, as I view it in this situation, is on making a non-zero probability connection between beliefs and some type of referent. I don't see anything in your argument that prevents this connection being made to physical things. I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
Maybe it's better to think of it like an argument from non-cognitivism. You're trying to make up a solution space for the problem (non-physical referents) that is incompatible with the whole system in which the problem takes place (physics). Until you make an explicit physical definition of what a "non-physical referent" actually is, then I will not entertain it as a possible hypothesis.
Ultimately, even though your epistemology is more complicated, your argument might as well be: beliefs are pointers to magical unicorns outside of space and time, and these magical unicorns are what determine human values. 'Non-physical referents' simply are not. I can't assign a probability to the existence of something which is itself hypothesized to fail to exist, since existence and "being a physical part of reality" are one and the same thing.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T20:01:22.213Z · LW(p) · GW(p)
It's easy to generate claims whose proof would only be constituted by fact-checking against every physical thing
That's the good kind of claim, the falsifiable kind, like the Law of Universal Gravitation. That's the kind of claim I'm making.
It's impossible to quantify your claim without having all of the evidence up front.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
This, obviously, is circular.
Do you acknowledge that your reasoning is circular and defend it, presumably with Eliezer's defense of circular reasoning? Or do you claim that it is not circular?
I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents
Sure you can. You take a world, find all the cognitive objects in it, then find all the corresponding physical referents, and cross those objects off the list.
I am saying that there are beliefs (strings of symbols with meaning) endowed meaning by their place in a functional mind but for which the set of physical referents they correspond to is the empty set.
Surely you can admit the existence of strings of symbols without physical referents, like this one: "fj4892fjsoidfj390ds;j9d3". There's nothing non-physical about it.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot.
If "X" is quantitative and experimentally relevant, how could "not-X" be irrelevant? If X makes predictions, how could not-X not make the opposite predictions?
I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
Who said that all beliefs have referents?
Replies from: None↑ comment by [deleted] · 2011-06-13T21:58:13.419Z · LW(p) · GW(p)
Sure you can. You take a world, find all the cognitive objects in it
My claim is that if one had really done this, then by definition of "find", they have the physical referents for the cognitive objects. If a cognitive object has the empty set as the set of physical referents, then it is the null cognitive object. The string of symbols "fj4892fjsoidfj390ds;j9d3" might have no meaning to you when thinking in English, say, but then it just means it is an instantiation of the empty cognitive object, any string of symbols failing to point to a physical referent.
I'm trying to say that if the cognitive object is to be considered as pointing to something, that is, it is in some sense not the null cognitive object, then the thing which is its referent is physical. It's incoherent to say that a string of symbols refers to something that's not physical. What do you mean by 'refer' in that setting? There is no existing thing to be referred to, hence the symbol does no action of referring. So when someone speaks about "X" being right or wrong, either they are speaking about physical events or else "X" fails to be a cognitive object.
I claim that my reasoning is not circular.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
It depends on what you mean by 'evaluate'. What I'm taking for that definition right now is that if I want to assess whether proposition P is true, then I can only do so in a setting of decision theory and degrees of evidence and uncertainty. This means that I need a model for the proposition P and a way of assigning probabilities to the various hypotheses about P. In this case, P = "Some cognitive objects have referents that are not the null referent and are also not physical". I claim that all referents are either physical or else they are the null referent. The set of non-physical referents is empty.
Just because a string fails to have a physical referent does not mean that it succeeds in having a non-physical one. What evidence do I have that there exist non-physical referents? What model of cognitive objects exists with which it is possible to achieve experimental evidence of a non-physical referent?
I am saying that there are beliefs (strings of symbols with meaning) endowed meaning by their place in a functional mind but for which the set of physical referents they correspond to is the empty set.
What do you mean by 'endowed meaning'? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T22:32:48.511Z · LW(p) · GW(p)
What do you mean by 'endowed meaning'? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
This is the heart of the matter. You are saying that the only relevant properties of a cognitive object are its referents. Thus, no referents = no relevant properties = null object.
I say that, on the contrary, a cognitive object has other relevant properties. One such property is its place in the code of a brain.
Imagine an AI with a set of objects that it marks as either "true" or "false". These objects have no referents, but they influence the formation of objects with referents. I think it's fair to say that:
- Such an AI could exist, with the objects having referents/no referents as appropriate.
- These objects are not just the null object.
- The AI is thinking irrationally.
Now imagine an AI with a set of objects that it marks as either "true" or "false". These objects have no referents, but they influence the AI's choices (see the sketch after this list). I think it's fair to say that:
- Such an AI could exist, with the objects having referents/no referents as appropriate.
- These objects are not just the null object.
- The AI could be thinking rationally.
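A minimal sketch of that second case (my own toy construction, not a design from the thread; all names and values are hypothetical): the agent's world model contains only propositions with referents, while a separate store of referent-free objects marked "true" or "false" nonetheless steers which action gets chosen.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    world_model: dict[str, float]  # propositions about the world -> credence
    normative: dict[str, bool]     # referent-free objects marked "true"/"false"

    def choose(self, actions: list[str]) -> str:
        """The normative labels describe nothing in the world model, yet they
        rule actions in or out."""
        allowed = [a for a in actions
                   if not self.normative.get(f"{a} is forbidden", False)]
        return allowed[0] if allowed else actions[0]

agent = Agent(world_model={"the sign is red": 0.9},
              normative={"murder is forbidden": True})
print(agent.choose(["murder", "trade"]))  # -> trade
```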
↑ comment by [deleted] · 2011-06-13T22:47:37.548Z · LW(p) · GW(p)
I think that the meaning of "The AI could be thinking rationally" is that it could turn out to be the case that the objects labeled true and false have a correspondence to physically existing things and that correspondence allows the A.I. to construct decision rules which correspond to reality within some computable range of uncertainty.
If we are unable to map the inputs to the A.I.'s decision process (in this case objects labeled true or false and whose referents, if any, are unknown to us at the start) back to physical reality, then it is still mysterious to us and we can't claim that it's rational in any sense other than pure statistical experience (it could just be that when asked to make a series of decisions using the true/false labeled objects, the A.I. got incredibly lucky).
In order to conclude (in any more than a superficial way) that the A.I. is rational, there must be an explicit correspondence between the labeled objects and the physics world, and hence we would have found their referents. If this is, in principle, an impossible task (as you claim), then the concept of rationality doesn't apply to the A.I. In what sense would it be said to actually be rational, rather than just produce a sequence of outputs that appear to be rational to us for mysterious reasons?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T23:18:03.319Z · LW(p) · GW(p)
I think that the meaning of "The AI could be thinking rationally" is that it could turn out to be the case that the objects labeled true and false are integral components of a program that provably calculates rational decisions every time.
Replies from: None↑ comment by [deleted] · 2011-06-13T23:56:07.255Z · LW(p) · GW(p)
A proof that a program calculates rational decisions every time necessarily provides the physical referents of its calculation. There's no difference between knowing that a program calculates rational decisions every time and knowing how it is that it calculates rational decisions every time. If you don't know the explicit correspondence between its calculations and reality then your state of knowledge cannot include the fact that the program always yields rational conclusions. You can have degrees of certainty that it is rational without having full knowledge of its referents, but not factual knowledge as in a mathematical proof.
It may be that a slick mathematical argument reduces the connection to symbols that don't readily convey the physical connection, but
don't tell me that knowledge is "subjective". Knowledge has to be represented in a brain, and that makes it as physical as anything else. For M to physically represent an accurate picture of the state of Y, M's physical state must correlate with the state of Y. You can take thermodynamic advantage of that - it's called a Szilard engine.
Or as E.T. Jaynes put it, "The old adage 'knowledge is power' is a very cogent truth, both in human relations and in thermodynamics."
And conversely, one subsystem cannot increase in mutual information with another subsystem, without (a) interacting with it and (b) doing thermodynamic work. Otherwise you could build a Maxwell's Demon and violate the Second Law of Thermodynamics - which in turn would violate Liouville's Theorem - which is prohibited in the standard model of physics.
Which is to say: To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort.
If your state of knowledge (brain chemistry) is updated to include special knowledge of the rationality of an agent, then there is entanglement between you and that agent, for that is what knowledge is. You can't know that an agent is rational without knowing the physical connection between its cognitive objects and reality. To whatever degree you lack knowledge about the physical referents of its cognitive objects, that is the degree to which you lack knowledge about whether or not it is rational.
↑ comment by asr · 2011-06-13T23:02:28.591Z · LW(p) · GW(p)
The point is that beliefs are always statements of physics. If I say, "murder is wrong", I am referring to some quantified subset of states of matter and their consequences. If I say, "I believe murder is wrong", I am telling you that I assert that "murder is wrong" is true, which is a statement about my brain's chemistry.
Hm? It's easy to form beliefs about things that aren't physical. Suppose I tell you that the infinite cardinal aleph-1 is strictly larger than aleph-0. What's the physical referent of the claim?
I'm not making a claim about the messy physical neural structures in my head that correspond to those sets -- I'm making a claim about the nonphysical infinite sets.
Likewise, I can make all sorts of claims about fictional characters. Those aren't claims about the physical book, they're claims about its nonphysical implications.
Replies from: None↑ comment by [deleted] · 2011-06-13T23:19:47.392Z · LW(p) · GW(p)
Why do you think that nonphysical implications are ontologically existing things? I argue that what you're trying to get at by saying "nonphysical implications" is actually a quantified subset of matter. Ideas, however abstract, refer to arrangements of matter. The vision in your mind when you talk about aleph-1 is of a physically existing thing. When's the last time you imagined something that wasn't physical? A unicorn? You mean a horse with a horn stuck onto it? Mathematical objects represent states of knowledge, which are as physical as anything else. The color red refers to a particular frequency of light and the physical processes by which it is a common human experience. There is no idea of what red is apart from this. Red is something different to a blind man than it is to you, but by speaking about your physical referent, the blind man can construct his own useful physical referent.
Claims about fictional characters are no better. What do you mean by Bugs Bunny other than some arrangement of colors brought to your eyes by watching TV in the past? That's what Bugs Bunny is. There's no separately existing entity which is Bugs Bunny that can be spoken about as if it ontologically was. Every person who refers to Bugs Bunny refers to physical subsets of matter from their experience, whether because they witnessed the cartoon and were told through supervised learning what cognitive object to attach it to, or because they heard about it later through second-hand experience. A blind person can have a physical referent when speaking about Bugs Bunny, albeit one that I have a very hard time mentally simulating.
In any case, merely asserting that something fails to have a physical referent is not a convincing reason to believe so. Ask yourself why you think there is no physical referent and whether one could construct a computational system that behaves that way.
Replies from: Alicorn, asr, None↑ comment by asr · 2011-06-14T00:20:45.941Z · LW(p) · GW(p)
I have no very firm ontological beliefs. I don't want to make any claim about whether fictional characters or mathematical abstractions "really exist".
I do claim that I can talk about abstractions without there being any set of physical referents for that abstraction. I think it's utterly routine to write software that manipulates things without physical referents. A type-checker, for instance, isn't making claims about the contents of memory; it's making higher-order claims about how those values will be used across all possible program executions -- including ones that can't physically happen.
I would cheerfully agree with you that the cognitive process (or program execution) is carried out by physical processes. Of course. But the subject of that process isn't the mechanism. There's nothing very strange about this, as far as I can tell. It's routine for programs and programmers to talk about "infinite lists"; obviously there is no such thing in the physical world, but it is a very useful abstraction.
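As a small, hedged illustration of that last point, a Python generator can stand in for "the infinite list" of natural numbers; only a finite prefix is ever realized in memory:

```python
# Illustration: programs talk about "the infinite list of naturals" even though
# no infinite object exists physically; only the elements we ask for are built.
from itertools import count, islice

naturals = count(0)                       # stands in for the infinite list 0, 1, 2, ...
first_five = list(islice(naturals, 5))    # materialize only a finite prefix
print(first_five)                         # [0, 1, 2, 3, 4]
```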
By the way, I think your Bugs Bunny example fails. When I talk to somebody about Bugs Bunny, I am able to make myself understood. The other person and I are able to talk, in every sense that matters, about the same thing. But we don't share the same mental states. Conversely, my mental picture isn't isomorphic to any particular set of photons; it's a composite. Somehow, that doesn't defeat practical communication.
The case might be clearer for purely literary characters. When I talk about the character King Lear, I certainly am not saying something about the physical copy I read! Consider the perfectly ordinary (and true) sentence "King Lear had three daughters." That's not a claim about ink, it's a claim about the mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing). Those models are physically embodied, but they are not physical things! There's no set of quarks you can point to and say "there's the mental model."
Replies from: None↑ comment by [deleted] · 2011-06-14T01:54:25.571Z · LW(p) · GW(p)
mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing)
This is where we disagree. Those mental models are simply arrangements of matter. The fact that it feels like you're referring to something separate from an arrangement of matter-memory in your brain is another thing altogether. The reason that practical communication works at all is that there is an extreme amount of mutual information between the set of features you use to categorize the physical memory of, say, Bugs Bunny, and the features used to categorize Bugs in someone else's mind. You can reference your brain's physical memory in such a way as to cause another's physical memory to reference something; if an algorithm sorts the mutual information of these concepts until it finds a maximum, and common experience then forms all sorts of additional memories about what wound up being referenced, it is not surprising at all that a purely physical model of concepts would allow communication. I don't see how anything you've said represents more than an assertion that it feels to you as if abstractions are not simply the brain matter that they are made out of in your mind. It's not a convincing reason for me to think abstractions have ontological properties. I think the hypothesis that it just feels that way, since my brain is made of meat and I can't look at the wiring schematics, is more likely.
Replies from: asr↑ comment by asr · 2011-06-14T02:26:09.758Z · LW(p) · GW(p)
This is starting to feel like a shallow game of definition-bending. I don't think we're disagreeing about any testable claim. So I'm not going to argue about why your definition is wrong, but I will describe why I think it's less useful in expressing the sorts of claims we make about the world.
When we talk about whether two mental models are similar, the similarity function we use is representation-independent. You and I might have very similar mental models, even if you are thinking with superconducting wires in liquid helium and our physical brains have nothing in common. Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are -- and that's a useful question to ask, since it helps predict speech-acts.
Conversely, saying that "everything is a physical property" deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
In particular, a physical object, as most of the world uses the term, has position and mass that evolve in predictable ways. It's sensible to ask what a toaster weighs. It's not sensible to ask what a mental model weighs.
I think your definitions here mean that you can't actually explain ordinary ostensive reference. There is a toaster over there, and a mental model over here, and there is some correspondence. And the way most of the world uses language, I can have the same referential relationship to a fictional person as to a real person, as to a toaster.
And I think I'm now done with the topic.
Replies from: None↑ comment by [deleted] · 2011-06-14T02:43:06.762Z · LW(p) · GW(p)
When we talk about whether two mental models are similar, the similarity function we use is representation-independent. ...
Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are -- and that's a useful question to ask, since it helps predict speech-acts.
Conversely, saying that "everything is a physical property" deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
First, I didn't say anything at all about the usefulness of treating abstractions the way we do. I don't believe in actual free will but I certainly believe that the way we walk around acting as if free will was a real attribute that we have is very useful. You can arrange a network of neurons in such a way that it will allow identification of a concept, and we use natural language to talk about this sort of arrangement of matter. Talking about it that way is just fine, and indeed very useful. But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
I think I am quite willing to talk about abstractions and their usefulness ... just not willing to agree that they are fundamental parts of reality rather than merely hallucinations the same way that free will is.
In conversations about the ontology of physical categories, it's better to say that the category of toasters in my brain is just a pattern of matter that happens to score high correlations with image, auditory, and verbal feature vectors generated by toasters. In conversations about making toast, it's better to talk about the abstraction of the category of toasters as if it was itself something.
It's the same as talking about the wing of an airplane.
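A toy sketch of that "pattern that scores high correlations with feature vectors" picture (all numbers and names invented for illustration):

```python
# Toy sketch: a stored "toaster" pattern activates when incoming feature
# vectors correlate with it, and stays quiet otherwise. Numbers are invented.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sum(x * x for x in w) ** 0.5
    return dot / (norm(u) * norm(v))

toaster_pattern = [0.9, 0.1, 0.8]    # stored category
seen_toaster    = [0.8, 0.2, 0.7]    # features from looking at a toaster
seen_cat        = [0.1, 0.9, 0.2]    # features from looking at a cat

print(cosine(toaster_pattern, seen_toaster))  # high correlation: category fires
print(cosine(toaster_pattern, seen_cat))      # low correlation: it does not
```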
Replies from: asr↑ comment by asr · 2011-06-14T05:16:16.157Z · LW(p) · GW(p)
But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
Thank you, that explained where you were coming from.
But I don't see that any of this ontology gets you the meta-ethical result you want to show. I think all you've shown is that ethical claims aren't more true than, say, mathematical truth or physical law. But by any normal standard, "as true as the proof of Fermat's last theorem" is a very high degree of truth.
I think to get the ethical result you want, you should be showing that moral terms are strictly less meaningful than mathematical ones. Certainly you need to somehow separate mathematical truth from "ethical truth" -- and I don't see that ontology gets you there.
Replies from: None↑ comment by [deleted] · 2011-06-14T06:37:42.439Z · LW(p) · GW(p)
Actually, I am opposed to the argument of ontology of belief, which is why I was trying to argue that beliefs are encoded states of matter. If I assert that "X is wrong", it must mean I assert "I believe X is wrong" as well. If I assert "I believe X is wrong" but don't assert "X is wrong", something's clearly amiss. As pointed out here, beliefs are reflections of best available estimates about physically existing things. If I do assert that I believe X is wrong but don't assert that X is wrong, then either I am lying about the belief, or there's some muddling of definitions and maybe I mean some local version of X or some local version of "wrong", or I am unaware of my actual state of beliefs (possibly due to insanity, etc.). But my point is that in a sane person, from that person's first-person experience, the two statements "I believe X is wrong" and "X is wrong" contain exactly the same information about the state of my brain. They are the same statement.
My point in all this was that "I believe X is wrong" has the same first-person referent as "X is wrong". If X = murder, say, and I assert that "murder is wrong", then once you unpack whatever definitions in terms of physical matter and consequence that I mean by "murder" and "wrong", you're left with a pointer to a physical arrangement of matter in my brain that resonates when feature vectors of my sensory input correlate with the pattern that stores "murder" and "wrong" in my brain's memory. It's a physical thing. The wrongness of murder is that thing, it isn't an ontological concept that exists outside my brain as some non-physical attribute of reality. Even though other humans have remarkably similar brain-matter-patterns of wrongness and murder, enough so that the mutual information between the pattern allows effective communication, this doesn't suddenly cause the idea that murder is wrong to stop being just a local manifestation in my brain and start being a separate idea that many humans share pointers to.
If someone wanted to establish metaethical claims based on the idea that there exist non-physical referents being referred to by common human beliefs, and that this set of referents somehow reflects an inherent property of reality, I think this would be misguided and experimentally either not falsifiable or at the very least unsupported by evidence. I don't guess that this makes too much practical difference, other than being a sort of Pandora's box for religious-type reasoning (but what isn't?).
↑ comment by [deleted] · 2011-06-13T23:25:34.399Z · LW(p) · GW(p)
I think more salient examples that make this question hard are not going to be borne out of trying to come up with something increasingly abstract. The more puzzling cognitive objects to explain are when you apply unphysical transformations to obvious objects... like taking a dog and imagining it stretched out to the length of a football field. Or a person with a torus-like hole in their abdomen. But these are simply images in the brain. That the semantic content of the image can be interpreted as strange unions of other cognitive objects is not a reason to think that the cognitive object itself isn't physical.
↑ comment by [deleted] · 2011-06-10T04:06:55.409Z · LW(p) · GW(p)
I guess more succinctly, there is no abstract concept of 'ought'. The label 'ought' just refers to an algorithm A, an outcome desired from that algorithm O, an input space of things the algorithm can operate on, X, and an assessment of the probability that the outcome happens under the algorithm, P(A(X) = O). Up to the limit of sensory fidelity, this is all in principle experimentally detectable, no?
Just to be a little clearer: saying that "I ought to do X" means "There exists some goal Y such that I want to achieve Y; there exists some set of variables D which I can manipulate to bring about the achievement of Y; X is an algorithm for manipulating variables in D to produce effect Y; and according to my current state of knowledge, I assess that the probability of this model of X(D) yielding Y is high enough that, whatever physical resources it costs me to attempt X(D), as a Bayesian, the trade-off works out in favor of actually doing it." That is, Payoff(Y) · P(I was right in modeling the algorithm X(D) as producing Y) > Cost(~Y) · P(I was incorrect in modeling the algorithm X(D)), or some similar decision rule.
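A hedged sketch of that decision rule (function and parameter names are mine, purely for illustration):

```python
# Illustrative decision rule: attempt X(D) when the expected payoff of being
# right about the model outweighs the expected cost of being wrong, net of
# the resources the attempt itself consumes. Names are invented.
def should_attempt(payoff_Y, p_model_correct, cost_not_Y, cost_of_attempt):
    expected_gain = payoff_Y * p_model_correct
    expected_loss = cost_not_Y * (1.0 - p_model_correct)
    return expected_gain - expected_loss > cost_of_attempt

print(should_attempt(payoff_Y=10.0, p_model_correct=0.8,
                     cost_not_Y=3.0, cost_of_attempt=1.0))  # -> True
```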
comment by atucker · 2011-06-09T19:23:24.398Z · LW(p) · GW(p)
People who do feel that intuition run into trouble. This is because "I ought to do X' does not refer to anything that exists. How can you make a statement that doesn't refer to anything that exists?
It refers to my preferences which are physically encoded in my brain. It feels like it doesn't refer to anything that exists because I don't have complete introspective access to the mechanisms by which my brain decides that it wants something.
On top of that, ought refers to lots of different things, and as far as I can tell, most ought statements are summaries of specific preferences (and some signals) rather than the even more complicated description of what I'm actually going to choose to do.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-09T20:36:45.253Z · LW(p) · GW(p)
What is a preference? How do you suggest I infer your preferences?
If it is from your actions, then your definition sounds very similar to mine.
If it is from your statements, then your definition is circular.
If it is from your emotions, then how do people express moral beliefs that contradict their emotions?
Is it something else?
Replies from: Manfred, Matt_Simpson↑ comment by Manfred · 2011-06-10T05:14:37.513Z · LW(p) · GW(p)
This is all irrelevant to atucker's comment, unless you're denying that preferences are patterns in your brain. If his definition sounds similar to yours, good, that means you don't believe "ought does not refer to anything that exists."
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T10:55:39.740Z · LW(p) · GW(p)
It's all irrelevant if his answer is "from your actions". Is it obvious that it is? If so, I apologize.
Here are some problems:
The correlates of a phrase are not its meaning. If I say "X will happen", I'm not saying "The current physical world has a pattern that is likely to cause X to happen", I'm just saying "X is going to happen".
If I say "you should do X, but I know you're going to do Y" it doesn't seem like I mean "Parts of your brain want to do X but the rest will overrule them" or "In your situation, I would do X" or "I will punish you for doing Y"
You don't accurately describe internal reasoning. There are many X such that I prefer to do X because of my explicit belief that X is right, not vice versa.
↑ comment by atucker · 2011-06-10T14:40:54.836Z · LW(p) · GW(p)
Intuitively, I feel like I have various competing desires/preferences floating around in my head that I process further in order to decide what to do. They're actually physically encoded, but that's just asserting that they exist for real.
Some salient desires of mine right now are:
The desire to eat (I want my breakfast)
The desire to finish this comment
The desire to scratch my stomach
The desire to look up when graduation rehearsal is
As you can see, many of these are contradictory, so you can't infer all of them from my actions.
Some of these desires are fairly basic, like the one to eat. Neural circuitry controlling hunger has been found.
Others are far more complicated, like the one about finishing this comment. I think that that is probably aggregated from various smaller desires, like "explain myself clearly", "get karma points", or "contribute to Less Wrong".
I think that desires might be packaged by my unconscious mind so that my conscious mind can figure out how to accomplish them, without me needing to think through what I should want before doing anything.
The word preference/desire probably refers to multiple things.
For many of my preferences, you could infer them by changing the world to fulfill them, and then seeing if I'm happier. Normally I will be happier/more satisfied (a physical arrangement of my brain) as a result of my preferences being fulfilled.
Other preferences like "speak in English" or "always be nice to people" seem to be more like imperatives that I should follow for coordination or signalling purposes. But they still feel pretty much the same as my normal preferences.
But it's all still physically encoded in my brain, and does refer to the world-as-is, even if it doesn't feel like it does.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T16:45:07.267Z · LW(p) · GW(p)
Do you contribute to charity? Do you make explicit long-term plans about how you will help the world? (Or other, similar things)
↑ comment by Matt_Simpson · 2011-06-09T22:55:30.238Z · LW(p) · GW(p)
How do you suggest I infer your actions?
What is this sentence asking? Is actions supposed to be preferences?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T01:56:38.566Z · LW(p) · GW(p)
Yes.
comment by lukeprog · 2011-06-26T19:18:38.043Z · LW(p) · GW(p)
Will and I just spoke on the phone, so here's another way to present our discussion:
Imagine a species of artificial agents. These agents have a list of belief statements that relate physical phenomena to normative properties (let's call them 'moral primitives'):
- 'Liking' reward signals in human brains are good.
- Causing physical pain in human infants is forbidden.
- etc.
These agents also have a list of belief statements about physical phenomena in general:
- Sweet tastes on the tongue produces reward signals in human brains.
- Cutting the fingers of infants produces physical pain in infants.
- Things are made of atoms.
- etc.
These agents also have an 'ought' function that includes a series of logical statements that relate normative concepts to each other, such as:
- A thing can't be both permissible and forbidden.
- A thing can't be both obligatory and non-obligatory.
- etc.
Finally, these robots have actuators that are activated by a series of rules like:
- When the agent observes an opportunity to perform an action that is 'obligatory', then it will take that action.
- An agent will avoid any action that is labeled as 'forbidden.'
Some of these rules might include utility functions that encode ordinal or cardinal value for varying combinations of normative properties.
These agents can't see their own source code. The combination of the moral primitives and the ought function and the non-ought belief statements and a set of rules about behavior produces their action and their verbal statements about what ought to be done.
From their behavior and verbal ought statements these robots can infer to some degree how their ought function works, but they can't fully describe their ought function, because they haven't run enough tests or because the ought function is just too complicated; the problem is made worse because they also can't see their moral primitives.
The ought function doesn't reduce to physics because it's a set of purely logical statements. The 'meaning' of ought in this sense is determined by the role that the ought function plays in producing intentional behavior by the robots.
Of course, the robots could speak in ought language in stipulated ways, such that 'ought' means 'that which produces pleasure in human brains' or something like that, and this could be a useful way to communicate efficiently, but it wouldn't capture what the ought function is doing or how it is contributing to the production of behavior by these agents.
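A minimal sketch of such an agent (my own toy encoding of the moral primitives, physical beliefs, ought function, and actuator rules described above; all contents are invented for illustration):

```python
# Toy robot from the thought experiment above. Everything here is invented.

# Moral primitives: physical descriptions mapped to normative labels.
MORAL_PRIMITIVES = {
    "produces 'liking' reward signals in human brains": "good",
    "causes physical pain in a human infant": "forbidden",
}

# Physical beliefs: what each candidate action does, physically.
PHYSICAL_BELIEFS = {
    "offer something sweet": ["produces 'liking' reward signals in human brains"],
    "cut an infant's finger": ["causes physical pain in a human infant"],
}

def ought_function(labels):
    """Purely logical layer: combine normative labels into one verdict,
    respecting constraints like 'nothing is both permissible and forbidden'."""
    if "forbidden" in labels:
        return "forbidden"
    return "permissible"

def actuator(actions):
    """Behavior rules: never take actions labeled 'forbidden'."""
    return [a for a in actions
            if ought_function([MORAL_PRIMITIVES.get(p)
                               for p in PHYSICAL_BELIEFS[a]]) != "forbidden"]

print(actuator(["offer something sweet", "cut an infant's finger"]))
# -> ['offer something sweet']
```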
What Will is saying is that it's convenient to use 'ought' language to refer to this ought function only, and not also to a combination of the ought function and statements about physics, as happens when we stipulatively use 'ought' to talk about 'that which produces well-being in conscious creatures' (for example).
I'm saying that's fine, but it can also be convenient (and intuitive) for people to use 'ought' language in ways that reduce to logical-physical statements, and not only in ways that express a logical function that contains only transformations between normative properties. So we don't have substantive disagreement on this point; we merely have different intuitions about the pragmatic value of particular uses for 'ought' language.
We also drew up a simplified model of the production of human action in which there is a cognitive module that processes the 'ought' function (made of purely logical statements like in the robots' ought function), a cognitive module that processes habits, a cognitive module that processes reflexes, and so on. Each of these produces an output, and another module runs argmax on these action options to determine which action 'wins' and actually occurs.
Of course, the human 'ought' function is probably spread across multiple modules, as is the 'habit' function.
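A toy sketch of that arbitration step (the module outputs and numbers are invented for illustration):

```python
# Toy arbitration: each module proposes an action with a strength, and the
# arbitration module runs argmax over the proposals. Everything is invented.
def ought_module(situation):
    return ("apologize", 0.9)   # output of the (purely logical) ought function

def habit_module(situation):
    return ("snap back", 0.6)   # habitual response

def reflex_module(situation):
    return ("flinch", 0.3)      # fast reflex

def act(situation):
    proposals = [m(situation) for m in (ought_module, habit_module, reflex_module)]
    action, _ = max(proposals, key=lambda p: p[1])   # argmax over the options
    return action

print(act("someone insults me"))  # -> 'apologize'
```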
Will likes to think of the 'meaning' of 'ought' as being captured by the algorithm of this 'ought' function in the human brain. This ought function doesn't contain physical beliefs, but rather processes primitive normative/moral beliefs (from outside the ought function) and outputs particular normative/moral judgments, which contribute to the production of human behavior (including spoken moral judgments). In this sense, 'ought' in Will's sense of the term doesn't reduce to physical facts, but to a logical function.
I'm fine with Will using 'ought' in that sense if he wants. I'll try to be clear how I am using the term when I use it.
Will also thinks that the 'ought' function (in his sense) inside human brains is probably very similar between humans - ones that aren't brain damaged or neurologically deranged. I don't know how probable this is because cognitive neuroscience hasn't progressed that far. But if the 'ought' function is the same in all healthy humans, then there needn't be a separate 'meaning' of ought (in Will's sense) for each speaker, but instead there could be a shared 'meaning' of ought (in Will's sense) that is captured by the algorithms of the 'ought' cognitive module that is shared by healthy human brains.
Will, did I say all of that correctly?
Replies from: Wei_Dai, Will_Sawin, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2011-06-27T19:07:34.236Z · LW(p) · GW(p)
I'm fine with Will using 'ought' in that sense if he wants. I'll try to be clear how I am using the term when I use it.
That doesn't seem right. Compare (note that I don't necessarily endorse the rest of this paper) :
What does the word ‘ought’ mean? Strictly speaking, this is an empirical question, about the meaning of a word in English. Such empirical semantic questions should ideally be answered on the basis of extensive empirical evidence about the use of the word by native speakers of English.
As a philosopher, I am primarily interested, not in empirical questions about the meanings of words, but in the nature of the concepts that those words can be used to express — especially when those concepts are central to certain branches of philosophy, as the concepts expressed by ‘ought’ are central to ethics and to the theory of rational choice and rational belief. Still, it is often easiest to approach the task of giving an account of the nature of certain concepts by studying the meanings of the words that can express those concepts. This is why I shall try here to outline an account of the meaning of ‘ought’.
If you examine just one particular sense of the word "ought", even if you make clear which sense, but without systematically enumerating all of the meanings of the word, how can you know that the concept you end up studying is the one that is actually important, or one that other people are most interested in?
Replies from: lukeprog↑ comment by lukeprog · 2011-06-27T20:31:08.041Z · LW(p) · GW(p)
I suspect there are many senses of a word like 'ought' that are important. As 'pluralistic moral reductionism' states, I'm happy to use and examine multiple important meanings of a word.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2011-06-28T01:24:28.798Z · LW(p) · GW(p)
Let me expand my comment a bit, because it didn't quite capture what I wanted to say.
I'm fine with Will using 'ought' in that sense if he wants.
If Will is anything like a typical human, then by "ought" he often means something other than, or more than, the sense referred to by "that sense", and it doesn't make sense to say that perhaps he wants to use "ought" in that sense.
When you say "I'm fine with ..." are you playing the role of the Austere Metaethicist who says "Tell me what you mean by 'right', and I will tell you what is the right thing to do."? But I think Austere Metaethics is not a tenable metaethical position, because when you ask a person to tell you what they mean by "right", they will almost certainly fail to give you a correct answer, simply because nobody really understands (much less can articulate) what they mean by "right". So what is the point of that?
Or perhaps what you meant to say instead was "I'm fine with Will studying 'ought' in that sense if he wants"? In that case see my grandparent comment (but consider it directed mostly towards Will instead of you).
↑ comment by Will_Sawin · 2011-06-27T14:54:08.581Z · LW(p) · GW(p)
I don't love all your terminology, but obviously my preferred terminology's ability to communicate my ideas on this matter has been shown to be poor.
I would put less emphasis on relationships between similar moral beliefs:
A thing can't be both permissible and forbidden.
and more on the assembly-line process converting general to specific:
This ought function doesn't contain physical beliefs, but rather processes primitive normative/moral beliefs (from outside the ought function) and outputs particular normative/moral judgments, which contribute to the production of human behavior (including spoken moral judgments)
I'm pretty sure the first statement here only makes sense as a consequence of the second:
The ought function doesn't reduce to physics because it's a set of purely logical statements. The 'meaning' of ought in this sense is determined by the role that the ought function plays in producing intentional behavior by the robots.
↑ comment by Vladimir_Nesov · 2011-06-26T21:25:40.775Z · LW(p) · GW(p)
The ought function doesn't reduce to physics because it's a set of purely logical statements. The 'meaning' of ought in this sense is determined by the role that the ought function plays in producing intentional behavior by the robots.
This doesn't make sense to me. Does 28 reduce to physics in this sense? How is this "ought" thing distinguished from all the other factors (moral errors, say) that contribute to behavior (that is, how is its role located)?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-27T14:39:45.874Z · LW(p) · GW(p)
First, I would say that reducibility is a property of statements. In the sense I use it:
The statement "14+14=28" is reducible to aether.
The statement "I have 28 apples" is reducible to phyisics.
The statement "There are 28 fundamental rules that one must obey to lead a just life" is reducible to ethics.
Moral statements are irreducible to physics in the sense that "P is red" is irreducible to physics - for any particular physical "P", it is reducible. The logical properties of P-statements, like "P is red or P is not red" are given as a set of purely logical statements - that's their analogue of the ought-function. If P-statements had some useful role in producing behavior, they would have a corresponding meaning.
Random, probably unnecessary math:
A reducible-class is a subalgebra of the Boolean algebra of statements, closed under logical equivalence. The statements reducible to aether are those in the reducible-class generated by True and False. The statements reducible to physics are those in the reducible-class generated by "The world is in exactly state X". The statements reducible to morality are those in the reducible-class generated by "Exactly set-of-actions Y are forbidden and set-of-actions Z are obligatory".
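Written symbolically (one possible notation, not anything canonical):

```latex
% One possible symbolic restatement of the definitions above.
\begin{align*}
\mathrm{Red}(G) &:= \text{the smallest subalgebra of the Boolean algebra of
  statements that contains } G \text{ and is closed under logical equivalence},\\
\text{aether-reducible} &:= \mathrm{Red}(\{\top, \bot\}),\\
\text{physics-reducible} &:= \mathrm{Red}(\{\text{``the world is in exactly state } X\text{''} : X\}),\\
\text{morality-reducible} &:= \mathrm{Red}(\{\text{``exactly } Y \text{ are forbidden and } Z \text{ are obligatory''} : Y, Z\}).
\end{align*}
```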
comment by lukeprog · 2011-06-10T20:06:16.046Z · LW(p) · GW(p)
How can you make a statement that doesn't refer to anything that exists? I've done it, and my reasoning process is still intact, and nothing has blown up. Everything seems to be fine. No one has explained to me what isn't fine about this. Since it's intuitive, why would you not want to do it that way?
Clearly, you can make statements about things that don't exist. People do it all the time, and I don't object to it. I enjoy works of fiction, too. But if the aim of our dialogue is true claims about reality, then you've got to talk about things that exist - whether the subject matter is 'oughts' or not.
What one is forced to do by this argument, if one wants to speak only in physical statements, is to say that "should" has a really, really long definition that incorporates all components of human value. When a simple word has a really, really long definition, we should worry that something is up.
I don't see why this needs to be the case. I can stipulate short meanings of 'should' as I use the term. People do this all the time (implicitly, at least) when using hypothetical imperatives.
Also, in general I find myself confused by your way of talking about these things. It's not a language I'm familiar with, so I suspect I'm still not fully understanding you. I'm not sure which of our anticipations differ because of the disagreement you're trying to express.
Replies from: Will_Sawin, Peterdjones↑ comment by Will_Sawin · 2011-06-13T11:19:35.035Z · LW(p) · GW(p)
But if the aim of our dialogue is true claims about reality, then you've got to talk about things that exist - whether the subject matter is 'oughts' or not.
But the aim of our dialogue isn't really true claims. It's useful claims - claims that one can incorporate into one's decision-making process. Claims about Darth Vader, you can't, but claims about ought, you can.
I can stipulate short meanings of 'should' as I use the term.
What about that other word (that is also spelled "should") that you don't have to stipulate the meaning of because people already know what it means?
What about the regular kind of imperatives?
If I define "fa" to mean "any object which more than 75% of the claims in this long book of no previous importance accurately describes", I have done something very strange, even if I letter say "If 'fa' means 'red', that's fa."
Replies from: lukeprog↑ comment by lukeprog · 2011-06-13T16:34:37.332Z · LW(p) · GW(p)
But the aim of our dialogue isn't really true claims. It's useful claims - claims that one can incorporate into one's decision-making process.
I don't understand what you mean, here. I'm not sure what you mean by 'true' or 'useful', I guess. I'm talking about true claims in this sense.
What about that other word (that is also spelled "should") that you don't have to stipulate the meaning of because people already know what it means?
Which one is that, and what does everybody already know it to mean?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T17:08:08.614Z · LW(p) · GW(p)
I don't understand what you mean, here. I'm not sure what you mean by 'true' or 'useful', I guess. I'm talking about true claims in this sense.
I mean what you mean by "true", or maybe something very similar.
By "useful" I mean "those claims that could help someone come to a decision about their actions"
Which one is that, and what does everybody already know it to mean?
It's what people say when they say "should" but don't precede it with "if". Some people on lesswrong think it means:
[you should do X] = [X maximizes this complicated function that can be computed from my brain state]
Some think it means:
[you should do X] = [X maximizes whatever complicated function is computed from my brain state]
and I think:
[you should do X] = [the statement that, if believed, would cause one to do X]
Replies from: Vladimir_Nesov, Peterdjones↑ comment by Vladimir_Nesov · 2011-06-13T19:12:40.103Z · LW(p) · GW(p)
and I think:
[you should do X] = [the statement that, if believed, would cause one to do X]
You can find that there is a bug in your brain that causes you to react to a certain belief, but you'd fix it if you notice it's there, since you don't think that belief should cause that action.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T19:41:10.894Z · LW(p) · GW(p)
I could say
[the statement that, if believed by a rational agent, would cause it to do X]
but that's circular.
But one of the points I've been trying to make is that it's okay for the definition of something to be, in some sense, circular. As long as you can describe the code for a rational agent that manipulates that kind of statement.
Replies from: Vladimir_Nesov, Peterdjones↑ comment by Vladimir_Nesov · 2011-06-13T20:53:13.735Z · LW(p) · GW(p)
Some things you can't define exactly, only refer to with some measure of accuracy. Physical facts are like this. Morality is like this. Rational agents don't define morality, they respond to it; they are imperfect detectors of moral facts who would use their moral expertise to improve their own ability to detect moral facts, or to build other tools capable of that. There is nothing circular here, just constant aspiration for referencing the unreachable ideal through changeable means.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-13T22:22:58.227Z · LW(p) · GW(p)
But there aren't causal arrows pointing from morality to rational agents, are there? Just acausal/timeless arrows.
You do have to define "morality" as meaning "that thing that we're trying to refer to with some measure of accuracy", whereas "red" is not defined to refer to the same thing.
If you agree, I think we're on the same page.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-19T23:18:44.434Z · LW(p) · GW(p)
But there aren't causal arrows pointing from morality to rational agents, are there? Just acausal/timeless arrows.
I think the idea of acausal/logical control captures what causality was meant to capture in more detail, and is a proper generalization of it. So I'd say that there are indeed "causal" arrows from morality to decisions of agents, to the extent the idea of "causal" dependence is used correctly and not restricted to the way we define physical laws on a certain level of detail.
You do have to define "morality" as meaning "that thing that we're trying to refer to with some measure of accuracy"
Why would I define it so? It's indeed what we are trying to refer to, but what it is exactly we cannot know.
whereas "red" is not defined to refer to the same thing.
Lost me here. We know enough about morality to say that it's not the same thing as "red", yes.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-20T05:25:17.790Z · LW(p) · GW(p)
I think the idea of acausal/logical control captures what causality was meant to capture in more detail, and is a proper generalization of it. So I'd say that there are indeed "causal" arrows from morality to decisions of agents, to the extent the idea of "causal" dependence is used correctly and not restricted to the way we define physical laws on a certain level of detail.
Sure.
Why would I define it so? It's indeed what we are trying to refer to, but what it is exactly we cannot know.
Let me rephrase a bit.
"That thing, over there (which we're trying to refer to with some measure of accuracy), point point".
I'm defining it extensionally, except for the fact that it doesn't physically exist.
There has to be some kind of definition or else we wouldn't know what we were talking about, even if it's extensional and hard to put into words.
Lost me here. We know enough about morality to say that it's not the same thing as "red", yes.
"red" and "right" have different extensional definitions.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-22T22:24:52.821Z · LW(p) · GW(p)
There has to be some kind of definition or else we wouldn't know what we were talking about
I suspect there is a difference between knowing things and being able to use them, neither generally implying the other.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T01:39:16.714Z · LW(p) · GW(p)
This is true, but my claim that words have to have a (possibly extensional) definition for us to use them, and that "right" has an extensional definition, stands.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-23T10:09:12.631Z · LW(p) · GW(p)
Does "whatever's written in that book" work as the appropriate kind of "extensional definition" for this purpose? If so, I agree, that's what I mean by "using without knowing". (As I understand it, it's not the right way of using the term "extensional definition", since you are not giving examples, you are describing a procedure for interacting with the fact in question.)
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T14:13:51.365Z · LW(p) · GW(p)
It's sort of subtle.
"Whatever's written in the book at the location given by this formula: "
defines a word totally in terms of other words, which I would call intensional.
"Whatever's written in THAT book, point point"
points at the meaning, what I would call extensional.
↑ comment by Peterdjones · 2011-06-23T14:06:08.223Z · LW(p) · GW(p)
All definitions should be circular. "The president is the Head of State" is a correct definition. "The president is Obama" is true, but not a definition.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T14:10:05.902Z · LW(p) · GW(p)
Non-circular definitions can certainly be perfectly fine:
"A bachelor is an unmarried man.'
This style is used in math to define new concepts to simplify communication and thought.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-23T14:34:20.373Z · LW(p) · GW(p)
"A bachelor is an unmarried man.'
If that is non circular, so is [the statement that, if believed by a rational agent, would cause it to do X]
I'm quite confused. By circular do you mean analytical, or recursive? (An example of the latter: a set is something that can contain elements or other sets.)
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T18:27:32.958Z · LW(p) · GW(p)
I'm not sure what I mean.
The definition I am using is in the following category:
It may appear problematically self-referential, but it is in fact self-referential in a non-problematic manner.
Agreed?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-23T18:41:39.649Z · LW(p) · GW(p)
I don't think your statement was self-referential or problematic.
↑ comment by Peterdjones · 2011-06-23T13:37:48.401Z · LW(p) · GW(p)
or rather [you should do X] = [the statement that, if believed, would cause one to do X if one were an ideal and completely non-akrasic agent]
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T14:10:52.758Z · LW(p) · GW(p)
Correct.
↑ comment by Peterdjones · 2011-06-23T13:49:33.961Z · LW(p) · GW(p)
But if the aim of our dialogue is true claims about reality, then you've got to talk about things that exist - whether the subject matter is 'oughts' or not.
Which would mean either that mathematical knowledge is false, or that there is a Platonic word of mathematical objects for it to correspond to.
OTOH, one could just adopt the dogma of empiricism that there is analytical truth which is neither 'about' physical reality nor 'about' any metaphysical one (and that mathematical truth is analytical).
And if it is an analytical truth that, for instance, you should do as you would be done by, then that is still applicable to real world situations by filling in "as you would be done by" for your own case.
comment by [deleted] · 2011-06-10T03:53:53.339Z · LW(p) · GW(p)
I made a more topical comment to Wei_Dai's reply to this thread, but I felt it was worth adding that if anyone is interested in a work of fiction that touches on this subject, the novel The Broom of the System, by David Foster Wallace, is worth a look.
comment by Wei Dai (Wei_Dai) · 2011-06-09T21:22:45.620Z · LW(p) · GW(p)
Can you explain what implications (if any) this "naive" metaethics has for the problem of how to build an FAI?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-09T21:29:07.185Z · LW(p) · GW(p)
Arguably, none. (If you already believe in CEV.)
Replies from: hairyfigment↑ comment by hairyfigment · 2011-06-09T21:58:22.162Z · LW(p) · GW(p)
Well, you say below you don't believe that (in my words)
a FOOM'd self-modifying AI that cares about humanity's CEV would likely do what you consider 'right'.
Specifically, you say
The AI would not do so, because it would not be programmed with correct beliefs about morality, in a way that evidence and logic could not fix.
You also say, in a different comment, you nevertheless believe this process
would produce an AI that gives very good answers.
Do you think humans can do better when it comes to AI? Do you think we can do better in philosophy? If you answer yes to the latter, would this involve stating clearly how we physical humans define 'ought'?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T01:58:01.701Z · LW(p) · GW(p)
Did I misread you? I meant to say:
a FOOM'd self-modifying AI would not likely do what I consider 'right'.
a FOOM'd self-modifying AI that cares about humanity's CEV would likely do what I consider 'right'.
I probably misread you.
comment by lukeprog · 2011-06-21T19:13:37.780Z · LW(p) · GW(p)
Let me have another go at this, since I've now rewritten the is-ought section of 'Pluralistic Moral Reductionism' (PMR).
This time around, I was more clear that of course it's true that, as you say:
we can make statements about what should be done and what should not be done that cannot be reduced, by definition, to statements about the physical world
We can make reducible 'ought' statements. We can make irreducible 'ought' statements. We can exhibit non-cognitive (non-asserting) verbal behaviors employing the sound 'ought'.
For the purposes of the PMR post, I'm interested to investigate frameworks that allow us to determine whether a certain subset of 'ought' statements are true or false.
Thus, in the context of PMR, non-asserting verbal behaviors employing 'ought' sounds are simply a different subject matter. They do not belong to the subset of 'ought' statements I am investigating for that particular post. (I discussed this in the cognitivism vs. non-cognitivism section.) At the same time, such statements can clearly be useful: you can affect others' behavior and attitudes with non-asserting verbal behaviors, such as when you say "female circumcision" with a gasp and a frown.
Also in the context of PMR, some 'ought' statements can quickly be tossed in the 'false' bin, because the speaker uses 'ought' language assertively to refer to things that clearly don't exist, like divine commands.
In the context of PMR, other 'ought' statements can be tossed into the 'false' bin if you are a physicalist, because the speaker uses 'ought' language assertively to refer to things that don't fit within a physicalist ontology, like non-natural moral properties. If you're not a physicalist, then our debate about such 'ought' statements can shift to a debate about physicalism vs. non-physicalism.
Will, you seem to be saying that 'ought' has only one meaning, or one definition. You also seem to be saying that this one meaning of 'ought' (or 'should') is captured by what we 'actually want' in the CEV sense. Is that right so far?
If so, I'm still not clear on your arguments for this conclusion. Your writing here has a very high ratio of unstated premises to stated premises. My own writing does that all the time for communication efficiency, in the hopes that my unstated premises are shared or else successfully inferred. But many times I find that the unstated premises didn't make it into the other's mind, and thus my enthymeme is unclear to it. And when I care enough about my argument being clear to certain people, I take the time to draw my unstated premises into the light and state them explicitly.
Since I'm having trouble guessing at the unstated premises in your arguments for singularism about the meaning of 'should' or 'ought', I'll request that you state them explicitly. Would you please state your most central argument for this singularism, without unstated premises?
Replies from: Vladimir_Nesov, Will_Sawin↑ comment by Vladimir_Nesov · 2011-06-21T22:56:48.892Z · LW(p) · GW(p)
Will, you seem to be saying that 'ought' has only one meaning, or one definition. ... If so, I'm still not clear on your arguments for this conclusion.
What are your alternatives (at this level of detail)? If I could be using two different definitions, ought1 and ought2, then I expect there are distinguishing arguments that form a decision problem about which of the two I should've been using, which in turn determines which of these definitions is the one.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-22T00:07:00.691Z · LW(p) · GW(p)
Well there are cases when I should be using two different words.
For instance, if morality is only one component of the correct decision procedure, then MoralOught and CorrectOught are two different things.
But you're not talking about those types of cases, right?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-22T22:31:42.620Z · LW(p) · GW(p)
But you're not talking about those types of cases, right?
Don't understand what you said. Probably not.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-23T01:44:58.040Z · LW(p) · GW(p)
Well, suppose that sometimes, depending on context cues, I use "ought" to mean "paperclip-maximizing", "prime-pile-maximizing", and "actually-ought".
There's nothing wrong about the first two definitions, they're totally reasonable definitions a word might have, they just shouldn't be confused with the third definition, which specifies correct actions.
↑ comment by Will_Sawin · 2011-06-21T22:43:43.679Z · LW(p) · GW(p)
Well, I am saying that there is a meaning of "ought" that is hugely different in meaning from the other senses.
PMR identifies a sort of cluster of different meanings of the word "ought". I am saying, hey, over here, there's this one, singular meaning.
This meaning is special because it has a sense but no referent. It doesn't refer to any property of the physical world or, obviously, to any property of any non-physical world. It just means.
[Not CEV, will explain later with time.]
Replies from: lukeprog↑ comment by lukeprog · 2011-06-21T22:56:32.122Z · LW(p) · GW(p)
Not CEV, will explain later with time.
Okay. I look forward to it.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-22T00:26:21.760Z · LW(p) · GW(p)
So in this perspective what I "want" is really a red herring. I want to do lots of things that I oughtn't do.
What matters is my beliefs about what is right and wrong.
Now, by necessity, I believe that my EV is the best possible approximation of what is right. Because, if I knew of a better approximation, I would incorporate it into my beliefs, and if I didn't know of it, my volition must not have been extrapolated far enough.
But this is not a definition of what is right. To do so would be circular.
If I believe that my EV is very close to humanity's CEV, then I believe that humanity's CEV is almost the best approximation as to what is right. I do, so I do.
So, to start reasoning, I need assumptions. My assumptions would look like:
{these moral intuitions} are fundamentally accurate
or
All my moral intuitions are fundamentally accurate
or something else, just as the assumptions I use to generate physical beliefs would consist of my intuitions about the proper techniques for induction (Bayesianism, Occam's Razor, and so on.)
There doesn't have to be any Book O' Right sitting around for me to engage in this reasoning, I can just, you know, do it.
(It is very ironic that I first developed this edifice because I was bothered by unstated moral assumptions.)
Replies from: lukeprog↑ comment by lukeprog · 2011-06-22T00:33:43.724Z · LW(p) · GW(p)
I'm confused by your way of presenting your arguments and conclusion. On my end this comment looks like a list of unconnected thoughts, with no segues between them. Does somebody else think they know what Will is saying, such that they can explain it to me?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-22T00:55:33.965Z · LW(p) · GW(p)
I drew some boundaries between largely-though-not-totally unconnected thoughts.
Does everything within those boundaries look connected to you?
I think Vladimir Nesov agrees with me on this.
Replies from: lukeprog↑ comment by lukeprog · 2011-06-22T07:33:08.907Z · LW(p) · GW(p)
Thanks, but it's still not clear to me. Nesov, do you want to take a shot and arguing for Will's position, especially if you agree with it?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-22T22:41:14.274Z · LW(p) · GW(p)
No, I have very little idea about what Will is talking about, and strongly suspect that he doesn't either (I only have a vague idea of what I'm talking about as well; the recent discussion uses relatively recent ideas). His intuitions seem to be pointing roughly in a direction I believe is much more aligned with reality than your pluralistic moral "everyone call a rigid designator" reductionism, though (I'm waiting for that empathic metaethics post for a possible correction in my understanding of your position), so I can understand why there would be grounds for an argument.
comment by Vladimir_Nesov · 2011-06-09T21:02:24.422Z · LW(p) · GW(p)
Suppose an AI wants to find out what Bob means when he says "water".
This thought experiment can be sharpened by asking what Bob means by "1027". By using Bob as an intermediary, we inevitably lose some precision.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-09T21:30:18.523Z · LW(p) · GW(p)
Morals seem less abstract than numbers, but more abstract than substances. Is that the dimension you are trying to vary?
What does "Bob as an intermediary" mean here?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-09T21:57:49.840Z · LW(p) · GW(p)
We know what 1027 is very well, better than what water is, which simplifies this particular aspect of the thought experiment. We can try constructing mirror-like definitions in terms of what Bob believes "1027" is, what he should believe it is, what he would believe it is on reflection, and so on. These can serve as models of various "extrapolated volition" constructions. By examining these definitions, we can see their limitations, problems with achieving high reliability at capturing the concept of 1027.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T02:03:30.345Z · LW(p) · GW(p)
Alright. So 1027 is defined by its place in the axioms of arithmetic. In any system modeled by the axioms of arithmetic, 1027 has a local meaning. The global meaning of 1027, then, is given by those axioms.
Bob imperfectly implements the axioms of arithmetic. If he's a mathematician and you asked him what they were, he would very likely get them right. A non-mathematician, exposed to various arguments that arithmetic was different things, would eventually figure out the correct axioms. So Extrapolated 1027 would work.
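To make "defined by its place in the axioms" a bit more concrete, here is a minimal toy sketch of a successor-style encoding; the encoding and function names are invented for illustration and are not anything proposed in the thread.

```python
# Toy Peano-style encoding: the numeral 1027 is pinned down entirely by a
# "zero" object and a "successor" operation, with no reference to any
# physical collection of 1027 items.

def zero():
    return ()

def succ(n):
    return (n,)

def numeral(k):
    """Build the successor-style representation of k by applying succ k times."""
    n = zero()
    for _ in range(k):
        n = succ(n)
    return n

def value(n):
    """Interpret a successor-style representation back into an ordinary int."""
    count = 0
    while n != ():
        n = n[0]
        count += 1
    return count

# Any system that implements zero and succ this way agrees on what 1027 is.
assert value(numeral(1027)) == 1027
```

The point of the sketch is only that the structure, not any particular physical token, fixes what "1027" refers to.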
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-11T18:58:13.136Z · LW(p) · GW(p)
The global meaning of 1027, then, is given by those axioms.
What counts as an axiom? You could just as well burn them instead of appraising their correctness. There are many ways of representing knowledge of an abstract fact, but those representations won't themselves embody the fact; there is always an additional step where you have an interpretation in mind, so that the representation only matters as a reference to the fact through your interpretation, or as a reflection of that fact in a different form.
It might be useful to have a concrete representation, as it can be used as an element of a plan and acted upon, while an abstract fact isn't readily available for that. For example, if your calculator (or brain) declares that "12*12<150" is true, its decision can be turned into action. 1027 items could be lined up in a field, so that you can visually (or by running from one side to the other) appreciate the amount. Alternatively, a representation of a reasoning process can be checked for errors, yielding a more reliable conclusion. But you never reach the fact itself, with the rare exception of physical facts that are interesting in themselves and not as tools for inferring or representing some other facts, physical or not (then a moment passes, and you can only hold to a memory).
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-11T22:35:51.979Z · LW(p) · GW(p)
I don't understand what point you're making here.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-06-11T23:08:42.231Z · LW(p) · GW(p)
You can't get 1027 itself out of an extrapolated volition procedure, or any other procedure. All you can get (or have in your brain) is a representation, that is only meaningful to the extent you expect it to be related to the answer.
Similarly, if you want to get information about morality, all you can get is an answer that would need to be further interpreted. As a special exception (one that is particularly relevant for morality and FAI), you can get the actual right actions done, so that no further interpretation is necessary, but you still won't produce the idea of morality itself.
comment by Zetetic · 2011-06-10T06:10:26.994Z · LW(p) · GW(p)
People who do feel that intuition run into trouble. This is because "I ought to do X' does not refer to anything that exists. How can you make a statement that doesn't refer to anything that exists? I've done it, and my reasoning process is still intact, and nothing has blown up. Everything seems to be fine. No one has explained to me what isn't fine about this.
Ok, I'll bite. Why does "I ought to X" have to refer to any thing?
When I see atucker's comment, for instance:
It refers to my preferences which are physically encoded in my brain. It feels like it doesn't refer to anything that exists because I don't have complete introspective access to the mechanisms by which my brain decides that it wants something.
I think "this isn't quite right, but it's kind of close"
So what, then, is "ought"? In reality it seems to derive its meaning from a finite set of functional signals. We could take it to mean:
A signal demonstrating group cohesion via expressing shared cherished beliefs "What's that? You say you aren't doing X? You ought to do X!" (murmured approval; yeah, we all do X around here, buddy!)
A signal demonstrating anxiety or indignation "Someone ought to do something to fix this!"
A signal demonstrating disapproval in the present "You oughtn't do that!", or referring to future events "You ought to do that differently from now on" or referring to the past "You ought to have done something else"
etc.
In this light we see that "ought" has no cleanly discernible referent (which makes sense because it isn't a noun), but rather derives meaning via its socially accepted usage in a set of signaling mechanisms.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T16:50:35.319Z · LW(p) · GW(p)
How deriving meaning from signaling works is sort of unclear. The strategic value of a signal depends only on what facts cause it, which is not the same as what it means. If the causal graph is
X => Y => Z => I say "fruit loop"
then "fruit loop" could mean Z, or Y, or X, or none of the above.
So I think your meaning is compatible with mine?
Replies from: Zetetic↑ comment by Zetetic · 2011-06-10T22:14:21.576Z · LW(p) · GW(p)
So I think your meaning is compatible with mine?
If what is stated above is your meaning then I think yes. However, if that is the case then this:
Not every set of claims is reducible to every other set of claims. There is nothing special about the set "claims about the state of the world, including one's place in it and ability to affect it." If you add, however, ought-claims, then you will get a very special set - the set of all information you need to make correct decisions.
Doesn't make as much sense to me. Maybe you could clarify it for me?
In particular, it is unclear to me why ought-claims in general, as opposed to some strict subset of ought-claims like "Action X affords me maximum expected utility relative to my utility function" <=> "I ought to do X", are relevant to making decisions. If that is the case, why not dispense with "ought" altogether? Or is that what you're actually aiming at?
Maybe because the information they signal is useful? But then there are other utterances that fall into this category too, some of which are not, strictly speaking, words. So, on that reading, the set would be incomplete. So I assume that probably isn't what you mean either.
Also judging by this:
In this essay I talk about what I believe about rather than what I care about. What I care about seems like an entirely emotional question to me. I cannot Shut Up And Multiply about what I care about. If I do, in fact, Shut Up and Multiply, then it is because I believe that doing so is right. Suppose I believe that my future emotions will follow multiplication. I would have to, then, believe that I am going to self-modify into someone who multiplies. I would only do this because of a belief that doing so is right.
Would it be safe to say that your stance is essentially an emotivist one? Or is there a distinction I am missing here?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-11T02:12:58.513Z · LW(p) · GW(p)
In particular, it is unclear to me why ought-claims in general, as opposed to some strict subset of ought-claims like "Action X affords me maximum expected utility relative to my utility function" <=> "I ought to do X", are relevant to making decisions. If that is the case, why not dispense with "ought" altogether? Or is that what you're actually aiming at?
Well, I guess strictly speaking not all "ought" claims are relevant to decision-making. So then I guess the argument that they form a natural category is more subtle.
I mean, technically, you don't have to describe all aspects of the correct utility function, but the boundary around "the correct utility function" is simpler than the boundary around "the relevant parts of the correct utility function".
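For concreteness, the decision-relevant subset Zetetic mentions can be sketched as plain expected-utility maximization; the actions, probabilities, and utilities below are invented placeholders rather than anyone's actual utility function.

```python
# Minimal decision-relevant reading of "ought": the action with maximum
# expected utility. All numbers here are made-up illustrations.
actions = {
    "donate":     [(0.9, 10), (0.1, -1)],   # (probability, utility) pairs per outcome
    "do_nothing": [(1.0, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

ought = max(actions, key=lambda a: expected_utility(actions[a]))
print(ought)  # the action this toy utility function says one "ought" to do
```

On this reading, only the parts of the utility function that distinguish between available actions matter for the decision, which is why the "relevant parts" form a gerrymandered-looking subset of the whole function.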
Would it be safe to say that your stance is essentially an emotivist one? Or is there a distinction I am missing here?
No. I think it's propositional, not emotional. I'm arguing against an emotivist stance on the grounds that it doesn't justify certain kinds of moral reasoning.
comment by Manfred · 2011-06-10T05:46:22.196Z · LW(p) · GW(p)
"A Flatland Argument" includes a multitude of problems. It contains a few strawmen, which then turn into non sequiturs when you say that they demonstrate that we often say irreducible things. At least that's what I think you mean by the trivial statement "Not every set of claims is reducible to every other set of claims."
And yet your examples are answerable or full of holes! Why not mention the holes rather than blaming the reductionism (or, in the case of some strawmen, total lack thereof), or mention the reduction and forget the whole thing?
"I'm not interested in words, I'm interested in things. Words are just sequences of sounds or images. There's no way a sequence of arbitrary symbols could imply another sequence, or inform a decision."
"I understand how logical definitions work. I can see how, from a small set of axioms, you can derive a large number of interesting facts. But I'm not interested in words without definitions. What does "That thing, over there?" mean? Taboo finger-pointing."
"You can make statements about observations, that much is obvious. You can even talk about patterns in observations, like "the sun rises in the morning". But I don't understand your claim that there's no chocolate cake at the center of the sun. Is it about something you can see? If not, I'm not interested."
"Claims about the past make perfect sense, but I don't understand what you mean when you say something is going to happen. Sure, I see that chair, and I remember seeing the chair in the past, but what do you mean that the chair will still be there tomorrow? Taboo "will"."
Some answers, in order:
"Imply" is a fact about cognitive processes, which occupy a very general space that includes all sorts of weird algorithms, even associating sounds or symbols, therefore the blanket statement "there's no way" is false. Given that cognitive processes and words are also things, you have also claimed that you are interested in them.
This second example is made of only vaguely related sentences. Did you mean "that thing, over there" to not have a definition? Would tabooing finger pointing really be so hard? Just refer to a cardinal or relative direction.
Reductionists are not, I hope, automatically ignorant of induction. "No cake at the center of the sun" is about something you can see, since the evidence is visible. And though seeing the sun is good evidence that the sun exists, that can never be perfect either - it's all evidence, whether it's the outer layer of the sun or the middle of it.
"Will" refers to the fact that you have evidence of a 3+1-dimensional universe at least in the past and have no reason to believe that you're at a special moment, and so think that the time-dimension extends ahead of you. To say that something "will" be there tomorrow refers to a prediction about the state of the world at a time-coordinate greater than the most recent one that you know of.
↑ comment by Will_Sawin · 2011-06-10T16:57:05.557Z · LW(p) · GW(p)
It contains a few strawmen, which then turn into non sequiturs when you say that they demonstrate that we often say irreducible things. At least that's what I think you mean by the trivial statement "Not every set of claims is reducible to every other set of claims."
What I'm trying to do is to make people question whether "All meaningful statements are reducible, by definition, to facts about the world". I do this by proposing some categories which all meaningful statements are certainly NOT reducible by definition to. The argument is by analogy, sort of an outside view thing. I ask: Why stop here? Why not stop there?
"Imply" is a fact about cognitive processes, which occupy a very general space that includes all sorts of weird algorithms, even associating sounds or symbols, therefore the blanket statement "there's no way" is false. Given that cognitive processes and words are also things, you have also claimed that you are interested in them.
My attempt was to explain the Tortoise's position in What The Tortoise Said To Achilles. If you think I did not do so properly, I apologize. If you think that position is stupid, you're right; if you think it's incoherent, I'm pretty sure you're wrong.
This second example is made of only vaguely related sentences. Did you mean "that thing, over there" to not have a definition? Would tabooing finger pointing really be so hard? Just refer to a cardinal or relative direction.
The central theme is tabooing all facts about the world. How do you define what such a fact means under such a taboo?
Reductionists are not, I hope, automatically ignorant of induction. "No cake at the center of the sun" is about something you can see, since the evidence is visible. And though seeing the sun is good evidence that the sun exists, that can never be perfect either - it's all evidence, whether it's the outer layer of the sun or the middle of it.
The evidence is something you can see. The thing is not. If there were a cake, the evidence would be no different. The person I am quoting would see "the sun exists" as a statement of a pattern in perceptions, "I see this-kind-of-image in this-kind-of-situation and I call this pattern 'the sun'".
To say that something "will" be there tomorrow refers to a prediction about the state of the world at a time-coordinate greater than the most recent one that you know of.
So it's a prediction. What's a prediction?
Do you see the game I'm playing here? I hope you do. It is a silly game, but it's logically consistent, and that's my point.
Replies from: Manfred↑ comment by Manfred · 2011-06-10T19:05:31.984Z · LW(p) · GW(p)
Hm, no, I don't see it yet. Help me with this:
What I'm trying to do is to make people question whether "All meaningful statements are reducible, by definition, to facts about the world". I do this by proposing some categories which all meaningful statements are certainly NOT reducible by definition to.
For starters, what do these categories you mention contain? I didn't notice them in the Flatland section - I guess I only saw the statements, and not the argument.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T20:13:37.328Z · LW(p) · GW(p)
A. Nothing
B. Definitions & Logic
C. Also observations, not unobserved or unobservable differences
D. Just the past and present, not the future
which I compare to:
E. Just the physical world, not morality
Replies from: Manfred↑ comment by Manfred · 2011-06-10T20:25:37.876Z · LW(p) · GW(p)
Doesn't seem very compelling, frankly.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-10T20:45:58.184Z · LW(p) · GW(p)
Oh well. What about my other arguments? Also not compelling?
Replies from: Manfred↑ comment by Manfred · 2011-06-10T22:15:57.180Z · LW(p) · GW(p)
Less confusing, at least :P
Beating up lukeprog's "is and is not" doctrine is pretty easy but not very representative, I think.
The water argument seems to be more about CEV than reductionism of ethics, and is more convincing, but I think you hit a bit of a pothole when you contrast disagreeing about definitions with "disagree[ing] about what's important" at the end. After all, they're disagreeing about what's "important," since importance is something they assign to things and not an inherent property of the things. Maybe it would help to not call it "the definition of 'should,'" but instead call it "the titanic moral algorithm." I can see it now:
When people disagree about morals, it's not that they disagree about the definition of "should" - after all, that's deprecated terminology. No, they disagree about the titanic moral algorithm.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-11T02:08:35.785Z · LW(p) · GW(p)
Right. But they DON'T disagree about the definition of the titanic moral algorithm. They disagree about its nature.
Replies from: Manfred
comment by hairyfigment · 2011-06-09T19:42:32.417Z · LW(p) · GW(p)
It seems clear to me that I can multiply about what I care about, so I don't know quite what you want to say.
Since it's intuitive, why would you not want to do it that way?
What seems wrong with the obvious answer?
Do you think a FOOM'd self-modifying AI that cares about humanity's CEV would likely do what you consider 'right'? Why or why not? (If you object to the question, please address that issue separately.)
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-09T20:24:07.571Z · LW(p) · GW(p)
It seems clear to me that I can multiply about what I care about, so I don't know quite what you want to say.
Well, do you care about 20 deaths twice as much as you care about 10 deaths?
Do you think that you should care about 20 deaths twice as much as you care about 10 deaths?
Do you think a FOOM'd self-modifying AI that cares about humanity's CEV would likely do what you consider 'right'? Why or why not? (If you object to the question, please address that issue separately.)
The AI would not do so, because it would not be programmed with correct beliefs about morality, in a way that evidence and logic could not fix.
EDIT: This is incorrect. Somehow, I forgot to read the part about "cares about humanity's CEV". It would in fact do what I consider right, because it would be programmed with moral beliefs very similar to mine.
In the same way, an AI programmed to do anti-induction instead of induction would not form correct beliefs about the world.
Pebblesorters are programmed to have an incorrect belief about morality. Their AI would have different, incorrect beliefs. (Unless they programmed it to have the same beliefs.)
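To make the induction/anti-induction analogy concrete, here is a toy contrast between an updater that treats repeated observations as confirming evidence and one that treats them as disconfirming; the likelihood ratios are arbitrary illustrative numbers, not a claim about any particular AI design.

```python
# Toy Bayesian updater vs. "anti-inductive" updater: after ten observed
# sunrises, the inductor's credence that the sun rises again goes up,
# the anti-inductor's goes down. Numbers are purely illustrative.
def update(prior, likelihood_ratio):
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

inductor, anti_inductor = 0.5, 0.5
for _ in range(10):                               # ten observed sunrises
    inductor = update(inductor, 2.0)              # treats each sunrise as evidence for
    anti_inductor = update(anti_inductor, 0.5)    # treats each sunrise as evidence against

print(round(inductor, 3), round(anti_inductor, 3))  # ~0.999 vs ~0.001
```

The analogy being drawn is that a wrong inductive rule, like a wrong moral prior, is not something more evidence or logic can repair from the inside.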
Replies from: hairyfigment↑ comment by hairyfigment · 2011-06-09T20:37:14.233Z · LW(p) · GW(p)
You edited this comment and added parentheses in the wrong place.
Do you think that you should care about 20 deaths twice as much as you care about 10 deaths?
More or less, yes, because I care about not killing 'unthinkable' numbers of people due to a failure of imagination.
The AI would not do so, because it would not be programmed with correct beliefs about morality, in a way that evidence and logic could not fix.
(Unless they programmed it to have the same beliefs.)
Can you say more about this? I agree with what follows about anti-induction, but I don't see the analogy. A human-CEV AI would extrapolate the desires of humans as (it believes) they existed right before it got the ability to alter their brains, afaict, and use this to predict what they'd tell it to do if they thought faster, better, stronger, etc.
ETA: okay, the parenthetical comment actually went at the end. I deny that the AI the pebblesorters started to write would have beliefs about morality at all. Tabooing this term: the AI would have actions, if it works at all. It would have rules governing its actions. It could print out those rules and explain how they govern its self-modification, if for some odd reason its programming tells it to explain truthfully. It would not use any of the tabooed terms to do so, unless using them serves its mechanical purpose. Possibly it would talk about a utility function. It could probably express the matter simply by saying, 'As a matter of physical necessity determined by my programming, I do what maximizes my intelligence (according to my best method for understanding reality). This includes killing you and using the parts to build more computing power for me.'
'The' human situation differs from this in ways that deserve another comment.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-06-09T21:27:03.654Z · LW(p) · GW(p)
More or less, yes, because I care about not killing 'unthinkable' numbers of people due to a failure of imagination.
That's the answer I wanted, but you forgot to answer my other question.
A human-CEV AI would extrapolate the desires of humans as (it believes) they existed right before it got the ability to alter their brains, afaict, and use this to predict what they'd tell it to do if they thought faster, better, stronger, etc.
I would see a human-CEV AI as programmed with the belief "The human CEV is correct". Since I believe that the human CEV is very close to correct, I believe that this would produce an AI that gives very good answers.
A Pebblesorter-CEV AI would be programmed with the belief "The pebblesorter CEV is correct", which I believe is false but pebblesorters believe is true or close to true.
Replies from: None↑ comment by [deleted] · 2011-06-14T00:05:13.886Z · LW(p) · GW(p)
Since I believe that the human CEV is very close to correct, I believe that this would produce an AI that gives very good answers.
This presumes that the problem of specifying a CEV is well-posed. I haven't seen any arguments around SI or LW about this very fundamental idea. I'm probably wrong and this has been addressed, and I will be happy to read more, but it seems to me quite reasonable to assume that a tiny error in specifying the CEV could lead to disastrously horrible results as perceived by the CEV itself.