Normativity and Meta-Philosophy
post by Wei Dai (Wei_Dai) · 2013-04-23T20:35:16.319Z · LW · GW · Legacy · 56 comments
I find Eliezer's explanation of what "should" means to be unsatisfactory, and here's an attempt to do better. Consider the following usages of the word:
- You should stop building piles of X pebbles because X = Y*Z.
- We should kill that police informer and dump his body in the river.
- You should one-box in Newcomb's problem.
All of these seem to be sensible sentences, depending on the speaker and intended audience. #1, for example, seems a reasonable translation of what a pebblesorter would say after discovering that X = Y*Z. Some might argue for "pebblesorter::should" instead of plain "should", but it's hard to deny that we need "should" in some form to fill the blank there for a translation, and I think few people besides Eliezer would object to plain "should".
Normativity, or the idea that there's something in common about how "should" and similar words are used in different contexts, is an active area in academic philosophy. I won't try to survey the current theories, but my current thinking is that "should" usually means "better according to some shared, motivating standard or procedure of evaluation", but occasionally it can also be used to instill such a standard or procedure of evaluation in someone (such as a child) who is open to being instilled by the speaker/writer.
It seems to me that different agents (including different humans) can have different motivating standards and procedures of evaluation, and apparent disagreements about "should" sentences can arise from having different standards/procedures, or from disagreement about whether something is better according to a shared standard/procedure. In most areas my personal procedure of evaluation is something that might be called "doing philosophy", but many people apparently do not share this. For example, a religious extremist may have been taught by their parents, teachers, or peers to follow some rigid moral code given in their holy books, and not be open to any philosophical arguments that I can offer.
Of course this isn't a fully satisfactory theory of normativity since I don't know what "philosophy" really is (and I'm not even sure it really is a thing). But it does help explain how "should" in morality might relate to "should" in other areas such as decision theory, does not require assuming that all humans ultimately share the same morality, and avoids the need for linguistic contortions such as "pebblesorter::should".
Comments sorted by top scores.
comment by cousin_it · 2013-04-24T01:03:03.278Z · LW(p) · GW(p)
If we try to translate sentences involving "should" into descriptive sentences about the world, they will probably sound like "action A increases the value of utility function U". If I was a consistent utility maximizer, and U was my utility function, then believing such a statement would make me take action A. No further verbal convincing would be necessary.
Since we are not consistent utility maximizers, we run an approximate implementation of that mechanism which is vulnerable to verbal manipulation, often by sentences involving "should". So the murkiness in the meaning of "should" is proportional to the difference between us and utility maximizers. Does that make sense?
(It may or may not be productive to describe a person as a utility maximizer plus error. But I'm going with that because we have no better theory yet.)
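The mechanism cousin_it describes can be sketched in a few lines of Python. This is a toy model only; the action set and utility numbers are invented for illustration and are not from the thread:

```python
# Toy model of a consistent utility maximizer: once it believes
# "action A maximizes utility function U", it simply takes A.
# No further verbal convincing is needed.

def choose_action(actions, utility):
    """Pick the action with the highest utility."""
    return max(actions, key=utility)

# Hypothetical example: actions mapped to (made-up) utility values.
actions = {"hold": 0, "invest": 10, "gamble": -5}
best = choose_action(actions, lambda a: actions[a])
print(best)  # "invest"
```

The "murkiness" in the comment then corresponds to everything a real human does that this loop does not capture.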
↑ comment by TimS · 2013-04-24T01:11:16.330Z · LW(p) · GW(p)
I agree that all "ought" statements can be easily translated into "is" statements about maximizing some utility function. But in practice, should-statements often are disguised exhortations to adopt a particular utility function.
I think the question raised by the OP is something like: why should we take the exhortation seriously if we have not already adopted the particular utility function?
↑ comment by cousin_it · 2013-04-24T01:36:50.105Z · LW(p) · GW(p)
One could try to create a model of agents that respond to such exhortations. Maybe such agents could be uncertain about their own utility function, like in Dewey's value learning paper.
↑ comment by bogus · 2013-04-24T13:55:19.758Z · LW(p) · GW(p)
If agents have different utility functions / conative ambitions / "shoulds", they will presumably need to engage in some kind of negotiation in order to compromise on their values and reach an efficient outcome. Presumably, ethical disputes can function as a way of reaching such outcomes - some accounts of ethics are quite clear in describing ethical reasoning as being very much about such a balancing of "right versus right". Even Kantian ethics can be seen in such terms, although what we would call "rights" Kant would perhaps refer to as "principles of practical reason".
↑ comment by Wei Dai (Wei_Dai) · 2013-04-24T06:41:12.060Z · LW(p) · GW(p)
If we try to translate sentences involving "should" into descriptive sentences about the world, they will probably sound like "action A increases the value of utility function U".
As you know, there is no commonly agreed upon way of stating "action A increases the value of utility function U" as math (otherwise decision theory would be solved). Given that, what does it mean when I say "I think we should express 'action A increases the value of utility function U' in math as X", which seems like a sensible statement? I don't see how the "should" in this sentence can be translated into something that sounds like "action A increases the value of utility function U" without making the sentence mean something obviously different.
↑ comment by Pentashagon · 2013-04-24T19:44:20.113Z · LW(p) · GW(p)
Given that, what does it mean when I say "I think we should express 'action A increases the value of utility function U' in math as X", which seems like a sensible statement?
I think it makes sense as a statement about decision theories. How would the choice of a mathematical expression for 'action A increases the value of utility function U' affect actual utility? Only by affecting which actions are chosen; in other words, by selecting a particular (class of) decision theory which maximizes utility due in part to its expression of what "should" means mathematically.
↑ comment by [deleted] · 2013-04-24T01:15:31.566Z · LW(p) · GW(p)
"action A increases the value of utility function U"
Your comment implies that sentences making reference to utility functions are sentences about the world. Do you mean to say that "action A increases the value of utility function U" involves no normative content, that it is purely a description of a state of affairs?
↑ comment by CoffeeStain · 2013-04-24T05:22:03.417Z · LW(p) · GW(p)
Is there a theory of normativity that claims that normative content does not reduce to states of affairs?
EDIT: Well of course there would be, under section 3 of the OP's link. Unfortunately, I could use help dissecting language such as:
Moore, whose metaethical views are taken as the archetype of a non-naturalist position, leaves us with two independent legacies. One is the non-reductionist metaphysical doctrine that the normative is sui generis and unanalysable into non-normative components or in purely non-normative terms, leading some writers to classify views as forms of 'non-naturalism' on this basis. The other legacy is the epistemological doctrine of intuitionism: that some substantive or synthetic normative truths are knowable a priori.
↑ comment by [deleted] · 2013-04-24T12:46:50.099Z · LW(p) · GW(p)
One is the non-reductionist metaphysical doctrine that the normative is sui generis and unanalysable into non-normative components or in purely non-normative terms, leading some writers to classify views as forms of ‘non-naturalism’ on this basis.
means Moore thinks you can't reduce 'ought' claims to 'is' claims, roughly.
The other legacy is the epistemological doctrine of intuitionism: that some substantive or synthetic normative truths are knowable a priori.
means that Moore thinks that you have access to informative moral truths (like 'it is wrong to kill wantonly' and not just 'murder is illegal homicide') in such a way that doesn't make reference to any particular experiences or contingent facts about the world. So Moore thinks that you can know 'it is wrong to kill wantonly' independently of knowing any of the specific facts about human beings or human societies or anything like that.
But right, non-naturalism is a possibility for normative theories (and not a particularly unusual one, since all Kantians would count as non-naturalists). I'm not a non-naturalist myself, but I suspect cousin_it isn't getting away with eliminating the 'ought' in referring to utility functions, but is just hiding it in the utility function. But I'm not well versed in that sort of thing, so I don't think I'm quite entitled to the criticism.
↑ comment by MindTheLeap · 2013-04-24T06:41:53.862Z · LW(p) · GW(p)
I read
"action A increases the value of utility function U"
to mean (1:) "the utility function U increases in value from action A". Did you mean (2:) "under utility function U, action A increases (expected) value"? Or am I missing some distinction in terminology?
The alternative meaning (2) leads to "should" (much like "ought") being dependent on the utility function used. Normativity might suggest that we all share views on utility that have fundamental similarities. In my mind at least, the usual controversy over whether utility functions (and derivative moral claims, i.e., "should"s and "ought"s) can be objectively true, remains.
Edit: formatting.
comment by Vladimir_Nesov · 2013-04-24T10:41:25.562Z · LW(p) · GW(p)
my current thinking is that "should" usually means "better according to some shared, motivating standard or procedure of evaluation", but occasionally it can also be used to instill such a standard or procedure of evaluation in someone
A should-statement interpreted in the form "taking action A seems to be more optimal according to your values" relates an action (or its outcome) to the agent's values. If an agent is gullible and accepts statements like that even when they're not true, a pressure for consistency may lead to resolving the error by either disbelieving the statement or by changing the values.
comment by Eugine_Nier · 2013-04-24T03:39:47.656Z · LW(p) · GW(p)
- You should stop building piles of X pebbles because X = Y*Z.
- We should kill that police informer and dump his body in the river.
- You should one-box in Newcomb's problem.
Another 'should' to think about:
4. One should update on evidence according to Bayes's rule.
↑ comment by JoshuaZ · 2013-04-24T04:38:51.069Z · LW(p) · GW(p)
The large variety of statements here, and now this one, make me wonder if "should" is just a hopeless word, meaning so many different things in different contexts that it should be tabooed and never allowed to return.
↑ comment by Wei Dai (Wei_Dai) · 2013-04-24T07:13:37.513Z · LW(p) · GW(p)
Tabooing is a risky operation. One can easily replace a word with a substance that is wrong, that doesn't capture one's intended meaning. Consider example 3, where it's tempting to taboo "should" with some formal definition of rationality from decision theory. If you were to taboo "should" in that sentence with the mathematical definition of expected utility maximization from CDT, for example, you would get a definitive "false" for the truth value of the sentence, and we would no longer be able to have a discussion that eventually leads to TDT and related ideas.
So before we taboo a word, we need to make sure we fully understand what we mean by it. When you're doing this with a word like "should" which seems to mean different things in different contexts, it seems worth asking whether there is some sort of common meaning in all of those contexts that's not immediately obvious, or if there is some other explanation for why the same word seems to be used for different purposes. This is what I was trying to do in my post, but I'd say that my understanding of the meaning of "should" is still insufficient for me to safely taboo it in many circumstances and I'll have to keep using the word for the foreseeable future.
↑ comment by TheOtherDave · 2013-04-24T14:29:17.974Z · LW(p) · GW(p)
before we taboo a word, we need to make sure we fully understand what we mean by it.
Well, yes. As you say, tabooing a word with some more detailed description of something I didn't actually mean in the first place is an error... which is to say, I shouldn't do it... which is to say, it has negative expected value. (At least, that's what I think I meant. Perhaps I'm mistaken.)
But if I don't know what I mean by a word, and therefore can't correctly taboo it, continuing to use the word unreflectively doesn't really help us communicate clearly either. (Is that to say I shouldn't do it? Maybe.)
So, sure, maybe the various uses of "should" have some core commonality and sufficient analysis of that problem will make explicit some important insight about the nature of whatever that core commonality refers to which is currently implicit in our language use. In which case continued analysis of that core commonality might reveal useful insights and is therefore worth doing.
But that still doesn't seem like a reason to use "should" in my conversations once I've convinced myself that I don't know what I mean by it.
So, what happens if I do something else instead?
Well, for example, I was about to write "So, what should I do instead?" and, recognizing the irony, stopped and rethought what question I wanted to ask. Did I in the process taboo "should"? Perhaps not... it's quite possible that the question I asked is importantly different from the one I initially meant to ask. (For example, the question I asked is explicitly consequentialist, and the one I initially meant to ask is not, which might be a change of meaning or it might not be.)
Am I worse off for having done this? Would I have been better off to retain the original wording?
Well, if it turns out that there is value to a deontological view of ethics, then my replacing my original vague statement with an explicitly consequentialist statement has negative expected value. If not, then it has positive value. Either I think that's more likely than the alternative, or less likely. But it seems like I pretty much have to make a choice here based on incomplete information and my best guess.
To say "I don't know enough to taboo 'should' so I'll keep using it the way I'm accustomed to" seems unjustified.
↑ comment by [deleted] · 2013-04-24T16:42:12.786Z · LW(p) · GW(p)
But if I don't know what I mean by a word, and therefore can't correctly taboo it, continuing to use the word unreflectively doesn't really help us communicate clearly either.
I don't think that's quite right: there's enough of a gap between 'knowing how to use a word' and 'knowing how to define a word or replace it with other language' that I don't think it's reasonable to take the latter as decisive for whether or not we should feel comfortable carrying on with a discussion.
For example, if you asked me to taboo 'should' I would be flabbergasted. I would have no idea what to replace it with, and I would be tempted to say that it's not a tabooable word. On the other hand, I think I can say with some confidence that I use the word dozens of times a day, thousands of times a year, and I almost never use it in a way that sounds strange to anyone. This is evidence which seems to suggest I know exactly how to use the word, despite not knowing (at all) how to taboo it. And if I can use a word perfectly, I think it stands to reason I can carry on a discussion with it even if I can't taboo it, so long as the discussion isn't about the definition of that word.
↑ comment by TheOtherDave · 2013-04-24T17:09:50.643Z · LW(p) · GW(p)
Yeah, this is exactly what I'm disputing.
If I can't explain clearly or coherently what I mean when I say I should do something, I have no confidence that I mean anything coherent when I say it, and I have no confidence that what you understand by it is what I mean by it.
↑ comment by [deleted] · 2013-04-24T17:56:00.506Z · LW(p) · GW(p)
If I can't explain clearly or coherently what I mean when I say I should do something, I have no confidence that I mean anything coherent when I say it, and I have no confidence that what you understand by it is what I mean by it.
There are two things (at least) you could mean here, one of which I agree with. You could mean "If you say 'You should vote your pocket book' yet you can't clearly explain what you mean by 'you should vote your pocket book', then you can have no confidence that you mean anything particularly coherent by 'you should vote your pocket book'." I agree with this, but it seems to be beside the point.
But the issue of tabooing is different. You could instead mean "If you say 'you should vote your pocket book', but cannot taboo or define the word 'should', then you can have no confidence that you mean anything particularly coherent by 'you should vote your pocket book'." If this is what you mean, then I disagree, and I think this is strongly contra-indicated by the linguistic practices of everyone around us. How many people, after all, could come up with a taboo or definition for 'should'? Yet I imagine this philosophical oversight would not prevent anyone (if they were so capable) from explaining what 'you should vote your pocket book' means.
↑ comment by TheOtherDave · 2013-04-24T19:31:24.892Z · LW(p) · GW(p)
I agree that if I am confident that I know what the sentence means (as you seem to be), that should increase my confidence that I also know what "should" means in that sentence (ditto "vote" and "pocketbook").
But I'm not confident that I know what that sentence means, propositionally anyway, and your stated reasons for such confidence (that lots of people use the sentence without noticing a problem) don't seem compelling to me, because lots of people regularly utter all kinds of sentences whose propositional content is deeply unclear. And, in particular, the ambiguity surrounding that sentence has not much to do with "vote your pocketbook" (which, while a highly metaphorical phrase, I'm pretty confident I understand) and quite a lot to do with "you should X".
How many people, after all, could come up with a taboo or definition for 'should'?
Very few. Very few could even come up with an explanation of what they mean by "should" in a particular sentence (such as "you should vote your pocket book") which is a noticeably simpler task.
That isn't somehow evidence that they know what it means. Quite the contrary.
↑ comment by [deleted] · 2013-04-24T21:37:40.652Z · LW(p) · GW(p)
because lots of people regularly utter all kinds of sentences whose propositional content is deeply unclear.
So, suppose a pair of construction workers, Bob and Jill.
Bob: Jill, pass me that hammer.
Jill: Which one?
Bob: The one I want has a black handle.
Jill: I see it, here you are.
Let's posit that Bob could not taboo or define 'has'. Jill could not taboo or define 'are'. I think most people couldn't, but we might disagree on that. I think they are likely to have trouble with 'that', 'one', 'want', 'see' and 'it'.
Are you saying that the propositional contents of Bob and Jill's utterances are deeply unclear, despite the fact that their conversation goes off without a hitch, and Bob gets the hammer he wants?
↑ comment by TheOtherDave · 2013-04-24T23:14:08.977Z · LW(p) · GW(p)
No, I'm not saying that.
I am saying that my confidence in the clarity of the propositional contents of Bob and Jill's utterances (to me, to Bob, to Jill, etc.) does not rest solely on the fact that Bob gets the hammer he wants. (Supposing that Bob in fact got the hammer he wanted.) Specifically, it also depends on a bunch of other things that I can roughly summarize as "imagining myself in Bob's position and thinking about what I would mean by Bob's utterances and what I would understand by Jill's, and similarly imagining myself in Jill's position".
Still less does it depend on the fact that Bob and Jill feel content with the interaction (which they might even if Bob didn't get the hammer he wanted, but instead got some other hammer that solves his problem... or if Bob didn't really give a damn about the hammer, he just wanted to interact with Jill... or if various other contentment-producing pragmatic utterance-evaluation frames were in play).
↑ comment by [deleted] · 2013-04-24T23:22:45.049Z · LW(p) · GW(p)
So, to be clear, you're not saying that being able to define or taboo the words you're using (or hearing) in a given sentence is a necessary condition on having a perfectly clear understanding of the propositional content of that sentence. Is that right?
↑ comment by TheOtherDave · 2013-04-24T23:33:53.336Z · LW(p) · GW(p)
I'm saying that my inability to 'define' (for a particular understanding of defining, closer to the LW usage of "taboo" than, say, what a dictionary does) a word in a sentence is strong evidence that I lack a perfectly clear understanding of the propositional content of that sentence.
I might nevertheless be confident in my propositional (or other) understanding of that sentence, if I had significant enough alternative sources of evidence of that understanding.
So, agreed, the ability to define/taboo the words isn't a necessary condition, it's a source of significant evidence.
↑ comment by [deleted] · 2013-04-24T23:44:23.954Z · LW(p) · GW(p)
I see. I think we disagree on the contexts in which it is significant evidence against understanding the propositional content of that sentence. For example, I think being unable to taboo 'utility function' probably means one doesn't understand it and that one's use of it in sentences is confused. This is probably true of all philosophical or scientific terms of art. I don't think this is true of 'should', or what you might call more everyday language. And I'm inclined to say that the tabooing or defining of such everyday language almost always does more harm than good. But that's a matter of details.
↑ comment by TheOtherDave · 2013-04-24T23:45:45.395Z · LW(p) · GW(p)
I share your understanding of our disagreement.
↑ comment by private_messaging · 2013-04-24T07:28:16.361Z · LW(p) · GW(p)
4. One should update on evidence according to Bayes's rule.
Which doesn't look anything like the simple formula for statistically independent evidence in an acyclic graph.
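The "simple formula" being alluded to -- combining conditionally independent pieces of evidence via likelihood ratios, i.e. the odds form of Bayes' rule -- can be sketched as follows. The prior and likelihood ratios are made-up numbers for illustration:

```python
# Odds form of Bayes' rule for conditionally independent evidence:
#   posterior_odds = prior_odds * product of likelihood ratios
from math import prod

def posterior_prob(prior, likelihood_ratios):
    """Update a prior probability with independent pieces of evidence,
    each given as a likelihood ratio P(e|H) / P(e|not-H)."""
    odds = prior / (1 - prior) * prod(likelihood_ratios)
    return odds / (1 + odds)

# Hypothetical example: prior 0.5, two independent 4:1 pieces of evidence.
p = posterior_prob(0.5, [4.0, 4.0])
print(round(p, 3))  # 0.941
```

The comment's point stands: real evidence is rarely independent and real causal graphs have loops, so "update according to Bayes' rule" names a much messier computation than this one-liner.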
comment by CronoDAS · 2013-04-25T01:28:26.958Z · LW(p) · GW(p)
I've found an answer I like.
↑ comment by lukeprog · 2013-04-26T22:51:56.115Z · LW(p) · GW(p)
Fyfe and I began (but didn't finish) a podcast on desirism called Morality in the Real World. I suspect many people will enjoy the discussions in those episodes even if they don't think desirism is a useful framing for morality.
comment by MrMind · 2013-04-24T08:36:19.527Z · LW(p) · GW(p)
In the view of morality as "common value computation + local patches", the word "should" seems unproblematic: it indicates that a certain option has more value than some other set of options, according to a morality.
'Should' in this view is seen as assuming three pieces of information: the morality of the speaker, the set of available options, and the calculated highest/lowest-value option.
This view decomposes the three sentences as such:
1) according to pebblesorter value computation, when building piles of pebbles a heap of X never has a positive value;
2) according to our drug-dealer morality, when dealing with the police informer the highest-value option is killing him and dumping his body in the river;
3) according to pure morality, the correct computation of the highest-value option in Newcomb's problem is one-boxing.
↑ comment by MrMind · 2013-04-24T17:06:16.058Z · LW(p) · GW(p)
Also on the Eugene_Nier sentence:
One should update on evidence according to Bayes's rule.
deconstructed as:
4) according to pure morality, Cox's theorem shows that the only correct way to compute evidence updating is Bayes' rule.
"Pure morality" here is intended to mean valuing the things that humans usually value, instead of something like heaps of prime-numbered size.
comment by Shmi (shminux) · 2013-04-23T22:25:44.850Z · LW(p) · GW(p)
Of course this isn't a fully satisfactory theory of normativity since I don't know what "philosophy" really is (and I'm not even sure it really is a thing).
Empirical approach: first, there is such a thing, because people keep doing it. Second, again empirically, philosophy, pardon the metaphor, acts like a womb, spawning a natural science from time to time while not being a science itself. Well, not always natural - sometimes it's logic, or decision theory, or something - but mostly. Which would work fine were this goal made explicit: ponder the big questions, chip a small bit off of one, make it into a science, and let it loose. Unfortunately the practitioners of philosophy (what an oxymoron...) have trouble letting go, and the baby science has to run away and get disowned more often than not.
↑ comment by [deleted] · 2013-04-23T22:41:37.224Z · LW(p) · GW(p)
practitioners of philosophy (what an oxymoron...)
I guess this is just an offhand comment, but I missed your meaning here and the point seems interesting. Could you explain?
↑ comment by Shmi (shminux) · 2013-04-23T23:33:16.012Z · LW(p) · GW(p)
It was, but if you insist. Don't read too much into it. Practice is something of an antithesis to theory, and theorizing is all philosophy does, since it's not something that can be experimentally tested. Once some part of it is, it becomes one of the aforementioned spawns.
comment by bentarm · 2013-04-24T08:24:36.780Z · LW(p) · GW(p)
Not sure, as I'm not a native speaker of another language, but do most other languages use the same word for "should" in all three of those sentences? If not, it seems highly likely that it's just some sort of linguistic accident. If so, there might be something interesting worth pursuing.
↑ comment by Viliam_Bur · 2013-04-24T10:09:48.596Z · LW(p) · GW(p)
but do most other languages use the same word for "should" in all three of those sentences?
I suppose yes (data points: Slovak, Esperanto).
Seems to me that although the reasoning for why anyone "should" do the thing is different in different cases, the expected outcome is the same -- the sentence is spoken to create social pressure on the listener, to increase the probability that the listener will do the thing.
Thus, the pressure on the other person's behavior seems like the essence of "shouldness", not the justification. (Even the special case of "I should" is probably applying the social rules to oneself, either to remind oneself that other people would want them to do that, or simply to re-use the existing mechanism of altering a person's behavior.)
↑ comment by Richard_Kennaway · 2013-04-24T12:34:42.068Z · LW(p) · GW(p)
the sentence is spoken to create a social pressure on the listener
How does it do that? Only indirectly, through what it means to the speaker and listener. What does it mean? That is the question here, and "social pressure" is not the sort of thing that can be the answer.
comment by Manfred · 2013-04-24T00:03:46.109Z · LW(p) · GW(p)
Makes sense.
Though as for your sentences: they might be comprehensible to me if someone else said them, but I wouldn't generate #1 or #2. So I have some model of other people where they can want other things than I do (like one does), but when I use "should," I use it my own way (unless I'm putting it in someone else's mouth). This is not so different from Eliezer's proposal about inserting double colons.
comment by [deleted] · 2013-04-23T22:50:57.410Z · LW(p) · GW(p)
but my current thinking is that "should" usually means "better according to some shared, motivating standard or procedure of evaluation"
If this is right, it should be nonsensical, or at least strange, to say "I think we should switch to an enriched uranium currency, but no one shares my standards or procedure of evaluation." But this doesn't seem to even be an unusual case of a 'should' sentence. Am I missing something?
Edit: On reflection, I think I also don't understand the 'metaphilosophy' part of the post. If you are willing to indulge me, could you boil it down to a sentence or so?
↑ comment by CoffeeStain · 2013-04-23T23:24:53.727Z · LW(p) · GW(p)
Could it be that the "should" in "we should switch to enriched uranium currency" might mean that the switch satisfies one shared motivating standard but not others? It's unlikely that no one else shares any of your standards or procedures of evaluation, but rather just the ones you think are relevant to the choice of currency medium.
What you really mean to say is, "I think switching to enriched uranium currency satisfies some more basic shared desire, but others disagree that they do so to an extent that overrides their disparate desires." This allows discussion over whether the shared desires are really more important, and over the logic for how much enriched uranium currency really fulfills them.
↑ comment by [deleted] · 2013-04-24T01:12:36.765Z · LW(p) · GW(p)
That's a good point, but consider this contrast:
It seems clear to me that customs and agreements involve "something shared" in the sense Wei intends. As a result, it sounds straightforwardly odd to say that "This is our custom, but no one does it that way" or "We agreed on it, though he didn't know it at the time." Sentences like these demand some explanation or qualification.
On the other hand, nothing seems obviously weird about saying "You should X, though you wouldn't accept any standard that would recommend that." It may be that there are problems with using the word 'should' without reference to a shared standard (as opposed to just a standard) but it's another thing to say that this is packed into the meaning of 'should'. It sounds awfully like a theory of normativity, rather than an analysis of the meaning of a word. In fact, Wei Dai calls it a theory of normativity, though it seems to me that it must either be such a theory, or a suggestion about the meaning of a word. It can't be both.
↑ comment by CoffeeStain · 2013-04-24T05:15:22.883Z · LW(p) · GW(p)
How about, "You should X, and you should accept a standard that would recommend it"? Thereby appealing to a third (shared) standard, possibly one having to do with the rationality of moral beliefs. Applying an analogous moral version of Aumann's Agreement Theorem could lead us to a theory which suggests that you can never say this quoted sentence unless you're willing to believe that you should accept the standard you recommend.
I do hope to avoid discussion about the common usage of "should" in favor of a theory that would allow us (if no one else) to use it consistently to refer to some shared standard, and I believe this can be done without paradox. So long as a community shares a sufficiently basic belief, it will be possible to extract shared consequences of that belief. In the same sense that a group of rationalists cannot convince non-Bayesians that they should apply Aumann's Agreement Theorem, we cannot convince an analogous group that our word "should" refers to our internally normative values. In neither case should we worry.
comment by Shmi (shminux) · 2013-04-24T16:09:11.229Z · LW(p) · GW(p)
There is a "because" or "if" missing in the last two.
We should kill that police informer and dump his body in the river [because that's what we do to traitors]
You should one-box in Newcomb's problem [if you value money]
Let's also throw in the one from Eugine_Nier:
One should update on evidence according to Bayes's rule [because/if... what? You want to achieve the best calibrated posteriors? Because Eliezer says so? Because you want to maximize your odds of something?]
These shoulds are all different. The pebble sorting one is about fulfilling imperatives, the police informer one is about compliance with [unwritten] rules, the Newcomb one is about maximizing [someone's] utility, and I am not sure about the last. In any case, it seems like they describe or prescribe different metaethics.
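For concreteness, since the comment turns on what the Bayesian update rule actually says, here is a minimal sketch of it (the function name and all numbers are illustrative assumptions, not anything from the thread):

```python
# A minimal sketch of the rule under discussion: Bayes's rule,
# P(H|E) = P(E|H) * P(H) / P(E).

def bayes_update(prior: float, p_e_given_h: float, p_e: float) -> float:
    """Posterior P(H|E) from prior P(H), likelihood P(E|H), and evidence P(E)."""
    return p_e_given_h * prior / p_e

prior = 0.5            # P(H): initial credence in the hypothesis
p_e_given_h = 0.8      # P(E|H): how likely the evidence is if H is true
p_e_given_not_h = 0.4  # P(E|~H): how likely the evidence is if H is false

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = bayes_update(prior, p_e_given_h, p_e)
print(posterior)  # roughly 2/3
```

Note that the rule itself is just arithmetic; the disputed "should" attaches to the decision to use it at all.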
Replies from: Eugine_Nier, TheOtherDave↑ comment by Eugine_Nier · 2013-04-25T04:09:12.373Z · LW(p) · GW(p)
One should update on evidence according to Bayes's rule [because/if.., what? you want to achieve best calibrated posteriors? Because Eliezer says so? Because you want to maximize your odds of something?]
These shoulds are all different. (...) and I am not sure about the last.
That's why I mentioned it. It has an unconditional quality that Wei Dai's examples lack.
↑ comment by TheOtherDave · 2013-04-24T17:33:29.492Z · LW(p) · GW(p)
There is a "because" or "if" missing in the last two.
On your account, how does "You should one-box in Newcomb's problem [if you value money]" differ from "One-boxing on Newcomb's problem maximizes expected money"?
Relatedly, how does "We should kill that police informer and dump his body in the river [because that's what we do to traitors]" differ from "We kill traitors and dump their bodies in the river, and that police informer is a traitor"?
This isn't a rhetorical question; it seems to me that these sentences are different, but their differences have nothing to do with their propositional content.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-24T19:14:55.130Z · LW(p) · GW(p)
On your account, how does "You should one-box in Newcomb's problem [if you value money]" differ from "One-boxing on Newcomb's problem maximizes expected money"?
The second statement is just a description of possible worlds, not a call to act. Same for the other one.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-24T19:33:45.823Z · LW(p) · GW(p)
Cool. I agree. Given that, on what grounds do you unpack "You should one-box in Newcomb's problem" as "You should one-box in Newcomb's problem [if you value money]" rather than "You should [value money and therefore] one-box in Newcomb's problem"?
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-24T20:48:10.733Z · LW(p) · GW(p)
I suppose either interpretation is possible and should (ehm) be made explicit, which is what was missing from the OP, and was basically my point.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-24T20:56:42.831Z · LW(p) · GW(p)
Ah, OK. I certainly agree that the statement in its conventional form is propositionally ambiguous.
My own sense is that "You should X" is primarily a call to action, roughly synonymous with "X!", and not really asserting a proposition at all, and trying to interpret it propositionally (by reading various "because"/"if"/etc. clauses into it) is already a mistake, akin to trying to decide what the referent of "it" is in the sentence "It is raining."
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-24T21:31:15.355Z · LW(p) · GW(p)
My own sense is that "You should X" is primarily a call to action roughly synonymous with "X!", and not really asserting a proposition at all
I see your point, I think. However, that's not how the OP treats it:
my current thinking is that "should" usually means "better according to some shared, motivating standard or procedure of evaluation", but occasionally it can also be used to instill such a standard or procedure of evaluation in someone (such as a child) who is open to being instilled by the speaker/writer.
which is either/both asserting a proposition or/and taking action (of convincing someone to do something).
There is a big distinction between a statement about possible worlds ("is"), "should", and "do!", of course. In the OP's link it's discussed as normativity vs. norm-relativity. Unfortunately, it has the standard philosophical shortcomings I alluded to in my other comment: it does not attempt to formalize the concept and instead goes into various historical descriptions and the differences of opinion between several equally confused schools, trying to swallow the whole thing instead of taking a careful, manageable bite. Predictably, as a result, the whole thing gets retched back up undigested. Which is justified by calling it a "survey". Well, I am probably too harsh.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-04-24T23:37:41.511Z · LW(p) · GW(p)
Fair enough. I took you as speaking more in your own voice, and less as adopting the OP's interpretive frame, than I think you meant to be. (To be clear, I fully endorse adopting the interpretive frame of a post while responding to it, I just misunderstood the extent to which you were doing so.)
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-25T05:20:08.079Z · LW(p) · GW(p)
Hmm, I thought I was speaking in my own voice, such as it is. In my instrumental approach "should" implies an attempt to manipulate outputs, specifically the ones leading to someone else's (modeled) outputs, not just a passive evaluation.
comment by asparisi · 2013-04-24T11:58:01.086Z · LW(p) · GW(p)
since I don't know what "philosophy" really is (and I'm not even sure it really is a thing).
I find it's best to treat philosophy as simply a field of study, albeit one that is odd in that most of the questions asked within the field are loosely tied together at best. (There could be a connection between normative bioethics and ontological questions regarding the nature of nothingness, I suppose, but you wouldn't expect a strong connection from the outset.) To do otherwise invites counter-example too easily, and I don't think there is much (if anything) to gain in asking what philosophy really is.
comment by PrawnOfFate · 2013-04-23T21:38:44.050Z · LW(p) · GW(p)
"better according to some shared, motivating standard or procedure of evaluation",
I broadly agree. My thinking ties shoulds and musts to rules and payoffs. Wherever you are operating a set of rules (which might be as localised as playing chess), you have certain localised "musts".
It seems to me that different people (including different humans) can have different motivating standards and procedures of evaluation, and apparent disagreements about "should" sentences can arise from having different standards/procedures, or disagreement about whether something is better according to a shared standard/procedure.
I'm very resistant to the idea, promoted by EY in the thread you referenced, that the meaning of "should" changes. Does he think chess players have a different concept of "rule" to poker players?