Morality vs related concepts
post by MichaelA · 2020-01-07T10:47:30.240Z · LW · GW · 17 comments
Contents: Normativity · Prudence · (Instrumental) Rationality · Epistemic rationality · Subjective vs objective · Axiology · Decision theory · Metaethics · Metanormativity
Cross-posted to the EA Forum. [EA · GW]
How can you know I’m talking about morality (aka ethics), rather than something else, when I say that I “should” do something, that humanity “ought” to take certain actions, or that something is “good”? What are the borderlines and distinctions between morality and the various potential “something else”s? How do they overlap and interrelate?
In this post, I try to collect together and summarise philosophical concepts that are relevant to the above questions.[1] I hope this will benefit readers [LW · GW] by introducing them to some thought-clarifying conceptual distinctions they may not have been aware of, as well as terms and links they can use to find more relevant info. In another post [LW · GW], I similarly discuss how moral uncertainty differs from and overlaps with related concepts.
Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics; indeed, I expect many readers to know more than me about at least some of them, and one reason I wrote this was to help me clarify my own understandings. I’d appreciate feedback or comments in relation to any mistakes, poor phrasings, etc. (and just in general!).
Also note that my intention here is mostly to summarise existing ideas, rather than to provide original ideas or analysis.
Normativity
A normative statement is any statement related to what one should do, what one ought to do, which of two things is better, or similar. “Something is said by philosophers to have ‘normativity’ when it entails that some action, attitude or mental state of some other kind is justified, an action one ought to do or a state one ought to be in” (Darwall). Normativity is thus the overarching category (superset) of which things like morality, prudence (in the sense explained below), and arguably rationality are just subsets.
This matches the usage of “normative” in economics, where normative claims relate to “what ought to be” (e.g., “The government should increase its spending”), while positive claims relate to “what is” (including predictions, such as what effects an increase in government spending may have). In linguistics, the equivalent distinction is between prescriptive approaches (involving normative claims about “better” or “correct” uses of language) and descriptive approaches (which are about how language is used).
Prudence
Prudence essentially refers to the subset of normativity that has to do with one’s own self-interest, happiness, or wellbeing (see here and here). This contrasts with morality, which may include but isn’t limited to one’s self-interest (except perhaps for egoist moral theories).
For example (based on MacAskill p. 41), we may have moral reasons to give money to GiveWell-recommended charities, but prudential reasons to spend the money on ourselves, and both sets of reasons are “normatively relevant” considerations.
(The rest of this section is my own analysis, and may be mistaken.)
I would expect that the significance of prudential reasons, and how they relate to moral reasons, would differ depending on the moral theories one is considering (e.g., depending on which moral theories one has some belief in). Considering moral and prudential reasons separately does seem to make sense in relation to moral theories that don’t precisely mandate specific behaviours; for example, moral theories that simply forbid certain behaviours (e.g., violating people’s rights) while otherwise letting one choose from a range of options (e.g., donating to charity or not).[2]
In contrast, “maximising” moral theories like classical utilitarianism claim that the only action one is permitted to take is the very best action, leaving no room for choosing the “prudentially best” action out of a range of “morally acceptable” actions. Thus, in relation to maximising theories, it seems like keeping track of prudential reasons in addition to moral reasons, and sometimes acting based on prudential rather than moral reasons, would mean that one is effectively either:
- using a modified version of the maximising moral theory (rather than the theory itself), or
- acting as if “morally uncertain” [LW · GW] between the maximising moral theory and a “moral theory” in which prudence is seen as “intrinsically valuable”.
Either way, the boundary between prudence and morality seems to become fuzzier or less meaningful in such cases.[3]
(Instrumental) Rationality
(This section is sort-of my own analysis, and may be mistaken or use terms in unusual ways.)
Rationality, in one important sense at least, has to do with what one should do or intend, given one’s beliefs and preferences. This is the kind of rationality that decision theory is often seen as invoking. It can be spelled out in different ways. One is to see it as a matter of coherence: it is rational to do or intend what coheres with one’s beliefs and preferences (Broome, 2013; for criticism, see Arpaly, 2000).
Using this definition, it seems to me that:
- Rationality can be considered a subset of normativity in which the “should” statements, “ought” statements, etc. follow in a systematic way from one’s beliefs and preferences.
- Whether a “should” statement, “ought” statement, etc. is rational is unrelated to the balance of moral or prudential reasons involved. E.g., what I “rationally should” do relates only to morality and not prudence if my preferences relate only to morality and not prudence, and vice versa. (And situations in between those extremes are also possible, of course).[4]
For example, the statement “Rationally speaking, I should buy a Ferrari” is true if (a) I believe that doing so will result in me possessing a Ferrari, and (b) I value that outcome more than I value continuing to have that money. And it doesn’t matter whether the reason I value that outcome is:
- Prudential: based on self-interest;
- Moral: e.g., I’m a utilitarian who believes that the best way I can use my money to increase universe-wide utility is to buy myself a Ferrari (perhaps it looks really red and shiny and my biases are self-serving the hell out of me);
- Some mixture of the two.
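To make this “coherence” reading concrete, here’s a minimal sketch in Python (my own illustration, with made-up beliefs and utilities): the “rationally should” verdict falls out of the agent’s beliefs and preferences via expected utility, and nothing in the calculation cares whether those preferences are prudential or moral.

```python
# A minimal sketch of the "coherence" reading of instrumental rationality:
# given an agent's beliefs (probabilities over outcomes) and preferences
# (utilities over outcomes), the "rationally should" verdict is whichever
# action maximises expected utility. All names and numbers are illustrative.

def expected_utility(action, beliefs, utilities):
    """Sum of P(outcome | action) * U(outcome) over the action's possible outcomes."""
    return sum(p * utilities[outcome] for outcome, p in beliefs[action].items())

# Beliefs: what the agent thinks each action will lead to.
beliefs = {
    "buy_ferrari": {"own_ferrari": 0.99, "keep_money": 0.01},
    "keep_money":  {"keep_money": 1.0},
}

# Preferences: how much the agent values each outcome. Nothing here says
# whether these values are prudential, moral, or a mixture -- rationality,
# on this reading, only cares that the choice coheres with them.
utilities = {"own_ferrari": 10.0, "keep_money": 3.0}

best = max(beliefs, key=lambda a: expected_utility(a, beliefs, utilities))
print(best)  # -> "buy_ferrari", given these (made-up) beliefs and values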
Epistemic rationality
Note that that discussion focused on instrumental rationality, but the same basic points could be made in relation to epistemic rationality, given that epistemic rationality itself “can be seen as a form of instrumental rationality in which knowledge and truth are goals in themselves” (LW Wiki).
For example, I could say that, from the perspective of epistemic rationality, I “shouldn’t” believe that buying that Ferrari will create more utility in expectation than donating the same money to AMF would. This is because holding that belief won’t help me meet the goal of having accurate beliefs.
Whether and how this relates to morality would depend on whether the “deeper reasons” why I prefer to have accurate beliefs (assuming I do indeed have that preference) are prudential, moral, or mixed.[5]
Subjective vs objective
Subjective normativity relates to what one should do based on what one believes, whereas objective normativity relates to what one “actually” should do (i.e., based on the true state of affairs). Greaves and Cotton-Barratt illustrate this distinction with the following example:
Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.[6][7]
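To put illustrative numbers on the Alice example (the numbers are mine, not Greaves and Cotton-Barratt’s), a toy expected-utility calculation shows how the same decision can come out “wrong” objectively but “right” subjectively:

```python
# A toy version of the Alice example, with made-up numbers. The "objective"
# verdict looks only at how the world actually turned out; the "subjective"
# verdict uses Alice's credences at the time of the decision.

p_rain = 0.6  # Alice's credence that it will rain (illustrative)

# Utilities for each (action, weather) pair -- also illustrative.
utility = {
    ("pack", "rain"): 0,    ("pack", "dry"): -1,   # lugging raingear around
    ("leave", "rain"): -10, ("leave", "dry"): 0,   # Alice really hates getting wet
}

def expected_utility(action):
    return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "dry")]

# Subjective evaluation: pack (EU = -0.4) beats leave (EU = -6.0).
subjectively_best = max(["pack", "leave"], key=expected_utility)

# Objective evaluation: it did not in fact rain, so leaving the raingear
# at home would have been better.
actual_weather = "dry"
objectively_best = max(["pack", "leave"], key=lambda a: utility[(a, actual_weather)])

print(subjectively_best, objectively_best)  # -> pack leave
```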
This distinction can be applied to each subtype of normativity (i.e., morality, prudence, etc.).
(I discuss this distinction further in my post Moral uncertainty vs related concepts [LW · GW].)
Axiology
The term axiology is used in different ways, but the definition we’ll focus on here is from the Stanford Encyclopedia of Philosophy:
Traditional axiology seeks to investigate what things are good, how good they are, and how their goodness is related to one another. Whatever we take the “primary bearers” of value to be, one of the central questions of traditional axiology is that of what stuffs are good: what is of value.
The same article also states: “For instance, a traditional question of axiology concerns whether the objects of value are subjective psychological states, or objective states of the world.”
Axiology (in this sense) is essentially one aspect of morality/ethics. For example, classical utilitarianism combines:
- the principle that one must take actions which will lead to the outcome with the highest possible level of value, rather than just doing things that lead to “good enough” outcomes, or just avoiding violating people’s rights
- the axiology that “well-being” is what has intrinsic value
The axiology itself is not a moral theory, but plays a key role in that moral theory.
Thus, one can’t have an axiological “should” statement, but one’s axiology may influence/inform one’s moral “should” statements.
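As a rough illustration of that last point (a sketch of my own, with made-up names and numbers), an axiology on its own only ranks options by value; a principle like “maximise” is what turns that ranking into a moral “should”:

```python
# A minimal sketch of how an axiology combines with a principle to produce
# moral "should" statements. The axiology only says how valuable each option
# is; the maximising principle turns that into a verdict. Illustrative only.

# Axiology: how much well-being each available action would produce.
wellbeing = {"donate": 10.0, "volunteer": 7.0, "do_nothing": 0.0}

# Classical-utilitarian-style principle: the only permitted action is the best one.
def classical_utilitarian_should(axiology):
    return max(axiology, key=axiology.get)

print(classical_utilitarian_should(wellbeing))  # -> "donate"
```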
Decision theory
(This section is sort-of my own commentary, may be mistaken, and may accidentally deviate from standard uses of terms.)
It seems to me that the way to fit decision theories into this picture is to say that one must add a decision theory to one of the “sources of normativity” listed above (e.g., morality) in order to get some form of normative (e.g., moral) statements. However, a decision theory can’t “generate” a normative statement by itself.
For example, suppose that I have a moral preference for having more money rather than less, all other things held constant (because I wish to donate it to cost-effective causes). By itself, this can’t tell me whether I “should” one-box or two-box in Newcomb’s problem. But once I specify my decision theory, I can say whether I “should” one-box or two-box. E.g., if I’m a causal decision theorist, I “should” two-box.
But if I knew only that I was a causal decision theorist, it would still be possible that I “should” one-box, if for some reason I preferred to have less money. Thus, as stated, we must specify (or assume) both a set of preferences and a decision theory in order to arrive at normative statements.
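For concreteness, here’s a toy sketch of that Newcomb example (using the standard illustrative payoffs and a 99%-accurate predictor; it’s only a caricature of EDT and CDT, not a full statement of either theory): the preference for more money is held fixed, and only the decision theory changes the verdict.

```python
# A toy Newcomb's problem, illustrating that the same preference ("more money
# is better") yields different "should" verdicts under different decision
# theories. Accuracy and payoffs are the usual illustrative numbers.

ACCURACY = 0.99            # chance the predictor correctly predicts your choice
SMALL, BIG = 1_000, 1_000_000

def payoff(action, predicted):
    """Money received, given your action and what the predictor predicted."""
    opaque_box = BIG if predicted == "one_box" else 0
    return opaque_box + (SMALL if action == "two_box" else 0)

def edt_value(action):
    # EDT conditions on the action: taking it is evidence about the prediction.
    other = "one_box" if action == "two_box" else "two_box"
    return ACCURACY * payoff(action, action) + (1 - ACCURACY) * payoff(action, other)

def cdt_value(action, p_predicted_one_box):
    # CDT holds the (already-made) prediction fixed, whatever you now do.
    return (p_predicted_one_box * payoff(action, "one_box")
            + (1 - p_predicted_one_box) * payoff(action, "two_box"))

print(max(["one_box", "two_box"], key=edt_value))                    # -> one_box
print(max(["one_box", "two_box"], key=lambda a: cdt_value(a, 0.5)))  # -> two_box (for any fixed prior)
```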
Metaethics
While normative ethics addresses such questions as "What should I do?", evaluating specific practices and principles of action, meta-ethics addresses questions such as "What is goodness?" and "How can we tell what is good from what is bad?", seeking to understand the nature of ethical properties and evaluations. (Wikipedia)
Thus, metaethics is not directly normative at all; it isn’t about making “should”, “ought”, “better than”, or similar statements. Instead, it’s about understanding the “nature” of (the moral subset of) such statements, “where they come from”, and other such fun/spooky/nonsense/incredibly important matters.
Metanormativity
Metanormativity relates to the “norms that govern how one ought to act that take into account one’s fundamental normative uncertainty”. Normative uncertainty, in turn, is essentially a generalisation of moral uncertainty that can also account for (uncertainty about) prudential reasons. I will thus discuss the topic of metanormativity in my next post, on Moral uncertainty vs related concepts [LW · GW].
As stated earlier, I hope this usefully added to/clarified the concepts in your mental toolkit, and I’d welcome any feedback or comments!
(In particular, if you think there’s another concept whose overlaps with/distinctions from “morality” are worth highlighting, either let me know to add it, or just go ahead and explain it in the comments yourself.)
This post won’t attempt to discuss specific debates within metaethics, such as whether or not there are “objective moral facts”, and, if there are, whether or not these facts are “natural”. Very loosely speaking, I’m not trying to answer questions about what morality itself actually is, but rather about the overlaps and distinctions between what morality is meant to be about and what other topics that involve “should” and “ought” statements are meant to be about. ↩︎
Considering moral and prudential reasons separately also seems to make sense for moral theories which see supererogation as possible; that is, theories which see some acts as “morally good although not (strictly) required” (SEP). If we only believe in such theories, we may often find ourselves deciding between one act that’s morally “good enough” and another (supererogatory) act that’s morally better but prudentially worse. (E.g., perhaps, occasionally donating small sums to whichever charity strikes one’s fancy, vs donating 10% of one’s income to charities recommended by Animal Charity Evaluators.) ↩︎
The boundary seems even fuzzier when you also consider that many moral theories, such as classical or preference utilitarianism, already consider one’s own happiness or preferences to be morally relevant. This arguably makes also considering “prudential reasons” look like simply “double-counting” one’s self-interest, or giving it additional “weight”. ↩︎
If we instead used a definition of rationality in which preferences must only be based on self-interest, then I believe rationality would become a subset of prudence specifically, rather than of normativity as a whole. It would still be the case that the distinctive feature of rational “should” statements is that they follow in a systematic way from one’s beliefs and preferences. ↩︎
Somewhat relevantly, Darwall writes: “Epistemology has an irreducibly normative aspect, in so far as it is concerned with norms for belief.” ↩︎
We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote is relevant (though doesn’t directly address that exact distinction):
Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.
The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.
(I found that quote in this comment [LW(p) · GW(p)], where it’s attributed to Will MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access the thesis, including via Wayback Machine.) ↩︎
It also seems to me that this “subjective vs objective” distinction is somewhat related to, but distinct from, ex ante vs ex post thinking. ↩︎
17 comments
comment by Vaughn Papenhausen (Ikaxas) · 2020-01-09T04:24:53.508Z · LW(p) · GW(p)
I am an ethics grad student, and I will say that this largely accords with my understanding of these terms (though tbh the terminology in this field is so convoluted that I expect that I still have some misunderstandings and gaps).
Re epistemic rationality, I think at least some people will want to say that it's not just instrumental rationality with the goal of truth (though I am largely inclined to that view). I don't have a good sense of what those other people do say, but I get the feeling that the "epistemic rationality is instrumental rationality with the goal of truth" view is not the only game in town.
Re decision theory, I would characterize it as closely related to instrumental rationality. How I would think about it is like this: CDT or EDT are to instrumental rationality as utilitarianism or Kantianism are to morality. CDT is one theory of instrumental rationality, just as utilitarianism is one theory of morality. But this is my own idiosyncratic understanding, not derived from the philosophical literature, so the mainstream might understand it differently.
Re metaethics: thank you for getting this one correct. Round these parts it's often misused to refer to highly general theories of first order normative ethics (e.g. utilitarianism), or something in that vicinity. The confusion is understandable, especially given that utilitarianism (and probably other similarly general moral views) can be interpreted as a view about the metaphysics of reasons, which would be a metaethical view. But it's important to get this right. Here's a less example-driven explanation due to Tristram McPherson:
"Metaethics is that theoretical activity which aims to explain how actual ethical thought and talk—and what (if anything) that thought and talk is distinctively about—fits into reality" (McPherson and Plunkett, "The Nature and Explanatory Ambitions of Metaethics," in The Routledge Handbook of Metaethics, p. 3).
Anyway, thank you for writing this post, I expect it will clear up a lot of confusions and be useful as a reference.
↑ comment by MichaelA · 2020-01-09T06:51:17.837Z · LW(p) · GW(p)
Glad to hear this roughly matches your understandings!
And that way of fitting decision theory into the picture sounds reasonable to me. I'd guess there's a few different ways one could slice this sort of stuff up, and it's not yet clear to me which is best (and I'd guess there probably isn't a single clear winner).
comment by Gordon Seidoh Worley (gworley) · 2020-01-08T00:52:19.383Z · LW(p) · GW(p)
To the extent that axiology is about values (what is good/bad), it is about preferences (what would one rather do), and is thus tied to decision theory in that it offers the place from which numbers get assigned to different decisions, even if it doesn't say how to choose among them. I assume most people are familiar with preferences, and it may or may not be very relevant for your work, as it's already pretty similar to issues in morality that require choosing between options, but I thought it worth mentioning.
comment by Said Achmiz (SaidAchmiz) · 2020-01-09T09:52:12.447Z · LW(p) · GW(p)
Excellent post. There is not much here to agree or disagree with—which, to be clear, is a compliment! Your explanations seem mostly to be consistent with what I’ve been taught and have read.
A couple of fairly minor notes:
“maximising” moral theories like classical utilitarianism claim that the only action one is permitted to take is the very best action, leaving no room for choosing the “prudentially best” action out of a range of “morally acceptable” actions
This accords with my own understanding, but I should note that I’ve seen utilitarians deny this. That is, the claim seemed to be (on the several occasions I’ve seen it made) that this is “not even wrong”, and misunderstands utilitarianism. I was not able to figure out just what the confusion was, so I can’t say much more than that; I only figured that this is worth noting. (I am not a utilitarian myself, to be clear.)
[stuff about axiology]
I found this part unsatisfying, but I don’t think it’s your fault. In fact I’ve always found the idea of axiology—the so-called “study of ‘value’”—to be rather confused. There is (it seems to me) a non-confused version which boils down to conceptual analysis of the concept of ‘value’, but this would be quite orthogonal to both morality and prudence (and everything else in this post). Anyhow, this is a digression, and I think mostly irrelevant to any points you intend to make.
I very much look forward to the next post!
↑ comment by Steven Byrnes (steve2152) · 2020-02-19T11:11:40.936Z · LW(p) · GW(p)
Not a philosopher, but common-sensically, I understand utilitarianism as saying that actions that create more good for more people are progressively more praiseworthy. It's something else to label the one very best possible action as "moral / permitted" and label every other action as "immoral / forbidden". That seems like a weird and counterproductive way to talk about things. Do utilitarians actually do that?
↑ comment by Said Achmiz (SaidAchmiz) · 2020-02-20T01:43:05.905Z · LW(p) · GW(p)
Ethics/morality is generally understood to be a way to answer the question, “what is the right thing to do [in some circumstance / class of circumstances]?” (or, in other words, “what ought I to do [in this circumstance / class of circumstances]?”)
If, in answer to this, your ethical framework / moral system / etc. says “well, action X is better than action Y, but even better would be action Z”, then you don’t actually have an answer to your question (yet), do you? Because the obvious follow-up is, “Well, ok, so… which of those things should I do? X? Or Y? Or Z…?”
At that point, your morality can give you one of several answers:
1. “Any of those things is acceptable. You ought to do something in the set { X, Y, Z } (but definitely don’t do action W!); but which of those three things to do, is really up to you. Although, X is more morally praiseworthy than Y, and Z more praiseworthy than X. If you care about that sort of thing.”
2. “You ought to do the best thing (which is Z).”
3. “I cannot answer your question. There is no right thing to do, nor is there such a thing as ‘the thing you ought to do’ or even ‘a thing you ought to do’. Some things are simply better than others.”
If your morality gives answer #3, then what you have is actually not a morality, but merely an axiology. In other words, you have a ranking of actions, but what do you do with this ranking? Not clear. If you want your initial question (“what ought I to do?”) answered, you still need a morality!
Now, an axiology can certainly be a component of a morality. For example, if you have a decision rule that says “rank all available actions, then do the one at the top of the ranking”, and you also have a utilitarian axiology, then you can put them together and presto!—you’ve got a morality. (You might have a different decision rule instead, of course, but you do need one.)
Answer #3 plus a “do the best thing, out of this ranking” is, of course, just answer #2, so that’s all fine and good.
In answer #1, we are supposing that we have some axiology (evaluative ranking) that ranks actions Z > X > Y > W, and some decision rule that says “do any of the first three (feel free to select among them according to any criteria you like, including random choice), and you will be doing what you ought to do; but if you do W, you’ll have done a thing you ought not to do”. Now, what can be the nature of this decision rule? There would seem to be little alternative to the rule being a simple threshold of some sort: “actions that are at least this good [in the evaluative ranking] are permissible, while actions worse than this threshold are impermissible”. (In the absence of such a decision rule, you will recall, answer #1 degenerates into answer #3, and ceases to be a morality.)
Well, fair enough. But how to come up with the threshold? On what basis to select it? How to know it’s the right one—and what would it mean for it to be right (or wrong)? Could two moralities with different permissibility thresholds (but with the same, utilitarian, axiology) both be right?
Note that the lower you set the threshold, the more empty your morality becomes of any substantive content. For instance, if you set the threshold at exactly zero—in the sense that actions that do either no good at all, or some good, but in either case no harm, are permitted, while harmful actions are forbidden—then your morality boils down to “do no harm (but doing good is praiseworthy, and the more the better)”. Not a great guide to action!
On the other hand, the higher you set the threshold, the closer you get to answer #2.
And in any event, the questions about how to correctly locate the threshold, remain unanswered…
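To make the threshold picture concrete, here’s a toy sketch (an arbitrary axiology and arbitrary thresholds, purely for illustration): the same ranking Z > X > Y > W yields very different sets of “permissible” actions depending on where the threshold sits, which is exactly the unresolved question.

```python
# Illustrative sketch of the threshold idea: keep one (utilitarian-style)
# axiology fixed, and let the decision rule be "permissible iff the action's
# value clears some threshold". All values and thresholds are made up.

values = {"Z": 10.0, "X": 6.0, "Y": 3.0, "W": -2.0}  # the ranking Z > X > Y > W

def permissible(values, threshold):
    return {action: value >= threshold for action, value in values.items()}

print(permissible(values, threshold=0.0))   # "do no harm": only W is forbidden
print(permissible(values, threshold=10.0))  # threshold at the very top: collapses to answer #2
```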
↑ comment by MichaelA · 2020-02-27T14:59:07.533Z · LW(p) · GW(p)
Turns out the latest 80,000 Hours episode has a brief relevant discussion at 30:45. That discussion seems to match my claim in my other comment that classical utilitarianism, if you take its original form at face value, sees only the maximally good action as permitted. But it also matches the idea that other forms of utilitarianism - and possibly the most common versions of utilitarianism nowadays - do not work that way.
(Said Achmiz's answer is more thorough anyway, though.)
↑ comment by Steven Byrnes (steve2152) · 2020-02-27T19:08:26.831Z · LW(p) · GW(p)
Timely! I noticed that too and was gonna comment but you beat me to it... For my part, I guess I go back and forth between Rob-style "I am going to do the best possible thing for the future, but that thing is to set reasonable goals so I don't burn out or give up, etc. etc.", and the quasi-nihilist [LW · GW] "None of this morality stuff is coherent, but I care about people having good lives and good futures, and I'm going to act on the basis of that feeling! (And oh by the way I also care about other things too.)" :-P
To be clear, I haven't thought it through very much, y'know, it's just the meaning of life, nothing important, I'm kinda busy :-P
↑ comment by MichaelA · 2020-02-19T12:41:20.764Z · LW(p) · GW(p)
I'm pretty confident that (self-described) utilitarians, in practice, very rarely do that. I think it's more common for them to view and discuss things as if they become "progressively more praiseworthy", or as if there's an obligation to do something that's at least sufficiently good, and then better things become "progressively more praiseworthy" (i.e., like you have to satisfice, and then past there it's a matter of supererogation).
I'm pretty confident that at least some forms of utilitarianism do see only the maximally good (either in expectation or "objectively") action as permitted. And I think that classical utilitarianism, if you take its original form at face value, fits that description. But there are various forms of utilitarianism, and it's very possible that not all of them have this "maximising" nature. (Note that I'm not a philosopher either.)
I think a few somewhat relevant distinctions/debates are subjectivism vs objectivism (as mentioned in this post) and actualism vs possibilism (full disclosure: I haven't read that linked article).
Note that me highlighting that self-described utilitarians don't necessarily live by or make statements directly corresponding to classical utilitarianism isn't necessarily a critique. I would roughly describe myself as utilitarian, and don't necessarily live by or make statements directly corresponding to classical utilitarianism. This post [EA · GW] is somewhat relevant to that (and is very interesting anyway).
↑ comment by TAG · 2020-02-20T10:17:36.747Z · LW(p) · GW(p)
People keep telling me that my criticisms of utilitarianism are criticisms of classical utilitarianism, not the improved version they believe in. But they keep failing to provide clear explanations of new improved utilitarianism. Which is a problem because if improved utilitarianism has aspects of subjectivism or social construction, or whatever, then it is no longer a purely mathematical and objective theory, as advertised.
↑ comment by TAG · 2020-02-20T09:54:18.395Z · LW(p) · GW(p)
We put people in jail or execute them for doing bad things. That's kind of a binary. If utilitarianism can only justify a spectrum of praiseworthiness-blameworthiness, then it is insufficient to justify the social practices surrounding ethics. If it can't justify blameworthiness, then things are even worse.
↑ comment by MichaelA · 2020-01-09T11:44:29.709Z · LW(p) · GW(p)
Currently, axiology seems confusing to me in that it seems to mean many different things at different times. I haven't looked into it enough to be confident in calling it, rather than me, confused, but I certainly wouldn't throw that hypothesis out yet either.
But I'm also a bit confused as to why you think analysis of the concept of value would be orthogonal to morality, prudence, and other normative matters?
It seems to me like maybe one analogy (which is spit-balling and goes outside of my wheelhouse) to illustrate my way of viewing this could be that an agent's moral theory, if we subtracted the axiology from it, gives the agent a utility function, but one containing references/pointers to other things, not yet specified. Like it could say "maximise value", but not what value is. And then the axiology specifies what that is. So to the extent to which axiology (under a given definition) helps clarify what is valuable, it feeds into morality, rather than running perpendicular to it. Or do you view it differently?
Perhaps what you meant by "boils down to conceptual analysis of the concept of ‘value’" was more like metaethics-style reasoning about things like the "nature of" value, which might not directly help answer what specifically is valuable?
comment by MichaelA · 2020-01-08T23:19:04.818Z · LW(p) · GW(p)
Another concept whose overlaps with and distinctions from morality are potentially worth highlighting is aesthetics. I might add that later.
comment by Donald Hobson (donald-hobson) · 2020-01-08T14:12:57.234Z · LW(p) · GW(p)
For example, I could say that, from the perspective of epistemic rationality, I “shouldn’t” believe that buying that burrito will create more utility in expectation than donating the same money to AMF would. This is because holding that belief won’t help me meet the goal of having accurate beliefs.
There is a phenomenon in AI safety called "you can't fetch the coffee if you're dead". A perfect total utilitarian, or even a money maximiser, would still need to eat if they want to be able to work next year. If you have a well paid job, or a good chance of getting one, don't starve yourself. Eat something quick, cheap, and healthy. Quick so you can work more today, and healthy so you can work years later. In a world where you need to wear a sharp suit to be CEO, the utilitarians should buy sharp suits. Don't fall for the false economy of personal deprivation. This doesn't entitle utilitarians to whatever luxury they feel like. If most of your money is going on sharp suits, it isn't a good job. A sharp-suited executive should be able to donate far more than a cardboard-box-wearing ditch digger.
↑ comment by MichaelA · 2020-01-08T23:15:42.721Z · LW(p) · GW(p)
Fair point. I've now replaced it with "buying a Ferrari", which, while still somewhat debatable, seems a lot less so. Thanks for the feedback!
I do think there's a sense in which, under most reasonable assumptions, it'd be true that buying the burrito itself won't maximise universe-wide utility, partly because there's likely some cheaper food option. But that requires some assumptions, and there's also a good chance that, if we're really talking about someone actively guided by utilitarianism, they've probably got a lot of good to do, and will likely do it better in the long run if they don't overthink every small action and instead mostly use some policies/heuristics (e.g., allow myself nice small things, but don't rationalise endless overseas holidays and shiny cars). And then there's also the point you raise about how one would look to others, and the consequences of that.
I do remember noticing when writing this post that that was an unnecessarily debatable example (the kind which whole posts could be and have been written about how to handle), but for some reason I then dropped that line of thinking.
↑ comment by Dagon · 2020-01-08T23:27:35.235Z · LW(p) · GW(p)
Ehn, I think this is dodging the question. There _ARE_ things one could do differently if one truly believed that others were as important as oneself. NOBODY actually behaves that way. EVERYONE does things that benefit themselves using resources that would certainly give more benefit to others.
Any moral theory that doesn't recognize self-interest as an important factor does not apply to any being we know of.
↑ comment by MichaelA · 2020-01-08T23:40:44.735Z · LW(p) · GW(p)
I would say that's yet another set of (related) debates that are interesting and important, but not core to this post :)
Examples of assumptions/questions/debates that your comment seem to make/raise:
- What is it to "truly believe" others are as important as oneself? Humans aren't really cohesive agents with a single utility function and set of beliefs. Maybe someone does believe that, on some level, but it just doesn't filter through to their preferences, or their preferences don't filter through to their behaviours.
- Is "true altruism" possible? There are arguably some apparent cases, such as soldiers jumping on grenades to save their brothers in arms, or that guy who jumped on the subway tracks to save a stranger.
- What does "true altruism" even mean?
- Should we care whether altruism is "true" or not"? If so, why?
- As I suggested above, would it really be the case that a person who does act quite a bit based on (effective) altruism would bring more benefit to others by trying to make sure every little action benefits others as much as possible, rather than by setting policies that save themselves time and emotional energy on the small matters so they can spend it on bigger things?
- Is the goal of moral philosophy to find a moral theory that "applies" to beings we know of, or to find the moral theory these beings should follow?
- More generally, what criteria should we judge moral theories by?
- What's the best moral theory?
- A bunch of metaethical and metaphysical cans of worms you opened up in trying to tackle the last three questions
Each of those points would deserve at least one post for itself, if not a series of books by different debating people who dedicated their whole lives to studying the matters.
This post wasn't trying to chuck all that in one place. This post is just about disentangling what we even mean by "morality" from other related concepts.
So I guess maybe I'm biting the bullet of the charge of dodging the question? I.e., that was exactly my intention when I switched to an example "which, while still somewhat debatable, seems a lot less so", because this post is about things other than those debates.