Thoughts on Moral Philosophy
post by Chris_Leong · 2021-08-17T12:57:22.019Z · LW · GW · 12 comments
I don't really feel that I personally have any special insight into moral philosophy, but nonetheless, some people may find it interesting or useful to know where I stand, or which philosophers and perspectives I do consider to have such insight.
That said, perhaps my most original offering is my three-paragraph post On Disingenuity [LW · GW]:
Suppose someone claims that all morality is relative, but when pressed on whether this would apply even to murder, they act evasive and refuse to give a clear answer. A critic might conclude that this person is disingenuous in refusing to accept the clear logical consequences of their belief.
However, imagine that there's a really strong social stigma against asserting that murder might not be bad, to the point of permanently damaging such a person's reputation, even though there's no consequence for making the actually stronger claim that all morality is relative. The relativist might therefore see the critic as the one who is disingenuous: trying to leverage social pressure against them instead of arguing on the basis of reason.
Thus in the right circumstances, each side can quite reasonably see the other as disingenuous. I suspect that everyone will have experienced both sides of the coin at different times depending on the issue being discussed.
I strongly dislike most arguments that morality is relative or subjective, as they usually involve a bunch of handwaving and bullet-biting that the proponent hasn't really thought through, but I also acknowledge that Hume's Is-Ought disjunction is quite devastating for the prospects of establishing an objective morality.
Derek Parfit makes the best attempt I've seen to get around this disjunction, a) with his Future Tuesday thought experiment, which he uses to argue that some preferences are objectively more correct than others, and b) by noting that people generally believe we can have knowledge about mathematics despite its seemingly non-empirical nature, which suggests that we might similarly be able to have such a priori knowledge about morality. I didn't find his arguments completely persuasive, but I've only read summaries.
I find that discussions about moral philosophy are much more productive when we make a distinction between Principled and Pragmatic Morality [LW · GW]. For example, someone may endorse utilitarianism in principle whilst rejecting it in practice, whether because of the impossible burden of calculating expected values before every action, because they worry that humans will mostly just use it as an excuse to commit atrocities, or because of what its radical moral demands would mean for our lives.
Beyond this, I strongly endorse distinguishing Terminal and Instrumental Values [LW · GW] especially in combination with techniques like Goal Factoring [LW · GW]. In most cases I strongly decry The Copenhagen Interpretation of Ethics, although I concede that it may have some value given the difficulty of obtaining societal agreement on complex social norms [LW · GW] or as part of a Conflict-Theory style strategy (see In Defence of Conflict Theory [LW · GW]).
As per Sartre and Simone de Beauvoir, I endorse the existential position that life is so complex that there's no set of rules that can always lead us towards the correct moral decision. Nonetheless, as a pragmatic adjustment, I concede that rules are necessary for social co-ordination and as a defence against motivated reasoning.
Even though I've only read a single chapter of Peter Unger's Living High and Letting Die, I was really struck by his approach. While most philosophers aim to make thought experiments as simple as possible, Peter Unger tends to consider thought experiments with more options and more complexity. His approach allows a better exploration of how moral psychology actually works, such as when adding or removing options changes our tendency to rank the other options. It seems inevitable that any moral argument will depend on intuitions, so it is vital to understand, as best we can, how these intuitions are formed.
For example, he makes a very persuasive argument that for moral dilemmas like the trolley problem we tend to conceive of some people as being within the scope of the problem and some people as being outside it, with us not having a right to harm people outside the scope to protect people inside the scope. As an illustration, people are much more willing to shift a trolley from a track containing five people to a track containing one person than to shift it off the tracks entirely, where it'll roll across a road and hit someone in their own backyard. Since our conception of who is or isn't within a problem can be changed by adding or removing options, he argues that this is a cognitive distortion.
In terms of my actual moral position, I have a degree of sympathy towards utilitarianism as a principled morality. The distinction between acts and omissions seems to be more a matter of pragmatics than principle, a) because it is more intrusive for the government to tell you to do things than for it to ban things, and b) because we don't trust others enough to let them hurt some people in order to protect others.
Another reason I lean utilitarian is that it just feels like, at some level of stakes, we have to be willing to act for the greater good. While some people maintain that we are unjustified in sacrificing an innocent life even if it were necessary to save the whole world, that feels like a naive and overly sentimental position to me.
That said, I don't fully endorse utilitarianism as a principled morality, largely because of the difficulty I have endorsing utility monsters.
With regards to the particular brand of utilitarianism, I strongly lean towards total utilitarianism. The Repugnant Conclusion isn't as damaging as it first sounds, because even very small amounts of positive utility are still positive, unlike, say, distributions of money, where living on less than a certain amount may make a life net-negative. Additionally, the Sadistic Conclusion seems at least as bad as the Repugnant Conclusion, so comparatively, average utilitarianism is worse.
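To make this concrete, here is a minimal illustrative sketch in Python; the population sizes and welfare numbers are purely hypothetical assumptions, chosen only to show how total and average utilitarianism rank the populations involved in the two conclusions:

```python
# A minimal, purely illustrative sketch: how total and average utilitarianism
# rank some hypothetical populations. All numbers are made up for illustration.

def total_utility(population):
    return sum(population)

def average_utility(population):
    return sum(population) / len(population)

# Repugnant Conclusion: a vast population of lives barely worth living
# beats a small, very happy population on *total* utility.
small_happy = [100] * 1_000            # 1,000 people with very good lives
vast_barely_positive = [1] * 200_000   # 200,000 people with barely positive lives

assert total_utility(vast_barely_positive) > total_utility(small_happy)
assert average_utility(vast_barely_positive) < average_utility(small_happy)

# Sadistic Conclusion: starting from a happy base population, *average*
# utilitarianism prefers adding one suffering (net-negative) person over
# adding many people whose lives are barely positive.
base = [100] * 1_000
with_sufferer = base + [-50]           # add one net-negative life
with_many_mild = base + [1] * 10_000   # add many barely-positive lives

assert average_utility(with_sufferer) > average_utility(with_many_mild)
assert total_utility(with_sufferer) < total_utility(with_many_mild)
```

Of course, real welfare isn't a list of numbers; this is only meant to make the structure of the two objections visible.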
Pragmatically, I tend to endorse a somewhat unprincipled combination of utilitarianism, deontology and virtue ethics, as utilitarianism helps us keep consequences in mind, deontology defends us against motivated reasoning, and virtue ethics helps us become better people.
I have a great appreciation of the elegance and appeal of Libertarianism, but I found Scott Alexander's Non-Libertarian FAQ persuasive. Ultimately, the price of pure libertarianism would be people dying on the streets or having their potential wasted, and once we've given up on freedom as an absolute principle, it is hard to prevent libertarianism collapsing into a form of utilitarianism.
Since this post is focused more on summarising my views than on arguing for them, I've only really provided high-level justifications. However, I'm quite happy to provide extra details in the comments.
12 comments
comment by artifex0 · 2021-08-18T07:42:59.251Z · LW(p) · GW(p)
Do you think it's plausible that the whole deontology/consequentialism/virtue ethics confusion might arise from our idea of morality actually being a conflation of several different things that serve separate purposes?
Like, say there's a social technology that evolved to solve intractable coordination problems by getting people to rationally pre-commit to acting against their individual interests in the future, and additionally a lot of people have started to extend our instinctive compassion and tribal loyalties to the entirety of humanity, and also people have a lot of ideas about which sorts of behaviors take us closer to some sort of Pareto frontier, and maybe additionally there's some sort of acausal bargain that a lot of different terminal values converge toward or something.
If you tried to maximize just one of those, you'd obviously run into conflicts with the others, and then if you used the same word to describe all of them, that might look like a paradox. How can something be clearly good and not good at the same time, you might wonder, not realizing that you've used the word to mean different things each time.
If I'm right about that, it could mean that when encountering the question of "what is most moral" in situations where different moral systems provide different answers, the best answer might not be so much "I can't tell, since each option would commit me to things I think are immoral," but rather "'Morality' isn't a very well defined word; could you be more specific?"
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2021-08-18T07:56:53.521Z · LW(p) · GW(p)
That's entirely plausible
comment by Srdjan Miletic (srdjan-miletic) · 2021-08-19T09:42:48.239Z · LW(p) · GW(p)
I think you may be confusing utilitarianism and consequentialism a bit. Your arguments for accepting utilitarianism past a certain scale (e.g. would you kill one person to save the world; no logical basis for the act/omission distinction) are more arguments for consequentialism generally than they are for utilitarianism specifically. Your objections, on the other hand, are specific to utilitarianism.
Have you considered that you may be a consequentialist (you think the best principled course of action/universe is one where we maximise goodness) but not a utilitarian (consequentialism + the only thing we should care about is utility; no weighting for desert, justice, knowledge, etc.)?
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2021-08-19T10:35:04.874Z · LW(p) · GW(p)
Yeah, I probably could have been more specific there.
I'm slightly torn on pure utilitarianism. Like, it seems very hard to assert that it is equally good for one person to have a thousand utility and 999 people to have zero as it is for a thousand people to each have one utility - yet both distributions sum to the same total, so pure total utilitarianism is indifferent between them.
But I don't really feel a need to include a weighting for knowledge or desert.
comment by Lance Bush (lance-bush) · 2021-08-17T20:27:07.187Z · LW(p) · GW(p)
I don't endorse relativism/subjectivism; however, relativist positions do not strike me as especially troublesome or implausible, nor do they seem to me to entail biting any bullets.
You state that you dislike arguments for relativism/subjectivism because they involve handwaving and biting bullets. Could you say a bit more about this? What kind of handwaving do you have in mind? And what bullets do you think these arguments lead one to bite?
I’m also interested in why you consider Parfit’s Future Tuesday thought experiment to be the best attempt (at getting around the is-ought divide I take it?), and more generally what (if anything) you think the thought experiment demonstrates or provides evidence for. I grant that “best” doesn’t necessarily mean good or convincing and I recognize that you don’t find these arguments completely persuasive, but I don’t find Parfit’s considerations even slightly persuasive.
I am also intrigued by the suggestion that “people generally believe that we can have knowledge about mathematics.” Sure, but we can also have knowledge about the rules of soccer or chess, and yet I don’t think that the rules of these games are stance-independently true. They are constructed. Just the same, the rules of mathematics could likewise be constructed. The fact that we can have knowledge of something does not entail realism about that thing in the sense in which Parfit and other non-naturalist moral realists claim there are objective moral facts (i.e., they don’t think these facts are true in the way the rules of soccer are). In any case, what people generally believe about the metaphysics of mathematics seems like an empirical question; is there convincing data that people are generally mathematical realists?
Either way, I don’t see much reason to infer that there are objective moral facts even if there are objective mathematical facts; while I grant that identifying examples of bodies of non-empirical knowledge (like math) raises the plausibility of other bodies of non-empirical knowledge (like morality), this provides at best only marginal evidence of the possibility of the latter; it’s hardly a good reason to think moral realism is tenable.
Replies from: JBlack, Chris_Leong
↑ comment by JBlack · 2021-08-18T00:15:41.200Z · LW(p) · GW(p)
Rules of mathematics are constructed and arbitrary in much the same way soccer rules are, but from any given set of rules specific conclusions follow objectively. It doesn't make much sense to say that a given set of axioms and derivation rules is "objectively true"(*), but it does make sense to say that the theorems "objectively follow" from the axioms and derivation rules.
So mathematics is not really of much help to any concept of objective morality.
(*) I am aware that there are mathematicians who believe in "objectively true" systems of mathematics. They're deluding themselves by confusing other qualities such as power, elegance, usefulness, or beauty for objective truth.
↑ comment by Chris_Leong · 2021-08-18T03:40:29.450Z · LW(p) · GW(p)
"You state that you dislike arguments for relativism/subjectivism because they involve handwaving and biting bullets. Could you say a bit more about this? What kind of handwaving do you have in mind? And what bullets do you think these arguments lead one to bite?"
Well, people will say things like "X is true for me and Y is true for you" as a way to try to persuade you which doesn't really tell us anything about morality, just about the grammar of the English language. Or they'll say that the fact that we don't agree on morality shows morality is relative - while this provides some degree of evidence, it isn't a very strong argument as humans can get into endless arguments about all kinds of things.
Then there's the fact that they seem to be denying people the ability to claim anything is right or wrong, except for them personally or from within the perspective of their culture, whilst simultaneously claiming the relativity or subjectivity of morality universally.
Well, a lot of people who support relativism/subjectivism just want us to be more respectful of other people's perspectives and cultures, or believe that we should stay out of things that aren't our business - if they actually saw a woman being stoned to death for adultery, their position would usually change pretty fast.
"I’m also interested in why you consider Parfit’s Future Tuesday thought experiment to be the best attempt"
Well, most people agree that the Future Tuesday preference is objectively wrong or bad or mistaken. This stands in contrast to subjectivism about preferences, which says that there's no right or wrong, or better or worse when it comes to preferences.
Now preferences and morality are very similar. If we concede that some non-moral preferences are objectively better than others, then analogously it seems plausible that some moral preferences could be objectively better than others.
"I am also intrigued by the suggestion that “people generally believe that we can have knowledge about mathematics.”"
I agree with you that they are constructed, but lots of people believe that the surprising effectiveness of mathematics indicates that it is, for example, "the language of the universe".
"this provides at best only marginal evidence of the possibility of the latter; it’s hardly a good reason to think moral realism is tenable"
Well, it would defeat Hume's Is-Ought disjunction, which claims that we can't get from empirical facts to morality. If we accept the characterisation of mathematics as a priori knowledge, then starting from empirical facts wouldn't be the only road to objective knowledge.
Replies from: lance-bush
↑ comment by Lance Bush (lance-bush) · 2021-08-18T06:05:48.377Z · LW(p) · GW(p)
People may say those sorts of things, but it is easy to find poor representatives of any position. Relativism/subjectivism as they are put forward by moral philosophers are a very different thing, and are less (or not at all) susceptible to the kinds of concerns you raise.
The persistence of intractable disagreement is just the tip of a much bigger iceberg of reasons to doubt moral realism; I share your view that it is not very strong evidence on its own, but there are other reasons, and the overall picture seems to me to overwhelmingly favor the antirealist position. At the very least, the persistence of disagreement can spark a broader discussion about how one would go about determining what the allegedly objective moral facts are, and I don’t think moral realists have anything very convincing to say on the epistemic front.
"Then there's the fact that they seem to be denying people the ability to claim anything is right or wrong, except for them personally or from within the perspective of their culture, whilst simultaneously claiming the relativity or subjectivity of morality universally."
Some relativists may do this, but relativism as a metaethical stance does not require or typically entail the claim that people don’t have the ability to claim anything is right or wrong except with respect to that person’s standards or the standards of their culture.
Insofar as relativism includes a semantic thesis, the thesis is that as a matter of fact this is what people mean when they make moral claims; not that they lack the ability to do otherwise. In other words, the relativist might say “when people make moral claims, they intend to report facts that are implicitly indexicalized to themselves or their culture’s standards.” The semantic aspect of relativism is about the meaning of ordinary moral thought and discourse; it isn’t (necessarily) a strict requirement that nobody could speak or think differently.
Relativists can and do acknowledge the existence of people who don’t speak or think this way; after all, they often find themselves arguing with moral realists, whose reflective moral stance is that there are non-relative moral facts. The relativist might adopt an error theory towards these people.
I’m not entirely sure I understand the last part, about simultaneously claiming the “relativity or subjectivity of morality universally.”
"Well, a lot of people who support relativism/subjectivism just want us to be more respectful of other people's perspectives and cultures, or believe that we should stay out of things that aren't our business - if they actually saw a woman being stoned to death for adultery, their position would usually change pretty fast."
It may be that some people’s relativism is really just a clumsy and roundabout way to endorse tolerance towards others, but that’s a problem for these people’s views, it isn’t really a problem with relativism as a metaethical position; relativism as a metaethical position doesn’t entail and isn’t really about tolerance.
Also, when you say that people’s position would change pretty fast, do you mean that they’d endorse some form of realism? People whose apparent metaethical standards change when asked about atrocities may very well be confused, as the question may give the rhetorical impression that if they don’t object to atrocities in the realist sense, that they don’t object to them at all. This simply isn’t the case. Relativists who fold under pressure when presented with atrocities don’t need to: nothing about relativism requires that one be any less opposed, disgusted, and outraged by stoning adulterers.
In any case, what kind of people do you have in mind? Are these laypeople who don’t study metaethics? I study metaethics, and I am an antirealist; my particular stance is different than relativists/subjectivists, but I share with them the denial that there are objective moral facts. My metaethical standards don’t change in response to people pointing to actions I oppose; my opposition to those actions is fully consistent with antirealism.
"Well, most people agree that the Future Tuesday preference is objectively wrong or bad or mistaken."
Do they? That’s an empirical question. In any case, even if most people did, I’d just say these people are mistaken. Most people agreeing on something is at best very weak evidence for whatever it is they agree on. I’ve discussed the Future Tuesday indifference scenario several times and have yet to hear a good explanation of how one gets from it to objectivity, or external reasons, or justifies claiming the agent in the scenario is “irrational,” etc. The typical response I get is simply that it “seems intuitive” or something like that. Should we take other people’s intuitions to be probative of the truth? If so, why?
FWIW, I don’t even think the type of moral realism Parfit was going for is intelligible. So when people report that they intuit implications from the Future Tuesday thought experiment, I’m not entirely clear on what it is they’re claiming seems to be true to them; that is, I don’t think it even makes sense to say something is objectively right or wrong. Happy to discuss this further!
Finally, regarding what I think may be going on: it seems far more plausible that people reading the scenario are projecting their own notions of what would or wouldn’t be rational onto the agent in the scenario, and mistakenly thinking that there is some stance-independent standard of what is “rational.” In other words, they’re actually just imputing their own standards onto agents without realizing it. Personally, I just don’t think there’s anything irrational about future Tuesday indifference.
"If we concede that some non-moral preferences are objectively better than others, then analogously it seems plausible that some moral preferences could be objectively better than others."
Unfortunately, I also do not agree with this. It’s not just that I don’t think it makes any sense to describe some preferences as objectively better than others. It’s that even if there were objective nonmoral normative facts, I don’t think this provides much support for moral realism.
I’m also simply not sure it’s true that moral facts are similar to preferences, and, even if they are, I am not sure that whatever respects in which they’re similar provide much of a reason to take moral realism seriously. After all, unicorns are quite similar to horses, but the existence of horses is hardly a good reason to think the existence of unicorns is plausible.
Consider a hypothetical society that had a completely distinct, sui generis category of norms that aren’t moral norms; they are, say, “zephyrian norms.” Zephyrian norms developed over the centuries in this society in response to a wide array of considerations, and are built around regulating and maintaining social order through adherence to various rituals, taboos, and ceremonial practices. For instance, one important zephyrian norm revolves around never wearing the color blue, because it is intrinsically unzephyrian to do so.
I take it you and I would find “zephyrian realism” utterly implausible. There’s just no reason to think it’s objectively bad to wear blue, or to sing the Hymn of the Aegis every 7th moon. Yet if we grew up in a society with zephyrian norms, we may regard them as distinct from and just as important as moral norms.
And we could argue that, if objectivism about preferences is true, then it seems plausible there could be objective zephyrian facts about what we should or shouldn’t do. Of course, zephyrian realism is false; there are no objective zephyrian facts.
That there might be some other normative facts does very little to increase the plausibility of zephyrian realism. I think the same holds for moral realism. Even if preference realism were true, none of us would be tempted to take zephyrian realism much more seriously. Likewise, it's not clear why preference realism should do much to render moral realism more plausible.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2021-08-18T08:34:49.349Z · LW(p) · GW(p)
Regarding laymen vs philosophers - I was mainly trying to criticise the lay ideas of relativism floating around. And I wasn't denying that some people could endorse moral relativism seriously, just that I think the majority of people endorse it without biting the bullet.
What you wrote about Zephyrian realism is interesting, I'd have to think about it.
Replies from: lance-bush
↑ comment by Lance Bush (lance-bush) · 2021-08-18T17:26:27.117Z · LW(p) · GW(p)
Great, thanks for clarifying. I am a very enthusiastic proponent of moral antirealism so feel free to get in touch if you want to discuss metaethics.
comment by Teo Ajantaival · 2021-08-17T16:56:44.214Z · LW(p) · GW(p)
"Additionally, the Sadistic Conclusion seems at least as bad as the Repugnant Conclusion, so comparatively, total utilitarianism is worse."
I think you intend to say that "comparatively, average utilitarianism is worse" :)
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2021-08-17T19:05:30.887Z · LW(p) · GW(p)
Oops, you're correct. Thanks :-)