The Case Against Moral Realism

post by Zero Contradictions · 2024-11-07T10:14:26.269Z · LW · GW · 10 comments

This is a link post for https://thewaywardaxolotl.blogspot.com/2024/08/the-case-against-moral-realism.html


The PDF version can be read here.

Moral realism is an explicit version of the ordinary view of morality. It has the following assumptions:

There are many problems with moral realism, including:

Let’s go through these problems in more detail, starting with the definition of good and evil.

What are good and evil?

If good and evil are objectively real, then we should be able to measure them, analogous to how we measure height or temperature. We could construct a device to measure things on this objective moral dimension, in a way that is free from personal biases. Then we could use the device to resolve moral conflicts, in the same way that we can use a ruler to resolve a disagreement about height. But of course, we can’t do any of those things for good and evil.

(see the rest of the post in the link)

10 comments

Comments sorted by top scores.

comment by AnthonyC · 2024-11-07T13:42:53.176Z · LW(p) · GW(p)

I agree with the thrust and most of the content of the post, but in the interest of strengthening it, I'm looking at your list of problems and wanted to point out what I see as gaps/weaknesses.

For the first one, keep in mind it took centuries from trying to develop a temperature scale to actually having the modern thermodynamic definition of temperature, and reliable thermometers. The definition is kinda weird and unintuitive, and strictly speaking runs from 0 to infinity, then discontinuously jumps to negative infinity (but only for some kinds of finite systems), then rises back towards negative zero (I always found this funny when playing the Sims 3, since it had a "-1K Refrigerator"). Humans knew things got hot and cold for many, many millennia before figuring out temperature in a principled way. Morality could plausibly be similar.

The third and fourth seem easily explainable by bounded rationality, in the same way that "ability to build flying machines and quantum computers" and "ability to identify and explain the fundamental laws of physical reality" vary between individuals, cultures, and societies.

For the fifth, there's no theoretical requirement that something real should have only a small number of principles needed for human-scale application. Occam's Razor cuts against anyone suggesting a fundamentally complex thing, but it is possible that there is a simple underlying set of principles that is just incredibly complicated to use in practice. I would argue that most attempts to formalize morality, from Kant to Bentham and beyond, have this problem, and one of the common ways they go wrong is that people try to apply them without recognizing it.

The sixth seems like a complete non-sequitur to me. It assumes that if moral realism were true, then people should be morally good. But why would they be? Even if there were somehow a satisfying answer to the second problem of imposing an obligation, that would not necessarily provide an actual mechanism to compel action, or a tendency to act to fulfil the obligation. In fact, at least some traditional attempts at moral realist frameworks, like Judeo-Christian God-as-Lawgiver religion, explicitly avoid having such a mechanism.

Replies from: NicksName
comment by NicksName · 2024-11-11T12:41:03.574Z · LW(p) · GW(p)

The whole field of meta-ethics is bogus and produces verbiage that isn't helpful. The last paragraph here hits the nail on the head: an interlocutor can grant basically anything about morality, but as long as there isn't an enforcement mechanism, it's simply irrelevant. Any non-supernatural enforcement mechanism just turns compliance/non-compliance into a game-theoretic problem. And even if there were a supernatural enforcement mechanism, people would just cooperate out of calculation, in which case morality and altruism get divorced: morality loses its emotional appeal and, if you think about it long enough, reduces to individual selfishness, which chokes the motivation to care about morality in the first place.

Replies from: AnthonyC
comment by AnthonyC · 2024-11-15T12:01:54.171Z · LW(p) · GW(p)

If I'm understanding you correctly, then I strongly disagree about what ethics and meta-ethics are for, as well as what "individual selfishness" means. The questions I care about flow from "What do I care about, and why?" and "How much do I think others should or will care about these things, and why?" Moral realism and amoral nihilism are far from the only options, and neither are ones I'm interested in accepting.

comment by StartAtTheEnd · 2024-11-07T18:19:18.688Z · LW(p) · GW(p)

I don't think good and evil are objectively real as moral terms, but if something makes us select against certain behaviour, it may be because said behaviour results in organisms deleting themselves from existence, so that "evil" actually means "unsustainable". But this makes it situational (your sustainable expenditure depends on your income, for instance, so spending $100 cannot be objectively good or evil).

Moral judgments vary between individuals, cultures and societies

Yes, and which actions result in you not existing will also vary. There's no universal morality for the same reason that there's no universal "best food" or "most fitting zoo enclosure": "best" cannot exist on its own. Calling something "best" is a kind of shortcut; there are implicit things being referred to.
What's the best move in Tetris? The correct answer depends on the game state. When you're looking for "objectively correct universal moral rules", you might also be throwing away the game state on which the answer depends.

I'd go as far as to say that all searches for universal solutions are mistaken, as there may (necessarily? I'm not sure) exist many local solutions which are objectively better within their smaller scopes. For instance, you cannot design one tool which is the best tool for fixing any machine; instead you will have to create hundreds of tools which are the best for each part of each machine. So hammers, saws, wrenches, etc. exist, and you cannot unify all of them to get something which is objectively better than any of them in any situation. But does this imply that tools are not objective? Does it not rather imply that "good" is a function taking at least two inputs (tool, object) and outputting a value based on the relation between the two? (A third input could be context, i.e. water is good for me in the context that I'm thirsty.)
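The "good as a relation" idea above can be sketched as a toy function (everything here, the tool/task pairs and the scores, is invented purely for illustration; nothing in the original post specifies these values):

```python
# Toy sketch: "good" as a function of (tool, object, context),
# not an intrinsic property of the tool alone.
def goodness(tool, task, context=None):
    """Return a made-up fitness score for how well a tool suits a task.

    All pairs and scores are hypothetical illustrations.
    """
    fit = {
        ("hammer", "drive nail"): 1.0,
        ("saw", "drive nail"): -0.5,
        ("saw", "cut plank"): 1.0,
    }
    score = fit.get((tool, task), 0.0)
    # A third input, context, can shift the value further
    # (analogous to "water is good for me when I'm thirsty").
    if context == "underwater":
        score -= 0.5
    return score

print(goodness("hammer", "drive nail"))  # 1.0
print(goodness("saw", "drive nail"))     # -0.5
```

The point of the sketch is only that the output depends on the relation between the inputs, so asking "is a saw good?" with no task or context is an ill-posed question.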

If my take is right, then something like 80% of all philosophical problems turn out to be nonsense. In other words, most unsolved problems might be due to flawed questions. I'm fairly confident in this take, but I don't know if it's obvious or profound.

comment by Seth Herd · 2024-11-07T17:24:32.450Z · LW(p) · GW(p)

I think this issue has been discussed at length and repeatedly on LW, leading to a weak consensus that at least strong moral realism isn't true.

Can anyone supply links to some other good posts on the topic?

Replies from: cubefox
comment by cubefox · 2024-11-08T14:49:37.978Z · LW(p) · GW(p)

Yudkowsky has written about it:

(...) In standard metaethical terms, we have managed to rescue 'moral cognitivism' (statements about rightness have truth-values) and 'moral realism' (there is a fact of the matter out there about how right something is). We have not however managed to rescue the pretheoretic intuition underlying 'moral internalism' (...)

comment by JBlack · 2024-11-09T05:22:00.425Z · LW(p) · GW(p)

Only the first point "Good and evil are objectively real" is a necessary part of moral realism. Sometimes the first half of the third ("We have an objective moral obligation to do good and not do evil") is included, but by some definitions that is included in what good and evil mean.

All the rest are assumptions that many people who believe in moral realism also happen to hold, but aren't part of moral realism itself.

comment by cubefox · 2024-11-08T09:15:27.408Z · LW(p) · GW(p)

Replace in the post "morality" with "rationality" and you get a reductio ad absurdum.

comment by kylefurlong · 2024-11-12T06:41:26.073Z · LW(p) · GW(p)

If we imagine that each individual actor in an environment is constantly transducing incident causal events into further watersheds of causal event chains, and we allow each actor to evaluate the benefit or harm of a given incident causal event, we have a basis for a definition of moral good or evil: the sum of an action's benefits less its harms, taken over the entire causal watershed and all affected actors, divided by the sum of the total benefits and harms taken over the same watershed. This quantity is computable and ranges from 1 to -1 (pure good to pure evil).
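The ratio described above can be sketched numerically. This is a minimal toy interpretation: the aggregation over the whole causal watershed and all affected actors is collapsed here into a flat list of signed effect magnitudes (positive = benefit, negative = harm), which the original comment does not specify:

```python
def moral_value(effects):
    """Compute (total benefits - total harms) / (total benefits + total harms).

    effects: signed magnitudes, one per affected actor per event in the
    causal watershed (positive = benefit, negative = harm).
    Returns a value in [-1.0, 1.0]: 1.0 is pure good, -1.0 pure evil.
    """
    benefits = sum(e for e in effects if e > 0)
    harms = sum(-e for e in effects if e < 0)
    total = benefits + harms
    if total == 0:
        return 0.0  # no benefits or harms at all: morally neutral
    return (benefits - harms) / total

print(moral_value([3.0, -1.0]))  # 0.5
print(moral_value([2.0, 2.0]))   # 1.0  (pure good)
print(moral_value([-4.0]))       # -1.0 (pure evil)
```

The hard part, of course, is not this arithmetic but actually enumerating the causal watershed and getting each actor's benefit/harm evaluations, which is where the proposal's computability claim does the heavy lifting.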

comment by Dagon · 2024-11-07T19:21:40.658Z · LW(p) · GW(p)

I think there's a much simpler case against it: show me the instrument readings, or at least tell me the unit of measure.