To first order, moral realism and moral anti-realism are the same thing
post by Stuart_Armstrong · 2019-06-03T15:04:56.363Z · LW · GW · 8 comments
I've taken a somewhat caricatured view [LW · GW] of moral realism[1], describing it, essentially, as the random walk of a process defined by its "stopping" properties.
In this view, people start improving their morality according to certain criteria (self-consistency, simplicity, what they would believe if they were smarter, etc...) and continue on this until the criteria are finally met. Because there is no way of knowing how "far" this process can continue until the criteria are met, this can drift very far indeed from its starting point [LW · GW].
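As a purely illustrative toy (my own sketch, nothing from the post itself): a process that only stops when a formal criterion is met can wander arbitrarily far from where it started before that criterion happens to hold.

```python
import random

def improve_until_criteria_met(morality, criteria_met, improve, max_steps=10**6):
    """Toy model of the 'random walk with stopping properties' picture:
    keep 'improving' a morality until a stopping criterion is met.
    Nothing in the process bounds how far the end state drifts."""
    start = morality
    for _ in range(max_steps):
        if criteria_met(morality):
            break
        morality = improve(morality)  # one self-improvement step
    return morality, abs(morality - start)  # final state and total drift

# Illustrative instance: a one-dimensional "morality" taking random +/-1
# improvement steps, which only counts as "self-consistent" once it lands
# on a nonzero multiple of 100.
random.seed(0)
final, drift = improve_until_criteria_met(
    morality=0,
    criteria_met=lambda m: m != 0 and m % 100 == 0,
    improve=lambda m: m + random.choice([-1, 1]),
)
print(final, drift)  # stops only at +/-100, +/-200, ...: far from where it began
```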
Now I would like to be able to argue, from a very anti-realist perspective, that:
- Argument A: I want to be able to judge that one morality is better than another, based on some personal intuition or judgement of correctness. I want to be able to judge that the second is alien and evil, even if it is fully self-consistent according to formal criteria, while the first, better one is not fully self-consistent.
Moral realists look like moral anti-realists
Now, I maintain that this "random walk to stopping point" is an accurate description of many (most?) moral realist systems. But it's a terrible description of moral realists. In practice, most moral realists allow for the possibility of moral uncertainty, and hence that their preferred approach might have a small chance of being wrong.
And how would they identify that wrongness? By looking outside the formal process, and checking if the path that the moral "self-improvement" is taking is plausible, and doesn't lead to obviously terrible outcomes.
So, to pick one example from Wei Dai (similar examples can be found in this post on self-deception [LW · GW], and in the "Senator Cruz" section of Scott Alexander's "debate questions" post):
I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked [LW · GW].
If the moral realist approach included getting into conversations with such systems and thus getting randomly subverted, then the moral realists I know would agree that the approach had failed, no matter how internally consistent it seemed. Thus, they allow, in practice, some considerations akin to Argument A: where the moral process ends up (or at least the path that it takes) can affect their belief that the moral realist conclusion is correct.
So moral realists, in practice, do have conditional meta-preferences [LW · GW] that can override their moral realist system. Indeed, most moral realists don't have a fully-designed system yet, but have a rough overview of what they want, with some details they expect to fill in later; from the perspective of here and now, they have some preferences, some strong meta-preferences (on how the system should work) and some conditional meta-preferences (on how the design of the system should work, conditional on certain facts or arguments they will learn later).
Moral anti-realists look like moral realists
Enough picking on moral realists; let's look now at moral anti-realists, which is relatively easy for me as I'm one of them. Suppose I were to investigate an area of morality that I haven't investigated before; say, political theories of justice.
Then I would expect that, as I investigated this area, I would start to develop better categories than I have now, with crisper and more principled boundaries. I would expect to meet arguments that would change how I feel and what I value in these areas. I would apply simplicity arguments to turn the hodgepodge of half-baked ideas I currently have in that area into something more elegant.
In short, I would expect to engage in moral learning. Which is a peculiar thing for a moral anti-realist to expect...
The first-order similarity
So, to generalise a bit across the two categories:
- Moral realists are willing to question the truth of their systems based on facts about the world that should formally be irrelevant to that truth, and use their own private judgement in these cases.
- Moral anti-realists are willing to engage in something that looks like moral learning.
Note that the justifications of the two points of view are different - the moral realist can point to moral uncertainty, the moral anti-realist to personal preferences for a more consistent system. And the long-term perspectives are different: the moral realist expects that their process will likely converge to something with fantastic properties, the moral anti-realist thinks it likely that the degree of moral learning is sharply limited, only a few "iterations" beyond their current morality.
Still, in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar. Which is probably why they can continue to have conversations and debates that are not immediately pointless.
I apologise for my simplistic understanding and definitions of moral realism. However, my partial experience in this field has been enough to convince me that there are many incompatible definitions of moral realism, and many arguments about them, so it's not clear there is a single simple thing to understand. So I've tried to define it very roughly, enough so that the gist of this post makes sense. ↩︎
8 comments
comment by Gordon Seidoh Worley (gworley) · 2019-06-11T01:24:20.117Z · LW(p) · GW(p)
I apologise for my simplistic understanding and definitions of moral realism. However, my partial experience in this field has been enough to convince me that there are many incompatible definitions of moral realism, and many arguments about them, so it's not clear there is a single simple thing to understand. So I've tried to define it very roughly, enough so that the gist of this post makes sense. [LW · GW]
I think this is mostly because there are lots of realist and anti-realist positions and they cluster around features other than their stance on realism, i.e. whether or not moral facts exist, or, said less densely, whether or not moral claims can be true or false. The two camps seem to have a lot more going on, though, than is captured by this rather technical point, as you point out. In fact, most of the interesting debate is not about this point, but about things that can be functionally the same regardless of your stance on realism, hence your noticing how realists and anti-realists can look like each other in some cases.
(My own stance is to be skeptical, since I'm not even sure we have a great idea of what we really mean when we say things are true or false. It seems like we do at first, but if we poke too hard the whole thing starts to come apart at the seams, which makes it a bit hard to worry too much about moral facts when you're not even sure about facts in the first place!)
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-06-27T04:41:49.853Z · LW(p) · GW(p)
"""And the long-term perspectives are different: the moral realist expects that their process will likely converge to something with fantastic properties, the moral anti-realist thinks it likely that the degree of moral learning is sharply limited, only a few "iterations" beyond their current morality."""
^ Why do you say this?
↑ comment by Stuart_Armstrong · 2019-06-27T11:30:37.401Z · LW(p) · GW(p)
Just my impression based on discussing the issue with some moral realists/non-realists.
comment by J Thomas Moros (J_Thomas_Moros) · 2019-06-06T16:31:27.059Z · LW(p) · GW(p)
At least as applied to most people, I agree with your claim that "in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar." As a moral anti-realist myself, I think a likely explanation is that both are engaging in the kind of moral reasoning that evolution wired into them. Both the realist and anti-realist are then offering post hoc explanations for their behavior.
With any broad claims about humans like this, there are bound to be exceptions. Thus all the qualifications you put into your statement. I think I am one of those exceptions among the moral anti-realists. Though I don't believe it in any way invalidates your "Argument A." If you're interested in hearing about a different kind of moral anti-realist, read on.
I'm known in my friend circle for advocating that rationalists should completely eschew the use of moral language (except as necessary to communicate with or manipulate people who do use it). I often find it difficult to have discussions of morality with both moral realists and anti-realists. I don't often find that I "can continue to have conversations and debates that are not immediately pointless." I often find people who claim to be moral anti-realists engaging in behavior and argument that seem antithetical to an anti-realist position: for example, exhibiting intense moral outrage and thinking it justified/proper (especially when they will never express that outrage to the offender, but only to disinterested third parties). If someone engages in a behavior that you would prefer they didn't, the question is how you can modify their behavior. You shouldn't get angry when others do what they want and it differs from what you want. Likewise, it doesn't make sense to get mad at others for not behaving according to your moral intuitions (except possibly in their presence, as a strategy for changing their behavior).
To a great extent, I have embraced the fact that my moral intuitions are an irrational set of preferences that don't have to be, and never will be, made consistent. Why should I expect my moral intuitions to be any more consistent than my preferences for food or for whom I find physically attractive? I won't claim I never engage in "moral learning," but it is significantly reduced, and more often takes the form of learning that I had mistaken beliefs about the world rather than of changing moral categories. When debating the torture vs. dust specks problem with friends, I came to the following answer: I prefer dust specks. Why? Because my moral intuitions are fundamentally irrational, but I predict I would be happier with the dust specks outcome. I fully recognize that this is inconsistent with my other intuition that harms are somehow additive, and with the clear math that any unbounded, strictly increasing function for combining the harm from dust specks admits of a number of people receiving dust specks in their eyes that tallies to significantly more harm than the torture. (Though there are other functions for calculating total utility that can lead to the dust specks answer.)
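To spell that math out (a sketch in my notation, not the comment's): if each dust speck causes harm $\epsilon > 0$ and speck-harms across people simply add, then $n$ people receiving one speck each produce total harm $n\epsilon$, which exceeds the harm $T$ of the torture once $n > T/\epsilon$. A bounded aggregator avoids this: if, say, total speck-harm is capped as $H(1 - 2^{-n})$ for some $H < T$, then no number of dust specks ever outweighs the torture, which is one family of functions that yields the dust-specks answer.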
↑ comment by Stuart_Armstrong · 2019-06-17T11:45:54.639Z · LW(p) · GW(p)
How do you feel about the torture vs dust speck situation if you expected to encounter that situation 3^^^3 times, knowing that 3^^^3 dust specks are much worse than 50 years of torture?
More interestingly, have you seen that aggregation argument before, and does it do something inside your mind? Might that be a form of "moral learning"?
↑ comment by J Thomas Moros (J_Thomas_Moros) · 2019-06-21T14:39:29.394Z · LW(p) · GW(p)
I can parse your comment a couple of different ways, so I will discuss multiple interpretations but forgive me if I've misunderstood.
If we are talking about 3^^^3 dust specks experienced by that many different people, then it doesn't change my intuition. My early exposure to the question included such unimaginably large numbers of people. I recognize scope insensitivity may be playing a role here, but I think there is more to it.
If we are talking about myself or some other individual experiencing 3^^^3 dust specks (or 3^^^3 people each experiencing 3^^^3 dust specks), then my intuition considers that a different situation. A single individual experiencing that many dust specks seems to amount to torture. Indeed, it may be worse than 50 years of regular torture because it may consume many more years to experience them all. I don't think of that as "moral learning" because it doesn't alter my position on the former case.
If I have to try to explain what is going on here in a systematic framework, I'd say the following:
- Splitting up harm among multiple people can be better than applying it all to one person. For example, one person stubbing a toe on two different occasions is marginally worse than two people each stubbing one toe.
- Harms/moral offenses may separate into different classes, such that no amount of a lower class can rise to match a higher class. For example, there may be no number of rodent murders that is morally worse than a single human murder. (A formal sketch of this point follows the list.)
- Duration of harm can outweigh intensity. For example, imagine mild electric shocks that are painful but don't cause injury, and where receiving one shock doesn't make the next any more physically painful. Some slightly more intense shocks over a short time may be better than many more mild shocks over a long time. This comes in when weighing 50 years of torture against 3^^^3 dust specks experienced by one person, though it is much harder to make the evaluation.
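One way to make the "classes" point precise (a sketch in my own notation, not the comment's): represent a harm as a pair $(c, m)$ of class and magnitude, and compare pairs lexicographically, so that $(c_1, m_1)$ is worse than $(c_2, m_2)$ exactly when $c_1 > c_2$, or $c_1 = c_2$ and $m_1 > m_2$. Under that ordering, no accumulation of lower-class harms (any number of rodent murders) ever outranks a single higher-class harm (one human murder), while ordinary additive comparisons still operate within a class.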
Those explanations feel a little like confabulations and rationalizations. However, they don't seem to be any more so than a total utilitarianism or average utilitarianism explanation for some moral intuitions. They do, however, give some intuition why a simple utilitarian approach may not be the "obviously correct" moral framework.
If I failed to address the "aggregation argument," please clarify what you are referring to.
↑ comment by Stuart_Armstrong · 2019-06-22T08:37:51.267Z · LW(p) · GW(p)
What I meant was this: assume that 3^^^3 dust specks on one person is worse than 50 years of torture. As long as the dust speck sensation is somewhat additive, that should be true. Now suppose you have to choose between dust specks and torture 3^^^3 times, once for each person ("So, do we torture individual 27602, or put one dust speck on everyone? Now, same question for 27603...").
Then always choosing dust specks is worse, for everyone, than always choosing torture.
So the dust-speck decision becomes worse and worse, the more often you expect to encounter it.
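To spell out the arithmetic (notation mine): let $N = 3\uparrow\uparrow\uparrow 3$ be both the number of people and the number of decisions, let $\epsilon$ be the harm of one dust speck and $T$ the harm of 50 years of torture, and assume speck-harms on a single person roughly add, with $N\epsilon > T$ as above.
- Always choosing torture: each of the $N$ people is tortured exactly once, for a per-person harm of $T$.
- Always choosing dust specks: each decision puts one speck on everyone, so each person ends up with $N$ specks, for a per-person harm of roughly $N\epsilon > T$.
So, under these assumptions, every single person fares worse when dust specks are always chosen.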
↑ comment by Joe Collman (Joe_Collman) · 2021-03-11T01:51:35.588Z · LW(p) · GW(p)
Then always choosing dust specks is worse, for everyone, than always choosing torture.
27602 may beg to differ.