Comments sorted by top scores.
comment by artifex · 2022-05-16T04:47:42.605Z · LW(p) · GW(p)
> If morality is subjective, why do I form moral opinions and try to act on them? I think I do that for the same reason I think I do anything else. To be happy.
What makes you happy is objective, so if that’s how you ground your theory of morality, it is objective in that sense. It’s subjective only in that it depends on what makes you happy rather than what makes other possible beings happy.
If morality is a thing we have some reason to be interested in and care about, it's going to have to be grounded in our preferences. Our preferences, not any possible intelligent being's preferences, so it's subjective in that sense. But we can't just make anything up, either. We already have a complete theory of how we should act, given by our preferences and our decision theory. Morality needs to be part of, or implied by, that in some way.
To figure out what's moral, there is real work that needs to be done: evolutionary psychology, game-theoretic arguments, revealed preferences, social science experiments, and so on. Claims need to be justified. Any aggregation procedure we choose to use, and any weights we choose to use in that procedure, need to be grounded: there has to be a reason we are interested in that particular procedure and those particular weights.
There are multiple kinds of utilities that have moral import for different reasons, some of them interpersonally comparable and others not. Preference utilities are not interpersonally comparable, and we care about them for game-theoretic reasons that would apply just as well to many agents very different from us (though they would use different weights); what weights and aggregation procedure to use must be grounded in those game-theoretic reasons. However they are to be aggregated, the aggregation can't be weighted-sum utilitarianism, since the utilities aren't interpersonally comparable (which doesn't mean they can't be aggregated by other means; there's a small sketch of this below). But pleasure utilities (dependent on any positive mental or emotional state) often are interpersonally comparable:
> An [individual’s] inability to weigh between pleasures is an epistemic problem. [Some] pleasures are greater than others. The pleasure of eating food one really enjoys is greater than that of eating food one doesn’t really enjoy. We can make similar interpersonal comparisons. We know that one person being tortured causes more suffering than another stubbing their toe.
(HT: Bentham’s bulldog)
At the very least, some mental states should be biologically quantifiable in ways that are interpersonally comparable. And they can have moral import. Why not? It all depends on what evolution did or didn't do. We need to know in what ways people care about other beings (which state or thing related to those beings they care about), which of those beings, and to what degrees (and there can be multiple true answers to these questions).
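To make the earlier point about preference utilities concrete, here's a minimal sketch with made-up utilities, weights, and disagreement payoffs (it's nobody's actual proposal). The point is just that a weighted sum can change its verdict when one agent's utility function is rescaled, which is an allowed transformation if utilities aren't interpersonally comparable, while a scale-invariant rule such as the Nash bargaining product keeps picking the same outcome:

```python
# Two agents, three outcomes. All numbers are invented for illustration.
outcomes = ["A", "B", "C"]
u1 = {"A": 1.0, "B": 4.0, "C": 6.0}   # agent 1's utilities (arbitrary units)
u2 = {"A": 9.0, "B": 5.0, "C": 2.0}   # agent 2's utilities (arbitrary units)
d1, d2 = 0.0, 1.0                      # assumed disagreement (status quo) payoffs

def weighted_sum_winner(scale2):
    # 50/50 weighted sum, with agent 2's whole utility function rescaled by scale2.
    return max(outcomes, key=lambda o: 0.5 * u1[o] + 0.5 * scale2 * u2[o])

def nash_product_winner(scale2):
    # Nash bargaining product of each agent's gain over the disagreement point.
    return max(outcomes, key=lambda o: (u1[o] - d1) * scale2 * (u2[o] - d2))

for scale2 in (1.0, 0.1):
    print(f"scale {scale2}: weighted sum picks {weighted_sum_winner(scale2)}, "
          f"Nash product picks {nash_product_winner(scale2)}")
# The weighted-sum pick flips from A to C even though agent 2's preferences
# haven't changed; the Nash-product pick stays B under both scalings.
```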
How do we know? Well, there are things like ultimatum game experiments, dictator game experiments, evidence of kin altruism, and so on. The details matter, and there seems to be much controversy over how to interpret them.
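As a toy version of the revealed-preference side of this (not how these experiments are actually analyzed; the functional form, the $10 stake, and the numbers are all assumptions for illustration), one could posit that a dictator splitting $10 maximizes (1 − a)·log(1 + kept) + a·log(1 + given) and back out the other-regarding weight a from the observed split:

```python
import math

ENDOWMENT = 10.0  # hypothetical dictator-game stake

def predicted_giving(a, grid=200):
    # Amount the toy model says a dictator with other-regarding weight `a` gives away.
    def value(g):
        return (1 - a) * math.log(1 + ENDOWMENT - g) + a * math.log(1 + g)
    splits = [ENDOWMENT * i / grid for i in range(grid + 1)]
    return max(splits, key=value)

def inferred_weight(observed_giving, grid=200):
    # Pick the weight whose predicted giving is closest to what was observed.
    # (A gift of $0 only bounds the weight from above; the grid search then
    # returns the smallest consistent value.)
    weights = [i / grid for i in range(grid + 1)]
    return min(weights, key=lambda a: abs(predicted_giving(a) - observed_giving))

for gave in (0.0, 2.0, 5.0):
    print(f"gave ${gave:.0f} -> inferred weight on the other person ~ {inferred_weight(gave):.2f}")
```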
Can we just know through introspection? It would be awfully convenient if so, but that requires that evolution has given us a way to introspect on our preferences regarding other people and reliably get the real answers, rather than answers shaped by social desirability bias. How do we know whether that's the case? Two ways.
Way one: by comparing the answers people claim to get through introspection with their actual behavior. If introspection is reliable, the two should probably match to a high degree.
Way two: by seeing how much variation there is in the answers people claim to get through introspection. We still need to interpret that variation: is it more plausible that people have very different moralities, or that their answers differ for other reasons (and if so, which ones)?
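A minimal sketch of both checks, on invented data (the "weight" scale and all the numbers are made up):

```python
import statistics

# Hypothetical per-person numbers: the weight each person *claims* to put on
# others when asked, and the weight inferred from their actual choices.
stated   = [0.50, 0.40, 0.45, 0.60, 0.55, 0.35]
revealed = [0.20, 0.15, 0.30, 0.25, 0.35, 0.10]

# Way one: do stated answers track behavior, and how large is the average gap?
r = statistics.correlation(stated, revealed)   # requires Python 3.10+
gap = statistics.mean(s - b for s, b in zip(stated, revealed))
print(f"stated vs revealed: correlation {r:.2f}, mean overstatement {gap:.2f}")

# Way two: how much do the stated answers themselves vary across people?
print(f"stated answers: mean {statistics.mean(stated):.2f}, stdev {statistics.pstdev(stated):.2f}")
```

On this reading, a high correlation with a constant gap would suggest introspection tracks the real weights but overstates them (the social desirability worry), while a low correlation would suggest it doesn't track them at all.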
This fog is too thick for me to see through. Many smart people have tried, probably much harder than I have, and have sometimes said a few smart things: [1] [2] [3]. There must be people who have figured much more of this out, and if so I would highly appreciate links.
Replies from: TAG
↑ comment by TAG · 2022-05-16T11:30:51.044Z · LW(p) · GW(p)
> If morality is a thing we have some reason to be interested in and care about, it’s going to have to be grounded in our preferences.
To some extent. Minimally, it can be grounded in our preference not to be punished. Less minimally, but not maximally, it can be grounded in negative preferences like "I don't want to be killed" without being grounded in positive preferences like "I prefer Tutti Frutti". In either case, you don't need a detailed picture of human preference to solve morality, unless you have first shown that all preferences are relevant.
comment by TAG · 2022-06-15T10:59:30.381Z · LW(p) · GW(p)
> I don’t think there is an objective morality.
> I think morality is subjective.
The validity of subjective morality doesn't follow from the invalidity of objective morality, because both could be wrong, and because there are other options. Admittedly, you didn't argue that explicitly, but you didn't argue any other way either. Other options include societal definitions. Societies put people in jail for breaking laws that delimit bad behaviour from good behaviour, so something like deontology is going on under your nose. If that jailing and executing isn't justifiable by your morality, then it is a gross injustice.
comment by Dave Lindbergh (dave-lindbergh) · 2022-05-15T16:47:20.526Z · LW(p) · GW(p)
Your two principal goals - maximizing total utility and minimizing utility inequality - are in conflict, as is well known. (If for no other reason, then because incentives matter.) You can't have both.
A more reasonable goal would be Pareto-efficiency-limited utility inequality.
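Concretely, one way to cash this out (a toy sketch, with made-up feasible utility profiles for two people): restrict attention to Pareto-efficient outcomes, and only then care about inequality.

```python
# Toy feasible utility profiles for two people (invented numbers).
allocations = {
    "A": (10, 10),   # perfectly equal, low total
    "B": (20, 6),    # highest total, very unequal
    "C": (14, 11),   # slightly lower total, much more equal
    "D": (12, 9),    # dominated by C
}

total = lambda u: sum(u)
inequality = lambda u: max(u) - min(u)

def pareto_efficient(name):
    # Efficient iff no other profile is at least as good for both and strictly different.
    u = allocations[name]
    return not any(all(v[i] >= u[i] for i in range(2)) and v != u
                   for v in allocations.values())

print("max total:               ", max(allocations, key=lambda n: total(allocations[n])))       # B
print("min inequality:          ", min(allocations, key=lambda n: inequality(allocations[n])))  # A
efficient = [n for n in allocations if pareto_efficient(n)]
print("most equal and efficient:", min(efficient, key=lambda n: inequality(allocations[n])))    # C
```

Here the two headline goals point at different allocations (B for total, A for equality), and the Pareto-limited rule lands on C: some total is given up for equality, but never in a way that leaves a Pareto improvement on the table.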
Replies from: Matt Goldwater
↑ comment by UtilityMonster (Matt Goldwater) · 2022-05-16T01:43:15.556Z · LW(p) · GW(p)
But it’s not literally impossible to achieve both goals. And I think there are practical ways to improve total utility and reduce utility inequality at the same time. For example, anything that helps make a sad being happy.
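To make that concrete with invented numbers:

```python
# Two hypothetical beings' utilities, before and after helping the sad one.
before = (2, 8)   # a sad being and a happy being
after  = (5, 8)   # the sad being is made happier; no one is worse off

print("total utility:     ", sum(before), "->", sum(after))                             # 10 -> 13
print("utility inequality:", max(before) - min(before), "->", max(after) - min(after))  # 6 -> 3
```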
As I said, I don’t know how I’d make tradeoffs between total utility and utility inequality yet. If I did know, I would want society’s existing utility to be distributed in a Pareto efficient way.
Replies from: Measure
↑ comment by Measure · 2022-05-16T02:15:55.556Z · LW(p) · GW(p)
A separate problem is that there's not really a principled way to compare utility between two individuals, so (in)equality is poorly defined.
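For instance (invented numbers): preference-based utilities are only pinned down up to a positive rescaling per person, and that alone is enough for "which allocation is more equal" to flip:

```python
def gap(u1, u2):
    # A naive inequality measure: the absolute difference in utilities.
    return abs(u1 - u2)

# Two candidate allocations, written as (person 1's utility, person 2's utility).
X = (4.0, 6.0)
Y = (5.0, 9.0)
print("original scale: gap(X) =", gap(*X), " gap(Y) =", gap(*Y))    # X looks more equal

# Rescale person 2's utility function by 0.5 -- an equally valid representation
# of the same preferences.
X2, Y2 = (X[0], 0.5 * X[1]), (Y[0], 0.5 * Y[1])
print("rescaled:       gap(X) =", gap(*X2), " gap(Y) =", gap(*Y2))  # now Y looks more equal
```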
Replies from: Matt Goldwater
↑ comment by UtilityMonster (Matt Goldwater) · 2022-05-16T02:54:43.648Z · LW(p) · GW(p)
I think having a theoretical definition helps me. But I agree that I can't precisely measure utility in practice.