Decontextualizing Morality

post by ACrackedPot · 2021-05-05T20:25:23.568Z · 11 comments

Contents

  Moral Context
  Maximizing Utility Isn't The Real Standard
  Moral Luck
  An Aesthetic and Simple Solution
  Addendum: Utilitarianism as Decision Theory

I have a fundamental objection to the way a lot of people do utilitarian morality, which is that most people's morality is context-dependent; copy and paste somebody from one context to another, and the morality of their lives and choices can become entirely different.

Somebody who lives an ordinary life in a utopia becomes a moral monster if you copy-paste them into a dystopia, even if their subjective experience is the same; in the utopia, they walked by a field of flowers without really thinking about them, whereas in the dystopia, they walk by rivers full of drowning children with the same lack of awareness or concern.

The Copenhagen Interpretation of Ethics discusses this kind of problem in general, but I think most people's moral intuitions, even after adjusting for this, still see the person who does nothing to improve the world in a utopia as morally neutral, whereas the person who does nothing to improve the world in a dystopia is at least a little morally repugnant. (Question: Do you think somebody who doesn't save a drowning child is evil?)

In a sense, this captures our intuitions that a good person should do something if there is a problem, but that our morality shouldn't be punitive if there isn't a problem to fix - the person in the utopia gets a pass because we can't distinguish between the person who would help, if there were something to help with, and the person who wouldn't.

However, it seems a bit wonky to me if our metaethics punish somebody for what basically amounts to luck.  And if it seems odd that I use the word "punish" - how can it be punishment to assign moral value to things, when that's just a value statement? - then I think maybe you don't experience moral blameworthiness as pain.  (I think this might be a central active ingredient in a particular kind of moral conscientiousness.)

But also, I think "Feeling like a good person" is something that should at least be available to people who are doing their best, and an ethical system that takes that away from somebody for moral luck is itself at least a little unethical.

Moral Context

Moral context is a very big topic; I'm not going to be able to do it justice here.  So I'm going to limit the context I deal with to environmental context; there are other kinds of context which we may want to preserve.  For instance, to return to the question of whether someone who walks past a drowning child is evil: there's a ready reason to think of them as evil - somebody who does that has demonstrated certain qualities (a lack of regard for others) which we may want to categorize as evil.  But this demonstration is itself context-dependent: if you live in a society so full of drowning children that you would never be done saving them, the same lack of regard might be a self-preservation strategy that is harder to fault as evil.  I don't actually have a good argument against this; score one for virtue ethics.

So the argument for a decontextualized morality is not, in fact, a universal argument; context does in fact matter.  But decontextualization is, I think, particularly useful for considering abstract moral questions, and it is particularly useful when evaluating utilitarianism.  That is, it is particularly useful when some of the context has already been stripped away, and when the moral system itself is not highly context-dependent.

I can't comment too much on deontology; I think it is already nearly completely decontextualized, such that I don't think the concerns raised here apply.

Maximizing Utility Isn't The Real Standard

I encountered a criticism of a rationalist: that they were spending money on cryogenically freezing their brain instead of using that money to pay for malaria nets or something similar, which was taken as evidence that they didn't live up to the ethical standard of utility maximization, and so shouldn't be taken seriously.  Likewise, somebody else apparently reasoned exactly that way and canceled their cryogenic policy in favor of effective altruism.

I'm sure there's an argument that the cryogenic freezing is actually utility-maximizing, but I'm equally sure that this is, fundamentally, just a rationalization.

My moral intuitions say that what we want is to be able to say that the rationalist with the cryogenic policy is still a good person, but also that the person who canceled their policy is extra-good.  In particular, my moral intuitions say we shouldn't say the cryogenic rationalist has failed at all; instead, what I want is to say that somebody who gives up a chance at immortality has gone above and beyond the call of duty.

That is, I want to be able to say that somebody has succeeded at utilitarianism, even if they haven't literally maximized utility.  And I also want to say that what this other person has done is praiseworthy.

The utilitarian maxim of "Maximize utility" kind of fails at this.

Moral Luck

Contextualized morality has a perverse quality: a given standard of behavior becomes worse if you live in a universe that is worse.  The more miserable the world you are in, the worse a given standard of behavior becomes; the better the world is, the less is expected of you.  So the more miserable you can expect to be, the more unethical you can expect to be as well, even with nothing changing in your behavior relative to somebody in a less miserable universe.

My intuitions say this isn't actually how we want ethics to work; in fact, this looks kind of like the thing Evil Uncle Ben would come up with.  If this is how your ethics are supposed to work, your metaethics look kind of faulty.

Now, we want to make it so that the right choice is saving the child, but we don't want to make it impossible to engage in self-preservation in the world of drowning children, and we also want the environmental context not to have an undue effect on whether or not someone is a good person - we don't want to punish people for having the bad moral luck to be born into a world full of misery.

Also, we want the solution to be aesthetic and simple.

An Aesthetic and Simple Solution

The solution in utilitarianism is simple and aesthetic, at least to me.

First, axiomatically, we measure utility against the counterfactual; if an action makes the world neither better nor worse compared to not taking that action, the utility of that action is 0.
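As a minimal formalization of this axiom (the notation here is mine, not standard): let V(w) be the value of a world-history w, let w_a be the world where the agent takes action a, and let w_0 be the world where they don't.  Then

U(a) = V(w_a) - V(w_0)

so inaction has utility 0 by construction, no matter how good or bad the surrounding world happens to be.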

Second, we stop pretending that the ethical maxim of utilitarianism is "Maximize utility".  Instead, we should acknowledge how utilitarianism is actually practiced; maintaining utility is the gold standard that makes you a basically decent person, and increasing utility is the aspirational standard which makes you a good person.

That is, the goal is for every action to be neutral or positive utility.  This isn't a change from how utilitarianism is actually practiced, mind - as far as I can tell, this is basically how most utilitarians actually practice utilitarianism - it's just acknowledging the truth of the matter.
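To make the two standards concrete, here's a toy sketch in Python (the function names and numbers are hypothetical, purely for illustration): instead of scoring an agent by their distance from the maximum-utility action, we classify each action by the sign of its counterfactual utility.

```python
def counterfactual_utility(value_with_action: float, value_without_action: float) -> float:
    """Utility of an action measured against the counterfactual of not taking it."""
    return value_with_action - value_without_action

def moral_standing(utility: float) -> str:
    """Sign-based standard: maintaining utility is decent; increasing it is good."""
    if utility > 0:
        return "good (aspirational standard: increased utility)"
    if utility == 0:
        return "decent (gold standard: maintained utility)"
    return "failed (left the world worse than inaction would have)"

# The same behavior gets the same evaluation in a utopia or a dystopia,
# because the baseline is the agent's own inaction, not the state of the world.
print(moral_standing(counterfactual_utility(10.0, 10.0)))  # decent
print(moral_standing(counterfactual_utility(12.0, 10.0)))  # good
```

Under the classic maxim, anything short of the best available action counts as failure; under this sign rule, only leaving the world worse than inaction does.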

Note that moral luck is actually back again, but inverted; a person in a miserable world might actually be the morally lucky person, because it may be easier to do good in that world (make things actually better), just because there's lots of low-hanging fruit around.  Possibly due to loss aversion biases (I find moral luck more problematic when it is taking something away than when it is giving something out), I'm basically okay with this.  (But also it seems a lot less perverse.)

The net result is, I think, a version of utilitarianism that is significantly less contextual; copying and pasting people around has a much smaller impact on the moral evaluation we have of them (and that we would want them to have of themselves).  What contextual impact remains is largely limited, I think, to the ways that living the same life can actually have different impacts on different societies - in particular, the evaluation focuses on whether a person makes their society better or worse.

Addendum: Utilitarianism as Decision Theory

There's a deep counterargument that can be raised to this entire article: that utilitarian morality is about actions and not people, that talking about the "goodness" or "badness" of people misses the point of what utilitarianism is all about, and that reducing the entire ethical edifice to maximizing your personal score is antithetical to its core principles.  But my moral intuitions disagree with at least part of that; they regard moral behavior as fundamentally being about agents, and in particular about how those agents evaluate themselves with respect to their actions, both past and present.  If keeping score is how you evaluate whether or not you're a good person, I want you to feel like a good person exactly insomuch as you are a good person, so I want your score-keeping system to be a good one.

That is, I think the utilitarianism described in that counterargument is not a moral theory at all, but rather something more like a decision theory.

It is perhaps a worthy decision theory, but it isn't what I'm talking about.

11 comments


comment by Vladimir_Nesov · 2021-05-06T09:28:27.617Z

When a highly intelligent self-driving boat on the bank of a lake doesn't try to save a drowning child, what is the nature of the problem? Perhaps the boat is morally repugnant and the world will be a better place if it experiences a rapid planned disassembly. Or the boat is a person, and disassembling or punishing them would in itself be wrong, apart from any instrumental value gained in the other consequences of such an action. Or the fact that they are a person yet do nothing qualifies them as evil and deserving of disassembly, which would not be the case had they not been a person. Maybe the boat is making an error of judgement; that is, according to some decision theory and under human-aligned values, the correct actions importantly differ from the actual actions taken by the boat. Or maybe this particular boat is simply instrumentally useless for the purpose of saving drowning children, in the same way that a marker buoy would be useless.

What should be done about this situation? That's again a different question; the one asking it might be the boat themself, and a solution might not involve the boat at all.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-06T13:06:07.322Z

Would you mind unpacking this?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2021-05-06T13:25:08.959Z

There are many ways of framing the situation, looking for models of what's going on that have radically different shapes. It's crucial to establish some sort of clarity about what kind of model we are looking for, what kind of questions or judgements we are trying to develop. You seem to be conflating a lot of this, so I gave examples of importantly different framings. Some of these might fit what you are looking for, or help with noticing specific cases where they are getting mixed up.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-06T13:58:54.762Z

I feel like I was reasonably clear that the major concern was about how utilitarianism interacts with being human, as much of the focus is on moral luck.

Insofar as an intelligent boat can be made miserable by failing to live up to an impossible moral system, well, I don't know, maybe don't design it that way.

comment by Dagon · 2021-05-06T03:45:34.982Z

[note: not a utilitarian; don't think there is any correct aggregation function]

I'd like to hear a bit about your overall framework outside of ethics.  Specifically, I think you're conflating very different things when you say

our metaethics punish somebody

Punishment is extremely distinct from moral evaluation.  We make moral judgements about agents and actions - are they acting "properly", based on whatever we think is proper.  Are they doing the best they can, in whatever context they are in?  If not, why not?

Moral judgement is often cited as a reason to impose punishment, but it's not actually part of the moral evaluation - it's a power tactic to enforce behavior in the absence of moral agreement.  Judging someone as morally flawed is NOT a punishment.  It's just an evaluation in one's value system.

To your title and general idea, I don't understand how anyone can claim that context and luck (meaning anything outside the control of the moral agent in question) aren't a large part of any value system.  Ethical systems can claim to be situation-independent, but they're just deluded.  Most serious attempts I've seen acknowledge the large role of situation in the evaluation - the moral comparison is "what one did compared to what one could have done".  Both the actual and the possible-counterfactual are contingent on situation.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-06T12:35:58.862Z

Punishment is extremely distinct from moral evaluation.  We make moral judgements about agents and actions - are they acting "properly", based on whatever we think is proper.  Are they doing the best they can, in whatever context they are in?  If not, why not?

Moral judgement is often cited as a reason to impose punishment, but it's not actually part of the moral evaluation - it's a power tactic to enforce behavior in the absence of moral agreement.  Judging someone as morally flawed is NOT a punishment.  It's just an evaluation in one's value system.

The important question isn't whether we judge someone else as morally flawed, but whether or not using a moral system leads us to judge ourselves as morally flawed, in which case the punitive element may become more clear.

But maybe not; if you don't see a moral system which leads one to regard oneself as morally flawed as having an inherent punitive element, I'm going to question whether you experience morality, or whether you just think about it.

To your title and general idea, I don't understand how anyone can claim that context and luck (meaning anything outside the control of the moral agent in question) aren't a large part of any value system.  Ethical systems can claim to be situation-independent, but they're just deluded.  Most serious attempts I've seen acknowledge the large role of situation in the evaluation - the moral comparison is "what one did compared to what one could have done".  Both the actual and the possible-counterfactual are contingent on situation.

By making the counterfactual the standard we judge against, the specific terribleness of the situation stops being relevant to the moral self-worth of the actors - only how much better their actions leave the world, compared to inaction or their nonexistence, is morally relevant.  A moral system which demands moral action that is impossible isn't actually useful.

Replies from: Dagon
comment by Dagon · 2021-05-06T16:07:46.817Z

But maybe not; if you don't see a moral system which leads one to regard oneself as morally flawed as having an inherent punitive element, I'm going to question whether you experience morality, or whether you just think about it.

Wow.  Now I'm curious whether your moral framework applies only to yourself, or to all people, or to all people who "experience morality" similarly to you.

I do primarily use system 2 for moral evaluations - my gut reactions tend to be fairly short-term and selfish compared to my reasoned preferences.  And I do recognize that I (like all known instances of a moral agent) am flawed - I sometimes do things that I don't think are best, and my reasons for the failures aren't compelling to me.

Replies from: ACrackedPot
comment by ACrackedPot · 2021-05-06T17:26:12.366Z

Wow.  Now I'm curious whether your moral framework applies only to yourself, or to all people, or to all people who "experience morality" similarly to you.

Mu?  It applies to whoever thinks it useful.

I think morality is an experience, which people have greater or lesser access to; I don't think it is actually meaningful to judge other people's morality.  Insofar as you judge other people immoral, I think you're missing the substantive nature of morality in favor of a question of whether or not other people sufficiently maximize your values.

If this seems like an outlandish claim, consider whether or not a hurricane is moral or immoral.  Okay, the hurricane isn't making decisions - so maybe morality is really about how we make decisions.  Well, separating it from decision theory - that is, assuming morality is in fact distinct from decision theory - it is not in fact about how decisions are arrived at.  So what is it about?

Consider an unspecified animal.  Is it a moral agent?  Okay, what if I specify that the animal can experience guilt?

I'd say morality is a cluster of concepts related to a specific set of experiences we have in making decisions.  These experiences are called "moral intuition"; morality is the exercise of figuring out the common elements, the common values, which give rise to these experiences - such that we can, for example, feel guilt - and of figuring out a way of living which is in harmony with these values, such that we improve our personal well-being with respect to those experiences.

If your moral system leads to a reduced personal well-being with respect to those experiences in spite of doing your relative best - that is, if your moral system makes you feel fundamentally flawed in an unfixable way - then I think your moral system is faulty.  It's making you miserable for no reason.

Replies from: Dagon
comment by Dagon · 2021-05-06T20:25:23.387Z

I think morality is an experience, which people have greater or lesser access to;

Interesting.  I'll have to think on that.  My previous conception was that it's a focal topic within decision theory - a lens for looking at which predictions and payouts should be considered for their impact on other people.

comment by TAG · 2021-05-06T16:49:30.350Z

Second, we stop pretending that the ethical maxim of utilitarianism is "Maximize utility". Instead, we should acknowledge how utilitarianism is actually practiced; maintaining utility is the gold standard that makes you a basically decent person, and increasing utility is the aspirational standard which makes you a good person.

Yes. In particular, if falling below the standard implies that you get blamed or punished, utilitarianism sets an unreasonably high standard. There is the problem that you are not cognitively able to perform perfect utilitarian calculations, on top of the fact that you haven't got enough enkrasia to implement them.

You can kinda solve the problem by separating the theoretical standard from a practical standard, but that manoeuvre only gives the game away that utilitarianism is only answering theoretical questions, and needs to be supplemented by something else.

(Theoretical-but-not-practical is a bit of a theme here. Aumann's theorem is theoretical-but-not-practical under most circumstances; so is Bayes; so is Solomonoff.)

Replies from: Pattern
comment by Pattern · 2021-05-07T00:30:52.130Z

If you figure there's an 80/20 rule in place, 'it's not perfect, but it works' can be very efficient.