Impartial ethics and personal decisions

post by Emile · 2015-03-08T12:14:04.205Z · LW · GW · Legacy · 18 comments

Some moral questions I’ve seen discussed here:

Yet I spend time and money on my children and parents that might be “better” spent elsewhere under many moral systems. And if I cared about my parents and children only as much as I do about random strangers, many people would see me as something of a monster.

In other words, “commonsense moral judgements” find it normal to care differently about different groups; in roughly decreasing order:

… and sometimes, we’re even perceived as having a *duty* to care more about one group than another (if someone saved three strangers instead of two of his children, how would he be seen?).

In consequentialist / utilitarian discussions, a recurring question is “who counts as an agent worthy of moral concern?” (humans? sentient beings? intelligent beings? those who feel pain? how about unborn beings?), which covers the latter part of the spectrum. However I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree the most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help about the rest).

Let’s consider two rough categories of decisions: impersonal decisions and personal decisions.

Impartial utilitarianism and consequentialism (like the question at the head of this post) make sense for impersonal decisions (including when an individual is acting in a role that requires impartiality - a ruler, a hiring manager, a judge), but clash with our usual intuitions for personal decisions. Is this because under those moral systems we should apply the same impartial standards to our personal decisions, or because those systems are only meant for discussing impersonal decisions, and personal decisions require additional standards?

I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist (not that I mind much apart from confusion during the yearly survey; not knowing my values would be a problem, but not knowing which label I should stick on them? eh, who cares).

I also have similar ambivalence about Effective Altruism:

Scott’s “give ten percent” seems like a good compromise on the first point.

So what do you think? How does "caring for your friends and family" fit in a consequentialist/utilitarian framework?

Other places this has been discussed:

Other related points:

18 comments


comment by [deleted] · 2015-03-08T14:01:25.611Z · LW(p) · GW(p)

One of the major problems I have with classical "greatest good for the greatest number" utilitarianism, the kind that most people think of when they hear the word, is that people act as if its prescriptions are still rules handed to them from on high. When given the trolley problem, for example, people think you should save the five people rather than the one for "shut up and calculate" reasons, and that they are just supposed to count all humans exactly the same because those are "the rules".

I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea. The only way to get moral weights is from your personal preferences. Do you find that you assign more moral weight to friends and family than to complete strangers? That's perfectly fine. If someone else says they assign all humans equal weight, well, that's their decision. But when people start telling you that your weights are assigned wrong, then that's a sign that they still think morality comes from some outside source.

Morality is (or, at least, should be) just the calculus of maximizing personal utility. That we consider strangers to have moral weight is just a happy accident of social psychology and evolution.
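To make this "weights come from personal preferences" framing concrete, here is a minimal sketch; the specific weights and the trolley-style scenario are invented for illustration, not taken from the comment above:

```python
# Minimal sketch: moral weights as personal preferences (illustrative numbers only).

def weighted_loss(people_affected, weights):
    """Sum the moral weight of everyone harmed under a given option."""
    return sum(weights.get(person, weights["stranger"]) for person in people_affected)

# This particular agent happens to weight family far above strangers.
my_weights = {"stranger": 1.0, "friend": 5.0, "family": 20.0}

# Trolley-style choice: divert (one family member dies) vs. do nothing (five strangers die).
loss_if_divert = weighted_loss(["family"], my_weights)            # 20.0
loss_if_do_nothing = weighted_loss(["stranger"] * 5, my_weights)  # 5.0

# Pick whichever option has the smaller weighted loss.
print("divert" if loss_if_divert < loss_if_do_nothing else "do nothing")  # -> "do nothing"
```

Setting every weight to 1.0 recovers the standard "count all humans exactly the same" answer (divert and save the five); the disagreement in this thread is over where the weights are allowed to come from, not over the arithmetic.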

Replies from: Vaniver
comment by Vaniver · 2015-03-08T15:17:45.534Z · LW(p) · GW(p)

I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea.

Suppose I get my weights from outside of me, and you get your weights from outside of you. Then it's possible that we could coordinate and get them from the same source, and then agree and cooperate.

Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.

Replies from: Emile, None
comment by Emile · 2015-03-08T19:55:30.662Z · LW(p) · GW(p)

Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.

In practice people with different values manage to coordinate perfectly fine via trade; I agree an external source of morality would be sufficient for cooperation, but it's not necessary (also having all humans really take an external source as the real basis for all their choices would require some pretty heavy rewriting of human nature).

comment by [deleted] · 2015-03-08T15:52:54.803Z · LW(p) · GW(p)

But that presupposes that I value cooperation with you. I don't think it's possible to get moral weights from an outside source even in principle; you have to decide that the outside source in question is worth it, which implies you are weighing it against your actual, internal values.

It's like how selfless action is impossible; if I want to save someone's life, it's because I value that person's life in my own utility function. Even if I sacrifice my own life to save someone, I'm still doing it for some internal reason; I'm satisfying my own, personal values, and they happen to say that the other person's life is worth more.

Replies from: Vaniver
comment by Vaniver · 2015-03-08T16:31:09.441Z · LW(p) · GW(p)

But that presupposes that I value cooperation with you. I don't think it's possible to get moral weights from an outside source even in principle; you have to decide that the outside source in question is worth it, which implies you are weighing it against your actual, internal values.

I think you're mixing up levels, here. You have your internal values, by which you decide that you like being alive and doing your thing, and I have my internal values, by which I decide that I like being alive and doing my thing. Then there's the local king, who decides that if we don't play by his rules, his servants will imprison or kill us. You and I both look at our values and decide that it's better to play by the king's rules than not play by the king's rules.

If one of those rules is "enforce my rules," now when the two of us meet we both expect the other to be playing by the king's rules and willing to punish us for not playing by the king's rules. This is way better than not having any expectations about the other person.

Moral talk is basically "what are the rules that we are both playing by? What should they be?". It would be bad if I pulled the lever to save five people, thinking that this would make me a hero, and then I get shamed or arrested for causing the death of the one person. The reasons to play by the rules at all are personal: appreciating following the rules in an internal way, appreciating other people's appreciation of you, and fearing other people's reprisal if you violate the rules badly enough.

Replies from: None
comment by [deleted] · 2015-03-08T16:38:58.955Z · LW(p) · GW(p)

If the king was a dictator and forced everyone to torture innocent people, it would still be against my morals to torture people, regardless of whether I had to do it or not. I can't decide to adopt the king's moral weights, no matter how much it may assuage my guilt. This is what I mean when I say it is not possible to get moral weights from an outside source. I may be playing by the king's rules, but only because I value my life above all else, and it's drowning out the rest of my utility function.

On a related note, is this an example of an intrapersonal utility monster? All my goals are being thrown under the bus except for one, which I value most highly.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-03-09T15:47:24.890Z · LW(p) · GW(p)

Your example of the king who wants you to torture is extreme, and doesn't generalize... you have set up not torturing as a non-negotiable absolute imperative. A more steelmanned case would be compromising on negotiable principles at the behest of society at large.

comment by shminux · 2015-03-08T18:50:05.582Z · LW(p) · GW(p)

From what little I know about EA, they tend to mix together two issues: "Whom to care about?" and "How best to care about those you care about?" Probably in part owing to the word "care" having multiple meanings in English, but certainly not entirely so.

comment by Vaniver · 2015-03-08T16:47:03.159Z · LW(p) · GW(p)

However I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree the most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help about the rest).

I think, like you point out, this gets into near / far issues. How I behave around my family is tied into a lot of near mode things, and how I direct my charitable dollars is tied into a lot of far mode things. It's easier to talk about far mode in an abstract way (Is it better to donate to ease German suffering or Somali suffering?) than it is to talk about near mode in an abstract way (What is the optimal period for calling your mother?).

This was a big debate in ancient China: the Confucians considered it normal to have “care with distinctions” (愛有差等), whereas Mozi preached “universal love” (兼愛) in opposition, claiming that care with distinctions was a source of conflict and injustice.

The Spring and Autumn period definitely seems relevant, and I think someone could get a lot of interesting posts out of it.

Replies from: Emile
comment by Emile · 2015-03-08T20:04:01.029Z · LW(p) · GW(p)

The Spring and Autumn period definitely seems relevant, and I think someone could get a lot of interesting posts out of it.

Yep, I've been reading a fair amount about it recently; I had considered first making a "prequel" post talking about that period and about how studying ancient China can be fairly interesting, in that it shows us a pretty alien society that still had similar debates.

I had heard from various sources how Confucius said it was normal to care more about some than others, and it took me a bit of work to dig up what that notion was called exactly.

comment by [deleted] · 2015-03-09T17:14:37.631Z · LW(p) · GW(p)

How does "caring for your friends and family" fit in a consequentialist/utilitarian framework?

If you have a desert-adjusted moral system, especially if combined with risk aversion, then it might make sense to care for friends and family more than others.

You want to spend your “caring units” on those who deserve them; you know enough about your friends and family to determine that they deserve caring units; and you are willing to accept a lower expected return on your caring units to reduce the risk of giving to a stranger who doesn’t deserve them.
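One way to see how risk aversion does that work, sketched with invented numbers (the probabilities, payoffs, and square-root value function below are assumptions for illustration only):

```python
import math

# Desert-adjusted allocation with a concave (risk-averse) value function
# over the good done per caring unit spent. All numbers are invented.

def risk_averse_value(outcome):
    return math.sqrt(outcome)  # concave => risk-averse

# Family member: known to deserve care; modest, certain impact.
family_value = risk_averse_value(1.0)           # = 1.00

# Stranger: larger impact (2.0) but only a 60% chance they "deserve" it.
stranger_value = 0.6 * risk_averse_value(2.0)   # ≈ 0.85

# The stranger's raw expected impact is higher (0.6 * 2.0 = 1.2 > 1.0),
# yet the risk-averse, desert-weighted agent still prefers the family member.
print(family_value > stranger_value)  # True
```

With a linear (risk-neutral) value function the same numbers favor the stranger, so the risk-aversion assumption is doing real work here.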

Now to debate myself…

What about that unbearable cousin? A family member, but not deserving of your caring units.

Also, babies. If an infant family member and a poor Third World infant both have unknown levels of desert, shouldn’t you give to the poor Third World infant, assuming this will have a greater impact?

comment by MathiasZaman · 2015-03-09T08:57:07.285Z · LW(p) · GW(p)

The impression I get when reading posts like these is that people should read up on the morality of self-care. If I'm not "allowed" to care for my friends, family, or myself, not only would my quality of life decrease, it would decrease in a way that makes it harder and less efficient to actively care about (e.g. donate to) people I don't know.

Replies from: Emile
comment by Emile · 2015-03-09T09:29:11.417Z · LW(p) · GW(p)

But is caring for yourself and your friends and family an instrumental value that helps you stay sane so that you can help others more efficiently, or is it a terminal value? It sure feels like a terminal value, and your "morality of self-care" sounds like a roundabout way of explaining why people care so much about it by making it instrumental.

Replies from: MathiasZaman
comment by MathiasZaman · 2015-03-11T09:53:26.316Z · LW(p) · GW(p)

I don't know. I also don't know if terminal values for utility maximizers and terminal values for fallible human beings perfectly line up, even if humans might strive to be perfectly selfless utility maximizers.

What I do know is that for a lot of people the practical utility increase they can manage goes up when they have friends and family they can care about. If you forbid people from self-care, you create a net decrease of utility in the world.

comment by fizolof · 2015-03-08T16:12:41.585Z · LW(p) · GW(p)

I think ultimately, we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother. What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?

Replies from: Emile, buybuydandavis
comment by Emile · 2015-03-08T20:01:21.877Z · LW(p) · GW(p)

What if, for example, the institution of family is crucial for the well-being of humans, and not putting your close ones first in the short run would undermine that institution?

If that were the real reason you treat your brother better than one kid in Africa, then you would be willing to sacrifice a good relationship with your brother in exchange for saving two good brother-relationships between poor kids in Africa.

I agree you could evaluate impersonally how much good the institution of the family (and other similar things, like marriages, promises, friendship, nation-states, etc.) creates, and thus how "good" our natural inclinations to help our family are (on the plus side: it sustains the family, an efficient form of organization and child-rearing; on the down side: it can cause nepotism). But we humans aren't moved by that kind of abstract consideration nearly as much as we are by a desire to care for our family.

comment by buybuydandavis · 2015-03-09T03:11:18.487Z · LW(p) · GW(p)

we should care about the well-being of all humans equally - but that doesn't necessarily mean making the same amount of effort to help one kid in Africa and your brother.

We have the moral imperative to have the same care for them, but not to act in accordance with equal care? This is a common meme, if rarely spelled out so clearly. A "morality" that consists of moral imperatives to have the "proper feelings" instead of the "proper doings" isn't much of a morality.

comment by satt · 2015-03-10T03:21:16.549Z · LW(p) · GW(p)

I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist

Consequentialism just means the rightness of behaviour is determined by its result. (The World's Most Reliable Encyclopaedia™ confirms this.) So you can be a partial (as in not impartial) consequentialist, a consequentialist who thinks good results for kith & kin are better than good results for distant strangers.

As for utilitarianism, it depends on which definition of utilitarianism one chooses. Partiality is compatible with what I call utilityfunctionarianism (and with additively-separable-utility-function-arianism), but contradicts egalitarian utility maximization.
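A sketch of that distinction in symbols, using the additively separable form mentioned above (the example weights are assumptions for illustration):

```latex
% Partial consequentialism: an additively separable utility function with
% freely chosen, possibly unequal weights w_i over individuals i.
\[
  U_{\text{partial}}(x) = \sum_i w_i \, u_i(x),
  \qquad \text{e.g. } w_{\text{family}} > w_{\text{friend}} > w_{\text{stranger}} > 0.
\]
% Egalitarian utility maximization: the special case where every weight is equal.
\[
  U_{\text{egalitarian}}(x) = \sum_i u_i(x)
  \qquad (w_i = 1 \text{ for all } i).
\]
```

Both forms judge an outcome x by its results, so both are consequentialist in the broad sense; only the second builds impartiality into the weights.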