Meaning and Moral Foundations Theory
post by bryjnar · 2018-04-07T17:59:24.227Z · LW · GW · 8 comments
(cross-posted from https://www.michaelpj.com/blog/2018/04/07/meaning-and-mft.html)
Robin Hanson writes (some time ago, but it's a classic):
So there is a bit of a tension here between the meaning that crusaders choose for themselves and the happiness they try to give to others. They might reasonably be accused of elitism, thinking that happiness is good for the masses, while meaning should be reserved for elites like them. Also, since such folks tend to embrace far mode thoughts more, and tend less to think that near mode desires say what we really want, such folks should also be conflicted about their overwhelming emphasis on happiness over meaning when giving policy advice.
I think there's something interesting here, with my gloss on the interesting question being: when we intervene in other people's lives, why don't we try more often to make them meaningful rather than happy?
Let's take two premisses. First, that people often gain meaning in their lives from being good (as Hanson argues policy-makers do). This obviously isn't the only source of meaning, but it is one.
Secondly, let's suppose that we buy Jonathan Haidt's Moral Foundations theory. "Elites" are anecdotally biased towards the Harm/Care foundation, and thus they get meaning out of helping other people. But what would it mean to help other people have more (morally) meaningful lives? You would have to help people to help some third party. Now, this may be an effective approach to helping the third party, but in most situations you might expect that it would be easier and more straightforward to just help the third party directly, rather than doing it indirectly.
Harm/Care is unusual among the foundations in that it's other-directed. The goal is to help other people, and it does not especially matter how that occurs. In particular, it seems somehow inappropriate for someone who cares about Harm/Care to care about who does the helping - this focusses the attention on the helper, when it is the helpee who is relevant.
In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you. You have to do this yourself, and so any attempt to make the world better for one of these foundations is going to require getting lots of other actors to be more moral. Which may also make their lives feel more meaningful, if they subscribe to those foundations.
So we should expect to see a lot more morality/meaningfulness interventions from people who subscribe to the other foundations. And I think we do: the war on drugs (purity), nationalism (authority), abstinence-only sex ed (purity again), etc. These are all things (partly) aiming to get people to behave "better". And maybe this even works: if you do think purity is morally important, and you don't have sex until your wedding night, perhaps you do feel like it was more meaningful.
So here's my argument for why we don't tend to talk about meaningfulness interventions: we mostly don't know how to produce meaningfulness directly, and while we can get some of it from morality interventions, those mostly don't make sense under the Harm/Care foundation.
We could clearly do better than this. At the least, we should consider cases where we can empower people to help others, thus hopefully making their lives more meaningful. If we think this is valuable, then we should be willing to trade off some amount of efficiency to get it. I don't know how much, but we should at least think about it.
Secondly, even if you don't subscribe to the other foundations, if you can help people who do subscribe to them to follow them better, then that may make their lives more meaningful, even if we don't think it's actually morally better. Obviously, we'd only want to do this in cases where it isn't actually harmful: we certainly shouldn't support abstinence-only sex ed. But perhaps we should consider e.g. helping people maintain family loyalties through difficult times.
8 comments
comment by DanArmak · 2018-04-08T16:30:34.944Z · LW(p) · GW(p)
Harm/Care is unusual among the foundations in that it's other-directed. The goal is to help other people, and it does not especially matter how that occurs. [...] In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you.
It seems to me that Harm/Care isn't as different as you say. Native (evolved) morality is mostly deontological. The object of moral feelings is the act of helping, not the result of other people being better off. "The goal is to help other people" sounds like a consequentialist reformulation. Helping a second party to help a third party may not be efficient, but morality isn't concerned with efficiency.
In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you.
I could say: yes, I can be just *to* you, loyal *to* you, a good follower *of* you. And pure *for* you too - think about purity pledges, aka "save it *for* your future spouse".
In all these cases, morality is about performance - deontology - rather than about accomplishing a goal. But each case does have an apparent goal, so our System 2 can apply consequentialist logic to it. Why do you treat Harm/Care differently?
comment by bryjnar · 2018-04-12T21:16:45.354Z · LW(p) · GW(p)
I think I would argue that harm/care isn't obviously deontological. Many of the others are indeed about the performance of the action, but I think arguably harm/care is actually about the harm. There isn't an extra term for "and this was done by X".
That might just be me foisting my consequentialist intuitions on people, though.
comment by rk · 2018-04-08T16:54:40.818Z · LW(p) · GW(p)
I think I understood bryjnar differently from you.
I could say: yes, I can be just to you, loyal to you, a good follower of you. And pure for you too - think about purity pledges, aka "save it for your future spouse".
I think that saying that "you can't do x for someone" is supposed to be analogous to "I can't learn the material for you" as opposed to "if you want to climb Mt Everest, you have to do it for yourself rather than for someone else".
If you understood that the same way, then I think you're saying that if people care about others being pure, it seems they can just as easily care about others being caring. And that we should think about people trying to observe the norm of caring and making sure others do, rather than trying to care effectively. Is that right?
comment by DanArmak · 2018-04-08T17:20:04.863Z · LW(p) · GW(p)
"I can't learn the material for you" as opposed to "if you want to climb Mt Everest, you have to do it for yourself rather than for someone else".
I'm not sure I understand the difference, can you make it more explicit?
"I can't learn the material for you": if I learn it, it won't achieve the goal of you having learned it, i.e. you knowing the material.
"I can't climb the mountain for you": if I climb it, the prestige and fun will be mine; I can't give you the experience of climbing the mountain unless you climb it yourself.
The two cases seem the same...
if people care about others being pure, it seems they can just as easily care about others being caring. And that we should think about people trying to observe the norm of caring and making sure others do, rather than trying to care effectively. Is that right?
Yes, that's what I think is happening: people observing norms and judging others on observing them, rather than on achieving goals efficiently or achieving more. Consequentially, we want to save everyone. Morally, we don't judge people harshly for not saving everyone as long as they're doing their best - and we don't expect them to make an extraordinary effort.
And so, I don't see a significant difference between Harm/Care and the other foundations.
comment by rk · 2018-04-08T17:49:05.401Z · LW(p) · GW(p)
"I can't climb the mountain for you"
This is not what I meant. Sorry I didn't communicate that. In the second case, the speaker is saying that you just won't be capable of climbing Mt Everest if you are trying to do it 'for' someone else['s benefit]. It has to be something you are doing for yourself. In both cases, the person climbing the mountain is you.
And so, I don't see a significant difference between Harm/Care and the other foundations.
There does seem to be a difference in that Harm/Care is at least ostensibly about other people (and their welfare), whereas purity is about the conduct of the individual.
comment by DanArmak · 2018-04-08T18:01:48.154Z · LW(p) · GW(p)
Loyalty, authority, and fairness are also about other people. A lone person can't be loyal, authoritative, or fair; you have to be those things to someone else.
And, as I've been saying, Harm/Care is also about the conduct of the individual: do you harm others or care for them?
comment by rk · 2018-04-08T19:26:54.256Z · LW(p) · GW(p)
Loyalty, authority, and fairness are also about other people [...] you have to be those things to someone else.
I agree. But it seems to me that we would say we value those because they're the right way to act, not because of how they affect others.
In the Harm/Care case it's less clear to me that we would say it's about comportment as opposed to effect, but I could see that being the case.