Friendship and happiness generation

post by AlexMennen · 2011-11-25T20:52:18.009Z · 5 comments

Happiness and utility are different things: happiness (measured in hedons) generally refers to the desirability, to an agent, of its current mental state, while utility (measured in utils) refers to the desirability, from the point of view of some agent, of a configuration of the universe.

Naively, one could model caring about another person as having a portion of your utility function allocated to mimicking their utility (me.utility(universe) = caring_factor*friend.utility(universe) + me.utility(universe excluding value of friend's utility function)) or their happiness (me.utility(universe) = caring_factor*friend.happiness + me.utility(universe excluding friend's happiness)). However, I think these are bad models of how caring for people actually works in humans.
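To make the two naive models concrete, here is a minimal sketch in Python. Everything in it (the `caring_factor` value, the dictionary keys, the stand-in utility and happiness functions) is an invented placeholder for illustration, not something taken from the post:

```python
# Toy sketch of the two naive models of caring. All constants and
# function bodies are illustrative stand-ins.

CARING_FACTOR = 0.5  # how strongly my utility tracks my friend's (invented)


def friend_utility(universe):
    # The friend's utility over the whole universe (stand-in).
    return universe["friend_outcome"]


def friend_happiness(universe):
    # The friend's current happiness in hedons (stand-in).
    return universe["friend_hedons"]


def my_base_utility(universe):
    # My utility over the universe, excluding the term for my friend.
    return universe["my_outcome"]


def utility_mimicking_model(universe):
    # Model 1: part of my utility function mimics my friend's utility.
    return my_base_utility(universe) + CARING_FACTOR * friend_utility(universe)


def happiness_mimicking_model(universe):
    # Model 2: part of my utility function mimics my friend's happiness.
    return my_base_utility(universe) + CARING_FACTOR * friend_happiness(universe)


universe = {"my_outcome": 3.0, "friend_outcome": 4.0, "friend_hedons": 2.0}
print(utility_mimicking_model(universe))    # 3.0 + 0.5 * 4.0 = 5.0
print(happiness_mimicking_model(universe))  # 3.0 + 0.5 * 2.0 = 4.0
```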

I've noticed that I often gladly give up small amounts of hedons so that someone I care about can gain a similar amount. Extrapolating from this, one might conclude that I care about plenty of other people nearly as much as I care about myself. However, I would be much less likely to give up a large amount of hedons for someone I care about unless the ratio of the hedons they would gain to the hedons I would have to give up were also fairly large.

While trying to figure out why this is, I realized that whenever I think I'm sacrificing hedons for someone, I usually don't actually lose any hedons, because I enjoy the feeling that comes with knowing I helped a friend. I expect this reaction is fairly common. It implies that by doing small favors for each other, friends can generate happiness for both of them even when the hedons sacrificed by one (not counting the friend-helping bonus) are comparable to the hedons gained by the other. However, this happiness bonus for helping a friend is bounded, and grows sublinearly with the amount of good done for the friend. In terms of evolutionary psychology, this makes sense: seeking out cheap ways to signal loyalty sounds like a decent strategy for getting and keeping allies.
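To make "bounded and sublinear" concrete, here is a minimal sketch assuming a saturating-exponential form for the bonus. The functional form and all the constants are my own assumptions, not anything from the post:

```python
import math

# Hypothetical bounded, sublinear happiness bonus for helping a friend.
BONUS_CAP = 10.0  # maximum hedons the warm-glow bonus can ever provide
SCALE = 5.0       # how quickly the bonus saturates


def helping_bonus(good_done):
    """Hedons I gain from helping, as a function of good done for the friend.

    Grows roughly linearly for small favors, then saturates at BONUS_CAP.
    """
    return BONUS_CAP * (1 - math.exp(-good_done / SCALE))


# A small favor can be hedon-positive for me even net of its cost...
small_cost = 1.0
print(helping_bonus(1.0) - small_cost)    # ~+0.81: small favors pay for themselves
# ...but a large favor can't be, since the bonus is capped at BONUS_CAP.
large_cost = 50.0
print(helping_bonus(100.0) - large_cost)  # ~-40: warm glow alone can't cover this
```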

I don't think this tells the whole story. If a friend had enough at stake, I would sacrifice much more for them than could be reimbursed by the happiness bonus for helping a friend (plus the happiness penalty I would otherwise absorb from knowing I had abandoned a friend), because I do actually care about people. Again, I would expect most other people to act this way as well. But it seems likely that most favors people do for each other are motivated primarily by the personal happiness they get from knowing they've helped a friend, rather than by directly caring about how happy their friends are.

5 comments

comment by steven0461 · 2011-11-25T21:18:53.513Z

Small-scale acts of helping are more common than large-scale acts of helping, because helping is gratifying, and the gratification scales less than linearly with the amount of good done. Small-scale acts of helping can be explained by a purely egoistic model, and there are fewer large-scale acts of helping than can be explained by a purely altruistic model, but there are still more large-scale acts of helping than can be explained by a purely egoistic model.

Fair summary?

Replies from: AlexMennen
comment by AlexMennen · 2011-11-25T21:31:38.660Z

Pretty much.

comment by vi21maobk9vp · 2011-11-26T08:43:49.887Z

Your point about having a lot at stake doesn't seem necessary: you would sacrifice much more than the helping-a-friend bonus is worth, but still less than what the friend has at stake.

This just means that helping is reimbursed at a rate of less than one-for-one, but with a positive bonus at the start; likewise, receiving help gives an additional bonus at a rate of less than one-for-one.

From my experience, the reimbursement comes from some combination of the friend's observed happiness and their extrapolated happiness. The first can be quite coarse (and is not always available before the event); the second is based on modeling your own happiness in the same situation (possibly adjusted by previously learned corrections) and has a systematic error because of that.

comment by Shmi (shminux) · 2011-11-25T23:15:00.215Z

> primarily motivated by pursuing personal happiness that they can get from knowing that they've helped a friend, rather than directly caring about how happy their friends are.

How would you experimentally distinguish between these two models?

comment by falenas108 · 2011-11-25T21:54:31.753Z

This seems to indicate a utility function of something like:

(caring factor)*(friend happiness)+(happiness bonus)-(personal unhappiness).

Most of the time, the caring factor is so small that the happiness bonus dominates, but when a friend asks a large favor, one that would gain them a large amount of utility, the caring term outweighs the personal utility lost.
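A minimal sketch of this combined model, with all constants invented for illustration, shows the two regimes: for small favors the bounded happiness bonus does nearly all the work, while for large favors the small caring factor applied to a large stake is what tips the decision:

```python
import math

CARING_FACTOR = 0.05  # small: genuine caring per hedon the friend gains (invented)
BONUS_CAP = 10.0      # warm-glow bonus saturates here (invented)
SCALE = 5.0


def helping_bonus(friend_gain):
    # Bounded, sublinear warm-glow bonus (same assumed form as above).
    return BONUS_CAP * (1 - math.exp(-friend_gain / SCALE))


def my_utility_change(friend_gain, my_cost):
    # falenas108's form: caring term + happiness bonus - personal unhappiness.
    return CARING_FACTOR * friend_gain + helping_bonus(friend_gain) - my_cost


# Small favor: the bonus term does nearly all the work.
print(my_utility_change(friend_gain=2.0, my_cost=1.0))
# ~+2.4 total: bonus contributes ~3.3, caring only 0.1

# Large favor: the bonus is capped, so the caring term has to carry it.
print(my_utility_change(friend_gain=500.0, my_cost=20.0))
# +15.0 total: caring contributes 25, bonus is capped at 10
```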