Conflicting advice on altruism

post by leplen · 2015-10-11T00:25:10.268Z · LW · GW · Legacy · 3 comments

Contents

  Game Theory Model
  Adding Complexity
  Trying to Be an Altruist
  Received Wisdom on Altruism
  Effective Altruism
  Conclusion

As far as I can tell, rather than having a single well-defined set of preferences or utility function, my actions more closely reflect the outcome of a set of competing internal drives. One of those drives is strongly oriented towards utilitarian altruism. While the altruistic drive doesn't dominate my day-to-day life, compared to the influence of more basic drives like the desires for food, fun, and social validation, I have traditionally been very willing to drop whatever I'm doing and help someone who asks for, or appears to need, help. This altruistic drive has an even more significant influence on my long-term planning, since my drives for food, fun, etc. are ambivalent between the many possible futures in which they could be well-satisfied.

I'm not totally sure to what extent strong internal drives are genetic, learned, or controllable, but I've had a fairly strong impulse towards altruism for well over a decade. Unfortunately, even over fairly long time frames, it isn't clear to me that I've been a particularly "effective" altruist. This discussion attempts to understand some of the beliefs and behaviors that contributed to my personal successes and failures as an altruist, and may also be helpful to other people looking to engage in or encourage similar prosocial habits.

 

Game Theory Model

Imagine a perfect altruist competing in a Prisoner's Dilemma-style game. The altruist in this model is by definition a pure utilitarian who wants to maximize the average utility, but is completely insensitive to the distribution of that utility.1 A trivial real-world example similar to this would be something like picking up litter in a public place. If the options are Pick Up (Cooperate) and Litter (Defect), then an altruist might choose to pick up litter even though they themselves don't capture enough of the value to justify the action. Even if you're skeptical that unselfish pure utilitarians exist, the payoff matrix and much of this analysis apply to a broader range of prosocial behaviors where it's difficult for a single actor to capture the value he or she generates.

The prisoner's dilemma payoff matrix for the game in which the altruist is competing looks something like this:

                          Agent B
                    Cooperate     Defect
Agent A  Cooperate    2, 2        -2, 4
         Defect       4,-2        -1,-1

Other examples with altered payoff ratios are possible, but this particular payoff matrix creates an interesting inversion of the typical strategy for the prisoner's dilemma. If we label the altruist Agent A (A for Altruist), then A's dominant strategy is Cooperate: just as in the traditional prisoner's dilemma, A prefers that B also cooperate, but A will cooperate regardless of what B does. The iterated prisoner's dilemma is even more interesting. If A and B are allowed to communicate before and between rounds, A may threaten to employ a tit-for-tat-like strategy and defect in future rounds against defectors, but this threat is somewhat hollow, since regardless of threats, A's dominant strategy in any given round is still to cooperate.
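To make the dominance claim concrete, here is a minimal sketch in Python (the dictionary layout and the altruist_utility helper are illustrative names of my own; the payoffs and the average-utility objective come from the model above):

```python
# Payoff matrix from the table above: (A's raw payoff, B's raw payoff).
payoffs = {
    ("Cooperate", "Cooperate"): (2, 2),
    ("Cooperate", "Defect"): (-2, 4),
    ("Defect", "Cooperate"): (4, -2),
    ("Defect", "Defect"): (-1, -1),
}

def altruist_utility(a_payoff, b_payoff):
    """A pure altruist values the average payoff, not her own share."""
    return (a_payoff + b_payoff) / 2

# Cooperate dominates Defect for A if it does at least as well
# no matter which strategy B plays.
for b_move in ("Cooperate", "Defect"):
    u_coop = altruist_utility(*payoffs[("Cooperate", b_move)])
    u_def = altruist_utility(*payoffs[("Defect", b_move)])
    print(f"B plays {b_move}: U(Cooperate)={u_coop}, U(Defect)={u_def}")
# Prints 2.0 vs 1.0 and 1.0 vs -1.0: Cooperate strictly dominates.
```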

A population of naive altruists is somewhat unstable for the same reason that a population of naive cooperators is unstable: it's vulnerable to infiltration by defectors. The obvious meta-strategies for individual altruists and altruist populations are either to become proficient at identifying defectors and ignoring/avoiding them, or to successfully threaten defectors into cooperating. Both the identify/avoid and the threaten/punish tactics have costs associated with them, and which approach is the better strategy depends on how much players are expected to change over time or over a series of games. Incorrigible defectors cannot be threatened or punished into cooperating and must be avoided, while more malleable defectors may be threatened into cooperation.

If we assume that Agent B is selfish and express the asymmetry in the agents' values in terms of our payoff matrix, then, because A maximizes the average utility, we can replace A's entry in each cell with the average of the two raw payoffs while leaving B's entry unchanged. The symmetric payoff matrix above is then equivalent to the top portion of a new payoff matrix given by

                          Agent B
                    Cooperate     Defect
Agent A  Cooperate    2, 2         1, 4
         Defect       1,-2        -1,-1
         Avoid        0, 0         0, 0

The only difference between the two matrices is that in this latter case we've given the altruist an Avoid option. There is no simple way to include a Threaten option, since threatening relies on trying to convince Agent B that Agent A is either unreasonable or not an altruist, and including that sort of bluff in the formal model makes it difficult to create payoff matrices that are both simple and reasonable. However, we can still make a few improvements to our formal model before we're forced to abandon it and talk about the real world.
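A minimal sketch of that equivalence, under the same assumptions (A maximizes the average, B is selfish, and Avoid pays 0,0 as in the table above):

```python
# Raw symmetric payoffs from the first matrix: (A, B).
raw = {
    ("Cooperate", "Cooperate"): (2, 2),
    ("Cooperate", "Defect"): (-2, 4),
    ("Defect", "Cooperate"): (4, -2),
    ("Defect", "Defect"): (-1, -1),
}

# A values the average of both payoffs; selfish B keeps its raw payoff.
transformed = {moves: ((a + b) / 2, b) for moves, (a, b) in raw.items()}

# The Avoid row: no game is played, so both agents get 0.
transformed[("Avoid", "Cooperate")] = (0, 0)
transformed[("Avoid", "Defect")] = (0, 0)

for (a_move, b_move), cell in transformed.items():
    print(f"A: {a_move:9} B: {b_move:9} -> {cell}")
# Reproduces the top of the second matrix:
# (2.0, 2), (1.0, 4), (1.0, -2), (-1.0, -1), plus the Avoid row.
```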

 

Adding Complexity

The relatively simple payoff matrices in the previous section can easily be made more realistic and more complicated. In the iterated version of the game, if the total number of games A can play is limited, then every game that fails to produce mutual cooperation carries an opportunity cost: A forgoes the ideal payoff of 2 she could have earned by cooperating with another cooperator instead. Subtracting that forgone payoff from A's entry in every cell except mutual cooperation, an altruist who cooperates with a defector now receives a negative utility as long as games with other cooperators are available.

                          Agent B
                    Cooperate     Defect
Agent A  Cooperate    2, 2        -1, 4
         Defect      -1,-2        -3,-1
         Avoid        0, 0         0, 0

In this instance, A no longer has a dominant strategy. A should cooperate with B if she thinks that B will cooperate, but A should avoid B if she thinks that B will defect. A thus has a strong incentive to build a sophisticated model of B, which she can use either to convince B to cooperate or, at the very least, to correctly predict B's defection. For a perfect altruist, more information about and better judgment of Agent B lead to better average outcomes.
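As a rough illustration (a sketch using A's payoffs from the opportunity-cost matrix above; the probability grid is arbitrary): if A believes B will cooperate with probability p, her expected payoff from Cooperate is 2p - (1 - p) = 3p - 1, versus 0 for Avoid, so cooperating is only worthwhile when she judges p > 1/3.

```python
# A's payoffs from the opportunity-cost matrix:
# Cooperate vs a cooperator: 2; vs a defector: -1; Avoid: 0 either way.
def ev_cooperate(p_coop):
    """Expected payoff of Cooperate if B cooperates with probability p."""
    return 2 * p_coop + (-1) * (1 - p_coop)  # simplifies to 3p - 1

for p in (0.0, 0.2, 1 / 3, 0.5, 0.8, 1.0):
    ev = ev_cooperate(p)
    choice = "Cooperate" if ev > 0 else "Avoid"
    print(f"p(B cooperates) = {p:.2f}: EV(Cooperate) = {ev:+.2f} -> {choice}")
# A's best response flips from Avoid to Cooperate once p exceeds 1/3.
```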

The popularity of gatekeeper organizations like GiveWell and Charity Navigator in altruist communities makes a lot of sense if those communities are aware of their vulnerability to defectors. Because charitable dollars are so fungible, giving money to a charity is an instance where opportunity costs play a significant role. While meta-charities offer some other advantages, a significant part of their appeal, especially for organizations like Charity Navigator, is helping people avoid "bad" charities.

Interestingly, with this addition, A's behavior may start to look less and less like pure altruism. Even if A is totally indifferent to the distribution of utility, if A can reliably identify some other altruists, then she will preferentially cooperate with them and avoid games with unknown agents in which there is a risk of defection. The benefits of cooperation could then disproportionately accrue within the altruist in-group, even if none of the altruists intend that outcome.

An observer with access only to the results of the games, and not to the underlying utility functions of the players, would be unlikely to conclude that a clique of A-like agents exhibiting strong internal cooperation and avoiding games with all other players had a purely altruistic utility function. Their actions pattern-match much more readily to something selfish, like typical human tribal behavior, suggesting either a self-serving or an "us versus them" utility function rather than one that has increasing the average payoff as its goal. If we include the threaten/punish option, the altruist population may look even less like a population of altruists.

That erroneous pattern match isn't a huge issue for the perfectly rational pure altruist in our game theory model. Unfortunately, human beings are often neither perfectly rational nor purely altruistic. A significant amount of research suggests that people's beliefs are strongly influenced by their actions and by what they think those actions say about them. An actual human who started with the purely altruistic utility function of Agent A in this section, and who rationally cooperated with a set of other easily identified altruists, might very well alter his utility function to seem more consistent with his actions. The game-theoretic model, in which the values of the agent are independent of the agent's choices, starts to break down.


Trying to Be an Altruist

While very few individuals are perfect altruists/pure utilitarians as defined here, a much larger fraction of the population nominally considers the altruist value system an ethical ideal. The ideal that people have approximately equal value may not always be reflected in how most people live, but many people espouse such a belief and even want to believe it. We see this idea under all sorts of labels: altruism, being a utilitarian, trying to "love your neighbor as yourself", believing in the spiritual unity of humankind, or even just an innate sense of fairness.

Someone who is trying to be an altruist may have altruism or a similar ethical injunction as one of their many internal drives, and the drive for altruism may be relatively weak compared to their desires for personal companionship, increased social status, greater material wealth, etc. For this individual, the primary threat to the effectiveness of their prosocial behavior is not the possibility that they might cooperate with a defector; it is instead the possibility that their selfish drives might overwhelm their desire to act altruistically, and they themselves might fail to cooperate.

 

Received Wisdom on Altruism

Much of the received wisdom in my native culture about how to be a good altruist is geared towards people who are trying to be altruists, rather than towards altruists who are trying to be effective. The best course of action in the two situations is often very different, but it took me a considerable amount of time to realize the distinction.

For people trying to be altruists, focusing on the opportunity costs of their altruism is exactly the wrong thing to do. Imagining all the other things they could buy with their money instead of giving it to a homeless person or donating it to the AMF makes it very unlikely they will give the money away. Judging the motivations of others often provides ample excuses for not helping someone. Seeking out similar cooperators can quickly turn into self-serving tribalism and indifference towards people unlike the tribe. Most people have really stringent criteria for helping others, and so, given the chance to help, most people don't.

The cultural advice I received on altruism tended to focus on avoiding these pitfalls. It stressed ideas like "Do whatever good you can, wherever you are", and emphasized not judging or condemning others, but giving second chances, trying to believe in the fundamental goodness of people, and trying to cooperate with and value non-tribe members and even enemies.

When I was trying to be an altruist, I took much of this cultural how-to advice very seriously, and for much of my life I helped/cooperated with anyone who asked, regardless of whether the other person was likely to defect. Even when people literally robbed me, I would rationalize that whoever stole my bike must have really needed a bike, and so even my involuntary "cooperation" with the thief was probably a net positive from a utilitarian standpoint.

 

Effective Altruism

I don't think I've been a particularly effective altruist, because I haven't been judgmental enough and because I've been too focused on doing whatever good I could where I was, instead of finding the places where I could do the most good and moving myself to those places. I'm now trying to spend nearly as much energy identifying opportunities to do good as I spend actively trying to improve the world.

At the same time, I'm still profoundly wary of the instinct not to help, or of thinking, "This isn't my best opportunity to do good", because I know that it's very easy to get into the habit of not helping people. I'm trying to move away from my instinct to reactively help anyone who asks, towards something that looks more like proactive planning, but I'm not at all convinced that most other people should be trying to move in that same direction.

As with achieving any goal, success requires a balance between insufficient planning and analysis paralysis. I think for altruism in particular this balance was, and is, difficult to strike, partly because of the large potential for motivated selfish reasoning, but also because most of my (our?) cultural wisdom emphasizes convenient immediate action as the correct form of altruism. Long-term altruistic planning is rarely mentioned or discussed, possibly because most people just aren't that strongly oriented towards utilitarian values.

 

Conclusion

If helping others is something that you're committed enough to that a significant limitation on your effectiveness is that you often help the wrong people, then diverting energy into judging who you help and consciously considering opportunity costs is probably a good idea. If helping others is something you'd like to do, but you rarely find yourself actually doing, the opposite advice may be apropos.

 

 


1. In idealized formulations of game theory, "utility" is intended to capture not just physical or monetary gain but also effects like the desire for fairness, moral beliefs, etc. Symmetric games are fairly unrealistic under that assumption, and such a definition of utility would preclude our altruist from many games altogether. Utility in this first example is defined only in terms of personal gain, and explicitly does not include the effects of moral satisfaction, desire for fairness, etc.

3 comments

Comments sorted by top scores.

comment by snarles · 2015-10-13T18:55:28.409Z · LW(p) · GW(p)

How do you get the top portion of the second payoff matrix from the first? Intuitively, it should be by replacing Agent A's payoff with the sum of the agents' payoffs, but the numbers don't match.

Most people are altruists but only to their in-group, and most people have very narrow in-groups. What you mean by an altruist is probably someone who is both altruistic and has a very inclusive in-group. But as far as I can tell, there is a hard trade-off between belonging to a close-knit, small in-group and identifying with a large, diverse but weak in-group. The time you spend helping strangers is time taken away from potentially helping friends and family.

Replies from: leplen
comment by leplen · 2015-10-15T13:44:57.065Z · LW(p) · GW(p)

It's the average ((4 - 2)/2 = 1), rather than the sum, since the altruistic agent is interested in maximizing the average utility.

The tribal limitations on altruism that you allude to are definitely one of the tendencies that much of our cultural advice on altruism targets. In many ways the expanding circle of trust, from individuals, to families, to tribes, to cities, to nation states, etc. has been one of the fundamental enablers of human civilization.

I'm less sure about the hard trade-off that you describe. I have a lot of experience being a member of small groups that have altruism towards non-group members as an explicit goal. In that scenario, helping strangers also helps in-group members achieve their goals. I don't think large-group altruism precludes you from belonging to small in-groups, since very few in-groups demand any sort of absolute loyalty. While full-effort in-group altruism, including things like consciously developing new skills to better assist your other group members, would absolutely represent a hard trade-off with altruism on a larger scale, people appear to be very capable of belonging to a large number of different in-groups.

This implies that the actual level of commitment required to be a part of most in-groups is rather low, and the socially normative level of altruism is even lower. Belonging to a close-knit in-group with a particularly needy member (e.g., a partially disabled parent, spouse, or child) may shift the calculus somewhat, but for most in-groups, being a member in good standing has relatively undemanding requirements. Examining my own motivations, it seems that for many of the groups I participate in, most of the work I do to fulfill expectations and help others within those groups is driven more directly by my desire for social validation than by my selfless perception of the intrinsic value of the other group members.

comment by Gunnar_Zarncke · 2015-10-11T15:58:46.196Z · LW(p) · GW(p)

Awesome.

Even when people literally robbed me, I would rationalize that whoever stole my bike must have really needed a bike, and so even my involuntary "cooperation" with the thief was probably a net positive from a utilitarian standpoint.

Will incorporate that into my loving kindness training.