Social Impact, Effective Altruism, and Motivated Cognition
post by JonahS (JonahSinick) · 2013-06-08T02:31:37.659Z · LW · GW · Legacy · 20 comments
Money is one measure of social status. People compare themselves favorably or unfavorably to others in their social circles based on their wealth, their earning power, and signals thereof, and compare their own social circles with others based on the average wealth of their members. Humans crave social status, and this is one of people’s motivations for making money.
Effective altruists attempt to quantify “amount of good done” and maximize it. Once this framing is adopted, “amount of good done” becomes a measure of social status in the same way that money is. Most people who aspire to be effective altruists will be partially motivated by a desire to matter more than other people, in the sense of doing more good. People who join the effective altruism movement may do so partially out of a desire to matter more than people who are not in the movement.
Harnessing status motivations for the sake of doing the most good can have profound positive impacts. But under this paradigm, effective altruists will generally be motivated to believe that they’re doing more good than other people are. This motivation is not necessarily dominant in any given case, but it’s sufficiently strong to be worth highlighting.
With this in mind, note that effective altruists will be motivated to believe that the activities that they themselves are capable of engaging in have higher value than they actually do, and that activities that others are engaged in have lower value than they actually do. Without effort to counterbalance this motivation, effective altruists’ views of the philanthropic landscape will be distorted, and they’ll be apt to bias others in favor of the areas that use their own core competencies.
I worry that the effective altruist community hasn’t taken sufficient measures to guard against this issue. In particular, I’m not aware of any overt public discussion of it. Even if such discussion has occurred and I’ve simply missed it, the fact that I’m not aware of it suggests that it hasn’t percolated widely enough.
I’ll refrain from giving specific examples that I see as causes for concern, on account of political sensitivity. The effective altruist community is divided into factions, and Politics is the Mind-Killer. I believe that there are examples of each faction irrationally overestimating the value of its own activities, and/or irrationally underestimating the value of other factions’ activities, and I believe that in each case, motivated reasoning of the above type may play a role.
I request that commenters not discuss particular instances in which they believe that this has occurred, or is occurring, as I think that such discussion would reduce collaboration between different factions of the effective altruist community.
The effective altruist movement is in its early stages, and it’s important to arrive at accurate conclusions about effective philanthropy as fast as possible. At this stage, it may be that the biggest contribution that members of the community can make is to engender and engage in an honest and unbiased discussion of how best to make the world a better place.
I don't have a very definite proposal for how this can be accomplished. I welcome any suggestions. For now, I would encourage effective altruist types to take pride in being self-skeptical when it comes to favorable assessments of their potential impact relative to other effective altruist types, or relative to people outside of the effective altruist community.
Acknowledgements: Thanks to Vipul Naik and Nick Beckstead for feedback on an earlier draft of this post.
Note: I formerly worked as a research analyst at GiveWell. All views here are my own.
I cross-posted this article to http://www.effective-altruism.com/
20 comments
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-08T05:19:54.730Z · LW(p) · GW(p)
Seems kind of obvious? We've got plenty of people running around saying "Perhaps you overestimate your importance".
comment by JonahS (JonahSinick) · 2013-06-08T05:50:46.318Z · LW(p) · GW(p)
I agree that people say such things all the time. What I haven't seen very much of is:
- People questioning whether they themselves are subject to this influence (as opposed to questioning whether other people are subject to this influence).
- Meta-level discussion about how to counteract this influence.
On the latter point, I find certain principles from your How To Actually Change Your Mind sequence to be highly relevant and significant, but I don't remember having seen explicit application of these principles to "assessing the relative social impact of different effective altruism interventions" in the public domain.
comment by Wei Dai (Wei_Dai) · 2013-06-08T14:16:44.274Z · LW(p) · GW(p)
I wrote a post which is related, except that I thought different people might be more or less influenced by different biases and didn't identify one in particular as the most relevant.
comment by JonahS (JonahSinick) · 2013-06-08T15:28:07.859Z · LW(p) · GW(p)
Yes, I vaguely remember having seen this — good point.
comment by Raemon · 2013-06-10T15:21:04.691Z · LW(p) · GW(p)
It's obvious to people in the rationality community (though I'd agree with Jonah that even here, we don't do a good enough job of actually instilling habits).
But the Effective Altruism community is in the process of going... not mainstream, exactly, but at least drawing from different pools of people than the rationality community. Some of those people are coming from places like felicifia.org, which has a fair emphasis on intellectual rigor, but a lot of them are coming from circles where a lot of ideas we take for granted aren't really common. Over the past few months, there's been an influx of people into the Facebook group discussions, and I've become a lot more concerned about the level of careful thinking there.
I've been noticing similar issues promoting the NYC Less Wrong group outside of LW-itself lately. On LW there's a shared culture of taking responsibility for your own intellectual rigor, or at the very least, acknowledging when you haven't researched an idea enough to be confident in it. Figuring out how to instill this in newcomers seems pretty important.
comment by John_Maxwell (John_Maxwell_IV) · 2013-06-08T06:39:25.483Z · LW(p) · GW(p)
Well, of course everyone's going to be thinking they're doing the most effective thing, because they chose to do it based on the fact that it seemed like it'd be the most effective. (Hopefully.)
comment by JonahS (JonahSinick) · 2013-06-08T15:36:05.758Z · LW(p) · GW(p)
Different people have different comparative advantages.
Consider the question "is it better to become a doctor, or a banker?" that's been raised by 80K. Someone who's naturally suited to being a doctor will be motivated to believe that becoming a doctor is higher value, and somebody who's naturally suited to being a banker will be motivated to believe that becoming a banker is higher value.
Thinking about one's comparative advantage can be a good heuristic for figuring out how to do the most good. The trouble arises when, e.g., people who are especially good at being doctors (resp. bankers) are motivated to believe that their activity is of higher value, and then try to convince others (who don't have the same comparative advantage) to adopt the same profession because of this.
Also, people can be confused as to what their comparative advantage actually is, and so be in a field that's suboptimal for themselves, and try to get other people to go into the field for the reasons described above.
comment by elharo · 2013-06-09T10:05:57.010Z · LW(p) · GW(p)
Absent specific, non-hypothetical examples and empirical evidence, I find this question hard to think or reason about. I have not noticed this problem myself, so I cannot recollect any such examples from my own experience.
I note this as another example of the '"Politics is the Mind-Killer" is the Mind-Killer' meta-problem. The point of the "Politics is the Mind-Killer" essay (and a correct one) is that we should avoid using tribal-loyalty-triggering examples when discussing issues such as mathematics, cognitive biases, and logic that are not fundamentally about issues that touch on tribal identity. Triggering tribal loyalty unnecessarily is bad pedagogy.
However, "Politics is the Mind-Killer" is not a general excuse for avoiding discussion of politics or other matters that touch on tribal or personal identity when those matters are exactly the subject at hand. If rationality cannot come to epistemically correct and instrumentally useful results despite the blinders of tribal loyalty and personal identity, it is weak, impotent, and irrelevant.
The claim of this post is that people have cognitive biases based on personal identity that cause them to reach incorrect conclusions about the relative efficacy of different altruistic actions. If this group is truly rational, then we should be able to calmly discuss the actual issues and resolve them factually or at least work them down to the point where we realize some of us have different fundamental values. For instance, I would not expect us to resolve the question of whether to value future people equally with currently living people, but I would expect us to be able to make plausible estimates as to the number of QALYs (quality adjusted life years) per dollar of different interventions, or at the very least to figure out what information is missing and needs to be collected to answer the question. If we can't do that, if we can't even talk about that, then I have to question what the point of the entire LessWrong project actually is.
comment by wedrifid · 2013-06-08T14:31:05.913Z · LW(p) · GW(p)
Jonah has recently been attempting to persuade Eliezer that Eliezer's comparative advantage is not the FAI research that he is currently doing but instead doing (more) evangelism. Now we have a post explaining how status-signalling-related motivated cognition can cause people to overestimate the value of the altruistic efforts that they happen to have personally chosen. This is almost certainly true: typical human biases work like that in all similar areas, so it would be startling to find that an activity so heavily evolutionarily entangled with signalling motives was somehow immune! I feel it is important, however, to at least make passing acknowledgement of the fact that this exhortation about motivated cognition is itself subject to motive.
Jonah himself acknowledges that people are more likely to suggest motivated cognition as something the other guy might be suffering from than to apply it to themselves. In this case there is no overt claim like "... and therefore you should believe the guy I was arguing with is biased and so agree with me instead", and I don't believe Jonah intends anything so crude. Still, the recent context does change the meaning of any given post: at the very least, the context and expected social influence of a post influence how I personally evaluate contributions that I encounter, and I currently do not label that habit of reading a bug.
To be clear, the pattern "significant argument --(short time)--> post by one participant which points out a bias that the other participant may have" isn't (always) cause to reject the post. This one isn't particularly objectionable (a tad obvious, but that's ok in discussion). Nevertheless, I suggest that for the purpose of making the actual explicit point without distraction, it may usually be best to keep such posts in draft form for a couple of weeks and post them later, when the context loses relevance. Either that, or include a lampshade or disclaimer regarding the relevance to the existing conversation. There is something about acting oblivious that invites scepticism.
comment by JonahS (JonahSinick) · 2013-06-08T15:47:19.308Z · LW(p) · GW(p)
- In writing my post, I had a number of different examples in the back of my mind.
- Even if I don't think that MIRI's current Friendly AI research is of high value, I believe that there are instances in which people have undervalued Eliezer's holistic output for the reason that I describe in my post.
- There's a broader context that my post falls into: note that I've made 11 substantive posts over the past 2.5 weeks, on subjects ranging from GiveWell's work on climate change and meta-research, to effective philanthropy in general, to epistemology.
- You may be right that I should be spacing my posts out in a different way, temporally.
comment by TheOtherDave · 2013-06-08T19:16:35.829Z · LW(p) · GW(p)
I endorse the lampshade approach significantly more than the delay approach.
More generally, I endorse stating explicitly whatever motivational or cognitive biases may nonobviously be influencing my posting whenever doing so isn't a significant fraction of the effort involved in the post.
For example, right now I suspect I'm being motivated by interpreting wedrifid's comment as a relatively sophisticated way of taking sides in the Jonah/Eliezer discussion he references, and because power struggles make me anxious, my instinct is to "go meta" and abstract this issue further away from that discussion.
In retrospect, that isn't really an example; working out that motive and stating it explicitly was a significant fraction of the effort involved in this comment.
comment by Brian_Tomasik · 2013-06-08T20:38:06.606Z · LW(p) · GW(p)
For now, I would encourage effective altruist types to take pride in being self-skeptical when it comes to favorable assessments of their potential impact relative to other effective altruist types, or relative to people outside of the effective altruist community.
Yes, I find it remarkable how EAs tend to think their work is obviously vastly more important than that of "non-EAs" (as if such a thing were even well defined). There's not a lot new under the sun, and like most movements, EA is largely a recycling and recombination of things other people have been doing since the dawn of civilization. It may be a good combination, but little in EA is really unique to EA.
All of that said, I think a big reason people think their own work dominates that of others is because they have different values from other people. It's perfectly possible for lots of people to be doing lots of things that are each optimal relative to their own values. You might (perhaps correctly) point out that most EAs have values more similar to each other than my values are to theirs, so my point may apply less broadly than I suggested.
comment by JonahS (JonahSinick) · 2013-06-09T05:29:00.456Z · LW(p) · GW(p)
All of that said, I think a big reason people think their own work dominates that of others is because they have different values from other people.
The situation is blurred by the fact that people are motivated to believe that the work that they're doing fulfills their values. For an extreme but vivid case, consider participants in a genocide. It's very hard to imagine that massacring a population reflects their fundamental values, but my impression is that such people often believe that they're doing the "right" thing in some moral sense.
I worry that this might have (much more mild!) incarnations within the EA community.
comment by Mitchell_Porter · 2013-06-08T06:21:42.790Z · LW(p) · GW(p)
I’m not aware of any overt public discussion of [this issue].
ETA: It's not the same issue; I didn't read either of you properly. But perhaps the same ballpark.
comment by JonahS (JonahSinick) · 2013-06-08T06:35:37.445Z · LW(p) · GW(p)
Can you elaborate? I don't immediately see the connection with effective altruists being motivated to believe that the activities that they engage in are of higher relative value than they actually are.
comment by CarlShulman · 2013-06-09T10:18:52.050Z · LW(p) · GW(p)
This feeds back into the earlier discussion about the flexibility of donations vs. careers. "Hot money" donors who switch to apparently better alternatives face fewer of the costs that encourage rationalization. They still have some pressures along these lines, since they don't want to say their previous donations were foolish, and would probably like to be able to point to some new evidence or justification for the switch, but the problem would certainly seem to be smaller.
comment by JonahS (JonahSinick) · 2013-06-09T21:27:33.869Z · LW(p) · GW(p)
This is a very good point, which I had not considered. As you know, I've generally erred in the direction of updating too much rather than too little, and so this issue hasn't been salient to me. It's something for me to brood on.
As I said in response to your comment on my earlier post, I think that this problem can partially be mitigated by developing transferable skills and connections that can be applied in a wide variety of contexts.
comment by JoshuaZ · 2013-06-08T03:18:02.427Z · LW(p) · GW(p)
This seems potentially connected to Goodhart's law.
comment by Pablo (Pablo_Stafforini) · 2013-06-08T17:06:31.821Z · LW(p) · GW(p)
Good post.
One obvious problem with trying to overcome bias by means of "self-skepticism" is that many of the biases we try to overcome also shape our skeptical attitudes. Here, as elsewhere, adopting the outside view is probably more effective than attempting to find flaws in one's thinking "from the inside".
A possible application to the case at hand is this. Consider the reasons why you chose to work on a particular cause, instead of the many other causes you could have worked on. Are those reasons the same ones that you currently regard as valid? If not, you should increase your credence in the hypothesis that you might be working on the wrong cause, relative to your present beliefs and values, since you might have reached this view as a result of motivated cognition.
I will give an example from my own personal life. I chose to become a vegetarian many years ago, out of concern for the animals that were suffering (in expectation) as a result of my dietary choices. However, as I read and reflected more on the issue, I came to realize that the indirect effects on other sentient beings were much more relevant than the direct effects on the animals themselves. In particular, I thought that the effects of spreading concern for all sentience by abstaining from eating animals might shape the choices made by our descendants with the power to create astronomical amounts of suffering in the Universe. However, this should make me suspicious. Was I really lucky that my new reasons just so happened to vindicate the diet to which my old reasons had caused me to become deeply attached? Or is this instead the result of motivated cognition on my part? I am still a vegetarian, but because of arguments of this sort I am less convinced that this is what morality requires of me.
comment by wedrifid · 2013-06-08T14:34:21.891Z · LW(p) · GW(p)
The effective altruist movement is in its early stages, and it’s important to arrive at accurate conclusions about effective philanthropy as fast as possible. At this stage, it may be that the biggest contribution that members of the community can make is to engender and engage in an honest and unbiased discussion of how best to make the world a better place.
This philosophy strikes me as remarkably compatible with that of Leverage Research. Are you in contact with those folks at all?