Should EAs be Superrational cooperators?
post by diegocaleiro · 2014-09-16T21:41:10.712Z · LW · GW · Legacy · 10 comments
Back in 2012, when visiting Leverage Research, I was amazed by the level of cooperation I got from Mark in daily situations. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.
If someone needed X, and Mark had X, he would provide X to them. This was true for lending, but also for giving away.
If there was a situation in which someone needed to direct attention to a particular topic, Mark would do it.
You get the picture. Faced with prisoner's dilemmas, Mark would cooperate. Faced with tragedies of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time, or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation. The action would be the same, and the consequentialist one, regardless of which side of a dispute Mark was on.
I never got over that impression: the impression that I could try to be as cooperative as my idealized fiction of Mark.
In game-theoretic terms, Mark was a Cooperational agent (a rough sketch of these orientations as utility functions follows the list below):
- Altruistic - MaxOther
- Cooperational - MaxSum
- Individualist - MaxOwn
- Equalitarian - MinDiff
- Competitive - MaxDiff
- Aggressive - MinOther
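For concreteness, here is a minimal sketch (my own illustration, not from the original post) of these orientations as utility functions over an (own payoff, other payoff) pair; the payoff numbers below are a hypothetical one-shot prisoner's dilemma faced against a cooperating partner:

```python
from typing import Callable, Dict, Tuple

Outcome = Tuple[float, float]  # (own_payoff, other_payoff)

UTILITIES: Dict[str, Callable[[Outcome], float]] = {
    "Altruistic":    lambda o: o[1],               # MaxOther
    "Cooperational": lambda o: o[0] + o[1],        # MaxSum
    "Individualist": lambda o: o[0],               # MaxOwn
    "Equalitarian":  lambda o: -abs(o[0] - o[1]),  # MinDiff
    "Competitive":   lambda o: o[0] - o[1],        # MaxDiff
    "Aggressive":    lambda o: -o[1],              # MinOther
}

def best_action(agent: str, actions: Dict[str, Outcome]) -> str:
    """Return the action whose outcome this agent type prefers."""
    u = UTILITIES[agent]
    return max(actions, key=lambda a: u(actions[a]))

# Hypothetical payoffs when facing a partner who cooperates:
pd_vs_cooperator = {"cooperate": (3.0, 3.0), "defect": (5.0, 0.0)}
print(best_action("Cooperational", pd_vs_cooperator))  # -> cooperate (3 + 3 > 5 + 0)
print(best_action("Individualist", pd_vs_cooperator))  # -> defect (5 > 3)
```

Against a cooperating partner the MaxSum agent prefers mutual cooperation while the MaxOwn agent prefers to defect, which is exactly the Coo/Ind contrast drawn below.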
Under these definitions of kinds of agents used in research on game-theoretic scenarios, what we call Effective Altruism would be called Effective Cooperation. The reason we call it "altruism" is that even the most parochial EAs care about a set containing a minimum of 7 billion minds, where to a first approximation MaxSum ≈ MaxOther.
Locally, however, the distinction makes sense. In biology, "altruism" usually refers to a third concept, different from both the "A" in EA and from Alt above: acting in such a way that Other > Own, without reference to maximizing or minimizing, since evolution designs adaptation executors, not maximizers.
A globally Cooperational agent acts as a consequentialist globally. So does an Alt agent.
The question then is:
How should a consequentialist act locally?
The mathematical answer is obviously: as a Coo. What real people do is a mix of Coo and Ind.
My suggestion is that we harness our undesirable yet unavoidable tribal instinct, the one that separates Us from Them: act always as Coos with Effective Altruists, and mix Coo and Ind only with non-EAs. That is what Mark did.
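A toy sketch of what that local policy could look like (my own framing; the 0.5 mixing weight is arbitrary, not something the post specifies):

```python
# Pure MaxSum with fellow EAs; a MaxSum/MaxOwn blend with everyone else.
def local_utility(own: float, other: float, counterpart_is_ea: bool,
                  coo_weight: float = 0.5) -> float:
    if counterpart_is_ea:
        return own + other  # act as a pure Coo
    return coo_weight * (own + other) + (1 - coo_weight) * own  # Coo/Ind mix
```

In the in-group case this is exactly MaxSum; in the out-group case it interpolates toward MaxOwn, which is one way to read the "mix of Coo and Ind" above.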
10 comments
comment by ChristianKl · 2014-09-16T23:42:10.910Z · LW(p) · GW(p)
How should a consequentialist act locally?
That depends very much on the local environment.
↑ comment by Viliam_Bur · 2014-09-19T19:28:26.599Z · LW(p) · GW(p)
Yep. It took me decades to learn this. I am a natural cooperator, and I often end up surprised that many other people aren't, and that the utility I sacrificed for the benefit of the whole group is gone because of other people's infighting. "Help your group" must go together with "choose the right group".
↑ comment by SanguineEmpiricist · 2014-09-27T05:58:25.383Z · LW(p) · GW(p)
Yes, I learned this lesson the hard way as well.
comment by leplen · 2014-09-23T04:09:53.628Z · LW(p) · GW(p)
Coo, as you've described it, is probably my default personality type, and it's not necessarily a good one. In particular, always being willing to help/give means that other people sometimes get to direct a lot of the output of my productive effort. But people are often willing to spend the effort of others cheaply. There are lots of things I would be interested in having if someone were giving them away, but that I don't necessarily want very strongly.
Inasmuch as the people I associated with shared the norm that asking for favors was difficult and carried an expectation of repayment and obligation, this strategy works very well. It lowers transaction barriers for beneficial networks of obligation. When I encounter people who do not find asking for help status-lowering or otherwise costly, or people who do not feel that favors create a sense of debt/obligation, things can go very poorly very quickly. Obviously this is almost exactly what the theory predicts, but it's also very true in practice (at least for n=1).
I'm not sure about your distinction between EAs and non-EAs. There are many EAs who I may share some terminal values with, but who I strongly disagree with about the relevant priorities in the short term. I don't think that giving them as many resources as they ask for is necessarily a good idea.
comment by Fluttershy · 2014-10-05T18:04:19.787Z · LW(p) · GW(p)
For a minute after reading this article, it seemed unintuitive to me that anyone would find it surprising that someone else would Cooperate in their day-to-day real-life social interactions (which can be modeled as an iterated prisoner's dilemma). After all, people are supposed to more or less play Tit-for-Tat in real life, right?
I think that, in real life, there are lots of situations in which you can Cooperate or Defect to various degrees, such that sets of everyday social actions between two parties are better modeled by continuous iterated prisoner's dilemmas than by discrete ones. This distinction could help explain why it feels weird when someone is really, really unusually nice, like Mark: maybe it is normal to mostly Cooperate in social situations, but it is rare to completely Cooperate.
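To illustrate the continuous version (a minimal sketch of my own, assuming a simple linear benefit/cost payoff model, not anything specified in the comment):

```python
# Each player picks a cooperation level c in [0, 1] instead of a binary
# Cooperate/Defect; each unit of cooperation costs the cooperator `cost`
# and gives the other player `benefit`.
def continuous_pd_payoffs(c1: float, c2: float,
                          benefit: float = 5.0, cost: float = 2.0):
    """Return (player1_payoff, player2_payoff)."""
    return benefit * c2 - cost * c1, benefit * c1 - cost * c2

print(continuous_pd_payoffs(1.0, 1.0))  # mutual full cooperation -> (3.0, 3.0)
print(continuous_pd_payoffs(0.0, 1.0))  # defect vs. cooperate    -> (5.0, -2.0)
print(continuous_pd_payoffs(0.8, 0.8))  # "mostly cooperate"      -> roughly (2.4, 2.4)
```

At the endpoints this is still an ordinary prisoner's dilemma (5 > 3 > 0 > -2), but intermediate cooperation levels are available, which is the sense in which "mostly but not completely cooperating" becomes expressible.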
comment by V_V · 2014-09-17T09:19:46.211Z · LW(p) · GW(p)
Pointing out the obvious failure mode of this strategy is left as an exercise for the reader.
↑ comment by devas · 2014-09-17T11:10:46.233Z · LW(p) · GW(p)
One becomes vulnerable to Inds pretending to be Coos?
↑ comment by V_V · 2014-09-17T12:11:57.615Z · LW(p) · GW(p)
Exactly.
↑ comment by diegocaleiro · 2014-09-17T18:08:40.055Z · LW(p) · GW(p)
Thus requiring coordination and trust.
Evolution frequently solves this with costly signalling.
Levels of coordination increase or decrease substantially with intragroup trust, which is modulated by communication and mutual knowledge. Thus there will be incentives to remind everyone to cooperate, and incentives to remind everyone to keep their eyes open. My post does the former job; V_V's original response does the latter. Both are necessary.