Act of Charity
post by jessicata (jessica.liu.taylor) · 2018-11-17T05:19:20.786Z · LW · GW · 49 comments
(Cross-posted from my blog)
The stories and information posted here are artistic works of fiction and falsehood. Only a fool would take anything posted here as fact.
—Anonymous
Act I.
Carl walked through downtown. He came across a charity stall. The charity worker at the stall called out, "Food for the Africans. Helps with local autonomy and environmental sustainability. Have a heart and help them out." Carl glanced at the stall's poster. Along with pictures of emaciated children, it displayed infographics about how global warming would cause problems for African communities' food production, and numbers about how easy it is to help out with money. But something caught Carl's eye. In the top left, in bold font, the poster read, "IT IS ALL AN ACT. ASK FOR DETAILS."
Carl: "It's all an act, huh? What do you mean?"
Worker: "All of it. This charity stall. The information on the poster. The charity itself. All the other charities like us. The whole Western idea of charity, really."
Carl: "Care to clarify?"
Worker: "Sure. This poster contains some correct information. But a lot of it is presented in a misleading fashion, and a lot of it is just lies. We designed the poster this way because it fits with people's idea is of a good charity they should give money to. It's a prop in the act."
Carl: "Wait, the stuff about global warming and food production is a lie?"
Worker: "No, that part is actually true. But in context we're presenting it as some kind of imminent crisis that requires an immediate infusion of resources, when really it's a very long-term problem that will require gradual adjustment of agricultural techniques, locations, and policies."
Carl: "Okay, that doesn't actually sound like more of a lie than most charities tell."
Worker: "Exactly! It's all an act."
Carl: "So why don't you tell the truth anyway?"
Worker: "Like I said before, we're trying to fit with people's idea of what a charity they should give money to looks like. More to the point, we want them to feel compelled to give us money. And they are compelled by some acts, but not by others. The idea of an immediate food crisis creates more moral and social pressure towards immediate action, than the idea that there will be long-term agricultural problems that require adjustments.
Carl: "That sounds...kind of scammy?"
Worker: "Yes, you're starting to get it! The act is about violence! It's all violence!"
Carl: "Now hold on, that seems like a false equivalence. Even if they were scammed by you, they still gave you money of their own free will."
Worker: "Most people, at some level, know we're lying to them. Their eyes glaze over 'IT IS ALL AN ACT' as if it were just a regulatory requirement to put this on charity posters. So why would they give money to a charity that lies to them? Why do you think?"
Carl: "I'm not nearly as sure as you that they know this! Anyway, even if they know at some level it's a lie, that doesn't mean they consciously know, so to their conscious mind it seems like being completely heartless."
Worker: "Exactly, it's emotional blackmail. I even say 'Have a heart and help them out'. So if they don't give us money, there's a really convenient story that says they're heartless, and a lot of them will even start thinking about themselves that way. Having that story told about them opens them up to violence."
Carl: "How?"
Worker: "Remember Martin Shkreli?"
Carl: "Yeah, that asshole who jacked up the Daraprim prices."
Worker: "Right. He ended up going to prison. Nominally, it was for securities fraud. But it's not actually clear that whatever security fraud he did was worse than what others in his industry were doing. Rather, it seems likely that he was especially targeted because he was a heartless asshole."
Carl: "But he still broke the law!"
Worker: "How long would you be in jail if you got punished for every time you had broken the law?"
Carl: "Well, I've done a few different types of illegal drugs, so... a lot of years."
Worker: "Exactly. Almost everyone is breaking the law. So it's really, really easy for the law to be enforced selectively, to punish just about anyone. And the people who get punished the most are those who are villains in the act."
Carl: "Hold on. I don't think someone would actually get sent to prison because they didn't give you money."
Worker: "Yeah, that's pretty unlikely. But things like it will happen. People are more likely to give if they're walking with other people. I infer that they believe they will be abandoned if they do not give."
Carl: "That's a far cry from violence."
Worker: "Think about the context. When you were a baby, you relied on your parents to provide for you, and abandonment by them would have meant certain death. In the environment of evolutionary adaptation, being abandoned by your band would have been close to a death sentence. This isn't true in the modern world, but people's brains mostly don't really distinguish abandonment from violence, and we exploit that."
Carl: "That makes some sense. I still object to calling it violence, if only because we need a consistent definition of 'violence' to coordinate, well, violence against those that are violent. Anyway, I get that this poster is an act, and the things you say to people walking down the street are an act, but what about the charity itself? Do you actually do the things you say you do?"
Worker: "Well, kind of. We actually do give these people cows and stuff, like the poster says. But that isn't our main focus, and the main reason we do it is, again, because of the act."
Carl: "Because of the act? Don't you care about these people?"
Worker: "Kind of. I mean, I do care about them, but I care about myself and my friends more; that's just how humans work. And if it doesn't cost me much, I will help them. But I won't help them if it puts our charity in a significantly worse position."
Carl: "So you're the heartless one."
Worker: "Yes, and so is everyone else. Because the standard you're set for 'not heartless' is not one that any human actually achieves. They just deceive themselves about how much they care about random strangers; the part of their brain that inserts these self-deceptions into their conscious narratives is definitely not especially altruistic!"
Carl: "According to your own poster, there's going to be famine, though! Is the famine all an act to you?"
Worker: "No! Famine isn't an act, but most of our activities in relation to it are. We give people cows because that's one of the standard things charities like ours are supposed to do, and it looks like we're giving these people local autonomy and stuff."
Carl: "Looks like? So this is all just optics?"
Worker: "Yes! Exactly!"
Carl: "I'm actually really angry right now. You are a terrible person, and your charity is terrible, and you should die in a fire."
Worker: "Hey, let's actually think through this ethical question together. There's a charity pretty similar to ours that's set up a stall a couple blocks from here. Have you seen it?"
Carl: "Yes. They do something with water filtering in Africa."
Worker: "Well, do you think their poster is more or less accurate than ours?"
Carl: "Well, I know yours is a lie, so..."
Worker: "Hold on. This is Gell-Mann amnesia. You know ours is a lie because I told you. This should adjust your model of how charities work in general."
Carl: "Well, it's still plausible that they are effective, so I can't condemn—"
Worker: "Stop. In talking of plausibility rather than probability, you are uncritically participating in the act. You are taking symbols at face value, unless there is clear disproof of them. So you will act like you believe any claim that's 'plausible', in other words one that can't be disproven from within the act. You have never, at any point, checked whether either charity is doing anything in the actual, material world."
Carl: "...I suppose so. What's your point, anyway?"
Worker: "You're shooting the messenger. All or nearly all of these charities are scams. Believe me, we've spent time visiting these other organizations, and they're universally fraudulent, they just have less self-awareness about it. You're only morally outraged at the ones that don't hide it. So your moral outrage optimizes against your own information. By being morally outraged at us, you are asking to be lied to."
Carl: "Way to blame the victim. You're the one lying."
Worker: "We're part of the same ecosystem. By rewarding a behavior, you cause more of it. By punishing it, you cause less of it. You reward lies that have plausible deniability and punish truth, when that truth is told by sinners. You're actively encouraging more of the thing that is destroying your own information!"
Carl: "It still seems pretty strange to think that they're all scams. Like, some of my classmates from college went into the charity sector. And giving cows to people who have food problems actually seems pretty reasonable."
Worker: "It's well known by development economists that aid generally creates dependence, that in giving cows to people we disrupt their local economy's cow market, reducing the incentive to raise cattle. And in theory it could still be worth it, but our preliminary calculations indicate that it probably isn't."
Carl: "Hold on. You actually ran the calculation, found that your intervention was net harmful, and then kept doing it?"
Worker: "Yes. Again, it is all—"
Carl: "What the fuck, seriously? You're a terrible person."
Worker: "Do you think any charity other than us would have run the calculation we did, and then actually believe the result? Or would they have fudged the numbers here and there, and when even a calculation with fudged numbers indicated that the intervention was ineffective, come up with a reason to discredit this calculation and replace it with a different one that got the result they wanted?"
Carl: "Maybe a few... but I see your point. But there's a big difference between acting immorally because you deceived yourself, and acting immorally with a clear picture of what you're doing."
Worker: "Yes, the second one is much less bad!"
Carl: "What?"
Worker: "All else being equal, it's better to have clearer beliefs than muddier ones, right?"
Carl: "Yes. But in this case, it's very clear that the person with the clear picture is acting immorally, while the self-deceiver, uhh.."
Worker: "...has plausible deniability. Their stories are plausible even though they are false, so they have more privilege within the act. They gain privilege by muddying the waters, or in other words, destroying information."
Carl: "Wait, are you saying self-deception is a choice?"
Worker: "Yes! It's called 'motivated cognition' for a reason. Your brain runs something like a utility-maximization algorithm to tell when and how you should deceive yourself. It's epistemically correct to take the intentional stance towards this process."
Carl: "But I don't have any control over this process!"
Worker: "Not consciously, no. But you can notice the situation you're in, think about what pressures there are on you to self-deceive, and think about modifying your situation to reduce these pressures. And you can do this to other people, too."
Carl: "Are you saying everyone is morally obligated to do this?"
Worker: "No, but it might be in your interest, since it increases your capabilities."
Carl: "Why don't you just run a more effective charity, and advertise on that? Then you can outcompete the other charities."
Worker: "That's not fashionable anymore. The 'effectiveness' branding has been tried before; donors are tired of it by now. Perhaps this is partially because there aren't functional systems that actually check which organizations are effective and which aren't, so scam charities branding themselves as effective end up outcompeting the actually effective ones. And there are organizations claiming to evaluate charities' effectiveness, but they've largely also become scams by now, for exactly the same reasons. The fashionable branding now is environmentalism."
Carl: "This is completely disgusting. Fashion doesn't help people. Your entire sector is morally depraved."
Worker: "You are entirely correct to be disgusted. This moral depravity is a result of dysfunctional institutions. You can see it outside charity too; schools are authoritarian prisons that don't even help students learn, courts put people in cages for not spending enough on a lawyer, the US military blows up civilians unnecessarily, and so on. But you already knew all that, and ranting about these things is itself a trope. It is difficult to talk about how broken the systems are without this talking itself being interpreted as merely a cynical act. That's how deep this goes. Please actually update on this rather than having your eyes glaze over!"
Carl: "How do you even deal with this?"
Worker: "It's already the reality you've lived in your whole life. The only adjustment is to realize it, and be able to talk about it, without this destroying your ability to participate in the act when it's necessary to do so. Maybe functional information-processing institutions will be built someday, but we are stuck with this situation for now, and we'll have no hope of building functional institutions if we don't understand our current situation."
Carl: "You are wasting so much potential! With your ability to see social reality, you could be doing all kinds of things! If everyone who were as insightful as you were as pathetically lazy as you, there would be no way out of this mess!"
Worker: "Yeah, you're right about that, and I might do something more ambitious someday, but I don't really want to right now. So here I am. Anyway... food for the Africans. Helps with local autonomy and environmental sustainability. Have a heart and help them out."
Carl sighed, fished a ten-dollar bill from his wallet, and gave it to the charity worker.
49 comments
Comments sorted by top scores.
comment by jessicata (jessica.liu.taylor) · 2019-12-12T07:20:11.865Z · LW(p) · GW(p)
[this is a review by the author]
I think what this post was doing was pretty important (colliding two quite different perspectives). In general there is a thing where there is a "clueless / naive" perspective and a "loser / sociopath / zero-sum / predatory" perspective that usually hides itself from the clueless perspective (with some assistance from the clueless perspective; consider the "see no evil, hear no evil, speak no evil" mindset, a strategy for staying naive). And there are lots of difficulties in trying to establish communication. And the dialogue grapples with some of these difficulties.
I think this post is quite complementary with other posts about "improv" social reality, especially The Intelligent Social Web [LW · GW] and Player vs. Character [LW · GW].
I think some people got the impression that I entirely agreed with the charity worker. And I do mostly agree with the charity worker. I don't think the charity worker said anything at the time of writing that I outright thought was false, although some claims I considered live hypotheses rather than "very probably true".
Having the thing in dialogue form probably helped me write it (because I wasn't committing to defensibly believing anything) and helped people listen to it (because it's obviously not "accusatory" and can be considered un-serious / metaphorical, so it doesn't directly trigger people's political / etc. defenses).
Some things that seem possibly false/importantly incomplete to me now:
- "Everyone cares about themselves and their friends more" assumes a greater degree of self-interest in social behavior than is actually the case; most behavior is non-agentic/non-self-interested, although it is doing a kind of constraint satisfaction that is, by necessity, solving local constraints more than non-local ones. (And social systems including ideology can affect the constraint-satisfaction process a bunch in ways that make it so local constraint-satisfaction tries to accord with nonlocal constraint-satisfaction)
- It seems like the "conformity results from fear of abandonment" hypothesis isn't really correct (and/or is quite euphemistic); I think there are also coalitional spite strategies that are relevant here, where the motive comes from (a) self-protection from spite strategies and (b) engaging in spite strategies oneself (which works from a selfish-gene perspective). Also, even without spite strategies, scapegoating is often violent (both historically and in modern times, see prison system, identity-based oppression, sadistic interpersonal behavior, etc.), and conservative strategies for resisting scapegoating can be quite fearful even when the actual risk is low. (This accords more with "the act is violence" from earlier in the dialogue, I think I probably felt some kind of tension between exaggerating/euphemizing the violence aspect, which shows up in the text; indeed, it's kind of a vulnerable position to be saying "I think almost everyone is committing spiteful violence against almost everyone else almost all the time" without having pretty good elaboration/evidence/etc)
- Charities aren't actually universally fraudulent, I don't think. It's a hyperbolic statement. (Quite a lot are, in the important sense of "fraud" that is about optimized deceptive behavior rather than specifically legal liability or conscious intent, especially when the service they provide is not visible/verifiable to donors; so this applies more to international than local charities)
- "It's because of dysfunctional institutions" is putting attention on some aspects of the problem but not other aspects. Institutions are made of people and relationships. But anyway "institutions" are a useful scapegoat in part because most people don't like them and are afraid of them, and they aren't exactly people. (Of course, a good solution to the overall problem will reform / replace / remove / etc institutions)
- It seems like the charity worker gets kind of embarrassed at the end and doesn't have good answers about why they aren't doing something greater, so changes the subject. Which is... kind of related to the lack of self-efficacy I was feeling at the time of writing. (In general, it's some combination of actually hard and emotionally difficult to figure out what ambitious things to do given an understanding like this one) Of course, being evasive when it's locally convenient is very much in character for the charity worker.
comment by Ben Pace (Benito) · 2018-11-17T21:06:21.027Z · LW(p) · GW(p)
Worker: "Yes! It's called 'motivated cognition' for a reason. Your brain runs something like a utility-maximization algorithm to tell when and how you should deceive yourself. It's epistemically correct to take the intentional stance towards this process."
Carl: "But I don't have any control over this process!"
Worker: "Not consciously, no. But you can notice the situation you're in, think about what pressures there are on you to self-deceive, and think about modifying your situation to reduce these pressures. And you can do this to other people, too."
Disagree that it’s either conscious or unconscious. I think there’s a scale, and clearly most things in the brain are unavailable to introspection, but there’s a large number of things in my awareness that are merely cognitively expensive to check in on (i.e., my working memory is pretty full, so the subprocess is on the edge of my ability to notice it).
Replies from: jessica.liu.taylor
↑ comment by jessicata (jessica.liu.taylor) · 2018-11-17T21:29:51.867Z · LW(p) · GW(p)
This all seems right. I have found introspection techniques such as meditation pretty useful for noticing how my mind works, though even with introspection the picture is really incomplete.
comment by Ben Pace (Benito) · 2018-12-13T20:10:03.338Z · LW(p) · GW(p)
I've curated this post. It makes a lot of really interesting points about internal honesty and societal incentives, and in a very vivid way. The dialogue really pulled on different parts of my psyche; I read it twice, a few weeks apart, and one time I thought the worker was wrong and the other time I thought the worker was right. I expect I'll be linking people to this dialogue many times in the future.
comment by Raemon · 2018-11-17T18:42:34.098Z · LW(p) · GW(p)
Exactly. Almost everyone is breaking the law. So it's really, really easy for the law to be enforced selectively, to punish just about anyone.
I've heard this point a bunch but I've never actually had it made clear to me: if I'm not doing illegal drugs, what laws am I probably breaking? (edit: that one could enforce against me)
Replies from: beriukay, clone of saturn, adamzerner
↑ comment by beriukay · 2018-11-17T19:49:41.973Z · LW(p) · GW(p)
Just a few possibilities (These are U.S. examples, because that's what I know):
- Throwing away mail from a previous tenant - Fines up to $250,000 and 5 years in prison. (https://www.law.cornell.edu/uscode/text/18/1702)
- Jaywalking - Municipal rules vary, but not hard to enforce if a cop sees you doing it.
- Did you ever possess a marker while under the age of 18? Hopefully you were not in Oklahoma (https://www.quora.com/Is-it-true-that-using-permanent-marker-under-18-years-is-illegal-in-the-USA)
- Driving too fast or slow - Rules seem to vary, as does enforcement, but both of these can get you pulled over.
- Illegal sex practices - Unless you are married and doing it missionary-style with intent to make babies, it is possible you are violating a sodomy law, or perhaps an obscenity statute. These are obviously difficult to enforce if you care about privacy.
- Until fairly recently, all public performances of the Happy Birthday song violated copyright.
- File sharing has gotten many people in trouble, over the years.
- Taking an Rx medication that was not specifically prescribed for you.
- Gambling of any kind has been banned, though there have been exceptions over the years.
- Public urination.
- Logging into a wifi network without explicit permission - Federal (https://www.law.cornell.edu/uscode/text/18/1030) and State laws (https://arstechnica.com/tech-policy/2007/05/michigan-man-arrested-for-using-cafes-free-wifi-from-his-car/) apply to this one.
- That Federal law, by the way, is vague enough to get you in trouble for making a facebook page placeholder for your boss (though there's more to it than just that, this is the law he ended up pleading guilty to violating. https://newsone.com/926425/prison-guard-charged-with-pretending-to-be-his-own-boss-on-facebook/)
Even if you have never personally violated any of those, "not doing anything wrong" is no defense against a motivated law enforcement official. The sheer volume of statutes, laws, and precedents basically puts all citizens in the position that they are probably violating SOME law all the time. There's a not-very-good book with the title Three Felonies A Day that tried to argue the title as the thesis, but really ended up as a case study for examples like that Shkreli guy. The only real defense seems to be don't stick out.
Replies from: DanielFilan
↑ comment by DanielFilan · 2018-11-17T22:04:09.140Z · LW(p) · GW(p)
Unless you are married and doing it missionary-style with intent to make babies, it is possible you are violating a sodomy law, or perhaps an obscenity statute.
In the USA, sodomy laws are unconstitutional.
↑ comment by clone of saturn · 2018-11-18T16:26:13.400Z · LW(p) · GW(p)
You could follow the A Crime A Day twitter account.
↑ comment by Adam Zerner (adamzerner) · 2018-12-18T23:17:52.522Z · LW(p) · GW(p)
There's a book called Three Felonies A Day that talks about it. (I haven't read the book, just have heard of it.)
comment by Zack_M_Davis · 2019-11-25T02:07:07.170Z · LW(p) · GW(p)
This part is very important (the recursive distortion [LW · GW] of a conscious, strategic lie is less bad than the alternative of trashing your ability to think in general [? · GW]):
Carl: "[...] But there's a big difference between acting immorally because you deceived yourself, and acting immorally with a clear picture of what you're doing."
Worker: "Yes, the second one is much less bad!"
Carl: "What?"
Worker: "All else being equal, it's better to have clearer beliefs than muddier ones, right?"
comment by jefftk (jkaufman) · 2018-12-19T21:50:46.374Z · LW(p) · GW(p)
Carl: "Why don't you just run a more effective charity, and advertise on that? Then you can outcompete the other charities."
Worker: "That's not fashionable anymore. The 'effectiveness' branding has been tried before; donors are tired of it by now. Perhaps this is partially because there aren't functional systems that actually check which organizations are effective and which aren't, so scam charities branding themselves as effective end up outcompeting the actually effective ones. And there are organizations claiming to evaluate charities' effectiveness, but they've largely also become scams by now, for exactly the same reasons. The fashionable branding now is environmentalism."
I'm confused about this part. Are you saying GiveWell is a scam?
Replies from: jessica.liu.taylor, adamzerner
↑ comment by jessicata (jessica.liu.taylor) · 2018-12-20T08:03:28.036Z · LW(p) · GW(p)
The main point I was making was the one adamzerner pointed out: since there are systemic issues that make things tend to become scams, and charity evaluators aren't in a better position with respect to this problem than charities themselves, one should expect charity evaluators to become scams as well.
Separately, I do believe GiveWell is a scam (as reasonable priors in this area would suggest), although I don't want this to be treated as a public accusation or anything; it's not like they're more of a scam than most other things in this general area.
This belief is partially due to private info I have (will PM some details) and partially due to models suggesting that (a) the kind of interventions GiveWell suggests are ineffective and likely counterproductive, and have reasons other than effectiveness to be pursued; (b) GiveWell does not explicitly realize and then tell people this, because of motivated confusions/self-deceptions.
For (a), I'm not going to give the full argument here, but this video is pretty good, especially the section on dependency. I believe (b) on priors (given (a)), and it is confirmed to some extent by my private info.
Replies from: jkaufman, Viliam
↑ comment by jefftk (jkaufman) · 2019-01-28T03:59:33.614Z · LW(p) · GW(p)
This belief is partially due to private info I have (will PM some details)
The first part of this private info turned out to be a rumor about the way an ex-employee was treated. I checked with the person in question, and they disconfirmed the rumor.
The remainder was recommendations to speak with specific people, which I may manage to do at some point, and links to public blog posts.
↑ comment by Viliam · 2018-12-22T23:35:49.387Z · LW(p) · GW(p)
Reading the comments below the linked video... there were responses written 2 years ago that the author didn't have time to reply to... How specifically does donating anti-malaria nets "keep populations dependent, economically weak, and slaves to the whims of international donors"?
I would understand the part about whims: yeah, tomorrow some influential organization might decide that the nets are bad and you should stop supporting them, and there would be no more nets. Still, would that outcome be worse compared to a parallel reality, where there never was a movement to support the anti-malaria nets? The people would still have gained a few years of health.
Are the donated nets ruining a previously existing huge local anti-malaria-net industry?
there are systemic issues that make things tend to become scams, and charity evaluators aren't in a better position with respect to this problem than charities themselves, one should expect charity evaluators to become scams as well.
This part feels true. It is similar to how, in medicine, people started reading meta-reviews, and soon the homeopaths, in addition to their studies, also started producing their own meta-reviews supporting their own conclusions... as soon as charity evaluation becomes a generally known thing, some of the current ineffective charities will produce their own charity evaluators, which will support whatever needs to be supported.
It's just, instead of "GiveWell will become a scam", to me the more likely scenario seems "in a few years there will be so many charity evaluator scams that when you google for 'charity evaluator', you won't find GiveWell on the first three pages of results".
Replies from: jessica.liu.taylor
↑ comment by jessicata (jessica.liu.taylor) · 2018-12-23T00:19:13.004Z · LW(p) · GW(p)
Looks like some of the comments were replied to, and most of the comments were either missing the point (e.g. thinking this is about overhead; the relevant section of the video was using overhead as an example of something that could explain efficiency differences) or pretty vague ("these arguments have been debunked by these people" but not being specific about where; I checked my copy of Doing Good Better and I couldn't find anything about dependency). Is there a specific objection you think was not addressed? Not sure if you have watched the full "Dependency" section of the video, from 35:56 to 42:28; this is the part of the video I would want to see a good response to.
One of the replies does address the issue of local net makers:
"If you want hard evidence, I see it every day, in people that refuse to improve their own station in life because for their whole life charities have come do it for them. I see it in the fact that you cannot buy quality mosquito nets here. If you actually care about your family's health you cannot do anything about it, because AMF and other such organizations have put the local net makers out of business. They increase apathy.
Additionally, the good they do is tentative at best, I know many families that only bring out their bed net when they know that the AMF inspector are coming around. Since they only come every six months, this is quite easy. Other families hang the nets in an area that they do not actually sleep (most families don't have beds here). AMF does not invest in training the actual community health workers that are there every day and could not only work to convince the families that this was something important, but could do data collection for the organization for many years to come. But if they did this, they would work themselves out of a job and actually have data that their nets break down after four or five years."
It might seem strange to take YouTube comments at face value, but my sense is that this guy has a lot of epistemic privilege by virtue of (a) working on international development in one of the locations that net distributions happen in and (b) being able to think well enough (and be honest enough) to make lots of pretty good philosophy videos. Additionally, everything he says seems quite likely according to Econ 101 models.
Replies from: Viliam, jkaufman
↑ comment by Viliam · 2018-12-23T15:39:39.403Z · LW(p) · GW(p)
It is a bit costly for me to review a 48-minute video without a summary, but here goes:
[8:00] If you're donating a dollar, according to Singer you would not want 99 cents to go to overhead costs and only one cent to go to the actual process of saving lives. Where should you donate your 5 dollars, if you could choose between a program that would take $4.95 to pay their employees and cover other costs, and only take the five cents to pay for bed nets; or one that spends the $4.95 on the bed nets, and only five cents on the overheads? You probably want to pick the latter, at least according to Singer. And this criterion is going to naturally privilege some causes over others, because we're focusing on that specific impact, that physical, tangible impact that you're having.
This seems like missing the point (surprisingly, right after having described how consequentialism means that only the consequences matter), or perhaps it is a motte-and-bailey statement.
The overhead is not the problem. I believe this is actually one of the central messages of effective altruism: what matters is what the money as a whole causes to happen. Overhead is a part of the equation. Yes, if we -- ceteris paribus -- just randomly increase the overhead for no good reason, it obviously makes us less effective. But if spending 20% on overhead instead of 10% would double the effect of the remaining 80% (for example by hiring more qualified people, or double-checking things to prevent theft), then increasing the overhead that way would be the right thing to do. I strongly believe Singer would approve this.
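A worked version of that trade-off, with numbers I am inventing purely for illustration: model impact per dollar as

$$\text{impact per dollar} = (1 - \text{overhead}) \times \text{effectiveness multiplier}$$

$$(1 - 0.10) \times 1.0 = 0.90 \qquad\qquad (1 - 0.20) \times 2.0 = 1.60$$

The charity with double the overhead delivers nearly twice the impact per dollar, so a donor who minimizes overhead picks the wrong one.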
So, the motte of the statement is that if we have two processes that convert money to anti-malaria nets in exactly the same ratio, except that one of them has a 5% administrative overhead and the other a 95% overhead, it is better to choose the former. The bailey of the statement would be concluding that...
Of the eight charities listed on the [GiveWell] web page, seven focused solely on directly providing health care: providing bed nets, providing cures for various diseases, and so on. These programs had the lowest costs and according to Singer the highest return in lives saved and pain averted.
No, it's not about "the least overhead is when we provide health care, therefore health care is what we should do". It is about the ratio between the donated dollars and generated outcome (where "many lives saved" is considered to be a quite impressive outcome). The overhead is a red herring.
[11:50] I will argue, in fact, that this does more harm than good. I will claim that charities which minimize overhead actually are less effective than those that use their funds to address other concerns.
Sigh. Go on, fight the straw-Singer!
Then the video explains how donating X to a country can ruin the local X-providing industry. Duh. That's why I'm asking: Where is the local African anti-malaria-net industry that is being so thoughtlessly ruined by GiveWell?
Because I understand how donating food ruins the local food producers, or how donating t-shirts ruins the local t-shirt producers, so I would also understand how donating anti-malaria nets would ruin the local anti-malaria-net producers... the question is, do these "local anti-malaria-net producers" exist in real life, or only as a part of the thought experiment? What fraction of African population is employed in this industry? (My uninformed prejudices say "probably several orders of magnitude less than in local food production", but I may be wrong. Please give me data, not thought experiments designed to prove your point.) Because I believe there is a number X such that "X people saved from malaria" outweighs "1 person losing a job", even in the extreme case where the person losing the job would literally starve to death. (By the way, what about those people who now don't die from malaria? What if they also take someone's job?)
[16:15] There's a mosquito-net maker in Africa. He manufactures around 500 nets a week, employs ten people who as with many African countries each have to support upwards of 15 relatives. As someone that lives in West Africa, I can corroborate that however hard they work they can't make enough nets to combat the malaria-carrying mosquitoes.
Okay, so here is the local industry. I feel uncertain about the "×15" multiplier for the supported relatives, as an argument used in weighing the benefit of "more people saved from malaria" vs "local net producers not going out of business". Some of those people dying from malaria also have relatives they support, don't they? On the other hand, some of them are the supported relatives. (If I overthink it, some of them might even be the supported relatives of the local net producers.)
Now the argument goes: GiveWell sends the nets, puts the local producers (and their ×15 families) into poverty, and "in a maximum of five years the majority of the imported Nets will be torn, damaged, and of no further use". Now we have 165 more people depending on foreign aid. (Hey, what about those whose lives were saved from malaria? Some of them will depend on foreign aid, too!) Also, no one will restart the local net industry, because now it is seen as a risky business.
The problem I have with this line of thought is that, hypothetically speaking, if I had a magic wand I could just wave and make malaria disappear forever... that would also be an evil thing to do. There would still be the 165 people depending on the foreign aid, right?
Then goes the general argument that by giving people specific aid, we are depriving them of the freedom to choose which aid they would prefer to get. In general, this is a good point, and there is a charity called Give Directly which addresses it... oh crap, donating cash to people makes them dependent, too! :( It seems like even keeping someone alive makes them dependent, because in the future, such person will require food, anti-malaria nets, etc.
Seems like the recommended solution is... to let Africa solve its own problems, without any kind of aid. Because that way, the solution will be sustainable. This will also prevent "brain drain", because the smartest people will be motivated to keep living in the shitty environment, if they believe they are the only ones who can save it. (Win/win! Now even the jobs of the Westerners are safe.) Then those smart people will invest their savings in their countries of origin, and everything will become exponentially better.
[43:45] Singer and GiveWell underestimate the amount of good that is done and pleasure created by bringing someone out of dependence.
Okay, here is a thought experiment: Your family is sick, you go to the doctor, and the doctor tells you: "I could give you a medicine, but imagine how much better it would feel to invent it for yourself! Yeah, it may cost a lot, and your family may die while you are researching, but I am giving you the long-term perspective here, for your own good."
What I am trying to say is that there is the trade-off between the pleasures of being independent and the pleasures of having your relatives alive. Speaking for myself... well, I actually don't think my country is economically independent, and I definitely wouldn't trade my kids' lives to make it so.
But perhaps the next time I will see a person suffering, I will remember that it is a superior option to just let them be, and not take away their motivation to become a well-paid software developer.
Additionally, everything he says seems quite likely according to Econ 101 models.
To me it seems like the Just-World Hypothesis. Specifically, the part about how even donating cash to someone makes their life ultimately worse feels like status-quo worship.
By this logic, there is no way to help another person, ever, without inflicting on them a horrible curse. You give your kids an Xmas present, and you just ruined their motivation to become financially independent. You help an old lady to cross the street, and you ruined her motivation to maintain her vision or to keep good relationships with her relatives. You invent the virus that kills mosquitoes worldwide, and you deprived the Africans of their motivation to study medicine. Any help is just a harm in disguise. (Or perhaps this only applies to helping Africans? Dunno.)
Replies from: jessica.liu.taylor, Benquo
↑ comment by jessicata (jessica.liu.taylor) · 2018-12-23T18:53:48.015Z · LW(p) · GW(p)
Sorry if you unnecessarily reviewed the whole video, I mostly wanted to point you at the 7-minute "Dependency" section.
The overhead is not the problem.
You're missing the point in the same way the comment I called out was. He is giving an example of 2 charities and giving overhead as a possible reason one would be more effective than the other. In fact, if bednets have a constant price, then a charity that spends 99% of its money on bednets is more effective than one that spends 1% (as long as the bednets get distributed equally well, etc.).
No, it’s not about “the least overhead is when we provide health care, therefore health care is what we should do”.
He said costs, not overhead.
Then the video explains how donating X to a country can ruin the local X-providing industry. Duh. That’s why I’m asking: Where is the local African anti-malaria-net industry that is being so thoughtlessly ruined by GiveWell?
I just quoted him talking about it, in the previous comment. Have you checked yourself whether there was an anti-malaria-net industry?
What fraction of African population is employed in this industry?
This is not the relevant number, the relevant number is more like "counterfactually on no aid, how many bednets would the local industry be producing."
Seems like the recommended solution is… to let Africa solve its own problems, without any kind of aid.
Where in the text do you find this? Quoting 45:30 in the video:
"You can measure the amount of choice that someone has in a process. You can work on projects that decrease dependency and build capacity. The problem is that this measurement and these projects come with overhead costs. They don't save the most lives for the dollar, but they do the work in sustainable ways. They inspire populations to try doing the work themselves. Even if that costs more now, it will be cheaper in the long run."
What I am trying to say is that there is the trade-off between the pleasures of being independent and the pleasures of having your relatives alive.
The independence point is about long-term economic growth (causing more pleasure down the line), not being happy that you're less dependent.
By this logic, there is no way to help another person, ever, without inflicting on them a horrible curse.
Again, not supported in the text. Why would this person work in international development if he thought this?
In general this comment fails to engage with the strongest part of the video, which is the 7-minute "Dependency" section I mentioned, in which he says the problems caused by aid are extremely bad in some of the countries that are targets of aid (like, they essentially destroy people's motivation to solve their community's problems).
Replies from: Viliam
↑ comment by Viliam · 2018-12-23T23:08:42.089Z · LW(p) · GW(p)
[16:15] There's a mosquito-net maker in Africa. He manufactures around 500 nets a week, employs ten people who as with many African countries each have to support upwards of 15 relatives. As someone that lives in West Africa, I can corroborate that however hard they work they can't make enough nets to combat the malaria-carrying mosquitoes.
This is specifically the part that my understanding of Econ 101 fails to process.
There is this one guy and his 10 employees, and they can't make enough nets for the whole Africa. Okay, that part is simple to imagine. But why doesn't he employ more people and increase the production? Or why someone else doesn't copy his business model?
If my memory serves me well, Econ 101 assumes that if there is an effective demand, sooner or later there will also be the supply to match it.
Should I assume that there is no effective demand to buy more nets? Like, perhaps people are not aware of the dangers of malaria-carrying mosquitoes, or they don't believe the nets are helpful, or they simply do not have enough money to match the production costs of the nets: either because the nets are too expensive, or because they have to prioritize other necessities. But then, the problem is not that they "can't make enough nets", but rather that they "can't sell enough nets".
Another thing about Econ 101 is the principle of comparative advantage. According to this principle, trying to produce everything at home is worse than trading internationally. (Otherwise an embargo would be a blessing instead of a punishment.) But international trade will inevitably put some of your local producers out of business. Is it possible that Africa has a comparative disadvantage at producing anti-malaria nets?
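A toy version of the principle, with numbers invented just for illustration: suppose a local workshop can produce 1 net or 5 bags of grain per worker-day, while a foreign factory can produce 100 nets or 50 bags per worker-day. Then

$$\text{local opportunity cost of one net} = 5 \text{ bags of grain}, \qquad \text{foreign opportunity cost} = 0.5 \text{ bags}$$

Even though the foreigners are better at producing both goods, both sides gain if the locals specialize in grain and import nets at any price between 0.5 and 5 bags per net; the local net industry shrinking is exactly what the theory predicts.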
It seems to me that the author is pattern-matching food aid to nets aid. But that is a different situation. In a situation without foreign aid, you cannot have a long-term imbalance between local food production and local food needs... the starving people will die. But you can have a long-term imbalance between local anti-malaria net production and local anti-malaria net needs... people unprotected against malaria only get it with some probability, not certainty; some who get malaria will die, but some will survive. In other words, the resulting balance can't include people who don't eat, but it can include people who are unprotected against malaria, some of whom survive until they can reproduce. Foreign food aid tries to solve a temporary imbalance, and in the process perhaps introduces more harm than good. But foreign anti-malaria-net aid tries to change the long-term balance.
the problems caused by aid are extremely bad in some of the countries that are targets of aid (like, they essentially destroy people's motivation to solve their community's problems).
I understand how an intervention that puts half of your population out of business can have this effect. I find it less likely that an intervention that puts one person in a million out of business would have the same effect. That is why I asked how many people are employed in the anti-malaria-net industry, compared with agriculture.
This seems to me like a mistaken pattern-matching. Pretty much anything can make someone lose their job. But there is a difference between "save thousand people, destroy thousand jobs" (food aid) and "save thousand people, destroy one job" (anti-malaria nets).
Replies from: jessica.liu.taylor
↑ comment by jessicata (jessica.liu.taylor) · 2018-12-23T23:22:32.699Z · LW(p) · GW(p)
Re the section you quoted: if you watched for 1 minute longer, you would see that the issue is that the local net manufacturers can't scale up because they would have to compete with free nets, so the local infrastructure atrophies. (Absent aid, they could scale up; it would just take longer, and might require additional capital or training.)
The issue isn't number of jobs. The issue is (a) infrastructure to solve your own problems and (b) motivation to solve your own problems rather than waiting to have someone else solve them for you. This is all covered in the "Dependency" section of the video, which I am still not convinced you have watched.
↑ comment by Benquo · 2018-12-23T17:08:31.898Z · LW(p) · GW(p)
EAs say they don't care about overhead per se and just care about outcomes, but public-facing recommenders like GiveWell and CEA keep recommending charities with low overhead relative to programmatic spending, such as AMF and GiveDirectly, rather than charities that do the sort of hard-to-account-for institution-building which low-overhead charities depend on.
EAs are good at explaining why you shouldn't do what they're (we're?) doing. That's different than actually doing the right thing.
Replies from: Viliam
↑ comment by Viliam · 2018-12-23T22:03:28.389Z · LW(p) · GW(p)
EAs are good at explaining why you shouldn't do what they're (we're?) doing. That's different than actually doing the right thing.
I agree (with the second part, at least).
The author of the video said "according to Singer" repeatedly, so I assumed he also disagrees with what EAs are saying. If the real objection was "Singer says to do the right thing, but then actually does exactly the thing he said was wrong", I didn't get that message from the video. (Maybe the problem is on my side.)
Replies from: Benquo
↑ comment by jefftk (jkaufman) · 2019-01-24T18:59:21.613Z · LW(p) · GW(p)
they would work themselves out of a job and actually have data that their nets break down after four or five years
This is minor, but GiveWell already says "Our best guess is that [nets] last, on average, between 2 and 2.5 years." (https://www.givewell.org/charities/amf)
↑ comment by Adam Zerner (adamzerner) · 2018-12-20T05:25:24.231Z · LW(p) · GW(p)
I assume that this is making the point that in general, with these sorts of systems, whatever the GiveWell equivalent is tends to be a scam, because they don't have the right incentives. What if GiveWell made stuff up? What if they accepted bribes? Why is it in their interest to actually be accurate? Imagine an equivalent in the field of medicine evaluating doctors and hospitals. Or in the field of education evaluating schools. I know that I wouldn't expect MedicateWell and EducateWell to actually be effective.
My guess is that the author thinks that GiveWell in particular happens to not be a scam, but that this fact is an anomaly. I'd place my confidence in this at perhaps 80% and I'm interested in hearing a response from the author too.
comment by Adam Zerner (adamzerner) · 2018-12-18T23:23:41.560Z · LW(p) · GW(p)
Eliezer's Inadequate Equilibria seems like a good thing for further reading. Specifically about systems being broken in such a way that, e.g., a charity that just stopped doing all of these things would fail. They're not choosing between doing it the right way and doing it the wrong way; they're choosing between doing it the wrong way and doing it at all.
comment by romeostevensit · 2019-11-22T05:04:00.143Z · LW(p) · GW(p)
Very nice to have a pithy description of a particular very important/common failure mode of humans trying to accomplish good together.
comment by Adam Zerner (adamzerner) · 2018-12-20T05:33:59.837Z · LW(p) · GW(p)
I really enjoyed this post, thanks for writing it. Fwiw, I sat for two or three hours thinking about it and trying to come up with something useful to say, but I just couldn't. I hope humanity figures this one out eventually.
comment by Wei Dai (Wei_Dai) · 2018-11-17T17:00:41.689Z · LW(p) · GW(p)
I enjoyed the story and I agree with most of the points you're making, but I'm not sure calling the situation "morally depraved" is a good strategy. (I'd probably call it "suboptimal" or "dire" depending how strongly I wanted to express my feelings.) I think I'm more skeptical than you are that it's possible to do much better (i.e., build functional information-processing institutions) before the world changes a lot for other reasons (e.g., superintelligent AIs are invented), so I'd optimize more for not making enemies or alienating people than for making people realize how bad the situation is or joining your cause.
On a practical level, do you think there are any charities existing today that are doing substantially better than others with regard to these problems?
Replies from: jessica.liu.taylor, Benquo, gallabytes, gallabytes
↑ comment by jessicata (jessica.liu.taylor) · 2018-11-17T20:38:51.677Z · LW(p) · GW(p)
Agreed that calling things morally depraved is a limited strategy and has downsides. Part of the dialogue was meant to explore the limitations of moral outrage. Perhaps a better reframing is to say that much of the economy is trapped by one or more basilisks (in the sense of Roko's basilisk); the blame goes on the basilisk rather than on the people trapped in it.
I think we do disagree on how important it is to make people realize how bad the situation is. In particular, I am not sure how you expect x-risk research/strategy to accomplish its objectives without having at least one functional information-processing institution! If solving AI safety (or somehow preventing AI risk without solving AI safety directly) requires solving hard philosophical problems, comparably hard to or harder than those that have been solved in the past, then that requires people to optimize for something other than optics. Moreover, it almost certainly requires a network of communicating people, such that the network is processing information about hard philosophical problems.
I do think there are institutions that process information well enough to do pretty-hard technical things (such as SpaceX), and some relevant questions are (a) what are they doing right, and (b) how can they be improved on.
It could be that AGI will not be developed for a very long time, due to how dysfunctional current institutions are. That actually seems somewhat likely [EDIT: to be more precise, dysfunctional institutions creating AGI is more of an out-of-model tail risk than a mainline event, I understated this for some reason]. But, that would imply that there's a lot of time to build functional institutions.
Regarding charities: I think people should often be willing to fund things that they can actually check on and see the results of. For example, if you can evaluate open source software, then that allows you to selectively donate to projects that produce good software. You can also use recommendations by people you trust, who themselves evaluate things. (Unfortunately, if you're one of the only people actually checking, then there might not be strong UDT reasons to donate at all)
However, if something is hard to evaluate, then going based only on what you see results in optimization for optics. The "generalized MIRI donor strategy" would be: when you notice a problem that you are ill-equipped to solve yourself, and other people have produced some output you can evaluate (so you believe they are competent), and it seems like they are well-motivated to solve the problem (or otherwise do useful things), increase these people's slack so they can optimize for something other than optics. I think this is one of the better strategies for donors. It could suggest giving either to a charity itself or the individuals in it, depending on which increases slack the most. (On the object level, I think MIRI has produced useful output, and that increasing their employees' slack is a good thing, though obviously I haven't evaluated other charities nearly as much as I have evaluated MIRI, so combine your information state with mine, and also only trust me as much as makes sense given what you know about me).
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2018-11-18T04:02:39.509Z · LW(p) · GW(p)
In particular, I am not sure how you expect x-risk research/strategy to accomplish its objectives without having at least one functional information-processing institution! If solving AI safety (or somehow preventing AI risk without solving AI safety directly) requires solving hard philosophical problems, comparably hard to or harder than those that have been solved in the past, then that requires people to optimize for something other than optics.
I guess it's a disjunction of various low-probability scenarios like:
- Metaphilosophy turns out to be easy.
- Another AI alignment approach turns out to be easy (i.e., does not require solving hard philosophical problems)
- Existing institutions turn out to be more effective than they appear.
- Some kind of historical fluke or close call with x-risk causes world leaders to get together and cooperate to stop AI development.
In my mind "building much better information-processing institutions turns out to be easier than I expect" is comparable in likelihood to these scenarios so I don't feel like pushing really hard on that at a high cost to myself personally or to my contributions to these other scenarios. But if others want to do it I guess I'm willing to help if I can do so at relatively low cost.
I do think there are institutions that process information well enough to do pretty-hard technical things (such as SpaceX), and some relevant questions are (a) what are they doing right, and (b) how can they be improved on.
It's not clear to me that whatever institutional innovations SpaceX came up with (if any, i.e., if their success isn't due to other factors) to be more successful than their competitors on solving technical problems would transfer to the kinds of problems you described for the charity sector. Until I see some analysis showing that, it seems a priori unlikely to me.
It could be that AGI will not be developed for a very long time, due to how dysfunctional current institutions are. That actually seems somewhat likely. But, that would imply that there’s a lot of time to build functional institutions.
Don't you think it takes institutions that are more functional to build aligned AGI than to build unaligned AGI? I have a lot of uncertainty about all this but I expect that we're currently near or over the threshold for unaligned AGI but well under the threshold for aligned AGI. (It seems totally plausible that by pushing for more functional institutions you end up pushing them over the threshold for unaligned AGI but still under the threshold for aligned AGI, but that might be a risk worth taking.)
Regarding charities: I think people should often be willing to fund things that they can actually check on and see the results of.
Almost nobody can do this for AI alignment, and it's really costly even for the people that can. For example, I personally don't understand, or can't motivate myself to learn, a lot of things that get posted to AF. Seriously, what percentage of current donors for AI alignment can do this?
> You can also use recommendations by people you trust, who themselves evaluate things.
How do you suggest people find trustworthy advisors to help them evaluate things, if they themselves are not experts in the field? Wouldn't they have to rely on the "optics" of the advisors? Is this likely to be substantially better than relying on the optics of charities?
> (Unfortunately, if you’re one of the only people actually checking, then there might not be strong UDT reasons to donate at all)
(I'm not convinced that UDT applies to humans, so if I were donating, it would be to satisfy some faction of my moral parliament.)
> The “generalized MIRI donor strategy” would be to
Why is this called "generalized MIRI donor strategy"?
> other people have produced some output you can evaluate (so you believe they are competent), and it seems like they are well-motivated to solve the problem (or otherwise do useful things)
This seems really hard for almost anyone to evaluate, both the technical side (see above), and the motivational side. And wouldn't this cause a lot of people to optimize for producing output that typical donors could evaluate (rather than what's actually important), and for producing potentially wasteful signals of motivation?
> increase these people’s slack so they can optimize for something other than optics
Can you be more explicit about what you're proposing? What exactly should one do to give these people more slack? Give unconditional cash gifts? I imagine that cash gifts wouldn't be all that helpful unless they're over the threshold where recipients can just quit their jobs and work on whatever they want; otherwise they'd still have to try to keep their jobs, which means working on whatever their employers tell them to, which means optimizing for optics (since that's what their employers need them to do to keep the donations coming in).
> On the object level, I think MIRI has produced useful output, and that increasing their employees’ slack is a good thing, though obviously I haven’t evaluated other charities nearly as much as I have evaluated MIRI, so combine your information state with mine, and also only trust me as much as makes sense given what you know about me
If I followed this strategy, how would I know that I was funding (or doing the best that I can to fund) the most cost-effective work (i.e., that there isn't more cost-effective work that I'm not funding because I can't evaluate that work myself and I don't know anyone I trust who can evaluate that work)? How would I know that I wasn't creating incentives that are even worse than what's typical today (namely, to produce what people like me can evaluate and to produce signals of being well-motivated)? How do I figure out who I can trust to help me evaluate individuals and charities? Do you think there's a level of evaluative ability in oneself and one's trusted advisors below which it would be better to outsource the evaluation to something like BERI, FLI, or OPP instead? If so, how do I figure out whether I'm above or below that threshold?
Instead of replying to me point by point, feel free to write up your thoughts more systematically in another post.
Replies from: jessica.liu.taylor↑ comment by jessicata (jessica.liu.taylor) · 2018-11-18T05:55:03.857Z · LW(p) · GW(p)
> In my mind “building much better information-processing institutions turns out to be easier than I expect” is comparable in likelihood to these scenarios
My sense is that building functional institutions is something that has been done in history multiple times, and that the other things you list are extreme tail scenarios. For some context, I recommend reading Samo's blog; I think my views on this are pretty similar to his.
> It’s not clear to me that whatever institutional innovations SpaceX came up with (if any, i.e., if its success isn’t due to other factors) in order to outperform its competitors on solving technical problems would transfer to the kinds of problems you described for the charity sector.
Perhaps the answer is that some problems (including AI alignment) are not best solved by charities. It seems like something that works at a well-funded company (that has long time horizons, like SpaceX) would probably also work for a well-funded nonprofit (since at this point the difference is mostly nominal), though probably it's easier to get a company this well-funded than a nonprofit.
> Don’t you think it takes institutions that are more functional to build aligned AGI than to build unaligned AGI?
Most likely, yes. The question of whether we're above or below the threshold for unaligned AGI seems like a judgment call. Based on my models (such as this one), the chance of AGI "by default" in the next 50 years is less than 15%, since the current rate of progress is not higher than the average rate since 1945, and if anything is lower (the insights model linked has a bias towards listing recent insights).
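To sketch the shape of this kind of estimate (a toy formalization, not the machinery of the linked model): suppose AGI requires some number $K$ of further key insights, and insights arrive roughly as a Poisson process at the historical rate $r$. Then

$$P(\text{AGI within } T \text{ years}) = P(N_T \ge K) = \sum_{n=K}^{\infty} e^{-rT}\,\frac{(rT)^n}{n!},$$

which is small whenever the remaining insights $K$ noticeably exceed the expected arrivals $rT$; a flat or declining observed rate $r$, together with a large $K$, is what pushes the estimate low.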
> Almost nobody can do this for AI alignment, and it’s really costly even for the people that can.
Agreed, maybe people who can't do this should just give discretion over their AI donations to their more-technical friends, or just not donate in the AI space in the first place.
> Why is this called “generalized MIRI donor strategy”?
I'm calling it this because it seems like the right generalization of the best reason to donate to MIRI.
> Can you be more explicit about what you’re proposing? What exactly should one do to give these people more slack? Give unconditional cash gifts? I imagine that cash gifts wouldn’t be all that helpful unless they’re over the threshold where recipients can just quit their jobs and work on whatever they want; otherwise they’d still have to try to keep their jobs, which means working on whatever their employers tell them to, which means optimizing for optics
I am thinking of either unconditional cash gifts or donations to their employer. Some charities will spend less on short-term signalling when they have more savings. You could talk to the people involved to get a sense of what their situation is, and what would be most helpful to them. If there's a non-concavity here (where 10x the money is over 10x as good as 1x the money), I jokingly suggest gambling, but I am actually suspicious of this particular non-concavity, since having incrementally more savings means you can last incrementally longer after quitting your job.
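To spell out the gambling quip (the numbers just restate the 10x condition above): with a gift budget $m$ and the recipient's utility-of-money $u$ normalized so that $u(0) = 0$, a 1-in-10 chance of giving $10m$ beats giving $m$ for sure exactly when

$$\tfrac{1}{10}\,u(10m) > u(m) \iff u(10m) > 10\,u(m),$$

i.e., exactly when the non-concavity holds. If savings buy runway roughly linearly, $u$ is concave over the relevant range and the sure gift wins.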
> If I followed this strategy, how would I know that I was funding (or doing the best that I can to fund) the most cost-effective work (i.e., that there isn’t more cost-effective work that I’m not funding because I can’t evaluate that work myself and I don’t know anyone I trust who can evaluate that work)?
You don't, so you might have to lower your expectations. Given the situation, all the options are pretty suboptimal.
> How would I know that I wasn’t creating incentives that are even worse than what’s typical today (namely, to produce what people like me can evaluate and to produce signals of being well-motivated)?
You don't, but you can look at what your models about incentives say constitutes better/worse incentives. This is an educated guess, and being risk averse about it doesn't make any sense in the x-risk space (since the idea of being risk averse about risk reduction is incoherent).
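To unpack that parenthetical (one way to make it precise): if what you ultimately care about is the probability $p$ of a good long-term outcome, then for any gamble over risk reductions

$$\mathbb{E}[U] = U_{\text{bad}} + \mathbb{E}[p]\,\bigl(U_{\text{good}} - U_{\text{bad}}\bigr),$$

so expected utility depends only on $\mathbb{E}[p]$: a 10% chance of cutting risk by 10 percentage points is worth exactly as much as a certain 1-point cut, and adding risk aversion over $\Delta p$ on top of this double-counts the risk.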
> How do I figure out who I can trust to help me evaluate individuals and charities?
(This also answers some of the other points about figuring out who to trust.)
Talk to people, see who tends to leave you with better rather than worse ideas. See who has a coherent worldview that makes correct predictions. See who seems curious. Make your own models and use them as consistency checks. Work on projects with people and see how they go. Have friend networks that you can ask about things. This is a pretty big topic.
Some of these things would fall under optics, insofar as they are incomplete evaluations that can be gamed. So you have to use multiple sources of information in addition to priors, and even then you will make mistakes. This is not avoidable, as far as I can tell. There's something of a Red Queen race, where signals of good thinking eventually get imitated, so you can't just use a fixed classifier.
> Do you think there’s a level of evaluative ability in oneself and one’s trusted advisors below which it would be better to outsource the evaluation to something like BERI, FLI, or OPP instead?
Perhaps random people on the street would be better off deferring to one of these organizations, though it's not clear why they would defer to one of these in particular (as opposed to, say, the Gates Foundation or their university). I am not actually sure how you end up with the list BERI/FLI/OPP; the main commonality is that they give out grants to other charities, but that doesn't necessarily put them in a better epistemic position than, say, MIRI or FHI, or individual researchers (consider that, when you give grants, there is pressure to deceive you in particular). In general the question of who to outsource the evaluation to is not substantially easier than the question of which people you should give money to, and could be considered a special case (where the evaluator itself is considered as a charity).
Replies from: gallabytes, Wei_Dai↑ comment by gallabytes · 2018-11-18T07:46:33.130Z · LW(p) · GW(p)
> Based on my models (such as this one), the chance of AGI "by default" in the next 50 years is less than 15%, since the current rate of progress is not higher than the average rate since 1945, and if anything is lower (the insights model linked has a bias towards listing recent insights).
Both this comment and my other comment are way understating our actual skepticism about AGI. After talking to Jessica about it offline to clarify our real beliefs, rather than just playing games with plausible deniability: my actual probability of AGI in the next 50 years is between 0.5% and 1%. Jessica can confirm that hers is pretty similar, but probably weighted towards 1%.
↑ comment by Wei Dai (Wei_Dai) · 2018-11-18T10:10:10.126Z · LW(p) · GW(p)
> My sense is that building functional institutions is something that has been done in history multiple times, and that the other things you list are extreme tail scenarios.
People are trying to build more functional institutions, or at least unintentionally experimenting with different institutional designs, all the time, so the fact that building functional institutions is something that has been done in history multiple times doesn't imply that any particular attempt to build a particular kind of functional institution has a high or moderate chance of success.
> For some context, I recommend reading Samo’s blog
Can you recommend some specific articles?
> It seems like something that works at a well-funded company (that has long time horizons, like SpaceX) would probably also work for a well-funded nonprofit
Are there any articles about what is different about SpaceX as far as institutional design? I think I tried to find some earlier but couldn't. In my current state of knowledge I don't see much reason to think that SpaceX's success is best explained by it having made a lot of advances in institutional design. If it actually did create a lot of advances in institutional design, and they can be applied to a nonprofit working on AI alignment, wouldn't it be high priority to write down what those advances are and how they can be applied to the nonprofit, so other people can critique those ideas?
Let's reorganize the rest of our discussion by splitting your charity suggestion into two parts which we can consider separately:
1. Use your own or your friends' evaluations of technical work instead of outsourcing evaluations to bigger / more distant / more formal organizations.
2. Give awards/tenure instead of grants.
(Is this a fair summary/decomposition of your suggestion?) I think, aside from my previous objections (which I'm still not sure about), my true rejection of 1 combined with 2 is perhaps that I don't want to risk feeling duped and/or embarrassed if I help fund someone's tenure and it turns out that I or my friends misjudged their technical abilities or motivation (i.e., they end up producing low-quality work or suffer a large decrease in productivity). Yeah, from an altruistic perspective I should be risk-neutral, but I don't think I can override the social/face-saving part of my brain on this. To avoid this I could fund an organization instead of an individual, but in that case it's likely that most of the money would go towards expansion rather than increasing slack. I guess another problem is that if I start thinking about what AI alignment work to fund, I get depressed thinking about the low chance of success of any particular approach, and it seems a lot easier to just donate to an evaluator charity and make it their problem.
It seems to me that 2 by itself is perhaps more promising and worth a try. In other words, we could maybe convince a charity to reallocate some money from grants to awards/tenure and check if that improves outcomes, or create a new charity for this purpose. Is that something you'd support?
Replies from: jessica.liu.taylor, rk↑ comment by jessicata (jessica.liu.taylor) · 2018-11-18T22:04:55.992Z · LW(p) · GW(p)
> People are trying to build more functional institutions, or at least unintentionally experimenting with different institutional designs, all the time, so the fact that building functional institutions is something that has been done in history multiple times doesn’t imply that any particular attempt to build a particular kind of functional institution has a high or moderate chance of success.
Agree, but this is a general reason why doing things is hard. Lots of people are working on philosophy too. I think my chance of success is way higher than the base rate, to the point where anchoring on the base rate does not make sense, but I can understand people not believing me on this. (Arguments of this form might fall under modest epistemology.)
> Can you recommend some specific articles?
- On the Loss and Preservation of Knowledge
- Live versus Dead Players
- Functional Institutions are the Exception
- Great Founder Theory
> In my current state of knowledge I don’t see much reason to think that SpaceX’s success is best explained by it having made a lot of advances in institutional design.
If one organization is much more effective than comparable ones, and it wasn't totally by accident, then there are causal reasons for the difference in effectiveness. Even if it isn't the formal structure of the organization, it could be properties of the people who seeded the organization, properties of the organization's mission, etc. I am taking a somewhat broad view of "institutional design" that would include these things too. I am not actually saying anything less trivial than "some orgs work much better than other orgs and understanding why is important so this can be replicated and built upon".
> If it actually did create a lot of advances in institutional design, and they can be applied to a nonprofit working on AI alignment, wouldn’t it be high priority to write down what those advances are and how they can be applied to the nonprofit, so other people can critique those ideas?
Yes, the general project of examining and documenting working institutions is high-priority. There will probably be multiple difficulties in directly applying SpaceX's model (e.g. the fact that SpaceX is more engineering than research), so documenting multiple institutions would help. I am otherwise occupied right now, though.
> (Is this a fair summary/decomposition of your suggestion?)
Yes, this would be my suggestion for donors.
I appreciate the introspection you've done on this; this is useful information. I think there's a general issue where the AI alignment problem is really hard, so most people try to push the responsibility somewhere else, and those that do take responsibility usually end up feeling like they are highly constrained into doing the "responsible" thing (e.g. using defensible bureaucratic systems rather than their intuition), which is often at odds with the straightforward just-solve-it mental motion (which is used in, for example, playing video games), or curiosity in general (e.g. mathematical or scientific). I've experienced this personally. I recommend Ben's post on this dynamic. This is part of why I exited the charity sector and don't identify as an EA anymore. I don't know how to fix this other than by taking the whole idea of non-local responsibility (i.e. beyond things like "I am responsible for driving safely and paying my bills on time") less seriously, so I kind of do that.
> In other words, we could maybe convince a charity to reallocate some money from grants to awards/tenure and check if that improves outcomes, or create a new charity for this purpose. Is that something you’d support?
Yes, both awards and tenure seem like improvements, and in any case well worth experimenting with.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2018-11-19T03:22:52.368Z · LW(p) · GW(p)
- SpaceX might be more competitive than its competitors not because it's particularly functional (i.e., compared to typical firms in other fields) but because its competitors are particularly non-functional.
- If it is particularly effective but not due to the formal structure of the organization, then those informal things are likely to be hard to copy to another organization.
- SpaceX can be very successful by having just marginally better institutional design than its competitors because they're all trying to do the same things. A successful AI alignment organization, however, would have to be much more effective than organizations that are trying to build unaligned AI (or nominally trying to build aligned AI but cutting lots of corners to win the race, or following some AI alignment approach that seems easy but is actually fatally flawed).
> those that do take responsibility usually end up feeling like they are highly constrained into doing the “responsible” thing (e.g. using defensible bureaucratic systems rather than their intuition), which is often at odds with the straightforward just-solve-it mental motion (which is used in, for example, playing video games), or curiosity in general (e.g. mathematical or scientific)
I don't think that playing video games, math, and science are good models here because those all involve relatively fast feedback cycles which make it easy to build up good intuitions. It seems reasonable to not trust one's intuitions in AI alignment, and the desire to appear defensible also seems understandable and hard to eliminate, but perhaps we can come up with better "defensible bureaucratic systems" than what exists today, i.e., systems that can still appear defensible but make better decisions than they currently do. I wonder if this problem has been addressed by anyone.
> Yes, both awards and tenure seem like improvements, and in any case well worth experimenting with.
Ok, I made the suggestion to BERI since it seems like they might be open to this kind of thing.
ETA: Another consideration against individuals directly funding other individuals is that it wouldn't be tax-deductible. This could reduce the funding by up to 40%. If the funding is done through a tax-exempt non-profit, then the IRS probably has some requirements about having formal procedures for deciding who/what to fund.
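To spell out the arithmetic behind "up to 40%" (using an illustrative combined marginal rate; actual rates vary by donor):

$$\text{delivered per pre-tax dollar} = 1 - t = 1 - 0.40 = 0.60,$$

so at a 40% marginal rate a non-deductible gift delivers only \$0.60 of each pre-tax dollar, versus the full \$1.00 if it is deductible; equivalently, deductibility multiplies delivered funds by $1/(1-t) \approx 1.67$.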
Replies from: habryka4↑ comment by habryka (habryka4) · 2018-12-07T20:12:59.814Z · LW(p) · GW(p)
(Just got around to reading this. As a point of reference, it seems that at least Open Phil has decided that tax-deductibility is not more important than being able to give to things freely, which is why the Open Philanthropy Project is an LLC. I think this is at least slight evidence towards that tradeoff being worth it.)
Replies from: CarlShulman, Wei_Dai↑ comment by CarlShulman · 2018-12-18T20:14:49.885Z · LW(p) · GW(p)
There's an enormous difference between having millions of dollars of operating expenditures in an LLC (so that an org is legally allowed to do things like investigate non-deductible activities like investment or politics), and giving up the ability to make billions of dollars of tax-deductible donations. Open Philanthropy being an LLC (so that its own expenses aren't tax-deductible, but it has LLC freedom) doesn't stop Good Ventures from making all relevant donations tax-deductible, and indeed the overwhelming majority of grants on its grants page are deductible.
Replies from: habryka4↑ comment by habryka (habryka4) · 2018-12-19T11:34:05.408Z · LW(p) · GW(p)
Yep, sorry. I didn't mean to imply that all of Open Phil's funding is non-deductible, just that they decided it was likely enough that they would find non-deductible opportunities that they went to the effort of restructuring their org to pursue them (and also gave up a bunch of other benefits, like the ability to sponsor visas efficiently). My comment wasn't very clear on that.
↑ comment by Wei Dai (Wei_Dai) · 2018-12-08T02:16:26.838Z · LW(p) · GW(p)
Here's Open Phil's blog post on why they decided to operate as an LLC. After reading it, I think their reasons are not very relevant to funding AI alignment research. (Mainly they want the freedom to recommend donations to non-501(c)(3) organizations like political groups.)
↑ comment by rk · 2018-11-18T16:12:30.748Z · LW(p) · GW(p)
I am also pretty interested in 2 (ex-post giving). In 2015, there was impactpurchase.org. I got in contact with them about it, and the major updates Paul reported were a) being willing to buy partial contributions (not just from people who were claiming full responsibility for things) and b) being more focused on what's being funded (for example, only asking people to submit claims on blog posts and articles).
I realise that something like impactpurchase is possibly framed in terms of a slightly divergent reason for 2 (it seems more focused on changing the incentive landscape, whereas the posts above include thinking about whether giving slack to people with track records will lead those people to be counterfactually more effective in future).
↑ comment by Benquo · 2018-11-18T09:32:53.031Z · LW(p) · GW(p)
> but I'm not sure calling the situation "morally depraved" is a good strategy
“Total depravity” was a central tenet of a pretty successful coordination network known for maintaining a higher than usual level of personal integrity. This is not just a weird coincidence. One has to be able to describe what's going on in order to coordinate to do better. Any description of this thing is going to register as an act as well as a factual description, but that's not a strong reason to avoid a clear noneuphemistic handle for this sort of thing.
↑ comment by gallabytes · 2018-11-18T01:44:33.973Z · LW(p) · GW(p)
> I think I'm more skeptical than you are that it's possible to do much better (i.e., build functional information-processing institutions) before the world changes a lot for other reasons (e.g., superintelligent AIs are invented)
Where do you think the superintelligent AIs will come from? AFAICT it doesn't make sense to put more than 20% on AGI before massive international institutional collapse, even being fairly charitable to both AGI projects and the prospective longevity of current institutions.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2018-12-13T19:45:04.927Z · LW(p) · GW(p)
Huh, I notice I've not explicitly estimated my timeline distribution for massive international institutional collapse, and that I want to do that. Do you have any links to places where others/you have thought about it?
↑ comment by gallabytes · 2018-11-18T08:04:37.983Z · LW(p) · GW(p)
> I'd optimize more for not making enemies or alienating people than for making people realize how bad the situation is or joining your cause.
Why isn't this a fully general argument for never rocking the boat?
comment by scarcegreengrass · 2018-12-13T23:21:20.998Z · LW(p) · GW(p)
I found this uncomfortable and unpleasant to read, but I'm nevertheless glad I read it. Thanks for posting.
comment by mako yass (MakoYass) · 2018-11-18T04:41:34.645Z · LW(p) · GW(p)
> self-deception is a choice
I get in trouble when I live this belief; I don't recommend it. I might find it easier to be around people if I thought of their self-deception as a sort of demonic presence that needs to be exorcised and isn't naturally a part of them. Yes, it is behaving as if there is agency in it, defending itself against correction, ignoring the warnings that something might be wrong with it, but at least we can claim it is a subagent, just a broken part in the back rather than the main thing; that will let us sleep soundly at night; that will let us pretend there is something here that deserves to be saved.
Replies from: jessica.liu.taylor↑ comment by jessicata (jessica.liu.taylor) · 2018-11-18T08:53:11.368Z · LW(p) · GW(p)
(Amusement at your last sentence!)
Part of my view on this is that people self-deceive for basically sympathetic reasons, and that those reasons are important to our agency. As an analogy, people steal things sometimes, usually for sympathetic reasons, such as the local economy being dysfunctional and their family starving, or even being very bored and finding shoplifting to be a thrill. In an important sense it is antisocial, but the motivations behind it are good, and disowning them would be cutting out an important part of one's self. It would literally be discarding some of your values.
If self-deception is a choice, then it is not simply part of our nature. It is a strategy we use to achieve things, and we will use different means to achieve those same things if we anticipate them working better.
Humans are monsters, but humans are not demons, not even the part that chooses to self-deceive. The demons are the collective hallucinations that compel humans to harm themselves and each other.
comment by Mary Chernyshenko (mary-chernyshenko) · 2019-11-29T17:54:55.165Z · LW(p) · GW(p)
I prefer to give money (and effort) to people whom I know and whose work I know. What they do might not be very efficient, but there are not many people who work on it at all; for reasons we have no power to change ourselves, building long-term infrastructure is not feasible. But losing this process now would be a worse setback than having it taper out. So I have no problem with temporary measures, if that is all I can do.