Drowning children are rare

post by Benquo · 2019-05-28T19:27:12.548Z · LW · GW · 173 comments

This is a link post for http://benjaminrosshoffman.com/drowning-children-rare/

Contents

  People still make the funding gap claim
  How many excess deaths can developing-world interventions plausibly avert?
  What does this mean?

Stories such as Peter Singer's "drowning child" hypothetical frequently imply that there is a major funding gap for health interventions in poor countries, such that there is a moral imperative for people in rich countries to give a large portion of their income to charity. There are simply not enough excess deaths for these claims to be plausible.

Much of this is a restatement of part of my series on GiveWell and the problem of partial funding, so if you read that carefully and in detail, this may not be new to you, but it's important enough to have its own concise post. This post has been edited after its initial publication for clarity and tone.

People still make the funding gap claim

In his 1997 essay The Drowning Child and the Expanding Circle, Peter Singer laid out the basic argument for a moral obligation to give much more than most people do, for the good of poor foreigners:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.

Singer no longer consistently endorses cost-effectiveness estimates that are so low, but still endorses the basic argument. Nor is this limited to him. As of 2019, GiveWell claims that its top charities can avert a death for a few thousand dollars, and the Center for Effective Altruism claims [? · GW] that someone with a typical American income can save dozens of lives over their lifetime by donating 10% of their income to the Against Malaria Foundation, which points to GiveWell's analysis for support. (This despite GiveWell's long-standing disclaimer that you shouldn't take its expected value calculations literally). The 2014 Slate Star Codex post Infinite Debt describes the Giving What We Can pledge as effectively a negotiated compromise between the perceived moral imperative to give literally everything you can to alleviate Bottomless Pits of Suffering, and the understandable desire to still have some nice things.
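
For a sense of where the "dozens of lives" figure comes from, here is the implied arithmetic as a minimal sketch; the income, career length, and cost-per-life numbers are illustrative assumptions, not CEA's published inputs:

```python
# Rough reconstruction of the "dozens of lives over a lifetime" claim.
# All inputs are illustrative assumptions for this sketch.
typical_us_income = 50_000     # USD per year, rough figure
pledge_fraction = 0.10         # donate 10% of income
donating_years = 40            # length of a working life
cost_per_life_saved = 5_000    # USD, GiveWell-style estimate

lifetime_donations = typical_us_income * pledge_fraction * donating_years  # $200,000
lives_saved = lifetime_donations / cost_per_life_saved                     # 40.0
print(f"${lifetime_donations:,} donated -> ~{lives_saved:.0f} lives, i.e. 'dozens'")
```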

How many excess deaths can developing-world interventions plausibly avert?

According to the 2017 Global Burden of Disease report, around 10 million people die per year, globally, of "Communicable, maternal, neonatal, and nutritional diseases."* This is roughly the category that the low cost-per-life-saved interventions target. If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap - then at $5,000 per life saved (substantially higher than GiveWell's current estimates), averting all of these deaths would cost about $50 billion.
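
As a sanity check, here is that arithmetic spelled out (a minimal sketch using only the death toll and cost-per-life figures stated above):

```python
# Back-of-the-envelope check of the $50 billion figure, using the
# assumptions stated above (GBD 2017 death toll, $5,000 per life saved).
annual_deaths = 10_000_000   # communicable, maternal, neonatal, nutritional deaths/year
cost_per_life_saved = 5_000  # USD, deliberately above GiveWell's estimates

total_annual_cost = annual_deaths * cost_per_life_saved
print(f"${total_annual_cost:,}")  # $50,000,000,000
```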

This is already well within the capacity of funds available to the Gates Foundation alone, and the Open Philanthropy Project / GiveWell is the main advisor of another multi-billion-dollar foundation, Good Ventures. The true number is almost certainly much smaller because many communicable, maternal, neonatal, and nutritional diseases do not admit of the kinds of cheap mass-administered cures that justify current cost-effectiveness numbers.

Of course, that’s an annual number, not a total number. But if we think that there is a present, rather than a future, funding gap of that size, that would have to mean that it’s within the power of the Gates Foundation alone to wipe out all fatalities due to communicable diseases immediately, a couple times over - in which case the progress really would be permanent, or at least quite lasting. And infections are the major target of current mass-market donor recommendations.

Even if we assume no long-run direct effects (no reduction in infection rates the next year, no flow-through effects, the people whose lives are saved just sit around not contributing to their communities), a large funding gap implies opportunities to demonstrate impact empirically with existing funds. Take the example of malaria alone (the target of the intervention specifically mentioned by CEA in its "dozens of lives" claim). The GBD report estimates 619,800 annual malaria deaths - a reduction by half at $5k per life saved would cost only about $1.5 billion per year, an annual outlay that the Gates Foundation alone could sustain for over a decade, and Good Ventures could certainly maintain for a couple of years on its own.
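
Again, a rough sketch of the numbers; the Gates Foundation endowment figure below is an assumed order of magnitude (roughly $50 billion), not a precise quote:

```python
# Malaria-only version of the arithmetic, using the figures in the paragraph above.
malaria_deaths_per_year = 619_800     # GBD 2017 estimate
cost_per_life_saved = 5_000           # USD
deaths_averted = malaria_deaths_per_year / 2          # "reduction by half"
annual_cost = deaths_averted * cost_per_life_saved    # ~$1.55 billion/year

gates_endowment = 50_000_000_000      # assumed order of magnitude, USD
print(f"~${annual_cost/1e9:.2f}B per year, "
      f"~{gates_endowment/annual_cost:.0f} years sustainable from the endowment alone")
```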

GiveWell's stated reason for not bothering to monitor statistical data on outcomes (such as malaria incidence and mortality, in the case of AMF) is that the data are too noisy. A reduction like that ought to be very noticeable, and therefore ought to make filling the next year's funding gap much more appealing to other potential donors. (And if the intervention doesn't do what we thought, then potential donors are less motivated to step in - but that's good, because it doesn't work!)

Imagine the world in which funds already allocated are enough to bring deaths due to communicable, maternal, neonatal, and nutritional diseases to zero or nearly zero even for one year. What else would be possible? And if you think that people's revealed preferences correctly assume that this is far from possible, what specifically does that imply about the cost per life saved?

What does this mean?

If the low cost-per-life-saved numbers are meaningful and accurate, then funders like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths. If the Gates Foundation and Good Ventures are behaving properly because they know better, then the opportunity to save additional lives cheaply has been greatly exaggerated. My former employer GiveWell in particular stands out, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities, on the grounds that Good Ventures would otherwise be saving more than its "fair share" of lives.

In either case, we're not getting these estimates from a source that behaves as though it both cared about and believed them. The process that promoted them to your attention is more like advertising than like science or business accounting. Basic epistemic self-defense requires interpreting them as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.

If you give based on mass-marketed high-cost-effectiveness representations, you're buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There's no substitute for developing and acting on your own models of the world.

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk while maximizing earnings, and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.

* A previous version of this post erroneously read a decadal rate of decline as an annual rate of decline, which implied a stronger conclusion than is warranted. Thanks to Alexander Gordon-Brown for pointing out the error.

173 comments

Comments sorted by top scores.

comment by habryka (habryka4) · 2019-05-28T20:25:58.547Z · LW(p) · GW(p)
Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

While most things involve at least some motive to control your behavior, I do think GiveWell outlines a pretty reasonable motivation here, which they explained in detail in the exact blogpost that you linked (and I know that you critiqued that reasoning on your blog, though I haven't found the arguments there particularly compelling). Even if their reasoning is wrong, they might still genuinely believe that their reasoning is right, which I do think is very important to distinguish from "marketing copy designed to control your behavior".

I am often wrong and still try to explain to others why I am right. Sometimes this is the result of bad external incentives, but sometimes it's also just a genuine mistake. Humans are not perfect reasoners and they make mistakes for reasons other than to take advantage of other people (sometimes they are tired, or sometimes they haven't invented physics yet and try to build planes anyway, or sometimes they haven't figured out what good game theory actually looks like and try their best anyway).

Replies from: Raemon
comment by Raemon · 2019-05-29T00:40:01.664Z · LW(p) · GW(p)

For clarity, the claim Givewell-at-the-time made was:

For giving opportunities that are above the benchmark we’ve set but not “must-fund” opportunities, we want to recommend that Good Ventures funds 50%. It’s hard to say what the right long-term split should be between Good Ventures (a major foundation) and a large number of individual donors, and we’ve chosen 50% largely because we don’t want to engineer – or appear to be engineering – the figure around how much we project that individuals will give this year (which would create the problematic incentives associated with “funging” approaches). A figure of 50% seems reasonable for the split between (a) one major, “anchor” donor who has substantial resources and great conviction in a giving opportunity; (b) all other donors combined.

With the two claims I've heard for why the 50% split makes sense being:

1. There's still more than $8 billion worth of good to do, and they expect their last dollar to be worth more than current dollars.

(I agree that this is at least somewhat sketchy, esp. when you think about Gates Foundation and others, although I think the case is less strong than Benquo is presenting here)

2. Having a charity have most/all of their money come from a single donor creates some distorting effects, where the charity feels more beholden to that donor. Whereas if their donations are diversified the charity feels more free to make their own strategic choices. (I more recently heard someone from [OpenPhil or Givewell, can't remember which] saying that they sometimes made offhand comments to an org like "hmm, would it make sense for you to do X?" and the org treated that like "OMG we need to do X in order to get Good Ventures money" and then ran off to implement X, when the OpenPhil researcher had meant that more as an offhand hypothesis.)

This second point seems pretty plausible to me, and was the thing that ultimately updated me away from the "OpenPhil should just fund everything" hypothesis.

Benquo, I can't remember if you have a post concretely addressing that – if so can you link to it?

Replies from: Benquo, Raemon, Benquo
comment by Benquo · 2019-05-29T15:07:54.636Z · LW(p) · GW(p)

Here's the part of the old series that dealt with this consideration: http://benjaminrosshoffman.com/givewell-case-study-effective-altruism-4/

The problem already exists on multiple levels, and the decision GiveWell made doesn't really alleviate it much. We should expect that GiveWell / Open Philanthropy Project is already distorting its judgment to match its idea of what Good Ventures wants, and the programs it's funding are already distorting their behavior to match their idea of what GiveWell / Open Philanthropy Project wants (since many of the "other" donors aren't actually uncorrelated with GiveWell's recommendations either!).

This line of thinking also seems like pretty much the opposite of the one that suggests that making a large grant to OpenAI in order to influence it would be a good idea, as I pointed out here. The whole arrangement is very much not what someone who was trying to avoid this kind of problem would build, so I don't buy it as an ad hoc justification for this particular decision.

I find this general pattern (providing reasons for things, that if taken seriously would actually recommend a quite different course of action than the one being considered) pretty unfortunate, and I wish I saw a feasible way to insist on better behavior. What's your model for how GiveWell should behave if they seriously wanted to avoid that sort of distortion? Why do you think it matches their revealed preferences?

Replies from: Raemon, Raemon, Raemon
comment by Raemon · 2019-06-03T00:23:48.629Z · LW(p) · GW(p)

This ended up taking a while (and renewed some of my sympathy for the "I tried to discuss this all clearly and dispassionately and basically nobody listened" issue).

First, to save future people some effort, here is my abridged summary of what you said relating to "independence." (Also: here is a link directly to the relevant part of the blogpost)

  • Relying on a single donor does come with issues.
  • There are separate issues for:
    • Givewell's Independence (from Good Ventures)
    • Top Charity Independence (from Givewell)

Top Charity Independence

This section mostly summarized the bits I and Benquo covered in this thread, with Ben's takeaways being:

[...]
[The issues of top charity independence are] partially mitigated by the indirect and time-delayed nature of GiveWell's influence. In 2013, when AMF had been removed from the list of GiveWell top charities due to concerns about its room for more funding, it still received millions from GiveWell-influenced donors, suggesting that GiveWell donors are applying some amount of independent judgment. If this money all or mostly came from a single donor, it could exacerbate this problem, or lead top charities to hold some of the money in reserve.
If GiveWell is concerned about this effect, it could ask top charities how much money they would be willing to spend in a year from a single donor. Good Ventures could also negotiate a taper-down schedule for withdrawing funding, to reduce the potential costs of withdrawing a major source of funds.

I'm not sure I understand these suggestions yet, but they seem worth mulling over.

GiveWell independence

This section was fairly long (much longer than the previous one). I'm tempted to say "the thing I really cared about was the answer to the first problem". But I've tried to build a habit where, when I ask a question and someone responds in a different frame, I try to grok why their frame is different since that's often more illuminating (and at least seems like good form, building good will so that when I'm confident my frame makes more sense I can cash in and get others to try to understand mine)

Summarizing the section will take a while and for now I think I just recommend people read the whole thing.


Replies from: Raemon
comment by Raemon · 2019-06-03T00:33:28.809Z · LW(p) · GW(p)

My off-the-cuff, high level response to the Givewell independence section + final conclusions (without having fully digested them) is:

Ben seems to be arguing that Givewell should either become much more independent from Good Ventures and OpenPhil (and probably move to a separate office), so that it can actually present the average donor with unbiased, relevant information (rather than information entangled with Good Ventures' goals/models)

or

The other viable option is for GiveWell to give up for now on most public-facing recommendations and become a fully-funded branch of Good Ventures, to demonstrate to the world what GiveWell-style methods can do when applied to a problem where it is comparatively easy to verify results.

I can see both of these as valid options to explore, and that going to either extreme would probably maximize particular values.

But it's not obvious either of those maximize area-under-the-curve-of-total-values.

There's value to people with deep models being able to share those models. Bell Labs worked by having people being able to bounce ideas off each other, casually run into each other, and explain things to each other iteratively. My current sense is that I wish there was more opportunity for people in the EA landscape to share models more deeply with each other on a casual, day-to-day basis, rather than less (while still sharing as much as possible with the general public so people in the general public can also get engaged)

This does come with tradeoffs of neither maximizing independent judgment nor maximizing output nor most easily avoiding particular epistemic and integrity pitfalls, but it's where I expect the most total value to lie.

Replies from: Benquo
comment by Benquo · 2019-06-03T01:44:51.404Z · LW(p) · GW(p)
There's value to people with deep models being able to share those models. Bell Labs worked by having people being able to bounce ideas off each other, casually run into each other, and explain things to each other iteratively. My current sense is that I wish there was more opportunity for people in the EA landscape to share models more deeply with each other on a casual, day-to-day basis, rather than less (while still sharing as much as possible with the general public so people in the general public can also get engaged)

Trying to build something kind of like Bell Labs would be great! I don't see how it's relevant to the current discussion, though.

Replies from: Raemon
comment by Raemon · 2019-06-03T03:28:15.877Z · LW(p) · GW(p)

Right now, we (maybe? I'm not sure) have something like a few different mini-Bell-labs, that each have their own paradigm (and specialists within that paradigm).

The world where Givewell, Good Ventures and OpenPhil share an office is more Bell Labs like than one where they all have different offices. (FHI and UK CEA is a similar situation, as is CFAR/MIRI/LW). One of your suggestions in the blogpost was specifically that they split up into different, fully separate entities.

I'm proposing that Bell Labs exists on a spectrum, that sharing office space is a mechanism to be more Bell Labs like, and that generally being more Bell Labs like is better (at least in a vacuum)

(My shoulder Benquo now says something like "but if your models are closely entangled with those of your funders, don't pretend like you are offering neutral services." Or maybe "it's good to share office space with people thinking about physics, because that's object level. It's bad to share office space with the people funding you." Which seems plausible but not overwhelmingly obvious given the other tradeoffs at play)

Replies from: Benquo, Benquo
comment by Benquo · 2019-06-03T05:57:16.135Z · LW(p) · GW(p)

People working at Bell Labs were trying to solve technical problems, not marketing or political problems. Sharing ideas across different technical disciplines is potentially a good thing, and I can see how FHI and MIRI in particular are a little bit like this, though writing white papers is, even within a technical field, very different from figuring out how to make a thing work. But it doesn't seem like any of the other orgs substantially resemble Bell Labs at all, and the benefits of collocation for nontechnical projects are very different from the benefits for technical projects - they have more to do with narrative alignment (checking whether you're selling the same story), and less to do with opportunities to learn things of value outside the context of a shared story.

Collocation of groups representing (others') conflicting interests represents increased opportunity for corruption, not for generative collaboration.

Replies from: Raemon
comment by Raemon · 2019-06-03T22:48:38.188Z · LW(p) · GW(p)

Okay. I'm not sure whether I agree precisely, but I agree that that's a valid hypothesis, which I hadn't considered before in quite these terms, and it updates my model a bit.

Collocation of groups representing (others') conflicting interests represents increased opportunity for corruption, not for generative collaboration.

The version of this that I'd more obviously endorse goes:

Collocation of groups representing conflicting interests represents increased opportunity for corruption.

Collocation of people who are building models represents increased opportunity for generative collaboration.

Collocation of people who are strategizing together represents increased opportunity for working on complex goals that require shared complex models, and/or shared complex plans. (Again, as said elsethread, I agree that plans and models are different, but I think they are subject to a lot of the same forces, with plans being subject to some additional forces as well)

These are all true, and indeed in tension.

Replies from: Raemon
comment by Raemon · 2019-06-03T22:51:02.889Z · LW(p) · GW(p)

I also think "sharing a narrative" and "building technical social models" are different, although easily confused (both from the outside and inside – I'm not actually sure which confusion is easier). But you do actually need social models if you're tackling social domains, which do actually benefit from interpersonal generativity.

comment by Benquo · 2019-06-03T06:06:37.826Z · LW(p) · GW(p)
My shoulder Benquo now says something like "but if your models are closely entangled with those of your funders, don't pretend like you are offering neutral services." Or maybe "it's good to share office space with people thinking about physics, because that's object level. It's bad to share office space with the people funding you."

I think these are a much stronger objection jointly than separately. If Cari Tuna wants to run her own foundation, then it's probably good for her to collocate with the staff of that foundation.

comment by Raemon · 2019-05-30T01:41:34.755Z · LW(p) · GW(p)

(I do want to note that this is a domain where I'm quite confused about the right answer. I think I stand by the individual comments I made last night but somewhat regret posting them as quickly as I did without thinking about it more and it seems moderately likely that some pieces of my current take on the situation are incoherent)

comment by Raemon · 2019-05-30T01:05:57.767Z · LW(p) · GW(p)

Thanks. Will re-read the original post and think a bit more.

comment by Raemon · 2019-05-29T01:12:58.628Z · LW(p) · GW(p)

Some further thoughts on that: I agree social-reality-distortions are a big problem, although I don't think the werewolf/villager distinction is the best frame. (In answer to Wei_Dai's comment elsethread, "am I a werewolf" isn't a very useful question. You almost certainly are at least slightly cognitively-distorted due to social reality, at least some of the time. You almost certainly sometimes employ obfuscatory techniques in order to give yourself room to maneuver, at least sort of, at least sometimes.)

But I think thinking in terms of villagers and werewolves leads you to ask the question 'who is a werewolf' moreso than 'how do we systematically disincentivize obfuscatory or manipulative behavior', which seems a more useful question.

I bring this all up in this particular subthread because I think it's important that one thing that incentivizes obfuscatory behavior is giving away billions of dollars.

My sense (not backed up by much legible argument) is that a major source of inefficiencies of the Gates Foundation (and OpenPhil to a lesser degree) is that they've created an entire ecosystem, which both attracts people motivated by power/money/prestige (simply to staff the organization) and creates incentives for charities to goodhart themselves into becoming legibly valuable according to the Gates Foundation's values.

Meanwhile, my experience reading OpenPhil articles is that they usually take pretty serious pains to say "don't take our estimates literally, these are very rough, please actually look at the spreadsheets that generated them and plug in your own values." AFAICT they're making a pretty good-faith effort to actually just talk about object-level stuff without their statements being enactive language, and it's just really hard to get people to treat them that way.

(There are certainly older Givewell posts that seem to be concretely making the "drowning children everywhere" mistake, but AFAICT the current AMF page doesn't even give a concrete final estimate at all, instead listing the various object-level costs and then linking to a spreadsheet and a whole other blogpost about how they do cost estimates.)

I do see patterns within the broader EA community that push towards taking the low-cost-per-lives-saved estimates literally, where there are lots of movement-building forces that really want to translate things into a simple, spreadable message. Some of this seems like it was caused or exacerbated by specific people at specific times, but it also seems like the movement-building-forces almost exist as a force in their own right that's hard to stop.

Replies from: Raemon, elityre, jessica.liu.taylor
comment by Raemon · 2019-05-29T01:21:06.447Z · LW(p) · GW(p)

It seems like there's this general pattern, that occurs over and over, where people follow a path going:

1. Woah. Drowning child argument!

2. Woah. Lives are cheap!

3. Woah, obviously this is important to take action on and scale up now. Mass media! Get the message out!

4. Oh. This is more complicated.

5. Oh, I see, it's even more complicated. (where complication can include moving from global poverty to x-risk as a major focus, as well as realizing that global poverty isn't as simple to solve)

6. Person has transitioned into a more nuanced and careful thinker, and now is one of the people in charge of some kind of org or at least a local community somewhere. (for one example, see CEA's article on shifting from mass media to higher-fidelity methods of transmission)

But the mass-media messages (and generally the simplified types of thinking, independent of strategy) are more memetically virulent than the more careful thinking, and new people keep getting excited about them in waves that are self-sustaining and hard to counteract (esp. since the original EA infrastructure was created by people at the earlier stages of thinking). So the simplified version keeps on being the thing a newcomer will bump into most often in EA spaces.

Replies from: Benquo
comment by Benquo · 2019-05-29T15:10:11.942Z · LW(p) · GW(p)

CEA continues to actively make the kinds of claims implied by taking GiveWell's cost per life saved numbers literally, as I pointed out in the post. Exact quote from the page I linked:

If you earn the typical income in the US, and donate 10% of your earnings each year to the Against Malaria Foundation, you will probably save dozens of lives over your lifetime.

Either CEA isn't run by people in stage 6, or ... it is, but keeps making claims like this anyway.

comment by Eli Tyre (elityre) · 2019-12-02T23:40:00.993Z · LW(p) · GW(p)
But I think thinking in terms of villagers and werewolves leads you to ask the question 'who is a werewolf' moreso than 'how do we systematically disincentivize obfuscatory or manipulative behavior', which seems a more useful question.

I want to upvote this in particular.

comment by jessicata (jessica.liu.taylor) · 2019-05-29T03:23:11.164Z · LW(p) · GW(p)

But I think thinking in terms of villagers and werewolves leads you to ask the question ‘who is a werewolf’ moreso than ‘how do we systematically disincentivize obfuscatory or manipulative behavior’, which seems a more useful question.

Clearly, the second question is also useful, but there is little hope of understanding, much less effectively counteracting, obfuscatory behavior, unless at least some people can see it as it happens, i.e. detect who is (locally) acting like a werewolf. (Note that the same person can act more/less obfuscatory at different times, in different contexts, about different things, etc)

Replies from: Raemon
comment by Raemon · 2019-05-29T03:34:39.535Z · LW(p) · GW(p)

Sure, I just think the right frame here is "detect and counteract obfuscatory behavior" rather than "detect werewolves." I think the "detect werewolves", or even "detect werewolf behavior" frame is more likely to collapse into tribal and unhelpful behavior at scale [edit: and possibly before then]

(This is for very similar reasons to why EA arguments often collapse into "donate all your money to help people". It's not that the nuanced position isn't there, it just gets outcompeted by simpler versions of itself)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-05-29T04:19:32.580Z · LW(p) · GW(p)

In your previous comment you're talking to Wei Dai, though. Do you think Wei Dai is going to misinterpret the werewolf concept in this manner? If so, why not link to the original post to counteract the possible misinterpretation, instead of implying that the werewolf frame itself is wrong?

(meta note: I'm worried here about the general pattern of people optimizing discourse for "the public" who is nonspecific and assumed to be highly uninformed / willfully misinterpreting / etc, in a way that makes it impossible for specific, informed people (such as you and Wei Dai) to communicate in a nuanced, high-information fashion)

[EDIT: also note that the frame you objected to (the villagers vs werewolf frame) contains important epistemic content that the "let's incentivize non-obfuscatory behavior" frame doesn't, as you agreed in your subsequent comment after I pointed it out. Which means I'm going to even more object to saying "the villagers/werewolf frame is bad" with the defense being that "people might misinterpret this", without offering a frame that contains the useful epistemic content of the misinterpretable frame]

Replies from: Raemon
comment by Raemon · 2019-05-29T05:06:48.225Z · LW(p) · GW(p)
I'm worried here about the general pattern of people optimizing discourse for "the public"

I do agree that this is a pattern to watch out for. I don't think it applies here, but could be wrong. I think it's very important that people be able to hold themselves to higher standards than what they can easily explain to the public [LW · GW], and it seems like a good reflex to notice when people might be trying to do that and point it out.

But I'm worried here about well-informed people caching ideas wrongly, not about the general public. More to say about this, but first want to note:

also note that the frame you objected to (the villagers vs werewolf frame) contains important epistemic content that the "let's incentivize non-obfuscatory behavior" frame doesn't, as you agreed in your subsequent comment after I pointed it out.

Huh - this just feels like a misinterpretation or reading odd things into what I said.

It had seemed obvious to me that to disincentivize obfuscatory behavior, you need people to be aware of what obfuscatory behavior looks like and what to do about it, and it felt weird that you saw that as something different.

It is fair that I may not have communicated that well, but that's part of my point – communication is quite hard. Similarly, I don't think the original werewolf post really communicates the thing it was meant to.

"Am I a werewolf" is not a particularly useful question to ask, and neither is "is so and so a werewolf?" because the answer is almost always "yes, kinda." (and what exactly you mean by "kinda" is doing most of the work). But, nonetheless, this is the sort of question that the werewolf frame prompts people to ask.

I'm worried about this, concretely, because after reading Effective Altruism is Self Recommending a while ago, despite the fact that I thought lots about it, and wrote up detailed responses to it (some of which I posted and some of which I just thought about privately), and I ran a meetup somewhat inspired by taking it seriously...

...despite all that, a year ago when I tried to remember what it was about, all I could remember was "givewell == ponzi scheme == bad", without any context of why the ponzi scheme metaphor mattered or how the principle was supposed to generalize. I'm similarly worried that a year from now, "werewolves == bad, hunt werewolves", is going to be the thing I remember about this.

The five-word-limit [LW · GW] isn't just for the uninformed public, it's for serious people trying to coordinate. The public can only coordinate around 5-word things. Serious people trying to be informed still have to ingest lots of information and form detailed models but those models are still going to have major bits that are compressed, out of pieces that end up being about five words. And this is a major part of why many people are confused about Effective Altruism and how to do it right in the first place.

Replies from: Benquo, Zack_M_Davis
comment by Benquo · 2019-05-29T16:59:25.415Z · LW(p) · GW(p)
I'm worried about this, concretely, because after reading Effective Altruism is Self Recommending a while ago, despite the fact that I thought lots about it, and wrote up detailed responses to it (some of which I posted and some of which I just thought about privately), and I ran a meetup somewhat inspired by taking it seriously...
...despite all that, a year ago when I tried to remember what it was about, all I could remember was "givewell == ponzi scheme == bad", without any context of why the ponzi scheme metaphor mattered or how the principle was supposed to generalize. I'm similarly worried that a year from now, "werewolves == bad, hunt werewolves", is going to be the thing I remember about this.
The five-word-limit [LW · GW] isn't just for the uninformed public, it's for serious people trying to coordinate. The public can only coordinate around 5-word things. Serious people trying to be informed still have to ingest lots of information and form detailed models but those models are still going to have major bits that are compressed, out of pieces that end up being about five words. And this is a major part of why many people are confused about Effective Altruism and how to do it right in the first place.

If that's your outlook, it seems pointless to write anything longer than five words on any topic other than how to fix this problem.

Replies from: Raemon
comment by Raemon · 2019-05-30T01:39:32.560Z · LW(p) · GW(p)

I agree with the general urgency of the problem, although I think the frame of your comment is somewhat off. This problem seems... very information-theoretically-entrenched. I have some sense that you think of it as solvable in a way that it's fundamentally not actually solvable, just improvable, like you're trying to build a perpetual motion machine instead of a more efficient engine. There is only so much information people can process.

(This is based entirely off of reading between the lines of comments you've made, and I'm not confident what your outlook actually is here, and apologies for the armchair psychologizing).

I think you can make progress on it, which would look something like:

0) make sure people are aware of the problem

1) building better infrastructure (social or technological), probably could be grouped into a few goals:

  • nudge readers towards certain behavior
  • nudge writers towards certain behavior
  • provide tools that amplify readers' capabilities
  • provide tools that amplify writers' capabilities

2) meanwhile, as a writer, make sure that the concepts you create for the public discourse are optimized for the right kind of compression. Some ideas compress better than others. (I have thought about the details of this.)

This *is* my outlook, and yes, I think that both I, as well as you and Jessica, should probably be taking some kind of action that takes this outlook strategically seriously if we aren't already.

Distillation Technology

A major goal I have for LessWrong, which the team has talked about a lot, is improving distillation technology. It's not what we're currently working on because, well, there are *multiple* top priorities that all seem pretty urgent (and all seem like pieces of the same puzzle). But I think Distillation Tech is the sort of thing most likely to meaningfully improve the situation.

Right now the default mode people interact with LessWrong and many other blogging platforms is "write up a thing, post it, maybe change a few things in response to feedback." But for ideas that are actually going to become building blocks of the intellectual commons, you need to continuously invest in improving them.

Arbital tried to do this, and it failed because the problem is hard in weird ways, many of them somewhat hard to anticipate.

http://distill.pub tackles a piece of this but not in a way that seems especially scalable.

Scott Alexander's short story Ars Longa Vita Brevis is a fictional account of what seems necessary to me.

I do hope that by the end of this year the LW team will have made some concrete progress on this. I think it is plausibly a mistake that we haven't focused on it already – we discussed switching gears towards it at our last retreat but it seemed to make more sense to finish Open Questions.

Replies from: Benquo
comment by Benquo · 2019-05-30T04:08:07.710Z · LW(p) · GW(p)

Trying to nudge others seems like an attempt to route around the problem rather than solve it. It seems like you tried pretty hard to integrate the substantive points in my "Effective Altruism is self-recommending" post, and even with pretty extensive active engagement, your estimate is that you only retained a very superficial summary. I don't see how any compression tech for communication at scale can compete with what an engaged reader like you should be able to do for themselves while taking that kind of initiative.

We know this problem has been solved in the past in some domains - you can't do a thing like the Apollo project or build working hospitals where cardiovascular surgery is regularly successful based on a series of atomic five-word commands; some sort of recursive general grammar is required, and at least some of the participants need to share detailed models.

One way this could be compatible with your observation is that people have somewhat recently gotten worse at this sort of skill; another is that credit-assignment is an unusually difficult domain to do this in. My recent blog posts have argued that at least the latter is true.

In the former case (lost literacy), we should be able to reconstruct older modes of coordination. In the latter (politics has always been hard to think clearly about), we should at least internally be able to learn from each other by learning to apply cognitive architectures we use in domains where we find this sort of thing comparatively easy.

Replies from: Raemon
comment by Raemon · 2019-06-01T03:43:45.864Z · LW(p) · GW(p)

I think I may have communicated somewhat poorly by phrasing this in terms of 5 words, rather than 5 chunks, and will try to write a new post sometime that presents a more formal theory of what's going on.

I mentioned in the comments of the previous post [LW(p) · GW(p)]:

Coordinated actions can't take up more bandwidth than someone's working memory (which is something like 7 chunks, and if you're using all 7 chunks then they don't have any spare chunks to handle weird edge cases).
A lot of coordination (and communication) is about reducing the chunk-size of actions. This is why jargon is useful, habits and training are useful (as well as checklists and forms and bureaucracy), since that can condense an otherwise unworkably long instruction into something people can manage.

And:

The "Go to the store" is four words. But "go" actually means "stand up. walk to the door. open the door. Walk to your car. Open your car door. Get inside. Take the key out of your pocket. Put the key in the ignition slot..." etc. (Which are in turn actually broken into smaller steps like "lift your front leg up while adjusting your weight forward")
But, you are capable of taking all of that and chunking it as the concept "go somewhere" (as well as the meta concept of "go to the place whichever way is most convenient, which might be walking or biking or taking a bus"), although if you have to use a form of transport you are less familiar with, remembering how to do it might take up a lot of working memory slots, leaving you liable to forget other parts of your plan.

I do in fact expect that the Apollo project worked via finding ways to cache things into manageable chunks, even for the people who kept the whole project in their head.

Chunks can be nested, and chunks can include subtle neural-network-weights that are part of your background experience and aren't quite explicit knowledge [LW(p) · GW(p)]. It can be very hard to communicate subtle nuances as part of the chunks if you don't have access to high-volume and preferably in-person communication.

I'd be interested in figuring out how to operationalize this as a bet and check how the project actually worked. What I have heard (epistemic status: heard it from some guy on the internet) is that actually, most people on the project did not have all the pieces in their head, and the only people who did were the pilots.

My guess is that the pilots had a model of how to *use* and *repair* all the pieces of the ship, but couldn't have built it themselves.

My guess is that "the people who actually designed and assembled the thing" had a model of how all the pieces fit together, but not as deep a model of how and when to use it, and may have only understood the inputs and outputs of each piece.

And meanwhile, while I'm not quite sure how to operationalize the bet, I would bet maybe $50 (conditional on us finding a good operationalization) that the number of people who had the full model or anything like it was quite small. ("You Have About Five Words" doesn't claim you can't have more than 5 words of nuance, it claims that you can't coordinate large groups of people that depend on more than 5 words of nuance. I bet there were fewer than 100 people and probably closer to 10 who had anything like a full model of everything going on)

Replies from: Benquo
comment by Benquo · 2019-06-03T01:36:03.091Z · LW(p) · GW(p)
and will try to write a new post sometime that presents a more formal theory of what's going on

I think I'm unclear on how this constrains anticipations, and in particular it seems like there's substantial ambiguity as to what claim you're making, such that it could be any of these:

  • You can't communicate recursive structures or models with more than five total chunks via mass media such as writing.
  • You can't get humans to act (or in particular to take initiative) based on such models, so you're limited to direct commands when coordinating actions.
  • There exist such people, but they're very few and stretched between very different projects and there's nothing we can do about that.
  • ??? Something else ???
Replies from: Raemon
comment by Raemon · 2019-06-03T03:55:10.463Z · LW(p) · GW(p)

I think there are two different anticipation-constraining-claims, similar but not quite what you said there:

Working Memory Learning Hypothesis – people can learn complex or recursive concepts, but each chunk that they learn cannot be composed of more than 7 other chunks. You can learn a 49 chunk concept but first must distill it into seven 7-chunk-concepts, learn each one, and then combine them together.

Coordination Nuance Hypothesis – there are limits to how nuanced a model you can coordinate around, at various scales of coordination. I'm not sure precisely what the limits are, but it seems quite clear that the more people you are coordinating the harder it is to get them to share a nuanced model or strategy. It's easier to have a nuanced strategy with 10 people than 100, 1000, or 10,000.

I'm less confident of the Working Memory hypothesis (it's an armchair inside view based on my understanding of how working memory works)

I'm fairly confident in the Coordination Nuance Hypothesis, which is based on observations about how people actually seem to coordinate at various scales and how much nuance they seem to preserve.

In both cases, there are tools available to improve your ability to learn (as an individual), disseminate information (as a communicator), and keep people organized (as a leader). But none of the tools changed the fundamental equation, just the terms.

Anticipation Constraints:

The anticipation-constraint of the WMLH is "if you try to learn a concept that requires more than 7 chunks, you will fail. If a concept requires 12 chunks, you will not successfully learn it (or will learn a simplified bastardization of it) until you find a way to compress the 12 chunks into 7. If you have to do this yourself, it will take longer than if an educator has optimized it for you in advance."

The anticipation constraint of the CNH is that if you try to coordinate with 100 people of a given level of intelligence, the shared complexity of the plan that you are enacting will be lower than the complexity of the plan you could enact with 10 people. If you try to implement a more complex plan or orient around a more complex model, your organization will make mistakes due to distorted simplifications of the plan. And this gets worse as your organizations scales.
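
As a toy illustration of the Working Memory Learning Hypothesis above (purely a sketch; the 7-chunk limit and the tree-of-chunks structure are the assumptions being modeled, nothing more):

```python
# Toy model of the Working Memory Learning Hypothesis: a concept is learnable
# only if, at every level of its structure, it groups at most 7 sub-chunks.
WORKING_MEMORY_LIMIT = 7  # assumed capacity, in chunks

def learnable(concept) -> bool:
    """Atomic chunks are learnable; a composite concept is learnable only if it
    has at most WORKING_MEMORY_LIMIT sub-chunks and each is itself learnable."""
    if not isinstance(concept, list):
        return True
    return len(concept) <= WORKING_MEMORY_LIMIT and all(learnable(c) for c in concept)

flat_49 = list(range(49))                                      # 49 chunks, no grouping
grouped_49 = [list(range(i, i + 7)) for i in range(0, 49, 7)]  # seven 7-chunk concepts

print(learnable(flat_49))     # False: exceeds the limit at a single level
print(learnable(grouped_49))  # True: the same content, hierarchically chunked
```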

Replies from: Benquo, Raemon, Raemon
comment by Benquo · 2019-06-03T05:44:50.241Z · LW(p) · GW(p)

CNH is still ambiguous between "nuanced plan" and "nuanced model" here, and those seem extremely different to me.

Replies from: Raemon
comment by Raemon · 2019-06-03T22:31:21.748Z · LW(p) · GW(p)

I agree they are different but think it is the case that with a larger group you have a harder time with either of them, for roughly the same reasons at roughly the same rate of increased difficulty.

comment by Raemon · 2019-06-03T04:06:15.879Z · LW(p) · GW(p)

The Working Memory Hypothesis says that Bell Labs is useful, in part, because whenever you need to combine multiple interdisciplinary concepts, each of them complicated, to invent a new concept...

instead of having to read a textbook that explains it one particular way (and, if it's not your field, you'd need to get up to speed on the entire field in order to have any context at all) you can just walk down the hall and ask the guy who invented the concept "how does this work" and have them explain it to you multiple times until they find a way to compress it down into 7 chunks, optimized for your current level of understanding.

comment by Raemon · 2019-06-03T04:03:28.748Z · LW(p) · GW(p)

A slightly more accurate anticipation of the CNH is:

  • people need to spend time learning a thing in order to coordinate around it. At the very least, the more time you need to spend getting people up to speed on a model, the less time they have to actually act on that model
  • people have idiosyncratic learning styles, and are going to misinterpret some bits of your plan, and you won't know in advance which ones. Dealing with this requires individual attention, noticing their mistakes and correcting them. Middle managers (and middle "educators") can help to alleviate this, but every link in the chain reduces your control over what message gets distributed. If you need 10,000 people to all understand and act on the same plan/model, it needs to be simple or robust enough to survive 10,000 people misinterpreting it in slightly different ways
  • This gets even worse if you need to change your plan over time in response to new information, since now people are getting it confused with the old plan, or they don't agree with the new plan because they signed up for the old plan, and then you have to Do Politics to get them on board with the new plan.
    • At the very least, if you've coordinated perfectly, each time you change your plan you need to shift from "focusing on execution" to "focusing on getting people up to speed on the new model."
comment by Zack_M_Davis · 2019-05-30T04:43:32.731Z · LW(p) · GW(p)

when I tried to remember what it was about, all I could remember [...] I'm similarly worried that a year from now

Make spaced repetition cards?

Replies from: Raemon, Benquo
comment by Raemon · 2019-06-01T03:31:09.398Z · LW(p) · GW(p)

The way that I'd actually do this, and plan to do this (in line with Benquo's reply to you), is to repackage the concept into something that I understand more deeply and which I expect to unpack more easily in the future.

Part of this requires me to do some work for myself (no amount of good authorship can replace putting at least some work into truly understanding something)

Part of this has to do with me having my own framework (rooted in Robust Agency [LW · GW] among other things) which is different from Benquo's framework, and Ben's personal experience playing werewolf.

But a lot of my criticism of the current frame is that it naturally suggests compacting the model in the wrong way. (to be clear, I think this is fine for a post that represents a low-friction strategy to post your thoughts and conversations as they form, without stressing too much about optimizing pedagogy. I'm glad Ben posted the Villager/Werewolf post. But I think the presentation makes it harder to learn than it needs to be, and is particularly ripe for being misinterpreted in a way that benefits rather than harms werewolves, and if it's going to be coming up in conversation a lot I think it'd be worth investing time in optimizing it better)

comment by Benquo · 2019-05-30T13:06:11.403Z · LW(p) · GW(p)

That seems like the sort of hack that lets you pass a test, not the sort of thing that makes knowledge truly a part of you [LW · GW]. To achieve the latter, you have to bump it up against your anticipations, and constantly check to see not only whether the argument makes sense to you, but whether you understand it well enough to generate it in novel cases that don’t look like the one you’re currently concerned with.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-05-30T15:00:40.095Z · LW(p) · GW(p)

I think it's possible to use it in a "mindful" way even if most people are doing it wrong? The system reminding you what you read n days ago gives you a chance to connect it to the real world today when you otherwise would have forgotten.

comment by Benquo · 2019-06-03T06:08:28.941Z · LW(p) · GW(p)

Holden Karnofsky explicitly disclaimed [EA(p) · GW(p)] the "independence via multiple funders" consideration as not one that motivated the partial funding recommendation.

comment by Raemon · 2019-05-29T04:35:36.106Z · LW(p) · GW(p)
If you give based on mass-marketed high-cost-effectiveness representations, you're buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There's no substitute for developing and acting on your own models of the world.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.

Some people on your blog have noted that this doesn't seem true, at least, because GiveDirectly still exists (both literally and as a sort of metaphorical principle), as well as noting that the Gates Foundation isn't actually *that* rich compared to what needs doing.

Partly I wanted to quickly note that for posterity on LW, but I have a slightly different criticism of the final paragraph(s) here. I think there's a version of this conclusion that I agree with, but some combination of the facts and the nuances seem wrong.

For most people, there are going to be something like three choices:

  • do nothing (other than, like, having fun hobbies and bolstering your own career for your own gain)
  • give money locally to things that feel good, without reflecting much,
  • spend some small amount of thinking about where to give on the global scale

Meanwhile, for many people in the west, the money sitting around in their bank account just isn't going to be spent on much, and just won't get used for much benefit at all.

A message that I'd endorse is "if you want to help people seriously, you have to develop and act on your own models of the world." But for most people this just isn't an option on the table – the choices are "do something simple and easy that doesn't cost you much or require much agency" and "just chill and do whatever."

If you live in an environment where the default hobby is "think lots about the world and try to help it", then it might be that locally helping the people around you is a good strategy for helping significantly – maybe because you have access to other people's thoughts, and maybe because the people around you are more likely than average to actually go off and make a dent in the universe if you help them. But that doesn't seem true for most people.

For most people, the default option of "just give to GiveDirectly" seems pretty fine to me.

I think there is some important major shift that the EA community needs to make (which, to its credit, it seems to be making, just slowly, for the aforementioned reason that "simpler ideas outcompete nuanced ones and everyone goes through the simpler phase"). The EA community needs to come to terms with the fact that there are not going to be many overwhelming opportunities to help that don't require much thought. In particular, I think many EA-folk have a motivation which includes a mixture of "wanting to feel superior" and "wanting to feel confident they've actually found the right answer."

But actually, the two things that seem worth doing seem (to me) to basically be "GiveDirectly/GiveDirectly-esque things" and "actually build real models and try hard."

"Just donate to GiveDirectly" is fairly unsatisfying as a way to feel superior or confident – it leaves you knowing that there's still a lot of seriously broken, terrible things out there, and you can't really do anything to counterfactually fix those things without real work.

Replies from: Benquo, SaidAchmiz
comment by Benquo · 2019-05-29T16:48:30.447Z · LW(p) · GW(p)

It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there's something very deeply wrong happening, and people would do well to attend to that problem first. On the other hand, doing nothing is preferable to doing harm, and it's entirely possible that many people are actually causing harm, e.g. by generating misinformation, and it would be better if they just stopped, even if they can't figure out how to do whatever they were pretending to do.

I certainly don't think that someone donating their surplus to GiveDirectly, or living more modestly in order to share more with others, is doing a wrong thing. It's admirable to want to share one's wealth with those who have less.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-06-03T06:30:21.774Z · LW(p) · GW(p)

It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there’s something very deeply wrong happening, and people would do well to attend to that problem first.

I'm tempted to answer this statement by saying that something very deeply wrong is clearly happening, e.g., there's not nearly enough effort in the world to prevent coordination failures that could destroy most of the potential value of the universe, and attending to that problem would involve doing something besides or in addition to attending to the ordinary business of life. I feel like this is probably missing your point though. Do you want to spell out what you mean more, e.g., is there some other "something very deeply wrong happening" you have in mind, and if so what do you think people should do about it?

Replies from: Benquo
comment by Benquo · 2019-06-03T07:28:01.176Z · LW(p) · GW(p)

If people who can pay their own rent are actually doing nothing by default, that implies that our society's credit-allocation system is deeply broken. If so, then we can't reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.

Here's a simple example: Robin Hanson's written a lot about how it's not clear that health care is beneficial on the margin. This is basically unsurprising if you think there are a lot of bullshit jobs. But 80,000 Hours's medical career advice assumes that the system basically knows what it's doing and that health care delivers health on the margin - the only question is how much.

It seems to me that if an intellectual community isn't resolving these kind of fundamental confusions (and at least one side has to be deeply confused here, or at least badly misinformed), then it should expect to be very deeply confused about philanthropy. Not just in the sense of "what is the optimal strategy," but in the sense of "what does giving away money even do."

Replies from: Thrasymachus, elityre
comment by Thrasymachus · 2019-06-04T14:58:07.511Z · LW(p) · GW(p)

[I wrote the 80k medical careers page]

I don't see there being a 'fundamental confusion' here, nor even that much of a fundamental disagreement.

When I crunched the numbers on 'how much good do doctors do' it was meant to provide a rough handle on a plausible upper bound: even if we beg the question against critics of medicine (of which there are many), and even if we presume any observational marginal response is purely causal (and purely mediated by doctors), the numbers aren't (in EA terms) that exciting in terms of direct impact.

In talks, I generally use the upper 95% confidence bound or the central estimate of the doctor coefficient as a rough steer (it isn't a significant predictor, and there's reasonable probability mass on the impact being negative): although I suspect there will be generally unaccounted confounders attenuating the 'true' effect rather than colliders masking it, these sorts of ecological studies are sufficiently insensitive to either to be no more than indications - alongside the qualitative factors - that the 'best (naive) case' for direct impact as a doctor isn't promising.
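
(To make the mechanics concrete, here is a minimal sketch with entirely made-up country-level data - an illustration of the general move, not the actual 80k analysis - showing how a central estimate and an upper 95% confidence bound for a "doctor coefficient" would be read off an ecological regression:)

```python
# Minimal sketch with made-up data (not the actual 80k analysis): reading the
# central estimate and the upper 95% confidence bound of a "doctor coefficient"
# off an ecological regression of an outcome on doctor density plus a confounder.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150                                     # hypothetical country-level observations
doctors = rng.normal(2.5, 1.0, n)           # doctors per 1,000 people (made up)
log_income = rng.normal(9.0, 1.0, n)        # log income per capita (made-up confounder)
life_exp = 55 + 2.0 * log_income + 0.2 * doctors + rng.normal(0, 3, n)  # hypothetical outcome

X = sm.add_constant(np.column_stack([log_income, doctors]))
fit = sm.OLS(life_exp, X).fit()

central = fit.params[2]                     # central estimate of the doctor coefficient
lower, upper = fit.conf_int(alpha=0.05)[2]  # 95% confidence interval for that coefficient
print(f"central: {central:.2f}, 95% CI: ({lower:.2f}, {upper:.2f})")
# Treating `upper` as a generous best case is the "upper bound" move described above.
```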

There's little that turns on which side of zero our best guess falls, so long as we can be confident it is a long way down from the best candidates: on the scale of intervention effectiveness, there's not that much absolute distance between the estimates (I suspect) Hanson or I would offer. There might not be much disagreement even in coarse qualitative terms: Hanson's work here - I think - focuses on the US, and US health outcomes are a sufficiently pathological outlier in the world that I'm also unsure whether marginal US medical effort is beneficial; I'm not sure Hanson has staked out a view on whether he's similarly uncertain about positive marginal impact in non-US countries, so he might agree with my view that it is (modestly) net-positive, despite its dysfunction (neither I nor what I wrote assumes the system 'basically knows what it's doing' in the common-sense meaning).

If Hanson has staked out this broader view, then I do disagree with it, but I don't think this disagreement would indicate at least one of us has to be 'deeply confused' (this looks like a pretty crisp disagreement to me) nor 'badly misinformed' (I don't think there are key considerations one-or-other of us is ignorant of which explains why one of us errs to sceptical or cautiously optimistic). My impressions are also less sympathetic to 'signalling accounts' of healthcare than his (cf. [LW(p) · GW(p)]) - but again, my view isn't 'This is total garbage', and I doubt he's monomaniacally hedgehog-y about the signalling account. (Both of us have also argued for attenuating our individual impressions in deference to a wider consensus/outside view for all things considered judgements).

Although I think the balance of expertise leans against archly sceptical takes on medicine, I don't foresee convincing adjudication on this point coming any time soon, nor that EA can reasonably expect to be the ones to provide this breakthrough - still less for all the potential sign-inverting crucial considerations out there. Stumbling on as best we can with our best guess seems a better approach than being paralyzed until we're sure we've figured it all out.

Replies from: Benquo, Douglas_Knight
comment by Benquo · 2019-06-04T16:35:33.576Z · LW(p) · GW(p)

Something that nets out to a small or no effect because large benefits and harms cancel out is very different (with different potential for impact) from something like, say, faith healing, where you can’t outperform just by killing fewer patients. A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.

Replies from: Thrasymachus
comment by Thrasymachus · 2019-06-04T17:38:26.419Z · LW(p) · GW(p)
A marginalist analysis that assumes that the person making the decision doesn’t know their own intentions & is just another random draw of a ball from an urn totally misses this factor.

Happily, this factor has not been missed by either my profile or 80k's work here more generally. Among other things, we looked at:

  • Variance in impact between specialties and (intranational) location (1), as well as variance in earnings for earning-to-give reasons (2)
  • Areas within medicine which look particularly promising (3)
  • Why 'direct' clinical impact (either between or within clinical specialties) probably has limited variance versus (e.g.) research (4)

I also cover this in talks I have given on medical careers, as well as when offering advice to people contemplating a medical career or how to have a greater impact staying within medicine.

I still think trying to get a handle on the average case is a useful benchmark.

comment by Douglas_Knight · 2019-11-27T15:44:12.579Z · LW(p) · GW(p)
US health outcomes are a sufficiently pathological outlier in the world

I just want to register disagreement.

comment by Eli Tyre (elityre) · 2019-11-26T01:46:42.308Z · LW(p) · GW(p)
that implies that our society's credit-allocation system is deeply broken

I want to double-click on "credit-allocation system." It sounds like an important part of your model, but I don't really know what you mean. Something like "answering the question of 'who is responsible for the good in our world?'" Like I'm misallocating credit to the health sector, which is (maybe) not actually responsible for much good?

What does this have to do with whether people who can pay their rent are doing something or nothing by default? Is your claim that, by participating in the economy, they should be helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.)? And if that isn't having a positive impact, that must mean that society is collectively able to identify the places where value comes from?

I think I don't get it.

Replies from: Benquo, Raemon
comment by Benquo · 2019-11-26T11:31:18.175Z · LW(p) · GW(p)

helping by default (they pay their landlord, who buys goods, which pays manufacturers, etc.)

The exact opposite - getting paid should imply something. The naive Econ 101 view is that it implies producing something of value. "Production" is generally measured in terms of what people are willing to pay for.

If getting paid has little to do with helping others on net, then our society’s official unit of account isn’t tracking production (Talents [LW · GW]), GDP is a measurement of the level of coercion in a society (There Is a War [LW · GW]), the bullshit jobs hypothesis is true, we can’t take job descriptions at face value, and CEA’s advice to build career capital just means join a powerful gang.

This undermines enough of the core operating assumptions EAs seem to be using that the right thing to do in that case is try to build better models of what's going on, not act based on what your own models imply is disinformation.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-11-27T21:34:32.236Z · LW(p) · GW(p)

I'm trying to make sense of what you're saying here, but bear with me, we have a large inferential distance.

Let's see.

  • The Talents piece was interesting. I bet I'm still missing something, but I left a paraphrase as a comment over there.
  • I read all of "There Is a War", but I still don't get the claim, "GDP is a measurement of the level of coercion in a society." I'm going to keep working at it.
  • I basically already thought that lots of jobs are bullshit, but I might skim or listen to David Graeber's book to get more data.
    • Oh. He's the guy that wrote Debt: The First 5000 Years! (Which makes a very similar point about money as the middle parts of this post.)

Given my current understanding, I don't get either the claim that "CEA’s advice to build career capital just means join a powerful gang" or that "This undermines enough of the core operating assumptions EAs seem to be using."

I do agree that the main work to be done is figuring out what is actually going on in the world and how the world actually works.

I'm going to keep reading and thinking and try to get what you're saying.

. . .

My initial response before I followed your links, so this is at least partially obviated:

1.

The exact opposite - getting paid should imply something. The naive Econ 101 view is that it implies producing something of value. "Production" is generally measured in terms of what people are willing to pay for.

First of all... yep, it does seem pretty weird that we maybe live in a world where most people are paid but produce no wealth. As a case in point, my understanding is that a large fraction of programmers actually add negative value, by adding bugs to code.

It certainly seems correct to me, to stop and be like "There are millions of people up there in those skyscrapers, working in offices, and it seems like (maybe) a lot of them are producing literally no value. WTF?! How did we end up in a world like this! What is going on?"

My current best guess is the following: some people are creating value, huge amounts of value in total (we live in a very rich society, by historical standards), but many (most?) people are doing useless work. But for employers, the overhead of identifying which work is creating value and which work isn't is (apparently) more costly than the resources that would be saved by cutting the people that aren't producing value.

It's like Paul Graham says: in a company your work is averaged together with a bunch of other people's and it is hard or impossible to assess each person's contribution. This gives rise to a funny dynamic where a lot of the people are basically occupying a free-rider / parasitic niche: they produce ~no wealth, but they average their work with some people who do.

(To be clear: the issue is rarely people being hired to do something that in principle could be productive, but then slacking off. I would guess that it is much more frequently the case that an institution, because of its own [very hard to resolve] inadequacies, hires people specifically to do useless things.)

But an important point here is that on average, people are creating value, even if most of the human working hours are useless. The basic formula of dollars = value created still holds; it's just really noisy.
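
(A toy simulation, with entirely made-up numbers, just to illustrate the "holds on average, but really noisy" claim:)

```python
# Toy simulation, made-up numbers only: most workers produce ~nothing, a minority
# produce a lot, and pay is mostly detached from individual output. The average
# still roughly works out (pay is covered by total value created), but the
# per-person link between dollars and value is very noisy.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
productive = rng.random(n) < 0.2                                 # hypothetical: 20% create value
value = np.where(productive, rng.exponential(500_000, n), 0.0)   # value created per worker
pay = 0.8 * value.mean() + rng.normal(0, 0.1 * value.mean(), n)  # pay ~equal, funded from total output

print(f"mean value created per worker: {value.mean():,.0f}")
print(f"mean pay per worker:           {pay.mean():,.0f}")
print(f"share producing ~nothing:      {(~productive).mean():.0%}")
print(f"corr(own output, own pay):     {np.corrcoef(value, pay)[0, 1]:.2f}")  # near zero
```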

2.

It seems to me that if attending to the ordinary business of your life, including career and hobbies, amounts to doing nothing, there's something very deeply wrong happening, and people would do well to attend to that problem first.

Well, importantly, as an EA, I don't mean "doing nothing" in the sense of providing value to no one at all. I mean "doing nothing" as "doing nothing to make a dent in the big problems."

And this state of affairs isn't that surprising. My default model has it that I can engage in positive-sum trades, which do provide value to others in my economic sphere, but that by default none of that surplus gets directed at people outside of that economic sphere. The most salient example might be animals, who have no ability to advocate for themselves or trade with us. They are outside the virtuous circle of our economy, and don't benefit from it unless people like me take specific action to save them.

The same basic argument goes for people in third world countries and far-future entities.

So, yeah, this is a problem. And your average EA thinks we should attend to it. But according to the narrative, EA is already on it.

Replies from: Benquo, Benquo
comment by Benquo · 2019-12-02T18:06:44.238Z · LW(p) · GW(p)
I read all of "There Is a War", but I still don't get the claim, "GDP is a measurement of the level of coercion in a society." I'm going to keep working at it.

I think it's analytically pretty simple. GDP involves adding up all the "output" into a single metric. Output is measured based on others' willingness to pay. The more payments are motivated by violence rather than the production of something everyone is glad to have more of, the more GDP measures expropriation rather than production. There Is A War is mostly about working out the details & how this relates to macroeconomic ideas of "stimulus," "aggregate demand," etc, but if that analytic argument doesn't make sense to you, then that's the point we should be working out.
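
(A toy decomposition of the analytic point, with made-up numbers:)

```python
# Toy decomposition (made-up numbers): GDP sums all payments, whether they arise
# from voluntary production or from coerced transfers, so reading the total as
# "value produced" overstates production whenever the coerced share is nonzero.
voluntary_payments = 80.0   # hypothetical: payments for things people are glad to have more of
coerced_payments = 20.0     # hypothetical: payments motivated by violence / threat / leverage

gdp = voluntary_payments + coerced_payments   # what the headline metric adds up
production = voluntary_payments               # what a naive reading assumes GDP measures

print(f"GDP: {gdp:.0f}, production: {production:.0f}, "
      f"overstatement: {gdp / production - 1:.0%}")  # 25% overstated when 20% of payments are coerced
```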

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-12-03T01:19:42.556Z · LW(p) · GW(p)

Ok. This makes sense to me. GDP measures a mix of trades that occur due to simple mutual benefit and "trades" that occur because of extortion or manipulation.

If you look at the combined metric, and interpret it to be a measure of only the first kind of trade, you're likely overstating how much value is being created, perhaps by a huge margin, depending on what percentage of trades are based on violence.

But I'm not really clear on why you're talking about GDP at all. It seems like you're taking the claim that "GDP is a bad metric for value creation", and concluding that "interventions like GiveDirectly are misguided."

Rereading this thread, I come to

If people who can pay their own rent are actually doing nothing by default, that implies that our society's credit-allocation system is deeply broken. If so, then we can't reasonably hope to get right answers by applying simplified economic models that assume credit-allocation is approximately right, the way I see EAs doing, until we have a solid theoretical understanding of what kind of world we actually live in.

Is the argument something like...

  • 1. GDP is irreparably corrupt as a useful measure. Folks often take it as a measure of how much value is created, but it is actually just as much a measure of how much violence is being done.
  • 2. This is an example of a more general problem: All of our metrics for tracking value are similarly broken. Our methods of allocating credit don't work at all.
  • 3. Given that we don't have robust methods for allocating credit, we can't trust that anything good happens when we give money to the actual organization "GiveDirectly". For all we know that money gets squandered on activities that superficially look like helping, but are actually useless or harmful. (This is a reasonable supposition, because this is what most organizations do, on priors.)
  • 4. Given that we can't trust giving money to Give Directly does any good, our only hope for doing good is to actually make sense of what is happening in the world so that we can construct credit allocation systems on which we can actually rely.

On a scale of 0 to 10, how close was that?


Replies from: Benquo
comment by Benquo · 2019-12-03T04:31:18.250Z · LW(p) · GW(p)

This is something like a 9 - gets the overall structure of the argument right with some important caveats:

I'd make a slightly weaker claim for 2 - that credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.

An important part of the reason for 3 is that, the larger the share of "knowledge work" that we think is mostly about creating disinformation, the more one should distrust any official representations one hasn't personally checked, when there's any profit or social incentive to make up such stories. Based on my sense of the character of the people I met while working at GiveWell, and the kind of scrutiny they said they applied to charities, I'd personally be surprised if GiveDirectly didn't actually exist, or simply pocketed the money. But it's not at all obvious to me that people without my privileged knowledge should be sure of that.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-12-03T09:16:40.919Z · LW(p) · GW(p)

Ok. Great.

credit-allocation methods have to be presumed broken until established otherwise, and no adequate audit has entered common knowledge.

That does not seem obvious to me. It certainly does not seem to follow merely from the fact that GDP is not a good measure of national welfare. (In large part, because my impression is that economists say all the time that GDP is not a good measure of national welfare.)

Presumably you believe that point 2 holds, not just because of the GDP example, but because you've seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?

Is that right? Can you say more about why you expect this to be a general problem?

. . .

I have a much higher credence than you do that GiveDirectly exists and is doing basically what it says it is doing.

If I do a stack trace on why I think that...

  • I have a background expectation that the most blatant kinds of fraudulence will be caught. I live in a society that has laws, including laws about what sorts of things non-profits are allowed to do, and not do, with money. If they were lying about ever having given any money to anyone in Africa, I'm confident that someone would notice that, and blow the whistle, and the perpetrators would be in jail. (A better-hidden, but consequently less extreme, incidence of embezzlement is much more plausible, though I would still expect it to be caught eventually.)
  • They're sending some somewhat costly-to-fake signals of actually trying to help. For instance, I heard on a blog once that they were doing an RCT to see if cash transfers actually improve people's lives. (I think. I may just be wrong about the simple facts here.) Most charities don't do anything like that, and most of the world doesn't fault them for it. Plus it sounds like a hassle. The only reasons why you would organize an RCT are: 1) you are actually trying to figure out if your intervention works, 2) you have a very niche marketing strategy that involves sending costly signals of epistemic virtue, to hoodwink people like me into thinking "Yay GiveDirectly", or 3) some combination of 1 and 2, whereby you're actually interested in the answer, and also part of your motivation is knowing how much it will impress the EAs.
    • I find it implausible that they are doing strictly 2, because I don't think the idea would occur to anyone who wasn't genuinely curious. 3 seems likely.
  • Trust-chains: They are endorsed by people who are respected by people whose epistemics I trust. GiveWell endorsed them. I personally have not read GiveWell's evaluations in much depth, but I know that many people around me, including, for instance, Carl Shulman, have engaged with them extensively. Not only does everyone around me have oodles of respect for Carl, but I can personally verify (with a small sample size of interactions) that his thinking is extremely careful and rigorous. If Carl thought that GiveWell's research was generally low quality, I would expect this to be a known, oft-mentioned thing (and I would expect his picture not to be on the OpenPhil website). Carl is, of course, only an example. There are other people around whose epistemics I trust, who find GiveWell's research to be good enough to be worth talking about. (Or at least old-school GiveWell. I do have a sense that the magic has faded in recent years, as usually happens to institutions.)
    • I happen to know some of these people personally, but I don't think that's a crux. Several years ago, I was a smart but inexperienced college student. I came across LessWrong, and correctly identified that the people of that community had better epistemology than me (plus I was impressed with this Eliezer guy who was apparently making progress on these philosophical problems, in sort of the mode that I had tried to make progress in, but he was way ahead of me, and way more skilled). On LessWrong, they're talking a lot about GiveWell, and GiveWell-recommended charities. I think it's pretty reasonable to assume that the analysis going into choosing those charities is high quality. Maybe not perfect, but much better than I could expect to do myself (as a college student).

It seems to me that I'm pretty correct in thinking that GiveDirectly does what it says it does.

You disagree though? Can you point at what I'm getting wrong?

My current understanding of your view: You think that institutional dysfunction and optimized misinformation are so common that the evidence I note above is not sufficient to overwhelm the prior, and I should assume that GiveDirectly is doing approximately nothing of value (and maybe causing harm), until I get much stronger evidence otherwise. (And that evidence should be of the form that I can check with my own eyes and my own models?)



Replies from: Benquo, Benquo
comment by Benquo · 2019-12-03T16:35:10.500Z · LW(p) · GW(p)
I have a background expectation that the most blatant kinds of fraudulence will be caught.

Consider how long Theranos operated, its prestigious board of directors, and the fact that it managed to make a major sale to Walgreens before blowing up. Consider how prominent Three Cups of Tea was (promoted by a New York Times columnist), for how long, before it was exposed. Consider that official US government nutrition advice still reflects obviously distorted, politically motivated research from the early 20th Century. Consider that the MLM company Amway managed to bribe Harvard to get the right introductions to Chinese regulators. Scams can and do capture the official narrative and prosecute whistleblowers.

Consider that pretty much by definition we're not aware of the most successful scams.

Related: The Scams Are Winning

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-12-06T05:46:37.735Z · LW(p) · GW(p)

[Note that I'm shifting the conversation some. The grandparent was about things like GiveDirectly, and this is mostly talking about large, rich companies like Theranos.]

One could look at this evidence and think:

Wow. These fraudulent endeavors ran for really a long time. And the fact that they got caught means that they are probabilistically not the best-executed scams. This stuff must be happening all around us!

Or a person might look at this evidence and think:

So it seems that scams are really quite rare: there are only a dozen or so scandals like this every decade. And they collapsed in the end. This doesn't seem like a big part of the world.

Because this is a situation involving hidden evidence, I'm not really sure how to distinguish between those worlds, except for something like a randomized audit: 0.001% of companies in the economy are randomly chosen for a detailed investigation, regardless of any allegations.
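
(As a side note on what such an audit could establish: with a clean random sample and zero scams found, a simple binomial bound - the "rule of three" - still only caps the plausible scam rate at roughly 3/N. A minimal sketch, hypothetical numbers only:)

```python
# Minimal sketch (hypothetical numbers): what a randomized audit can and can't rule out.
# If n randomly chosen companies are audited and zero outright scams are found, solving
# (1 - p)^n = 0.05 gives an approximate 95% upper bound on the scam rate p (roughly 3/n).
def scam_rate_upper_bound(n_audited: int, conf: float = 0.95) -> float:
    # Exact bound for the zero-findings case: P(see 0 scams | rate p) = (1 - p)^n
    return 1 - (1 - conf) ** (1 / n_audited)

print(f"{scam_rate_upper_bound(100):.3f}")   # ~0.030: 100 clean audits still allow a ~3% scam rate
print(f"{scam_rate_upper_bound(1000):.4f}")  # ~0.0030
```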

I would expect that we live in something closer to the second world, if for no other reason than that this world looks really rich, and that wealth has to be created by something other than outright scams (which is not to say that everyone isn't also dabbling in misinformation).

I would be shocked if more than one of the S&P 500 companies was a scam on the level of Theranos. Does your world model predict that some of them are?

Replies from: Benquo
comment by Benquo · 2019-12-06T13:56:07.705Z · LW(p) · GW(p)

Coca-Cola produces something about as worthless as Theranos machines, substituting the experience of a thing for the thing itself, & is pretty blatant about it. The scams that “win” gerrymander our concept-boundaries to make it hard to see. Likewise Pepsi. JPMorgan Chase & Bank of America, in different ways, are scams structurally similar to Bernie Madoff but with a legitimate state subsidy to bail them out when they blow up. This is not an exhaustive list, just the first 4 that jumped out at me. Pharma is also mostly a scam these days; nearly all of the extant drugs that matter are already off-patent.

Also Facebook, but “scam” is less obviously the right category.

Replies from: habryka4
comment by habryka (habryka4) · 2019-12-06T19:05:12.849Z · LW(p) · GW(p)

Somewhat confused by the Coca-Cola example. I don't buy Coke very often, but it usually seems worth it to me when I do buy it (in small amounts, since I do think it tastes pretty good). Is the claim that they are not providing any value some kind of assumption about my coherent extrapolated volition?

Replies from: Benquo
comment by Benquo · 2019-12-08T05:48:58.791Z · LW(p) · GW(p)

It was originally marketed as a health tonic, but its apparent curative properties were due to the powerful stimulant and analgesic cocaine, not any health-enhancing ingredients. Later the cocaine was taken out (but the “Coca” in the name retained), so now it fools the subconscious into thinking it’s healthful with - on different timescales - mass media advertising, caffeine, and refined sugar.

It’s less overtly a scam now, in large part because it has the endowment necessary to manipulate impressions more subtly at scale.

Replies from: habryka4
comment by habryka (habryka4) · 2019-12-08T18:09:12.498Z · LW(p) · GW(p)

I mean, I agree that Coca Cola engages in marketing practices that try to fabricate associations that are not particularly truth-oriented, but that's very different from the thing with Theranos. 

I model Coca-Cola mostly as damaging for my health, and model its short-term positive performance effects as basically fully mediated via caffeine, but I still think it's providing me value above and beyond those benefits, and outweighing the costs in certain situations.

Theranos seems highly disanalogous, since I think almost no one who knew the actual extent of Theranos' capabilities, and had accurate beliefs about its technologies, would give money to them. I have pretty confident bounds on the effects of Coca-Cola, and still decide to sometimes give them my money, and would be really highly surprised if there turns out to be a fact about Coke that its internal executives are aware of (even subconsciously) that would drastically change that assessment for me, and it doesn't seem like that's what you are arguing for.

comment by Benquo · 2019-12-03T16:20:47.646Z · LW(p) · GW(p)
Presumably you believe that point 2 holds, not just because of the GDP example, but because you've seen many, many examples (like health care, which you mention above). Or maybe because you have an analytical argument that the sort of thing that happens with GDP has to generalize to other credit allocation systems?

Both - it would be worrying to have an analytic argument but not notice lots of examples, and it would require much more investigation (and skepticism) if it were happening all the time for no apparent reason.

I tried to gesture at the gestalt of the argument in The Humility Argument for Honesty. Basically, all conflict between intelligent agents contains a large information component, so if we're fractally at war with each other, we should expect most info channels that aren't immediately life-support-critical to turn into disinformation, and we should expect this process to accelerate over time.

For examples, important search terms are "preference falsification" and "Gell-Mann amnesia [LW(p) · GW(p)]".

I don't think I disagree with you on GiveDirectly, except that I suspect you aren't tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct. Quick check: what's your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-12-06T05:50:56.510Z · LW(p) · GW(p)
except that I suspect you aren't tracking some important ways your trust chain is likely to make correlated errors along the lines of assuming official statistics are correct.

Interesting.

Quick check: what's your 90% confidence interval for global population, after Googling the official number, which is around 7.7 billion?

I don't know, certainly not off by more than a half billion in either direction? I don't know how hard it is to estimate the number of people on earth. It doesn't seem like there's much incentive to mess with the numbers here.

Replies from: Raemon
comment by Raemon · 2019-12-06T06:18:47.408Z · LW(p) · GW(p)

It doesn't seem like there's much incentive to mess with the numbers here.

Guessing at potential confounders - there may be incentives for individual countries (or cities) to inflate their numbers (to seem more important) – or deflate their numbers, to avoid taxes.

comment by Benquo · 2019-12-02T18:11:39.585Z · LW(p) · GW(p)
I basically already thought that lots of jobs are bullshit, but I might skim or listen to David Graeber's book to get more data.

It's not really about how many jobs are bullshit, so much as what it means to do a bullshit job. On Graeber's model, bullshit jobs are mostly about propping up the story that bullshit jobs are necessary for production. Moral Mazes [LW · GW] might help clarify the mechanism, and what I mean about gangs - a lot of white-collar work involves a kind of participatory business theater, to prop up the ego claims of one's patron.

The more we think the white-collar world works this way, the more skeptical we should be of the literal truth of claims to be "working on" some problem or other using conventional structures.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-12-03T01:55:19.672Z · LW(p) · GW(p)

My intuitive answer to the question "What is a gang?":

  • A gang is an organization of thugs that claims resources, like territory or protection money, via force or the threat of force.

Is that close to how you are using the term? What's the important/relevant feature of a "gang", when you say "CEA’s advice to build career capital just means join a powerful gang"?

Do you mean something like the following? (This is a probably incorrect paraphrase, not a quote)

A gang extracts resources from their victims by requiring they pay tribute or "protection money". Ostensibly, this involves the victim (perhaps a small business owner) paying the gang for a service, protection from other gangs. But in actuality, this tribute represents extortion: all parties involved understand that the gang is making a threat, "pay up, or we'll attack you."
Most white collar workers are executing a similar maneuver, except that instead of using force, they are corrupting the victim's ability to make sense of the situation. The management consulting firm is implicitly making the claim, "You need us. You can't make good decisions without us", to some client, while in actuality the consultancy creates some very official-looking documents that have almost no content.
Or, in the same vein, there is an ecosystem of people around a philanthropist, all of whom are following their incentives to validate the philanthropist's ego, and to convince him that / appear as if they're succeeding at his charitable goals.
So called "career capital" amounts to having more prestige, or otherwise be better at convincing people, and therefore being able to extort larger amounts.

Am I on the right track at all?

Or is it more direct than that?

Most so called "value creation" is actually adversarial extraction of value from others, things like programmers optimizing a social media feed to keep people on their platform for longer, or ad agencies developing advertisements that cause people to buy products against their best interests.
Since most of the economy is a zero-sum game like this, any "career capital" must cash out in terms of being better at this exploitative process, or providing value to people / entities that do the exploiting (which is the same thing, but a degree or a few degrees removed).

Is any of that right?

Replies from: Benquo
comment by Benquo · 2019-12-03T04:40:46.480Z · LW(p) · GW(p)

Overall your wording seems pretty close.

Most white collar workers are executing a similar maneuver, except that instead of using force, they are corrupting the victim's ability to make sense of the situation.

I think it's actually a combination of this, and actual coordination to freeze out marginal gangs or things that aren't gangs, from access to the system. Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc. Everyone I've talked with about their experience pitching startups has reported that making judgments on the merits is at best highly noncentral behavior.

If enough of the economy is cartelized, and the cartels are taxing noncartels indirectly via the state, then it doesn't much matter whether the cartels apply force directly, though sometimes they still do.

So called "career capital" amounts to having more prestige, or otherwise be better at convincing people, and therefore being able to extort larger amounts.

It basically involves sending or learning how to send a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and integrating socially into a network.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-12-06T05:48:23.482Z · LW(p) · GW(p)

If I replaced the word "gang" here, with the word "ingroup" or "club" or "class", does that seem just as good?

In these sentences in particular...

Venture capitalists, for example, will tend to fund people who feel like members of the right gang, use the right signifiers in the right ways, went to the right schools, etc.

and

It basically involves sending or learning how to send a costly signal of membership in a prestigious gang, including some mixture of job history, acculturation, and integrating socially into a network.

...I'm tempted to replace the word "gang" with the word "ingroup".

My guess is that you would say, "An ingroup that coordinates to exclude / freeze out non-ingroup-members from a market is a gang. Let's not mince words."

Replies from: Benquo
comment by Benquo · 2019-12-06T14:15:44.921Z · LW(p) · GW(p)

Maybe more specifically an ingroup that takes over a potentially real, profitable social niche, squeezes out everyone else, and uses the niche’s leverage to maximize rent extraction, is a gang.

comment by Raemon · 2019-11-26T02:53:49.959Z · LW(p) · GW(p)

While I'm not sure I get it either, I think Benquo's frame has a high-level disagreement with the sort of question that utilitarianism asks in the first place (as well as the sort of questions that many non-utilitarian variants of EA are asking). Or rather, it objects to the frame in which the question is often asked.

My attempt to summarize the objection (curious how close this lands for Benquo) is:

"Much of the time, people have internalized moral systems not as something they get to reason about and have agency over, but as something imposed from outside, that they need to submit to. This is a fundamentally unhealthy way to relate to morality.

A person in a bad relationship is further away from a healthy relationship, than a single person, because first the person has to break up with their spouse, which is traumatic and exhausting. A person with a flawed moral foundation trying to figure out how to do good is further away from figuring out how to do good than a person who is just trying to make a generally good life for themselves.

This is important:

a) because if you try to impose your morality on people who are "just making a good life for themselves", you are continuing to build societal momentum in a direction that alienates people from their own agency and wellbeing.

b) "just making a good life for themselves" is, in fact, one of the core goods one can do, and in a just world it'd be what most people were doing.

I think There Is a War [LW · GW] is one of the earlier Benquo pieces exploring this (or: probably there are earlier-still ones, but it's the one I happened to re-read recently). A more recent comment is his objection to Habryka's take on integrity (link to a comment deep in the conversation [LW(p) · GW(p)] that gets to the point, but might require reading the thread for context).

My previous attempt to pass his ITT [LW(p) · GW(p)] may also provide some context.

 

comment by Said Achmiz (SaidAchmiz) · 2019-05-29T15:46:14.560Z · LW(p) · GW(p)

For most people, there are going to be something like three choices:

  • do nothing (other than, like, having fun hobbies and bolstering your own career for your own gain)

  • give money locally to things that feel good, without reflecting much,

  • spend some small amount of time thinking about where to give on the global scale

Why is it impossible to give money locally, yet spend some small amount of time thinking about where/how to do so? Is effectiveness incompatible with philanthrolocalism…?

Replies from: gjm
comment by gjm · 2019-05-31T10:56:28.395Z · LW(p) · GW(p)

Many people find that thinking about effectiveness rapidly makes local giving seem a less attractive option.
The thought processes I can see that might lead someone to give locally in pursuit of effectiveness are quite complex ones:

  • Trading off being able to do more good per dollar in poorer places against the difficulty of ensuring that useful things actually happen with those dollars. Requires careful thought about just how severe the principal/agent problems, lack of visibility, etc, are.
  • Giving explicitly higher weighting to the importance of people and causes located near to oneself, and trading off effectiveness against closeness. Requires careful thought about one's own values, and some sort of principled way of mapping closeness to importance.

Those are both certainly possible, but I think they take more than a "small amount of time thinking". Of course there are other ways to end up prioritizing local causes, but I think those go in the "without reflecting much" category. It seems to me that a modest amount of (serious) thinking about effectiveness makes local giving very hard to justify for its effectiveness, unless you happen to have a really exceptional local cause on your doorstep.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-05-31T20:19:53.773Z · LW(p) · GW(p)

I’m afraid I completely disagree, and in fact find this view somewhat ridiculous.

“Giving explicitly higher weighting to the importance of people and causes located near to oneself” (the other clause in that sentence strikes me as tendentious and inaccurate…) is not, in fact, complex. It is a perfectly ordinary—and perfectly sensible—way of thinking about, and valuing, the world. That doing good in contexts distant from oneself (both in physical and in social/culture space) is quite difficult (the problems you allude to are indeed very severe, and absolutely do not warrant a casual dismissal) merely turns the aforementioned perspective from “perfectly sensible” to “more sensible than any other view, absent some quite unusual extenuating circumstances or some quite unusual values”.

Now, it is true that there is a sort of “valley of bad moral philosophy”, where if you go in a certain philosophical direction, you will end up abandoning good sense, and embracing various forms of “globalist” perspectives on altruism (including the usual array of utilitarian views), until you reach a sufficient level of philosophical sophistication to realize the mistakes you were making. (Obviously, many people never make it out of the valley at all—or at least they haven’t yet…) So in that sense, it requires ‘more than a “small amount of thinking”’ to get to a “localist” view. But… another alternative is to simply not make the mistakes in question in the first place.

Finally, it is a historical and terminological distortion (and a most unfortunate one) to take “effectiveness” (in the context of discussions of charity/philanthropy) to mean only effectiveness relative to a moral value. There is nothing at all philosophically inconsistent in selecting a goal (on the basis, presumably, of your values), and then evaluating effectiveness relative to that goal. There is a good deal of thinking, and of research, to be done in service of discovering what sort of charitable activity most effectively serves a given goal; should someone who thinks and researches thus, and engages in charitable work or giving on the basis of the conclusions reached, be described as “giv[ing] money locally to things that feel good, without reflecting much”? That seems nonsensical to me…

comment by Evan_Gaensbauer · 2019-05-30T01:05:13.688Z · LW(p) · GW(p)

I haven't read your entire series of posts on GiveWell and effective altruism, so I'm basing this comment mostly off of just this post. It seems like it is jumping all over the place.

You say:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

This sets up a false dichotomy. Both the Gates Foundation and Good Ventures are focused on areas in addition to funding interventions in the developing world. Obviously, they both believe those other areas, e.g., in Good Ventures' case, existential risk reduction, present them with the opportunity to prevent just as many, if not more, deaths than interventions in the developing world. Of course, a lot of people disagree with the idea that something like AI alignment, which Good Ventures funds, is in any way comparable to cost-effective interventions in the developing world in terms of how many deaths it prevents, its cost-effectiveness, or its moral value. Yet given that you used to work for GiveWell, and are now much more focused on AI alignment, it doesn't seem like you're one of those people.

If you were one of those people, you would be the kind of person to think it quite objectionable that Good Ventures is not spending all their money on developing-world interventions, and is instead spreading out their grants over time to shape the longer-term future via AI safety and other focus areas. If you are that kind of person, i.e., if you do believe it is objectionable that Good Ventures is 'hoarding' their money for other focus areas like AI alignment rather than treating developing-world interventions as their top priority, that is not at all clear or obvious from this post.

Unless you believe that, then right here there is a third option, other than "the Gates Foundation and Good Ventures are hoarding money at the price of millions of deaths" and "the numbers are wildly exaggerated". That is, both foundations believe the money they are reserving for focus areas other than developing-world interventions isn't being hoarded at the expense of millions of lives. Presumably, this is because both foundations also believe the counterfactual expected value of these other focus areas is at least comparable to the expected value of developing-world interventions.

If the Gates Foundation and Good Ventures appear, across the proportions of their endowments they've respectively allotted to developing-world interventions and other focus areas, not to be giving away their money as quickly as they could while still being as effective as possible, then objecting to that would make sense. However, that would be a separate thesis, and one you haven't covered in this post. Were you to put forward such a thesis, you've already laid out the case for what's wrong with a foundation like Good Ventures not fully funding the developing-world interventions of GiveWell's recommended charities each year.

Yet you would still need to make additional arguments for what Good Ventures is doing wrong in granting only as much to another focus area like AI alignment as they annually do now, instead of grantmaking at a much higher annual rate or volume. Were you to do that, it would be appropriate to point out what is wrong with the reasons an organization like the Open Philanthropy Project (Open Phil) doesn't grant much more to their other focus areas each year.

For example, one reason it wouldn't make sense for Open Phil to be granting 100x as much in total to AI risk each year as they are now, starting this year, is that it's not clear AI risk as a field currently has that much room for more funding. It is at least not clear AI risk organizations could sustain such a high growth rate if their grants from Open Phil were 100x bigger than they are now. That's an entirely different point from any you made in this post. Also, as far as I'm aware, it isn't an argument you've made anywhere else.

Given that you are presumably familiar with these considerations, it seems to me you should have been able to anticipate the possibility of the third option. In other words, unless you're going to make the case that either:

  • it is objectionable for a foundation like Good Ventures to reserve some of their endowment for the long-term development of a focus area like AI risk, instead of using it all to fund cost-effective developing-world interventions, and/or;
  • it is objectionable Good Ventures isn't funding AI alignment more than they currently are, and why;

you should have been able to tell in advance that the dichotomy you presented is indeed a false one. It seems like, of the two options in the dichotomy you presented, you believe cost-effectiveness estimates like those from GiveWell are wildly exaggerated. I don't know why you presented it as though you thought it might just as easily be either of the two scenarios, but the fact that you're exactly the kind of person who should have been able to anticipate a plausible third scenario and didn't undermines the point you're trying to make.

Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

One thing that falls out of my above commentary is that, since it is not clearly the case that only one of the two scenarios you presented is true, it is not necessarily the case either that the mentioned cost-effectiveness estimates "have to be interpreted as marketing copy designed to control your behaviour". What's more, you've presented another false dichotomy here. It is not the case that GiveWell's cost-effectiveness estimates must be exclusively one of either:

  • severely distorted marketing copy designed for behavioural control.
  • unbiased estimates designed to improve the quality of your decision-making process.

Obviously, GiveWell's estimates aren't unbiased. I don't recall GiveWell ever claiming to be unbiased, although it is a problem for other actors in EA to treat GiveWell's cost-effectiveness estimates as unbiased. I recall that, from reading a couple of posts from your series on GiveWell, it seemed as though you were trying to hold GiveWell responsible for the exaggerated rhetoric made by others in EA using GiveWell's cost-effectiveness estimates. It seems like you're doing that again now. I never understood then, and I don't understand now, why you've tried explaining all this as if GiveWell is responsible for how other people are misusing their numbers. Perhaps GiveWell should do more to discourage a culture of exaggeration and bluster in EA built on people using their cost-effectiveness estimates and prestige as a charity evaluator to make claims about developing-world interventions that aren't actually backed up by GiveWell's research and analysis.

Yet that is another, different argument you would have to make, and one that you didn't. To hold GiveWell exclusively culpable, as you have in the past and present, for how their cost-effectiveness estimates and analyses have been misused would only be justified by some kind of evidence that GiveWell is actively trying to cultivate a culture of exaggeration and bluster and shiny-distraction-via-prestige around themselves. I'm not saying no such evidence exists, but if it does, you haven't presented any of it.

We should be more skeptical, not less, of vague claims by the same parties to even more spectacular returns on investment for speculative, hard to evaluate interventions, especially ones that promise to do the opposite of what the argument justifying the intervention recommends.

You make this claim as though it might be the exact same people in the organizations of GiveWell, Open Phil, and Good Ventures who are responsible for all the following decisions:

  • presenting GiveWell's cost-effectiveness estimates in the way they do.
  • making recommendations to Good Ventures via GiveWell about how much Good Ventures should grant to each of GiveWell's recommended charities.
  • Good Ventures' stake in OpenAI.

However, it isn't the same people making all of these decisions across these 3 organizations.

  • Dustin Moskovitz and Cari Tuna are ultimately responsible for what kinds of grants Good Ventures makes, regardless of focus area, but they obviously delegate much decision-making to Open Phil.
  • Good Ventures obviously has tremendous influence over how GiveWell conducts their research and analysis to reach particular cost-effectiveness estimates, but by all appearances Good Ventures has let GiveWell operate with a great deal of autonomy, and hasn't been trying to influence GiveWell to dramatically alter how they conduct their research and analysis. Thus, it would make sense to look to GiveWell, and not Good Ventures, for what to make of their research and analysis.
  • Elie Hassenfeld is the current executive director of GiveWell, and thus is the one to be held ultimately accountable for GiveWell's cost-effectiveness estimates and recommendations to Good Ventures. Holden Karnofsky is a co-founder of GiveWell, but for a long time has been focusing full-time on his role as executive director of Open Phil. Holden no longer co-directs GiveWell with Elie.
  • As ED of Open Phil, Holden has spearheaded Open Phil's work in, and Good Ventures' funding of, AI risk research.
  • That there is a division of labour whereby Holden has led Open Phil's work, and Elie GiveWell's, has been common knowledge in the effective altruism movement for a long time.

What many people disagreed with about Open Phil recommending Good Ventures take a stake in OpenAI, and Holden Karnofsky consequently being made a board member of OpenAI, is based on the particular roles played by the people involved in the grant investigation, which I won't go through here. Also, like yourself, on the expectation OpenAI may make the state of things in AI risk worse rather than better, based on either OpenAI's ignorance or misunderstanding of how AI alignment research should be conducted, at least in the eyes of many people in the rationality and x-risk reduction communities.

The assertion that GiveWell is wildly exaggerating their cost-effectiveness estimates is an assertion that the numbers are being fudged at a different organization than Open Phil. The common denominator is of course that Good Ventures made grants based on recommendations from both Open Phil and GiveWell. Holden and Elie are co-founders of both Open Phil and GiveWell. However, with the two separate cases of GiveWell's cost-effectiveness estimates, and Open Phil's process for recommending Good Ventures take a stake in OpenAI, these are two separate organizations, run by two separate teams, led separately by Elie and Holden respectively. If in each of the cases you present, of GiveWell and of Open Phil's support for OpenAI, something wrong has been done, they are two very different kinds of mistakes made for very different reasons.

Again, Good Ventures is ultimately accountable for grants made in both cases. You could hold each organization accountable separately, but when you refer to them as the "same parties", you're making it out as though Good Ventures, and their satellite organizations, are either, generically, incompetent or dishonest. I say "generically" because, while you set it up that way, you know as well as anyone the specific ways in which the two cases of GiveWell's estimates, and Open Phil's/Good Ventures' relationship with OpenAI, differ. You know this because you have been one of the most prominent individual critics, if not the most prominent, in both cases for the last few years.

Yet when you call them all the "same parties", you're treating both cases as if the 'family' of Good Ventures and surrounding organizations generally can't be trusted, because it's opaque to us how they come to make the decisions that lead to the dishonest or mistaken outcomes you've alleged. Yet you're one of the people who made clear to everyone else how the decisions were made; who the different people/organizations making the decisions were; and what one might find objectionable about them.

To substantiate the claim that the two different cases of GiveWell's estimates, and Open Phil's relationship to OpenAI, are sufficient grounds to reach the conclusion that none of these organizations, nor their parent foundation Good Ventures, can generally be trusted, you could have held Good Ventures accountable for not being diligent enough in monitoring the fidelity of the recommendations they receive from either GiveWell or Open Phil. Yet you didn't do that. You could have also, now or in the past, tried to make the arguments that GiveWell and Open Phil should each separately be held accountable for what you see as their mistakes in the two separate cases. Yet you didn't do that either.

Making any of those arguments would have made sense. Yet what you did is you treated it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases. Not even all 3 organizations are involved in both cases. To summarize: the two cases of GiveWell's estimates and Open Phil's relationship to OpenAI, if they are problematic, are not the same kinds of problems caused by Good Ventures for the same reasons. Yet you're making it out as though they are.

It might make more sense if you were someone else, someone who just saw the common connection to Good Ventures and didn't know how to go about criticizing them other than to point out they were sloppy in both cases. Yet you know everything I've mentioned about who the different people are in each of the two cases, the different kinds of decisions each organization is responsible for, and how they differ in how they make those decisions. So, you know how to hold each organization separately accountable for what you see as their separate mistakes. You know these things because you:

  • have identified as an effective altruist for several years.
  • have been a member of the rationality community for several years.
  • are a former employee of GiveWell.
  • have, since leaving GiveWell, transitioned to focusing more of your time on AI alignment.

Yet you make it out as though Good Ventures, GiveWell, and Open Phil are some unitary blob that makes poor decisions. If you had made any one, or even all, of the specific alternative arguments I suggested for holding each of the 3 organizations individually accountable, it would have been a lot easier for you to make a solid and convincing case than the one you've actually made about these organizations. Because you didn't, this is another instance of you undermining what you yourself are trying to accomplish with a post like this.

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent.

You started this post off with what's wrong with Peter Singer's cost-effectiveness estimates from his 1997 essay. Then you pointed out what you see as similar errors by specific EA-aligned organizations today. Then you bridged to how, because funding gaps are illusory given the erroneous cost-effectiveness estimates, the Gates Foundation and Good Ventures are doing much less than they should with regard to developing-world interventions.

Then, you zoom in on what you see as the common pattern of bad recommendations being given to Good Ventures by Open Phil and GiveWell. Yet the two cases of recommendations you've provided come from 2 separate organizations that make their decisions and recommendations in very different ways, and that are run by 2 different teams of staff, as I pointed out above. And since, as I've established, you've known all this in intimate detail for years, you're making arguments that make much less sense than the ones you could have made based on the information available to you.

None of that has anything to do with the Gates Foundation. You told me in response to another comment I made on this post that it was another recent discussion on LW where the Gates Foundation came up that inspired you to make this post. You made your point about the Gates Foundation. Then, that didn't go anywhere, because you made unrelated points about unrelated organizations.

For the record, when you said:

If you give based on mass-marketed high-cost-effectiveness representations, you're buying mass-marketed high-cost-effectiveness representations, not lives saved. Doing a little good is better than buying a symbolic representation of a large amount of good. There's no substitute for developing and acting on your own models of the world.

and

Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.

none of that applies to the Gates Foundation, because the Gates Foundation isn't an EA-aligned organization "mass-marketing high cost effectiveness representations" in a bid to get small, individual donors to build a mass movement of effective charitable giving to fill illusory funding gaps it could easily fill itself. Other things being equal, the Gates Foundation could obviously fill the funding gap. But none of the rest applies to the Gates Foundation, and it would have to for it to make sense that this post, and its thesis, were inspired by mistakes made by the Gates Foundation rather than just by EA-aligned organizations.

However, going back to "the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent", it seems like you're claiming the thesis of Singer's 1997 essay, and the basis for effective altruism as a movement(?), are predicated exclusively on reliably nonsensical cost-effectiveness estimates from GiveWell/Open Phil, not just for developing-world interventions, but in general. None of that is true, because Singer's thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer's thesis isn't the exclusive basis for the effective altruism movement. Even if the argument were logically valid, it would not be sound either way, because, as I've pointed out above, the premise that it makes sense to treat GiveWell, Open Phil, and Good Ventures as a unitary actor is false.

In other words, because "mass-marketed high-cost-effectiveness representations" are not the foundation of "the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent" in general, and certainly aren't some kind of primary basis for effective altruism, if that's something you were suggesting, your conclusion destroys nothing.

To summarize:

  • you knowingly presented a false dichotomy about why the Gates Foundation and Good Ventures don't donate their entire endowments to developing-world interventions.
  • you knowingly set up a false dichotomy whereby either everyone has been acting the whole time as if GiveWell's and Open Phil's cost-effectiveness estimates are unbiased, or the reason they are wildly exaggerated is that those organizations are deliberately trying to manipulate people's behaviour.
  • you cannot claim you were unaware these dichotomies are false, because the evidence with which you present them is your own prior conclusions, drawn in part from your personal and professional experience.
  • you said this post was inspired by the point you made about the Gates Foundation, but that has nothing to do with the broader arguments you've made about Good Ventures, Open Phil, or Givewell, and those arguments don't back the conclusion you've consequently drawn about utilitarianism and effective altruism.

In this post, you've raised some broad concerns about things happening in the effective altruism movement that I think are worth serious consideration.

My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that this would be an unfair way to save lives.

I don't believe the rationale for why GiveWell doesn't recommend that Good Ventures fully fund GiveWell's top charities totally holds up, and I'd like to understand better why they don't. I think GiveWell maybe should recommend that Good Ventures fully fund its own top charities each year.

Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.

The concern that EA has a tendency to move people too far away from these more ordinary and concrete aspects of their lives is a valid one.

I am also unhappy with much of what has happened relating to OpenAI.

All these are valid concerns that would be much easier to take seriously from you if you presented arguments for them on their own, as opposed to presenting them as a few of many different assertions that relate to each other, at best, very tenuously, in a big soup of an argument against effective altruism that doesn't logically hold up, for the litany of unresolved issues I've pointed out above. It's also not clear why you wouldn't have realized any of this before you made this post, since all the knowledge that served as evidence for your premises was available to you beforehand, as information you yourself had published on the internet.

Even if all the apparent leaps of logic in this post are artifacts of it being a truncated summary of your entire, extensive series of posts on GiveWell and EA, the structure of this one post undermines the point(s) you're trying to make with it.

Replies from: Benquo, Benquo, Benquo, Benquo
comment by Benquo · 2019-06-24T19:08:54.667Z · LW(p) · GW(p)

I think I can summarize my difficulties with this comment a bit better now.

(1) It's quite long, and brings up many objections that I dealt with in detail in the longer series I linked to. There will always be more excuses someone can generate that sound facially plausible if you don't think them through. One has to limit scope somehow, and I'd be happy to get specific constructive suggestions about how to do that more clearly.

(2) You're exaggerating the extent to which Open Philanthropy Project, Good Ventures, and GiveWell, have been separate organizations. The original explanation of the partial funding decision - which was a decision about how to recommend allocating Good Ventures's capital - was published under the GiveWell brand, but under Holden's name. My experience working for the organizations was broadly consistent with this. If they've since segmented more, that sounds like an improvement, but doesn't help enough with the underlying revealed preferences problem.

Replies from: Raemon
comment by Raemon · 2019-06-24T21:12:33.068Z · LW(p) · GW(p)
I'd be happy to get specific constructive suggestions about how to do that more clearly.

I don't know that this suggestion is best – it's a legitimately hard problem – but a policy I think would be pretty reasonable is:

When responding to lengthy comments/posts that include at least 1-2 things you know you dealt with in a longer series, one option is to simply leave it at: "hmm, I think it'd make more sense for you to read through this longer series and think carefully about it before continuing the discussion" rather than trying to engage with any specific points.

And then shifting the whole conversation into a slower mode, where people are expected to take a day or two in between replies to make sure they understand all the context.

(I think I would have had similar difficulty responding to Evan's comment as what you describe here)

Replies from: Benquo
comment by Benquo · 2019-06-25T00:39:07.623Z · LW(p) · GW(p)

To clarify a bit - I'm more confused about how to make the original post more clearly scope-limited, than about how to improve my commenting policy.

Evan's criticism in large part deals with the fact that there are specific possible scenarios I didn't discuss, which might make more sense of e.g. GiveWell's behavior. I think these are mostly not coherent alternatives, just differently incoherent ones that amount to changing the subject.

It's obviously not possible to discuss every expressible scenario. A fully general excuse like "maybe the Illuminati ordered them to do it as part of a secret plot," for instance, doesn't help very much, since that posits an exogenous source of complications that isn't very strongly constrained by our observations, and doesn't constrain our future anticipations very well. We always have to allow for the possibility that something very weird is going on, but I think "X or Y" is a reasonable shorthand for "very likely, X or Y" in this context.

On the other hand, we can't exclude scenarios arbitrarily. It would have been unreasonable for me, on the basis of the stated cost-per-life-saved numbers, to suggest that the Gates Foundation is, for no good reason, withholding money that could save millions of lives this year, when there's a perfectly plausible alternative - that they simply don't think this amazing opportunity is real. This is especially plausible when GiveWell itself has said that its cost per life saved numbers don't refer to some specific factual claim.

"Maybe partial funding because AI" occurred to enough people that I felt the need to discuss it in the long series (which addressed all the arguments I'd heard up to that point), but ultimately it amounts to a claim that all the discourse about saving "dozens of lives" per donor is beside the point since there's a much higher-leverage thing to allocate funds to - in which case, why even engage with the claim in the first place?

Any time someone addresses a specific part of a broader issue, there will be countless such scope limitations, and they can't all be made explicit in a post of reasonable length.

comment by Benquo · 2019-06-06T17:14:40.424Z · LW(p) · GW(p)

Yet what you did is you treated it as though Givewell, Open Phil, and Good Ventures all play the same kind of role in both cases. Not even all 3 organizations are involved in both cases.

They share a physical office! Good Ventures pays for it! I'm not going to bother addressing comments this long in depth when they're full of basic errors like this.

Replies from: habryka4, Evan_Gaensbauer, Evan_Gaensbauer
comment by habryka (habryka4) · 2019-06-07T00:08:32.224Z · LW(p) · GW(p)

For the record, this is no longer going to be true starting in I think about a month, since GiveWell is moving to Oakland and Open Phil is staying in SF.

comment by Evan_Gaensbauer · 2019-06-06T23:44:27.805Z · LW(p) · GW(p)

Otherwise, here is what I was trying to say:

1. GiveWell focuses on developing-world interventions, and not AI alignment or any other focus area of Open Phil, which means they aren't responsible for anything to do with OpenAI.

2. It's unclear from what you write what role, if any, Open Phil plays in the relationship between GiveWell and Good Ventures, i.e., in GiveWell's annual recommendations to Good Ventures. If it were clear Open Phil was somehow an intermediary in that regard, then your treating all 3 projects under 1 umbrella as 1 project with no independence between any of them might make sense. You didn't establish that, so it doesn't make sense.

3. Good Ventures signs off on all the decisions GiveWell and Open Phil make, and it should be held responsible for the decisions of both. Yet you know that there are people who work for GiveWell and Open Phil who make decisions that are completed before Good Ventures signs off on them. Or I assume you do, since you worked for GiveWell. If you somehow know it's all top-down both ways, that Good Ventures tells Open Phil and GiveWell each what it wants from them, and Open Phil and GiveWell just deliver the package, then say so.

Yes, they do share the same physical office. Yes, Good Ventures pays for it. Shall I point to mistakes made by one of MIRI, CFAR, or LW, but not more than one, and then link the mistake made, whenever, and however tenuously, to all of those organizations?

Should I do the same to any two or more other AI alignment/x-risk organizations you favour, who share offices or budgets in some way?

Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a "Vassar crowd" that formed a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you've done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael's friends in the bunch too, along with as much of the rationality community as I feel like? After all, you're all friends, and you decided to make the effort together, even though you each made your own individual contributions.

I won't do those things. Yet that is what it would be for me to behave as you are behaving. I'll ask you one more question about what you might do: when can I expect you to publicly condemn FHI on the grounds it's justified to do so because FHI is right next door to CEA, yet Nick Bostrom lacks the decency to go over there and demand the CEA stop posting misleading stats, lest FHI break with the EA community forevermore?

comment by Evan_Gaensbauer · 2019-06-06T23:49:38.183Z · LW(p) · GW(p)
I'm not going to bother addressing comments this long in depth when they're full of basic errors like this.

While there is what you see as at least one error in my post, there are many items I see as errors in your post that I will bring to everyone's attention. It will be revised, edited, and polished so that it no longer contains what errors you see in it, or at least so that what I am and am not saying won't be ambiguous. It will be a top-level article on both the EA Forum and LW. A large part of it is going to be that you are at best using extremely sloppy arguments, and at worst making blatant attempts to use misleading info to convince others to do what you want, just as you accuse Good Ventures, Open Phil, and GiveWell of doing. One theme will be that you're still in the x-risk space, employed in AI alignment, and willing to do this to your former employer, which is also involved in the x-risk/AI alignment space. So, while you may not want to bother with addressing these points, I imagine you will have to eventually for the sake of your reputation.

comment by Benquo · 2019-06-06T17:12:29.306Z · LW(p) · GW(p)

Singer’s thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, and Singer’s thesis isn’t the exclusive basis for the effective altruism movement.

Then why do Singer and CEA keep making those exaggerated claims? I don't see why they'd do that if they didn't think it was responsible for persuading at least some people.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-06T23:13:27.702Z · LW(p) · GW(p)
Then why do Singer and CEA keep making those exaggerated claims?

I don't know. Why don't you ask Singer and/or the CEA?

I don't see why they'd do that if they didn't think it was responsible for persuading at least some people.

They probably believe it is responsible for persuading at least some people. I imagine the CEA does it through some combo of revering Singer, thinking it's good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they're presented in.

Replies from: Benquo
comment by Benquo · 2019-06-07T14:48:42.360Z · LW(p) · GW(p)

Why don't you ask Singer and/or the CEA?

I don't expect to get an honest answer to "why do you keep making dishonest claims?", for reasons I should hope are obvious. I had hoped I might get any answer at all from you about why *you* (not Singer or CEA) claim that Singer's thesis is not based exclusively on a specific set of cost-effectiveness estimates about specific causes from specific organizations, or why you think it's relevant that Singer's thesis isn't the exclusive basis for the effective altruism movement.

comment by Benquo · 2019-06-06T17:17:17.320Z · LW(p) · GW(p)

I don’t recall Givewell ever claiming to be unbiased, although it is a problem for other actors in EA to treat Givewell’s cost-effectiveness estimates as unbiased.

Pretty weird that restating a bunch of things GiveWell says gets construed as an attack on GiveWell (rather than the people distorting what it says), and that people keep forgetting or not noticing those things, in directions that make giving based on GiveWell's recommendations seem like a better deal than it is. Why do you suppose that is?

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-06T23:53:58.995Z · LW(p) · GW(p)

I believe it's because people get their identities very caught up in EA, and, for EAs focused on global poverty alleviation, in GiveWell and its recommended charities. So, when someone like you criticizes GiveWell, a lot of them react in primarily emotional ways, creating a noisy space where the sound of messages like yours gets lost. So, the points you're trying to make about GiveWell, and the similar points many others have tried making, don't stick for enough of the EA community, or whoever else the relevant groups of people are. Thus, in the collective memory of the community, these things are forgotten or not noticed. Then, the cycle repeats itself each time you write another post like this.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-06-07T00:26:51.268Z · LW(p) · GW(p)

So, EA largely isn't about actually doing altruism effectively (which requires having correct information about what things actually work, e.g. estimates of cost per life saved, and not adding noise to conversations about these), it's an aesthetic identity movement around GiveWell as a central node, similar to e.g. most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it), which is also claiming credit for, literally, evaluating and acting towards the moral good (as environmentalism claims credit for evaluating and acting towards the health of the planet). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated values of EA, EA-as-it-is ought to be replaced with something very, very different.

[EDIT: noting that what you said in another comment also agrees with the aesthetic identity movement view: "I imagine the CEA does it through some combo of revering Singer, thinking it’s good for optics, and not thinking the level of precision at which the error is taking place is so grievous as to be objectionable in the context they’re presented in."]

Replies from: SaidAchmiz, Evan_Gaensbauer, Raemon
comment by Said Achmiz (SaidAchmiz) · 2019-06-07T00:54:36.731Z · LW(p) · GW(p)

I agree with your analysis of the situation, but I wonder whether it’s possible to replace EA with anything that won’t turn into exactly the same thing. After all, the EA movement is the result of some people noticing that much of existing charity is like this, and saying “we should replace that with something very, very different”…

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-06-07T01:01:01.164Z · LW(p) · GW(p)

And EA did better than the previous things, along some important dimensions! And people attempting to do the next thing will have EA as an example to learn from, which will (hopefully) prompt them to read and understand sociology, game theory, etc. The question of "why do so many things turn into aesthetic identity movements" is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.

Success is nowhere near guaranteed, and total success is quite unlikely, but, trying again (after a lot of study and reflection) seems like a better plan than just continuing to keep the current thing running.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-06-07T01:22:50.909Z · LW(p) · GW(p)

The question of “why do so many things turn into aesthetic identity movements” is an interesting and important one, and, through study of this (and related) questions, it seems quite tractable to have a much better shot at creating something that produces long-term value, than by not studying those questions.

I agree that studying this is quite important. (If, of course, such an endeavor is entered into with the understanding that everyone around the investigators, and indeed the investigators themselves, have an interest in subverting the investigation. The level of epistemic vigilance required for the task is very unusually high.)

It is not obvious to me that further attempts at successfully building the object-level structure (or even defining the object-level structure) are warranted, prior to having substantially advanced our knowledge on the topic of the above question. (It seems like you may already agree with me, on this; I am not sure if I’m interpreting your comment correctly.)

comment by Evan_Gaensbauer · 2019-06-08T21:24:09.471Z · LW(p) · GW(p)

I'm going to flip this comment on you, so you can understand how I'm seeing it, and why I fail to see why the point you're trying to make matters.

So, rationality largely isn't actually about doing thinking clearly (which requires having correct information about what things actually work, e.g., well-calibrated priors, and not adding noise to conversations about these), it's an aesthetic identity movement around HPMoR as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.

One could nitpick that HPMoR has done much more to save lives through AI alignment than GiveWell has ever done through developing-world interventions, and I'll go share that claim, as coming from you, Jessica Taylor, in defence of (at least some of) what Ben Hoffman is trying to achieve, perhaps among other places on the public internet, and we'll see how that goes. The point I was trying to make is that much of the rationality community has nothing to do with the community's stated values. So, in stating your personal impression of EA, based on Sarah's blog post, as though it were a fact, and as if it means something unique about EA that isn't true of other human communities, you've argued for too much.

Also, in this comment [LW(p) · GW(p)] I indicated my awareness of what was once known as the "Vassar crowd", which I recall you were a part of:

Shall I point out to all the communities of x-risk reduction, long-term world improvement, EA, and rationality that Michael Arc/Vassar and some of his friends formed a "Vassar crowd" that formed a cell aimed at unilaterally driving a wedge between x-risk/rationality and EA, which included you, Sarah Constantin, Michael Arc, Alyssa Vance, among others? Should I not hold you or Michael Arc individually responsible for the things you've done since then that have caused you to have a mixed reputation, or should I castigate all of you and Michael's friends in the bunch too, along with as much of the rationality community as I feel like? After all, you're all friends, and you decided to make the effort together, even though you each made your own individual contributions.

While we're here, would you mind explaining to me why all of your beef was with the EA community, as misleading in myriad ways to the point of menacing x-risk reduction efforts and other pursuits of what is true and good, without your applying the same pressure to parts of the rationality community that pose the same threat, or, for that matter, to any other group of people that does the same? What makes EA special?


Replies from: Zack_M_Davis, jessica.liu.taylor
comment by Zack_M_Davis · 2019-06-08T22:20:03.872Z · LW(p) · GW(p)

So, rationality largely isn't actually about doing thinking clearly [...] it's an aesthetic identity movement around HPMoR as a central node [...] This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of rationality, rationality-as-it-is ought to be replaced with something very, very different.

This just seems obviously correct to me, and I think my failure to properly integrate this perspective until very recently has been extremely bad for my sanity and emotional well-being.

Specifically: if you fail to make a hard mental distinction between "rationality"-the-æsthetic-identity-movement and rationality-the-true-art-of-systematically-correct-reasoning, then finding yourself in a persistent disagreement with so-called "rationalists" about something sufficiently basic-seeming creates an enormous amount of cognitive dissonance ("Am I crazy? Are they crazy? What's going on?? Auuuuuugh") in a way that disagreeing with, say, secular humanists or arbitrary University of Chicago graduates, doesn't.

But ... it shouldn't. Sure, self-identification with the "rationalist" brand name is a signal that someone knows some things about how to reason. And, so is graduating from the University of Chicago. How strong is each signal? Well, that's an empirical question that you can't answer by taking the brand name literally.

I thought the "rationalist" æsthetic-identity-movement's marketing literature expressed this very poetically:

How can you improve your conception of rationality? Not by saying to yourself, "It is my duty to be rational." By this you only enshrine your mistaken conception. Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, "The sky is green," and you look up at the sky and see blue. If you think: "It may look like the sky is blue, but rationality is to believe the words of the Great Teacher," you lose a chance to discover your mistake.

Do not ask whether it is "the Way" to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.

Of course, not everyone is stupid enough to make the mistake I made—I may have been unusually delusional in the extent to which I expected "the community" to live up to the ideals expressed in our marketing literature. For an example of someone being less stupid than recent-past-me, see the immortal Scott Alexander's comments in "The Ideology Is Not the Movement" ("[...] a tribe much like the Sunni or Shia that started off with some pre-existing differences, found a rallying flag, and then developed a culture").

This isn't to say that the so-called "rationalist" community is bad, by the standards of our world. This is my æsthetic identity movement, too, and I don't see any better community to run away to—at the moment. (Though I'm keeping an eye on the Quillette people.) But if attempts to analyze how we're collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!

(Full disclosure: uh, I guess I would also count as part of the "Vassar crowd" these days??)

Replies from: Evan_Gaensbauer, Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-09T05:49:37.590Z · LW(p) · GW(p)
But if attempts to analyze how we're collectively failing to live up to our ideals are construed as an attack, that just makes us even worse than we already are at living up to our own ideals!

For Ben's criticisms of EA, it's my opinion that, while I agree with many of his conclusions, I don't agree with some of the strongest conclusions he reaches, or with how he argues for them, simply because I believe they are not good arguments. This is common for interactions between EA and Ben these days, though Ben doesn't respond to counter-arguments, as he often seems to be under the impression that when a counter-argument disagrees with him in a way he doesn't himself agree with, his interlocutors are persistently acting in bad faith. I haven't interacted directly with Ben myself as much for a while, until he wrote the OP this week. So, I haven't been following as closely how Ben construes 'bad faith', and I haven't taken the opportunity to discover, if he were willing to relay it, what his model of bad faith is. I currently find confusing some of his feelings that the EAs he discusses these things with are acting in bad faith. At least I don't find them a compelling account of people's real motivations in discourse.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-06-09T06:48:20.792Z · LW(p) · GW(p)

I haven't been following as closely how Ben construes 'bad faith', and I haven't taken the opportunity to discover, if he were willing to relay it what his model of bad faith is.

I think the most relevant post by Ben here is "Bad Intent Is a Disposition, Not a Feeling". (Highly recommended!)

Recently I've often found myself wishing for better (widely-understood) terminology for phenomena that it's otherwise tempting to call "bad faith", "intellectual dishonesty", &c. I think it's pretty rare [LW · GW] for people to be consciously, deliberately lying, but motivated [LW · GW] bad reasoning is horrifyingly ubiquitous and exhibits a lot of the same structural problems as deliberate dishonesty, in a way that's worth distinguishing from "innocent" mistakes because of the way it responds to incentives. (As Upton Sinclair wrote, "It is difficult to get a man to understand something when his salary depends upon his not understanding it.")

If our discourse norms require us to "assume good faith", but there's an important sense in which that assumption isn't true (because motivated misunderstandings resist correction in a way that simple mistakes don't), but we can't talk about the ways it isn't true without violating the discourse norm, then that's actually a pretty serious problem for our collective sanity!

Replies from: Evan_Gaensbauer, Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-10T08:11:19.735Z · LW(p) · GW(p)

So, I've read the two posts on Benquo's blog you linked to. The first one, "Bad Intent Is a Disposition, Not a Feeling", depended on the claim he made that mens rea is not a real thing. As was pointed out in the comments, which he himself acknowledged made some good points, this would cause him to rethink the theme he was trying to impart with the original post. I looked up both the title of that post and 'mens rea' on his blog to see if he had posted any updated thoughts on the subject. There were no results on either topic from the date of publication of that post onward, so it doesn't appear he has publicly updated his thoughts on these topics. That was over 2 years ago.

The second post on the topic was more abstract and figurative, and was using some analogy and metaphor to get its conclusion across. So, I didn't totally understand the relevance of all that in the second post to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:

Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.

Benquo's conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack. So, resolving the issue appears socially or practically impossible. My experience is that this just isn't the case: such honesty can lend itself to better modes of public discourse. For one thing, it can move communities to states of discourse much different from where the EA and rationality communities currently are. One problem is I'm not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would be just hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities' discourse norms wholesale, replacing their own. That seems extremely unlikely to happen.

Part of the problem is that Benquo seems to construe 'bad faith' with an overly reductionist definition. This was fleshed out in the comments on the original post on his blog by commenters AGB and Res. So, that makes it hard for me to accept the frame on which Benquo bases his eventual conclusions. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since it isn't a super high priority for me, I'm not sure that I will get around to it. However, there is enough material in Benquo's posts, and in the discussion in the comments, for me to work with to explain some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.

I don't know if the EA community in large part disagrees with the OP for the same reasons I do. Based on some of the material I've been provided with in the comments here, I think I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.

comment by Evan_Gaensbauer · 2019-06-09T21:20:47.799Z · LW(p) · GW(p)

I'll take a look at these links. Thanks.

comment by Evan_Gaensbauer · 2019-06-09T05:42:52.740Z · LW(p) · GW(p)

I understand the "Vassar Crowd" to be a group of Michael Vassar's friends who:

  • were highly critical of EA.
  • were somewhat less critical of the rationality community.
  • were partly at odds with the bulk of the rationality community for not being as hostile to EA as they thought it should have been.

Maybe you meet those qualifications, but as I understand it, the "Vassar Crowd" started publishing blog posts on LessWrong and their own personal blogs, as well as on social media, over the course of a few months starting in the latter half of 2016. It was a semi-coordinated effort. While I wouldn't posit a conspiracy, it seems like a lot of these criticisms of EA were developed in conversations within this group, and, given the name of the group, I assume different people were primarily nudged by Vassar. This also precipitated the creation of Alyssa Vance's Long-Term World Improvement mailing list.

It doesn't seem to have continued as a crowd to the present, as the lives of the people involved have obviously changed a lot, and it doesn't appear from the outside to be as cohesive anymore, I assume in large part because of Vassar's decreased participation in the community. Ben seems to be one of the only people sustaining the effort to criticize EA as the others did before.

So, while I appreciate the disclosure, I don't know that my previous comment was precise enough; as far as I understand, the Vassar Crowd was a fairly limited clique that manifested much more in the past than in the present.

comment by jessicata (jessica.liu.taylor) · 2019-06-08T21:34:28.790Z · LW(p) · GW(p)

The point I was trying to make is that much of the rationality community has nothing to do with the community’s stated values.

Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn't think that?)

Geeks, Mops, Sociopaths happened to the rationality community, not just EA.

So, in stating your personal impression of EA, based on Sarah’s blog post, as though it were a fact, and as if it means something unique about EA that isn’t true of other human communities, you’ve argued for too much.

I don't think it's unique! I think it's extremely, extremely common for things to become aesthetic identity movements! This makes the phenomenon matter more, not less!

I have about as many beefs with the rationality movement as I do with the EA movement. I am commenting on this post because Ben already wrote it and I had things to add.

It's possible that I should feel more moral pressure than I currently do to actively say publicly what's wrong with the current state of the rationality community (not just as a comment on other people's posts). I've already been saying things privately. (This is an invitation to try morally pressuring me, using arguments, if you think it would actually be good for me to do this.)

Replies from: Evan_Gaensbauer, Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-09T20:15:12.059Z · LW(p) · GW(p)

Thanks for acknowledging my point about the rationality community. However, I was trying to get across more generally that I think the 'aesthetic identity movement' model might be lacking. If a theory makes the same predictions everywhere, it's useless. I feel like the 'aesthetic identity movement' model might be one of those theories that is too general and not specific enough for me to understand what I'm supposed to take away from its use. For example:

So, the United States of America largely isn't actually about being a land of freedom to which the world's people may flock (which requires having everyone's civil liberties consistently upheld, e.g., robust support for the rule of law, and not adding noise to conversations about these), it's an aesthetic identity movement around the Founding Fathers as a central node, similar to, e.g., most popular environmentalism (which, for example, opposes nuclear power despite it being good for the environment, because nuclear power is discordant with the environmentalism identity/aesthetics, and Greenpeace is against it). This makes sense as an explanation of the sociological phenomenon, and also implies that, according to the stated value of America, America ought to be replaced with something very, very different.

Maybe I wouldn't be as confused by the claim that all kinds of things are aesthetic identity movements, instead of being what they actually say they are, if I knew what I'm supposed to do with this information.


Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-06-09T20:39:46.357Z · LW(p) · GW(p)

An aesthetic identity movement is one where everything is dominated by how things look on the surface, not what they actually do/mean in material reality. Performances of people having identities, not actions of people in reality. To some extent this is a spectrum, but I think there are attractor states of high/low performativity.

It's possible for a state not to be an aesthetic identity movement, e.g. by having rule of law, actual infrastructure, etc.

It's possible for a movement not to be an aesthetic identity movement, by actually doing the thing, choosing actions based on expected value rather than aesthetics alone, having infrastructure that isn't just doing signalling, etc.

Academic fields have aesthetic elements, but also (some of the time) do actual investigation of reality (or, of reasoning/logic, etc) that turns up unexpected information.

Mass movements are more likely to be aesthetic identity movements than obscure ones. Movements around gaining resources through signalling are more likely to be aesthetic identity movements than ones around accomplishing objectives in material reality. (Homesteading in the US is an example of a historical movement around material reality)

(Note, EA isn't only an aesthetic identity movement, but it is largely one, in terms of percentage of people, attention, etc; this is an important distinction)

It seems like the concept of "aesthetic identity movement" I'm using hasn't been communicated to you well; if you want to see where I'm coming from in more detail, read the following.

(no need to read all of these if it doesn't seem interesting, of course)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-09T21:22:18.112Z · LW(p) · GW(p)

I will take a look at them. Thanks.

comment by Evan_Gaensbauer · 2019-06-09T19:46:37.751Z · LW(p) · GW(p)
Yes, this is true, and also implies that the rationality community should be replaced with something very different, according to its stated goals. (Did you think I didn't think that?)
Geeks, Mops, Sociopaths happened to the rationality community, not just EA.

I don't think you didn't think that. My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).

I asked because it's frustrating to me how inconsistent it is with your own efforts here to put way more pressure on EA than on rationality. I'm guessing part of the reason for your trepidation about the rationality community is that you sense how much disruption criticism could cause, and how much risk there is that nothing would change anyway. The same thing has happened in the past when some of your friends (not so much you) have criticized EA. I was thinking it was because you are socially closer to the rationality community that you wouldn't be as willing to criticize it.

I am not as invested in rationality as a community as I was in the past. So, while I feel some personal responsibility to analyze the intellectual failure modes of rationality, I don't feel much of a moral urge anymore to correct its social failure modes, and I lack the motivation to think through whether it would be "good" or not for you to do it.

Replies from: jessica.liu.taylor, Benquo
comment by jessicata (jessica.liu.taylor) · 2019-06-09T20:16:43.055Z · LW(p) · GW(p)

I think I actually do much more criticism of the rationality community than the EA community nowadays, although that might be invisible to you since most of it is private. (Anyway, I don't do that much public criticism of EA either, so this seems like a strange complaint about me regardless)

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-09T21:18:00.316Z · LW(p) · GW(p)

Well, this was a question more about your past activity than your present activity, and also about the greater activity of the same kind by some people you seem to know well, but I thought I would take the opportunity to ask you about it now. At any rate, thanks for taking the time to humour me.

comment by Benquo · 2019-06-09T21:08:04.419Z · LW(p) · GW(p)
My question was to challenge you to answer why you, and the others if you would feel comfortable speaking to their perspectives, focus so much of your attention on EA instead of the rationality community (or other communities perhaps presenting the same kind and degree of problems), if you indeed understand they share similar problems and pose similarly high stakes (e.g., failure modes of x-risk reduction).

It doesn't seem to me like anyone I interact with is still honestly confused about whether and to what extent e.g. CFAR can teach rationality, or rationality provides the promised superpowers. Whereas some people still believe a few core EA claims (like the one the OP criticizes) which I think are pretty implausible if you just look at them in conjunction and ask yourself what else would have to be true.

If you or anyone else wants to motivate me to criticize the Rationality movement more, pointing me at people who continue to labor under the impression that the initial promises were achievable is likely to work; rude and condescending "advice" about how the generic reader (but not any particular person) is likely to feel the wrong way about my posts on EA is not likely to work.

comment by Raemon · 2019-06-07T01:49:22.853Z · LW(p) · GW(p)

So, I agree with the claim that EA has a lot of aesthetic-identity-elements going on that compound (and in many cases cause) the problem. I think that's really important to acknowledge (although it's not obvious that the solution needs to include starting over)

But I also think, in the case of this particular post, that the answer is simpler. The OP says:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated. My former employer GiveWell in particular stands out as a problem here, since it publishes such cost-per-life-saved numbers

Which... sure uses language that sounds like it's an attack on GiveWell to me.

[edit] The above paragraph seems:

a) dishonest and/or false, in that it claims GiveWell publishes such cost-per-life numbers, but at the moment, AFAICT, GiveWell goes to great lengths to hide those numbers (i.e. to find the numbers for AMF you get redirected to a post about how to think about the numbers, which links to a spreadsheet, which seems like the right procedure to me for forcing people to actually think a bit about the numbers)

b) uses phrases like "hoarding" and "wildly exaggerated" that I generally associate with coalition politics rather than denotative-language-that-isn't-trying-to-be-enacting, while criticizing others for coalition politics, which seems a) like bad form, b) not like a process that I expect to result in something better-than-EA at avoiding pathologies that stem from coalition politics.

[double edit] to be clear, I do think it's fair to criticize CEA and/or the EA community collectively for nonetheless taking the numbers as straightforward. And I think their approach to OpenAI deserves, at the very least, some serious scrutiny. (Although I think Ben's claims about how off they are are overstated. This critique by Kelsey seems pretty straightforwardly true to me. AFAICT in this post Ben has made a technical error of approximately the same order of magnitude as what he's claiming others are making.)


Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-06-07T02:10:35.555Z · LW(p) · GW(p)

My comment was a response to Evan's, in which he said people are reacting emotionally based on identity. Evan was not explaining people's response by referring to actual flaws in Ben's argumentation, so your explanation is distinct from Evan's.

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

b) So, the fact that you associate these phrases with coalitional politics, means Ben is attacking GiveWell? What? These phrases have denotative meanings! They're pretty clear to determine if you aren't willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!

To say that Ben creating clarity about what GiveWell is doing is an attack on GiveWell, is to attribute bad motives to GiveWell. It says that GiveWell wants to maintain a positive impression of itself, regardless of the facts, i.e. to defraud nearly everyone. (If GiveWell wants correct information about charities and charity evaluations to be available, then Ben is acting in accordance with their interests [edit: assuming what he's saying is true], i.e. the opposite of attacking them).

Perhaps you endorse attributing bad motives to GiveWell, but in that case it would be hypocritical to criticize Ben for doing things that could be construed as doing that.

Replies from: Zack_M_Davis, Zack_M_Davis, Evan_Gaensbauer, Raemon
comment by Zack_M_Davis · 2019-07-17T06:47:50.404Z · LW(p) · GW(p)

a) GiveWell does publish cost-effectiveness estimates. I found them in a few clicks. So Ben's claim is neither dishonest nor false.

While I agree that this is a sufficient rebuttal of Ray's "dishonest and/or false" charge (Ben said that GiveWell publishes such numbers, and GiveWell does, in fact, publish such numbers), it seems worth acknowledging Ray's point about context and reduced visibility: it's not misleading to publish potentially-untrustworthy (but arguably better than nothing) numbers surrounded by appropriate caveats and qualifiers, even when it would be misleading to loudly trumpet the numbers as if they were fully trustworthy.

That said, however, Ray's "GiveWell goes to great lengths to hide those numbers" claim seems false to me in light of an email I received from GiveWell today (the occasion of my posting this belated comment), which reads, in part:

GiveWell has made a lot of progress since your last recorded gift in 2015. Our current top charities continue to avert deaths and improve lives each day, and are the best giving opportunities we're aware of today. To illustrate, right now we estimate that for every $2,400 donated to Malaria Consortium for its seasonal malaria chemoprevention program, the death of a child will be averted.

(Bolding mine.)

Replies from: Raemon, Raemon
comment by Raemon · 2019-07-31T03:51:09.761Z · LW(p) · GW(p)

Further update on this. GiveWell has since posted this blogpost [EA · GW]. I haven't yet reviewed it enough to have a strong opinion on it, but I think it at least explains some of the difference in epistemic state I had at the time of this discussion.

Relevant bit:

Although we don’t advise taking our cost-effectiveness estimates literally, we do think they are one of the best ways we can communicate about the rough magnitude of expected impact of donations to our recommended charities.
A few years ago, we decided not to feature our cost-effectiveness estimates prominently on our website. We had seen people using our estimates to make claims about the precise cost to save a life that lost the nuances of our analysis; it seemed they were understandably misinterpreting concrete numbers as conveying more certainty than we have. After seeing this happen repeatedly, we chose to deemphasize these figures. We continued to publish them but did not feature them prominently.
Over the past few years, we have incorporated more factors into our cost-effectiveness model and increased the amount of weight we place on its outputs in our reviews (see the contrast between our 2014 cost-effectiveness model versus our latest one). We thus see our cost-effectiveness estimates as important and informative.
We also think they offer a compelling motivation to donate. We aim to share these estimates in such a way that it’s reasonably easy for anyone who wants to dig into the numbers to understand all of the nuances involved.
comment by Raemon · 2019-07-17T06:51:48.259Z · LW(p) · GW(p)

A friend also recently mentioned to me getting this email, and yes, this does significantly change my outlook here.

comment by Zack_M_Davis · 2019-06-07T05:06:45.899Z · LW(p) · GW(p)

These phrases have denotative meanings! They're pretty clear to determine if you aren't willfully misinterpreting them! The fact that things that have clear denotative meanings get interpreted as attacking people is at the core of the problem!

I wonder if it would help to play around with emotive conjugation? Write up the same denotative criticism twice, once using "aggressive" connotations ("hoarding", "wildly exaggerated") and again using "softer" words ("accumulating", "significantly overestimated"), with a postscript that says, "Look, I don't care which of these frames you pick; I'm trying to communicate the literal claims common to both frames."

comment by Evan_Gaensbauer · 2019-06-09T21:40:46.770Z · LW(p) · GW(p)

When he wrote:

Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.

In most contexts when language like this is used, it's usually pretty clear that you are implying someone is doing something closer to deliberately lying than some softer kind of deception. I am aware Ben might have some model on which GiveWell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying GiveWell or Good Ventures are doing instead of deliberately lying, that isn't clear from the OP. He could also have stated that the organizations in question are not fully aware they're just marketing obvious nonsense, and have been immune to his attempts to point this out to them. If that is the case, he didn't state it in the OP either.

So, based on their prior experience, I believe it would appear to many people like he was implying Givewell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing. So, to imply someone is deliberately lying seems to clearly be an attribution of bad motives to others. So if Ben didn't expect or think that is how people would construe part of what he was trying to say, I don't know what he was going for.

comment by Raemon · 2019-06-07T02:56:30.236Z · LW(p) · GW(p)

I think the current format isn't a good venue for me to continue the current discussion. For now, roughly, I disagree with the framing in your most recent comment, and stand by my previous comment.

I'll try to write up a top level post that outlines more of my thinking here. I'd have some interest in a private discussion that gets turned into a google doc that gets turned into a post, or possibly some other format. I think public discussion threads are a uniquely bad format for this sort of thing.

comment by Wei Dai (Wei_Dai) · 2019-05-28T23:02:19.726Z · LW(p) · GW(p)

I agree with a lot of this (maybe not to the same degree) but I'm not sure what's causing it. You link to something about villagers vs werewolves. Does that mean you think GiveWell has been effectively taken over by werewolves or was run by werewolves from the beginning?

Assuming some version of "yes", I'm pretty sure the people running GiveWell do not think of themselves as werewolves. How can I rule out the possibility that I myself am a werewolf and not aware of it?

ETA: How much would it help to actually play the game? I got a copy of The Resistance a few days ago which is supposed to be the same kind of game as Werewolf, but I don't have a ready group of people to play with. Would it be worth the extra effort to find/make a group just to get this experience?

Replies from: Benquo
comment by Benquo · 2019-05-29T15:29:04.952Z · LW(p) · GW(p)

I found that it made a big subjective difference in my threat assessment for this kind of thing, when I'd had the subjective experience of figuring out how to play successfully as a werewolf. YMMV.

I don't think many people have a self-image as a "werewolf" trying to sabotage building of shared maps. I don't think anyone in GiveWell sees themselves that way.

I do think that many people are much more motivated to avoid being blamed for things than to create clarity about credit-assignment, and that this is sufficient to produce the "werewolf" pattern. If I ask people what their actual anticipations are about how they and others are likely to behave, in a way that is grounded and concrete, it usually seems like they agree with this assessment. I've had several conversations in which one person has said both of the following:

  • It's going too far to accuse someone of werewolfy behavior.
  • Expecting nonwerewolfy behavior from people is an unreasonably high expectation that's setting people up to fail.

As far as I can tell, the "werewolf" thing is how large parts of normal, polite society work by default, and most people trying to do something that requires accurate collective credit-assignment in high stakes situations just haven't reflected on how far a departure that would require from normal behavior.

Replies from: Dagon, Wei_Dai
comment by Dagon · 2019-05-29T16:28:45.954Z · LW(p) · GW(p)
As far as I can tell, the "werewolf" thing is how large parts of normal, polite society work by default

This is true, and important. Except "werewolf" is a misleading analogy for it - they're not intentionally colluding with other secret werewolves, and it's not a permanent attribute of the participants. It's more that misdirection and obfuscation are key strategies for some social-competitive games, and these games are part of almost all humans' motivation sets, both explicitly (wanting to have a good job, be liked, etc.) and implicitly (trying to win every status game, whether it has any impact on their life or not).

The ones who are best at it (most visibly successful) firmly believe that the truth is aligned with their winning the games. They're werewolf-ing for the greater good, because they happen to be convincing the villagers to do the right things, not because they're eating villagers. And as such, calling it "werewolf behavior" is rejected.


Replies from: Benquo
comment by Benquo · 2019-05-29T17:06:41.581Z · LW(p) · GW(p)

I'm pretty sure this varies substantially depending on context - in contexts that demand internal coordination on simulacrum level 1 (e.g. a marginal agricultural community, or a hunting or raiding party, or a low-margin business in a very competitive domain), people often do succeed at putting the shared enterprise ahead of their egos.

Replies from: Dagon
comment by Dagon · 2019-05-29T19:02:47.433Z · LW(p) · GW(p)

This may be true - desperation encourages in-group cooperation (possibly with increased out-group competition), and wealth enables more visible social competition. Or it may be a myth, and there are just different forms of domination and information obfuscation in pursuit of power, based on different resources and luxuries to be competed over. We don't have much evidence either way about daily life in pre-literate societies (or illiterate subgroups within technically-literate "civilizations").

We do know that groups of apes have many of the same behaviors we're calling "werewolf", which is some indication that it's baked in rather than contextual.

comment by Wei Dai (Wei_Dai) · 2019-06-03T06:48:53.558Z · LW(p) · GW(p)

I do think that many people are much more motivated to avoid being blamed for things than to create clarity about credit-assignment, and that this is sufficient to produce the “werewolf” pattern.

I think I myself am often more motivated to avoid being blamed for things than to create clarity about credit-assignment. I feel like overall I'm still doing more good than harm (and therefore "it would be better if they just stopped", as you put it in another comment, doesn't apply to me). How can I tell if I'm wrong about this?

As far as I can tell, the “werewolf” thing is how large parts of normal, polite society work by default, and most people trying to do something that requires accurate collective credit-assignment in high stakes situations just haven’t reflected on how far a departure that would require from normal behavior.

How optimistic are you that "accurate collective credit-assignment in high stakes situations" can be greatly improved from what people are currently doing? If you're optimistic, can you give some evidence or arguments for this, aside from the fact that villagers in Werewolf can win if they know what to do?

Replies from: Benquo
comment by Benquo · 2019-06-03T07:37:24.915Z · LW(p) · GW(p)

I'm more confident that we can't solve AI alignment without fixing this, than I am that we can fix it.

Accounts of late-20th-Century business practices seem like they report much more Werewolfing than accounts of late-19th-Century business practices - advice on how to get ahead has changed a lot, as have incidental accounts of how things work. If something's changed recently, we should have at least some hope of changing it back, though obviously we need to understand the reasons. Taking a longer view, new civilizations have emerged from time to time, and it looks to me like rising civilizations often have superior incentive-alignment and information-processing compared to the ones they displace or conquer. This suggests that at worst people get lucky from time to time.

Pragmatically, a higher than historically usual freedom of speech and liberalism more generally seem like they ought to both make it easier to think collectively about political problems than it's been in the past, and make it more obviously appealing, since public reason seemed to do really well at improving a lot of people's lives pretty recently.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-06-03T22:00:50.496Z · LW(p) · GW(p)

I’m more confident that we can’t solve AI alignment without fixing this, than I am that we can fix it.

Can you give some examples of technical AI alignment efforts going wrong as a result of bad credit assignment (assuming that's what you mean)? To me it seems that to the extent things in that field aren't headed in the right direction, it's more a result of people underestimating the philosophical difficulty, or being too certain about some philosophical assumptions, or being too optimistic in general, that kind of thing.

Accounts of late-20th-Century business practices seem like they report much more Werewolfing than accounts of late-19th-Century business practices—advice on how to get ahead has changed a lot, as have incidental accounts of how things work.

This seems easily explainable by the fact that businesses have gotten a lot bigger to take advantage of economies of scale offered by new technologies, so coordination / principal-agent problems have gotten a lot worse as a result.

Taking a longer view, new civilizations have emerged from time to time, and it looks to me like rising civilizations often have superior incentive-alignment and information-processing compared to the ones they displace or conquer. This suggests that at worst people get lucky from time to time.

I guess this gives some hope, but not much.

Pragmatically, a higher than historically usual freedom of speech and liberalism more generally seem like they ought to both make it easier to think collectively about political problems than it’s been in the past, and make it more obviously appealing, since public reason seemed to do really well at improving a lot of people’s lives pretty recently.

Not sure I understand this part. Are you proposing to increase freedom of speech and liberalism beyond the current baseline in western societies? If so how?

Replies from: Benquo, Benquo
comment by Benquo · 2019-06-06T02:16:38.170Z · LW(p) · GW(p)
Are you proposing to increase freedom of speech and liberalism beyond the current baseline in western societies?

No, just saying that while I agree the problem looks quite hard - like, world-historically, a robust solution would be about as powerful as, well, cities - current conditions seem like they're unusually favorable to people trying to improve social coordination via explicit reasoning. Conditions are slightly less structurally favorable than the Enlightenment era, but on the other hand we have the advantage of being able to look at the Enlightenment's track record and try to explicitly account for its failures.

comment by Benquo · 2019-06-06T02:13:10.117Z · LW(p) · GW(p)
Can you give some examples of technical AI alignment efforts going wrong as a result of bad credit assignment (assuming that's what you mean)?

If orgs like OpenAI and Open Philanthropy Project are sincerely trying to promote technical AI alignment efforts, then they're obviously confused about the fundamental concept of differential intellectual progress.

If, on the other hand, we think they're just being cynical and collecting social credit for labeling things AI safety rather than making a technical error, then the honest AI safety community seems to have failed to create clarity about this fact among their supporters. Not only would this level of coordination fail to create an FAI due to "treacherous turn" considerations, it can't even be bothered to try to deny resources to optimizing processes that are already known to be trying to deceive us!

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-06-06T04:14:05.712Z · LW(p) · GW(p)

If orgs like OpenAI and Open Philanthropy Project are sincerely trying to promote technical AI alignment efforts, then they’re obviously confused about the fundamental concept of differential intellectual progress.

I think I have more uncertainty than you do about whether OpenAI/OpenPhil is doing the right thing, but conditional on them not doing the right thing, and also not just being cynical, I don't think being confused about the fundamental concept of differential intellectual progress is the best explanation of why they're not doing the right thing. It seems more likely that they're wrong about how much of a broad base of ML expertise/capability is needed internally in an organization to make progress in AI safety, or about what is the best strategy to cause differential intellectual progress or bring about an aligned AGI or prevent AI risk.

If, on the other hand, we think they’re just being cynical and collecting social credit for labeling things AI safety rather than making a technical error, then the honest AI safety community seems to have failed to create clarity about this fact among their supporters.

I personally assign less than 20% probability that "they’re just being cynical and collecting social credit" so I don't see why I would want to "create clarity about this fact". If you think "the honest AI safety community" should assign a much higher credence to this, it doesn't seem like you've made enough of a case for it. (I'm not sure that OpenAI/OpenPhil is doing the wrong thing, and if they are there seem to be a lot of explanations for it besides "being cynical and collecting social credit". Paul Christiano works there and I'm pretty sure he isn't being cynical and wouldn't continue to work there if most people or the leaders there are being cynical.)

It seems more plausible that their technical errors are caused by subconscious biases that ultimately stem from motivations or evolutionary pressures related to social credit. But in that case I probably have similar biases and I don't see a strong reason to think I'm less affected by them than OpenAI/OpenPhil, so it doesn't seem right to accuse them of that when I'm trying to argue for my own positions. Accusing others of bias also seems less effective in terms of changing minds (i.e., it seems likely to antagonize people and make them stop listening to you) than just making technical arguments. To the extent I do think technical errors are caused by biases related to social credit, I think a lot of those biases are "baked in" by evolution and won't go away quickly if we do improve credit assignment.

Replies from: Benquo, Benquo
comment by Benquo · 2019-06-06T05:35:22.023Z · LW(p) · GW(p)
But in that case I probably have similar biases and I don't see a strong reason to think I'm less affected by them than OpenAI/OpenPhil, so it doesn't seem right to accuse them of that when I'm trying to argue for my own positions.

This seems backwards to me. Surely, if you're likely to make error X which you don't want to make, it would be helpful to build shared models of the incidence of error X and help establish a norm of pointing it out when it occurs in others, so that others will be willing and able to correct you in the analogous situation.

It doesn't make any sense to avoid trying to help someone by pointing out their mistake because you might need the same kind of help in the future, at least for nonrivalrous goods like criticism. If you don't think of correcting this kind of error as help, then you're actually just declaring intent to commit fraud. And if you'd find it helpful but expect others to think of it as unwanted interference, then we've found an asymmetric weapon that helps with honesty but not with dishonesty.

Accusing others of bias also seems less effective in terms of changing minds (i.e., it seems likely to antagonize people and make them stop listening to you) than just making technical arguments. To the extent I do think technical errors are caused by biases related to social credit, I think a lot of those biases are "baked in" by evolution and won't go away quickly if we do improve credit assignment.

A major problem here is that people can collect a disproportionate amount of social credit for "working on" a problem by doing things that look vaguely similar to a coherent program of action for addressing the problem, while making "errors" in directions that systematically further their private or institutional interests. (For instance, OpenAI's habit of taking both positions on questions like "should AI be open?", depending on the audience.) We shouldn't expect people to stop being tempted to employ this strategy, as long as it's effective at earning social capital. That's a reason to be more clear, not less clear, about what's going on - as long as people who understand what's going on obscure the issue to be polite, this strategy will continue to work.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-06-06T06:40:57.325Z · LW(p) · GW(p)

This seems backwards to me. Surely, if you’re likely to make error X which you don’t want to make, it would be helpful to build shared models of the incidence of error X and help establish a norm of pointing it out when it occurs in others, so that others will be willing and able to correct you in the analogous situation.

I think that would make sense if I had a clear sense of how exactly biases related to social credit are causing someone to make a technical error, but usually it's more like "someone disagrees with me on a technical issue and we can't resolve the disagreement; it seems pretty likely that one or both of us is affected by some sort of bias that's related to social credit and that's the root cause of the disagreement, but it could also be something else, like being naturally optimistic vs pessimistic, or different past experiences/backgrounds". How am I supposed to "create clarity" in that case?

That’s a reason to be more clear, not less clear, about what’s going on—as long as people who understand what’s going on obscure the issue to be polite, this strategy will continue to work.

As I mentioned before, I don't entirely understand what is going on, in other words I have a lot of uncertainty about what is going on. Maybe that subjective uncertainty is itself a kind of subconscious "werewolfy" or blame-avoiding behavior on my part, but I also have uncertainty about that, so given the potential downsides if you're wrong, overall I don't see enough reason to adopt the kind of policy that you're suggesting.

To backtrack a bit in our discussion, I think I now have a better sense of what kind of problems you think bad credit assignment is causing in AI alignment. It seems good that someone is working in this area (e.g., maybe you could figure out the answers to my questions above) but I wish you had more sympathy for people like me (I'm guessing my positions are pretty typical for what you think of as the "honest AI safety community").

BTW you seem to be strongly upvoting many (but not all?) of your own comments, which I think most people are not doing or doing very rarely. Is it an attempt to signal that some of your comments are especially important?

Replies from: Benquo, Benquo
comment by Benquo · 2019-06-06T07:36:24.560Z · LW(p) · GW(p)

In a competitive attention market without active policing of the behavior pattern I'm describing, it seems wrong to expect participants getting lots of favorable attention and resources to be honest, as that's not what's being selected for.

There's a weird thing going on when, if I try to discuss this, I either get replies like Raemon's claim elsewhere [LW(p) · GW(p)] that the problem seems intractable at scale (and it seems like you're saying a similar thing at times), or replies to the effect that there are lots of other good reasons why people might be making mistakes, and it's likely to hurt people's feelings if we overtly assign substantial probability to dishonesty, which will make it harder to persuade them of the truth. The obvious thing that's missing is the intermediate stance of "this is probably a big pervasive problem, and we should try at all to fix it by the obvious means before giving up."

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-06-06T11:57:33.064Z · LW(p) · GW(p)

It doesn't seem very surprising to me that a serious problem has already been addressed to the extent that it's true that both 1) it's very hard to make any further progress on the problem and 2) the remaining cost from not fully solving the problem can be lived with.

The obvious thing that’s missing is the intermediate stance of “this is probably a big pervasive problem, and we should try at all to fix it by the obvious means before giving up.”

It seems to me that people like political scientists, business leaders, and economists have been attacking the problem for a while, so it doesn't seem that likely there's a lot of low-hanging fruit to be found by "obvious means". I have some more hope that the situation with AI alignment is different enough from what people thought about in the past (e.g., a lot of the people involved are at least partly motivated by altruism, compared to the kinds of people described in Moral Mazes) that you can make progress on credit assignment as applied to AI alignment, but you still seem to be too optimistic.

Replies from: Benquo
comment by Benquo · 2019-06-06T17:04:21.372Z · LW(p) · GW(p)

What are a couple clear examples of people trying to fix the problem locally in an integrated way, rather than just talking about the problem or trying to fix it at scale using corrupt power structures for enforcement?

It seems to me like the nearest thing to a direct attempt was the Quakers. As far as I understand, while they at least tried to coordinate around high-integrity discourse, they put very little work into explicitly modeling the problem of adversarial behavior or developing robust mechanisms for healing or routing around damage to shared information processing.

I'd have much more hope about existing AI alignment efforts if it seemed like what we've learned so far had been integrated into the coordination methods of AI safety orgs, and technical development were more focused on current alignment problems.

comment by Benquo · 2019-06-06T07:30:26.451Z · LW(p) · GW(p)

I generally have a bias towards strong upvote or strong downvote, and I don't except my own comments from this.

comment by Benquo · 2019-06-06T06:11:18.491Z · LW(p) · GW(p)
But in that case I probably have similar biases and I don't see a strong reason to think I'm less affected by them than OpenAI/OpenPhil, so it doesn't seem right to accuse them of that when I'm trying to argue for my own positions.

You're not, as far as I know, promoting an AI safety org raising the kind of funds, or attracting the kind of attention, that OpenAI is. Likewise you're not claiming mainstream media attention or attracting a large donor base the way GiveWell / Open Philanthropy Project is. So there's a pretty strong reason to expect that you haven't been selected for cognitive distortions that make you better at those things anywhere near as strongly as people in those orgs have.

comment by Lukas Finnveden (Lanrian) · 2019-05-28T18:17:36.860Z · LW(p) · GW(p)

I'm confused about the argument you're trying to make here (I also disagree with some things, but I want to understand the post properly before engaging with that). The main claims seem to be

There are simply not enough excess deaths for these claims to be plausible.

and, after telling us how many preventable deaths there could be,

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated.

But I don't understand how these claims interconnect. If there were more people dying from preventable diseases, how would that dissolve the dilemma that the second claim poses?

Also, you say that $125 billion is well within the reach of the GF, but their website says that their present endowment is only $50.7 billion. Is this a mistake, or do you mean something else by "within reach"?

Replies from: Lanrian, Benquo, Benquo
comment by Lukas Finnveden (Lanrian) · 2019-05-29T21:54:50.015Z · LW(p) · GW(p)

I still have no idea how the total number of dying people is relevant, but my best reading of your argument is:

  • If GiveWell's cost-effectiveness estimates were correct, foundations would spend their money on them.
  • Since the foundations have money that they aren't spending on them, the estimates must be incorrect.

According to this post, OpenPhil intends to spend roughly 10% of their money on "straightforward charity" (rather than their other cause areas). That would be about $1B (though I can't find the exact numbers right now), which is a lot, but hardly unlimited. Their worries about displacing other donors, coupled with the possibility of learning about better opportunities in the future, seem sufficient to justify partial funding to me.

That leaves the Gates Foundation (at least among the foundations that you mentioned; of course there are a lot more). I don't have a good model of when really big foundations do and don't grant money, but I think Carl Shulman makes some interesting points in this old thread [LW(p) · GW(p)].

comment by Benquo · 2019-06-06T17:11:14.030Z · LW(p) · GW(p)

Right now a major excuse for not checking outcomes [LW · GW] is that effect sizes are too small relative to noise. This is plainly incompatible with the belief that there's a large funding gap at cost-per-life-saved numbers close to the current GiveWell estimates, because if you believe the latter, it should be possible to bring excess deaths down to literally zero.

Gates and Buffett have stated intent to give a lot more via the Gates Foundation.

Replies from: Lanrian
comment by Lukas Finnveden (Lanrian) · 2019-06-06T17:47:18.607Z · LW(p) · GW(p)

I don't think anyone has claimed that "there's a large funding gap at cost-per-life-saved numbers close to the current GiveWell estimates", if "large" means $50B. GiveWell seem to think that their present top charities' funding gaps are in the tens of millions.

comment by Benquo · 2019-05-28T19:35:52.694Z · LW(p) · GW(p)

Gates has stated his intent to give away more money (he still has $100B), and Warren Buffett has also promised to give away his fortune (some tens of billions) via the GF.

comment by avturchin · 2019-05-28T18:02:03.990Z · LW(p) · GW(p)

In some sense, EA is arbitrage between the price of life in rich and poor countries, and those prices will eventually become more equal.

Another point is that saving a life locally is sometimes possible almost for free, if you happen to be in the right place and thus have unique information. For example, calling 911 if you see a drowning child may be very effective and cost you almost nothing. There have been several cases in my life when I had to draw a taxi driver's attention to a pedestrian ahead - I'm not sure if I actually saved a life. But to save a life locally, one needs to pay attention to what is going on around them and know how to react effectively.

Replies from: Benquo
comment by Benquo · 2019-05-28T18:05:28.621Z · LW(p) · GW(p)
EA is arbitrage between the price of life in rich and poor countries

That's in the "best-case" scenario where this particular claim, made by parties making other incompatible claims, happens to be the true one. I no longer believe such arbitrage is reliably available and haven't seen a persuasive argument to the contrary.

Replies from: jkaufman, avturchin
comment by jefftk (jkaufman) · 2019-05-29T17:00:04.933Z · LW(p) · GW(p)

I no longer believe such arbitrage is reliably available

Do you not believe GiveDirectly represents this kind of arbitrage?

Replies from: Benquo
comment by Benquo · 2019-05-29T23:46:28.423Z · LW(p) · GW(p)

Given the kinds of information gaps and political problems I describe here, it seems to me that while in expectation they’re a good thing to do with your surplus, the expected utility multiplier should be far less than the very large one implied by a straightforward “diminishing marginal utility of money” calculation.

comment by avturchin · 2019-05-28T18:24:43.941Z · LW(p) · GW(p)

Sure, in the best case; moreover, as poor countries become less poor, the price of saving a life in them is growing. (And one may add that a functional market economy is the only known way to make them less poor eventually.)

However, I also think that EA could reach even higher efficiency in saving lives in other cause areas, like fighting aging and preventing global risks.

comment by habryka (habryka4) · 2019-05-28T20:10:31.676Z · LW(p) · GW(p)
If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap - then at $5,000 per life saved (substantially higher than GiveWell's current estimates), that would cost about $50 Billion to avert.
Of course, that’s an annual number, not a total number. But if we think that there is a present, rather than a future, funding gap of that size, that would have to mean that it’s within the power of the Gates Foundation alone to wipe out all fatal communicable diseases immediately, a couple times over - in which case the progress really would be permanent, or at least quite lasting. And infections are the major target of current mass-market donor recommendations.

I am confused. You list a $50 billion price tag, and then say that the Gates Foundation has enough money to pay that sum "a couple times over". But the Gates Foundation has a total endowment of only $50.7 billion, which is definitely not "a couple times over".
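
For reference, here is a minimal sketch of the back-of-envelope arithmetic being disputed in this subthread; all the inputs are figures already quoted above (the post's assumed $5,000 cost per life and ~$50 billion annual total, and the $50.7 billion endowment figure cited in this comment), not independent estimates:

```python
# Back-of-envelope check using only the figures quoted in this subthread.
# All inputs are the thread's assumed numbers, not independent estimates.
cost_per_life = 5_000        # USD, the post's deliberately generous assumption
annual_total = 50e9          # USD per year, the post's ~$50 billion figure
gates_endowment = 50.7e9     # USD, the endowment figure quoted in this comment

implied_deaths = annual_total / cost_per_life
years_covered = gates_endowment / annual_total

print(f"Implied preventable deaths per year: {implied_deaths:,.0f}")   # 10,000,000
print(f"Years the endowment alone would cover: {years_covered:.2f}")   # ~1.01
```

On these numbers, the current endowment covers roughly one year of the estimated annual cost rather than "a couple times over", which is the discrepancy being pointed at here; Benquo's reply below appeals to pledged future giving rather than the present endowment.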

Replies from: Benquo
comment by Benquo · 2019-05-28T20:24:34.192Z · LW(p) · GW(p)

Gates and Buffett have pledged a lot more than that.

Replies from: habryka4
comment by habryka (habryka4) · 2019-05-28T20:30:04.223Z · LW(p) · GW(p)

Hmm, what's your source for that? They have a total net wealth of something like $160 billion, so it can't be more than a factor of 3. And it seems quite likely to me that both of them have at least some values that are not easily captured by "save as many lives as possible", such that I don't expect all of that $160 billion to go towards that goal (e.g. I expect a significant fraction of that money to go to things like education, scientific achievement, and other things that don't have the direct aim of saving lives but are pursuing other more nebulous goals).

comment by Evan_Gaensbauer · 2019-05-28T21:54:25.659Z · LW(p) · GW(p)

I have one or more comments I'd like to make, but I'd like to know what sorts of comments you consider to be either 'annoying' or 'counterproductive' before I make them. I agree with some aspects of this article, but I disagree with others. I've checked, and I think my disagreements will be greater in both number and degree than those in the other comments here. I wouldn't expect you to find critical engagement based on strong disagreement to "be annoying or counterproductive", but I'd like to get a sense of whether you think coming out of the gate disagreeing or criticizing would be too annoying or counterproductive.

I ask because I wouldn't like to make a lengthy response only to have it deleted. If you think that might end up being the case, then I will respond to this article with one of my own.

Replies from: Wei_Dai, Benquo
comment by Wei Dai (Wei_Dai) · 2019-05-29T06:40:16.940Z · LW(p) · GW(p)

Did you know that if a comment gets deleted, the author of it is notified via PM and given a copy of the deleted comment? So if you don't mind the experience of having a comment deleted, you can just post your comment here, and repost it as an article later if it does get deleted.

Replies from: Evan_Gaensbauer, Raemon
comment by Evan_Gaensbauer · 2019-05-29T15:17:20.425Z · LW(p) · GW(p)

I didn't know that, but neither do I mind the experience of having a comment deleted. I would mind:

  • that Benquo might moderate this thread to a stringent degree according to a standard he might fail to disclose, and thus can use moderation as a means to move the goalposts, while under the social auspices of claiming to delete my comment because he saw it as wilfully belligerent, without substantiating that claim.
  • that Benquo will be more motivated to do this than he otherwise would be on other discussions he moderates on LW, as he has initiated this discussion with an adversarial frame, and it is one that Benquo feels personally quite strongly about (e.g., it is based on a long-lasting public dispute he has had with his former employer, and Benquo is not shy here about his hostility to at least large portions of the EA movement).
  • that were he to delete my comment on such grounds, there would be no record by which anyone reading this discussion would be able to hold Benquo accountable to the standards he used to delete my comments, unduly stacking the deck against an appeal I could make that in deleting my comment Benquo had been inconsistent in his moderation.

Were this to actually happen, of course I would take my comment and re-post it as its own article. However, I would object to how Benquo would have deleted my comment in that case, not the fact that he did do it, on the grounds I'd see it as legitimately bad for the state of discourse LW should aspire to. By checking what form Benquo's moderation standard specifically takes beyond a reign of tyranny against any comments he sees as vaguely annoying or counterproductive, I am trying to:

1. externalize a moderation standard to which Benquo could be held accountable.

2. figure out how I can write my comment so it meets Benquo's expectations for quality, so as to minimize unnecessary friction.


comment by Raemon · 2019-05-29T06:59:11.816Z · LW(p) · GW(p)

This does seem like something we should telegraph better than we currently do, although I'm not sure how.

Replies from: Lanrian, Evan_Gaensbauer
comment by Lukas Finnveden (Lanrian) · 2019-05-29T11:24:15.809Z · LW(p) · GW(p)

In general, I'd very much like a permanent neat-things-to-know-about-LW post or page, which receives edits when there's a significant update (do tell me if there's already something like this). For example, I remember trying to find information about the mapping between karma and voting power a few months ago, and it was very difficult. I think I eventually found an announcement post that had the answer, but I can't know for sure, since there might have been a change since that announcement was made. More recently, I saw that there were footnotes in the sequences, and failed to find any reference whatsoever on how to create footnotes. I didn't learn how to do this until a month or so later, when footnotes came to the EA Forum and Aaron wrote a post about it.

Replies from: habryka4, Evan_Gaensbauer
comment by habryka (habryka4) · 2019-05-30T20:44:56.798Z · LW(p) · GW(p)

I agree with this. We are working on an updated About/Welcome page which will have info in this reference class (or at least links to other posts that have all of that info).

comment by Evan_Gaensbauer · 2019-05-29T15:18:41.055Z · LW(p) · GW(p)

Strongly upvoted.

comment by Evan_Gaensbauer · 2019-05-29T15:18:17.304Z · LW(p) · GW(p)

Just to add user feedback, I did indeed have no idea this was what happens when comments are deleted.

comment by Benquo · 2019-05-29T16:44:13.046Z · LW(p) · GW(p)

The only comment I recall deleting here is this one [LW(p) · GW(p)], in which case as you can see I clearly asked for that line of discussion to be discontinued first.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-05-29T19:16:41.302Z · LW(p) · GW(p)

Okay, thanks. Sorry for the paranoia. I just haven't commented on any LW posts with the 'reign of terror' commenting guidelines before, so I didn't know what to expect. That gives me enough context to feel confident my comment won't be like that one you deleted.


Replies from: Benquo
comment by Benquo · 2019-05-29T23:39:50.671Z · LW(p) · GW(p)

I picked Reign of Terror because I wasn’t sure I wanted to commit to the higher deletion thresholds (I think the comment I deleted technically doesn’t meet them), so I wanted to avoid making a false promise.

I do want to hold myself to the standard of welcoming criticism & only deleting stuff that seems like it’s destroying importance-weighted information on net. I don’t want to be held to the standard of having to pretend that everyone being conventionally polite and superficially relevant is really trying.

Replies from: Raemon
comment by Raemon · 2019-05-30T00:57:28.337Z · LW(p) · GW(p)

I have updated (partly in this thread, although it retroactively fits into past observations that were model-less at the time) that it's probably best to have a moderation setting that clearly communicates what you've described here.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2019-05-30T01:28:16.487Z · LW(p) · GW(p)

Can't individuals just list 'Reign of Terror' and then specify in their personalized description that they have a high bar for terror?

Replies from: Evan_Gaensbauer, Raemon
comment by Evan_Gaensbauer · 2019-05-30T02:31:45.872Z · LW(p) · GW(p)

As an aside, 'high bar for terror' is the best new phrase I've come across in a long while.

comment by Raemon · 2019-05-30T01:31:16.717Z · LW(p) · GW(p)

Yes, but the short handle given to the description might radically change how people conceive of it.

comment by Rafael Harth (sil-ver) · 2019-05-29T21:44:39.504Z · LW(p) · GW(p)
Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths

My assumption before reading this has been that this is the case. Given that, does a reason remain to update away from the position that the GiveWell claim is basically correct?

For the rest of this post, let's suppose the true amount of money needed to save a life through GiveWell's top charities is $50,000. I don't think anything about Singer's main point changes.

For one, it's my understanding that decreasing animal suffering is at least an order of magnitude more effective than decreasing human suffering. If the arguments you make here apply equally to that (which I don't think they do), and we take the above number, well, that's $5,000 for a benefit-as-large-as-one-life-saved, which is still sufficient for Singer's argument.
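
As a minimal sketch of the arithmetic in the paragraph above, under its own hypothetical assumptions (a $50,000 "true" cost per life saved and a tenfold effectiveness advantage for animal-suffering interventions; neither figure is an established estimate):

```python
# Hypothetical figures from this comment, not established estimates.
assumed_cost_per_life = 50_000   # USD, the supposed "true" cost to save a life
animal_multiplier = 10           # assumed order-of-magnitude advantage for animal suffering

cost_per_life_equivalent = assumed_cost_per_life / animal_multiplier
print(cost_per_life_equivalent)  # 5000.0 -> the $5,000 figure in the comment
```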

Secondly, I don't think your arguments apply to existential risk prevention, and even if they did and we decreased effectiveness there by one order of magnitude, that would also still validate Singer's argument if we take my priors.

I notice that I'm very annoyed at your on-the-side link to the article about OpenAI with the claim that they're doing the opposite of what the argument justifying the intervention recommends. It's my understanding that the article, though plausible at the time, was very speculative and has been falsified since it was written. In particular, OpenAI has pledged not to take part in an arms race under reasonable conditions, which directly contradicts one of the points of that article. Quote:

Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

That, and they seem to have an ethics board with significant power (this is based on deciding not to release the full version of GPT). I believe they also said that they won't publish capability results in the future, which also contradicts one of the main concerns (which, again, was reasonable at the time). Please either reply or amend your post.

comment by Evan_Gaensbauer · 2019-05-28T22:20:40.188Z · LW(p) · GW(p)

Was your posting this inspired by another criticism of Givewell recently published by another former staffer at Givewell?

Replies from: Benquo
comment by Benquo · 2019-05-29T15:31:11.170Z · LW(p) · GW(p)

It was inspired by this comment [LW(p) · GW(p)].

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-05-29T18:00:01.189Z · LW(p) · GW(p)

That's surprising. Have you been exposed to the other aforementioned critique of Givewell? I ask because it falls along very similar lines to yours, but it appears to have been written without reference to yours whatsoever.

Replies from: Benquo
comment by Benquo · 2019-05-29T23:52:08.467Z · LW(p) · GW(p)

I don’t think so. Mind linking to it?

I also mostly didn’t mean this as a critique of GiveWell so much as a critique of a specific claim being made by many that sometimes references GiveWell numbers. Unfortunately it seems like making straightforward revealed preferences arguments reliably causes people to come up with ad hoc justifications for GiveWell’s behavior which (I claim) are incoherent, and I can’t respond to those without talking about GiveWell’s motives somewhat.

Replies from: Evan_Gaensbauer, Evan_Gaensbauer, Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-05-30T17:59:46.808Z · LW(p) · GW(p)

Here is the link.

comment by Evan_Gaensbauer · 2019-05-30T16:58:53.374Z · LW(p) · GW(p)

Based on how you write, it is clear you understand the mistake may be made more by others referencing Givewell's numbers than by Givewell itself. Yet the tone of your post seems to hold Givewell, and not others, culpable for how others use Givewell's numbers. Making an ethical appeal directly to effective altruists who draw unjustified conclusions from Givewell's numbers, telling them that what they're doing is misleading, dishonest, or wrong, may be less savoury than a merely rational appeal explaining how or why what they're doing is misleading, dishonest, or wrong, on the expectation that people are not fully cognizant of their own dishonesty. Yet your approach thus far doesn't appear to be working. You've been trying this for a few years now, so I'd suggest trying some new tactics or strategy.

I think at this point it would be fair for you to be somewhat less charitable to those who make wildly exaggerated claims based on Givewell's numbers, and to write as though you are explaining to Givewell and others, as your audience, that people who use Givewell's numbers in this way are being dishonest, rather than explaining to people who aren't acting entirely in good faith that they are being dishonest.

The way you write makes it seem as though you believe that, in this whole affair, Givewell itself is the most dishonest actor, which I think readers find senseless enough that they're less inclined to take the rest of what you're trying to say seriously. I think you should try talking more about the motives of the actors you're referring to other than Givewell, in addition to Givewell's motives.

Replies from: Benquo
comment by Benquo · 2019-05-30T19:39:05.693Z · LW(p) · GW(p)

What are a couple examples of how this tone shows up in my writing, and how would you have written them to communicate the proper emphasis?

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2019-06-01T16:24:56.444Z · LW(p) · GW(p)

So, first of all, when you write this:

Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths, or the low cost-per-life-saved numbers are wildly exaggerated.

It seems like what you're trying to accomplish for rhetorical effect, but not irrationally, is to demonstrate that the only alternative to "wildly exaggerated" cost-effectiveness estimates is that foundations like these are doing something even worse, that they are hoarding money. There are a few problems with this.

  • You're not distinguishing who the specific cost-effectiveness estimates you're talking about are coming from. It's a bit of a nitpick to point out that it's Givewell rather than Good Ventures that makes the estimate, given that the two organizations are so closely connected, and Good Ventures can be held responsible for the grants they make on the basis of the estimates, if not for the original analysis that informed them.
  • At least in the case of Good Ventures, there is a third alternative: that they are reserving billions of dollars, but not at the price of millions of preventable deaths, because, for a variety of reasons, they intend in the present and future to give that money to their diverse portfolio of causes, which they believe present just as much, if not more, opportunity to prevent millions of deaths, or to otherwise do good. Thus, in the case of Good Ventures, you knew as well as anyone that the idea that only one of the two conclusions you've posed here could be true is wildly misleading.

So, what might have worked better is something like:

Foundations like Good Ventures apportion a significant amount of their endowment to developing-world interventions. If the low cost-per-life-saved numbers Good Ventures is basing this giving off of are not wildly exaggerated, then Good Ventures is saving millions fewer lives than they could with this money.

The differences in my phrasing are:

  • it doesn't imply foundations like Good Ventures or the Gates Foundation are the only ones to be held responsible for the fact the cost-effectiveness estimates are wildly exaggerated.
  • it doesn't create the impression Good Ventures and the Gates Foundation, in spite of common knowledge, ever intended exclusively to use their respective endowments to save lives with developing-world interventions, which sets up a false dichotomy that the organizations are necessarily highly deceptive, or doing something even more morally indictable.

You say a couple sentences later:

Either scenario clearly implies that these estimates are severely distorted and have to be interpreted as marketing copy designed to control your behavior, not unbiased estimates designed to improve the quality of your decisionmaking process.

As you've covered in discussions elsewhere, the implication of the fact that, based on the numbers they're using, these foundations could be saving more lives than they are with the money they've intended to use to save lives through developing-world interventions, is that the estimates are clearly distorted. You don't need an "either scenario" framing, one branch of which you wrote about in a way that implies something you know is false could be true, in order to get that implication across clearly.

There aren't two scenarios, one which makes Good Ventures look worse than they actually are, and one in which the actual quality of their mission is less than the impression people have of it. There is just one scenario, in which the ethical quality of these foundations' progress on their goals is less than the impression much of the public has gotten of it.

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.

Here, you do the same thing of conflating multiple, admittedly related actors. When you say "the same people", you could be referring to any or all of the following, and it isn't clear who you are holding responsible for what:

  • Good Ventures
  • The Open Philanthropy Project
  • Givewell
  • The Gates Foundation
  • 'effective altruism' as a movement/community, independent of individual, officially aligned or affiliated non-profit organizations

In treating each of these actors as part and parcel with each other, you appear to hold each of them equally culpable for all the mistakes you've listed here, which, as I've covered in this and my other, longer comment, is false in myriad ways. Had you made clear in your conclusion who you are holding responsible for each respective factor in the total negative consequences of the exaggerated cost-effectiveness estimates, your injunctions for how people should change their behaviour and respond differently to these actors would have rung more true.


comment by Evan_Gaensbauer · 2019-05-30T15:36:55.828Z · LW(p) · GW(p)

I can't find the link right now, but I've asked others for it, so hopefully it'll come back up again. If I come across it again, I'll reply here with it.

comment by DirectedEvolution (AllAmericanBreakfast) · 2019-05-30T18:24:50.788Z · LW(p) · GW(p)

Singer's argument is that

1) We have a moral obligation to try to do the most net good we can.

2) Your obligation to do so holds regardless of distance or the neglect of others.

3) This creates an unconventionally rigorous and demanding moral standard.


Benquo's is that

1) Even the best charity impact analysis is too opaque to be believable.

2) The rich have already pledged enough to solve the big problems.

3) Therefore, spend on yourself, spend locally, and on "specific concrete things that might have specific concrete benefits;" also, try to improve our "underlying systems problems." "There's no substitute for developing and acting on your own models of the world."

We must inevitably develop our own models of the world, and it's important to read impact assessments critically, as we would anything else. I don't think Benquo makes much of an argument for why or how we should instead spend on ourselves, our local community, or on "specific concrete benefits" as an alternative method of doing good. My understanding of how the world works and what constitutes the good has a strong social basis, and we ought to be just as skeptical of our own observations as we are of others'. The reason EA and impact assessment excite me is that they create a basis for improving our altruistic strategy over the long term.

I'm open to the idea that local altruism is ultimately the better strategy, but I would need to see an equally strong argument for that side. I just don't see one yet. I've spent too much time engaged in personal, face-to-face relationships and activism with poor people in America and around the world to dismiss the call to focus almost exclusively on populations in extreme poverty. I'm more skeptical of X-risk as an altruistic project, for the same reasons that Benquo critiques GiveWell, and because it's hard for me to see how we sway the military to eschew new weapons.

If he's rejecting not just earning-to-give, but the whole philosophy of utilitarianism, he hasn't really refuted any of the core points of Singer's argument. Opaque analysis should lead us to do our own research, not reject the project of increasing our impact. Neglect by the rich doesn't mean we too can neglect these funding gaps. If these problems indicate the need for revolutionary change rather than philanthropy, that's fine, though I have to say that leading off the call to action with "spend money on taking care of yourself and your friends" makes it sound a lot more like motivated reasoning.

Replies from: Benquo
comment by Benquo · 2019-05-30T20:24:10.140Z · LW(p) · GW(p)

I'm rejecting the claim that there exists an infinite pit of suffering that can be remedied cheaply per unit of suffering. If I don't make constructive suggestions, people claim that I'm just saying everything is terrible or something. If I do, they seem to think that the whole post is an argument for the constructive suggestions. I'm not sure what to do here.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2019-05-30T23:43:50.055Z · LW(p) · GW(p)

This is meant as a constructive suggestion. I find some of your posts here to be ambiguous.

For example, in your reply here, I can’t tell whether you’re complaining that I, too, am playing into this catch-22 that you describe, or whether instead you feel that my post is more sympathetic to you and thus a place where you can more safely vent your frustration.

As you can see from my first comment in this chain, I was also unsure of how to interpret your original post. Was it an argument for giving up on a moral imperative of altruistic utility-maximization entirely, a re-evaluation how that imperative is best achieved, or a claim that maximization is good in theory but such opportunities don’t exist in practice?

Although everyone should give others a sympathetic and careful reading, if I was in your shoes I might consider whether my writing is clear enough.

comment by FeepingCreature · 2019-05-28T20:58:41.716Z · LW(p) · GW(p)

Doesn't this only hold if you abdicate all moral judgment to Gates/Good Ventures, such that if Gates Foundation/Good Ventures pass up an opportunity to save lives, it follows necessarily that the offer was fraudulent?

comment by habryka (habryka4) · 2019-05-28T20:37:59.259Z · LW(p) · GW(p)

Edit note: Fixed the formatting.

comment by Jiro · 2019-05-31T22:33:47.352Z · LW(p) · GW(p)

I would not, in fact, save a drowning child.

Or rather, I'd save a central example of a drowning child, but I wouldn't save a drowning child under literally all circumstances, and I think most people wouldn't either. If a child were drowning in a scenario similar to the ones Singer uses it as an analogy for, it would be something like a scenario where there is an endless series of drowning children in front of me, with an individually small but cumulatively large cost to saving them. Under those circumstances, I would not save every drowning child, or even try to maximize the number of drowning children I save.

comment by Conner_Masolin · 2019-05-30T23:06:59.151Z · LW(p) · GW(p)

There are two reasons people far away may not try to save these lives.

1) They don't know if others are donating; maybe they don't need to.

2) They can't see the foundation acting on their behalf; maybe it will fail to rescue them, or maybe it is only cashing in.

There's a large feedback issue here.