Comments

Comment by Howie Lempel (howie-lempel) on LTFF and EAIF are unusually funding-constrained right now · 2023-09-03T13:41:38.832Z · LW · GW

"I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future"

Fwiw, I think this is probably true for very few if any of the EAs I've worked with, though that's a biased sample.

I wonder if the thing giving you this vibe might be that they actually think something like "I'm not that confident that my work is net positive for the LTF but my best guess is that it's net positive in expectation. If what I'm doing is not positive, there's no cheap way for me to figure it out, so I am confident (though not ~100%) that my work will keep seeming positive EV to me for the near future." One informal way to describe this is that they are confident that their work is net positive in expectation/ex ante but not that it will be net positive ex post.

I think this can look a lot like somebody being ~sure that what they're doing is net positive even if in fact they are pretty uncertain.

Comment by Howie Lempel (howie-lempel) on Lessons learned from offering in-office nutritional testing · 2023-07-08T06:29:37.985Z · LW · GW

Fyi - this series of posts caused me to get a blood test for nutritional deficiencies, learn that I have insufficient vitamin D and folic acid, and take supplements on a bunch of days that I otherwise would not have (though less often than I should given knowledge of a deficiency). Thanks!

Comment by Howie Lempel (howie-lempel) on What are the best non-LW places to read on alignment progress? · 2023-07-07T17:04:34.368Z · LW · GW

Whoops - thanks!

Comment by Howie Lempel (howie-lempel) on What are the best non-LW places to read on alignment progress? · 2023-07-07T14:15:37.691Z · LW · GW

I haven't kept up with it so can't really vouch for it but Rohin's alignment newsletter should also be on your radar. https://rohinshah.com/alignment-newsletter/

Comment by Howie Lempel (howie-lempel) on You are probably underestimating how good self-love can be · 2021-11-14T14:11:38.806Z · LW · GW

Thanks for this! I found this much more approachable than other writing on this topic, which I've generally had trouble engaging with because it's felt like it's (implicitly or explicitly) claiming that: 1) this mindset is right for ~everyone; and 2) there are ~no tradeoffs (at least in the medium-term) for (almost?) anyone.

Had a few questions:

Your goals and strategies might change, even if your values remain the same.

Have your values in fact remained the same?

For example, as I walked down the self-love path I felt my external obligations start to drop away. 

What is your current relationship to external obligations? Do they feel like they exist for you now (whatever that means)?

While things are clearly better now, I’m still figuring out how to be internally motivated and also get shit done, and for a while I got less shit done than when I was able to coerce myself.

Do you now feel as able to get things done as you did when you were able to coerce yourself? What do you expect will be the medium-to-long run effect on your ability to get things done? How confident do you feel in that?

***

More broadly, I'm curious whether this has felt like an unambiguously positive change by the lights of Charlie from 1-3 years ago (whatever seems like the relevant time period)? In the long run do you expect it to be a Pareto improvement by past Charlie's lights?

Comment by Howie Lempel (howie-lempel) on Glen Weyl: "Why I Was Wrong to Demonize Rationalism" · 2021-10-09T13:08:39.554Z · LW · GW

Someone's paraphrase of the article: "I actually think they're worse than before, but being mean is bad so I retract that part"

 

Weyl's response: "I didn’t call it an apology for this reason."

https://twitter.com/glenweyl/status/1446337463442575361

Comment by Howie Lempel (howie-lempel) on The LessWrong 2018 Book is Available for Pre-order · 2020-12-05T18:02:39.396Z · LW · GW

First of all, I think the books are beautiful. This seems like a great project to me and I'm really glad you all put it together.

I didn't think of this on my own but now that Ozzie raised it, I do think it's misleading not to mention (or at least suggest) in a salient way on the cover that this is a selection of the best posts from a particular year.[1] This isn't really because anybody cares whether it's from 2018 or 2019. It's because I think most reasonable readers looking at a curated collection of LessWrong posts titled "Epistemology," "Agency," or "Alignment" would assume that this was a collection of the best ever LW[2] posts on that topic as of ~date of publication. That's a higher bar than 'one of the best posts on epistemology on LW in 2018' and many (most?) readers might prefer it.

Counterargument: maybe all of your customers already know about the project and are sufficiently informed about what this is that putting it on the cover isn't necessary.

Apologies if the ship's already sailed on this and feedback is counterproductive at this point. Overall, I don't think this is a huge deal.

[1] Though not intentionally so.

[2] Maybe people think of LW 2.0 as a sufficient break that they wouldn't be surprised if it was restricted to that.

Comment by Howie Lempel (howie-lempel) on Limits of Current US Prediction Markets (PredictIt Case Study) · 2020-10-30T01:30:58.098Z · LW · GW

"As far as I can tell, it does not net profits against losses before calculating these fees."
 

I can confirm this is the case based on the time I lost money on an arbitrage because I assumed the fees were on net profits.
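For concreteness, here's a minimal sketch of how that bites (the prices are made up and the 10%-of-winnings fee structure is my assumption about how the fee is applied, not a record of the actual trade):

```python
FEE_RATE = 0.10  # assumed 10% fee on each winning position's gross profit

def net_result(price_win, price_lose):
    """Buy 1 share on each side; the side bought at price_win resolves true."""
    gross_profit = 1.00 - price_win          # winning share pays out $1
    fee = FEE_RATE * gross_profit            # fee on the winning leg's gross profit only
    return gross_profit - fee - price_lose   # losing leg is a pure loss, not netted before fees

# Hypothetical "3-cent arbitrage": Yes at $0.45, No at $0.52 (total $0.97 for a sure $1).
print(net_result(0.45, 0.52))   # Yes resolves true: -0.025
print(net_result(0.52, 0.45))   # No resolves true:  -0.018

# If the fee were instead charged on *net* profit, the trade would stay positive:
print((1.00 - 0.97) * (1 - FEE_RATE))   # 0.027
```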

Comment by Howie Lempel (howie-lempel) on A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments · 2020-09-29T10:21:51.996Z · LW · GW

On the documents:

Unfortunately I read them nearly a year ago so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

Comment by Howie Lempel (howie-lempel) on A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments · 2020-09-29T10:16:52.921Z · LW · GW

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.

Comment by Howie Lempel (howie-lempel) on A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments · 2020-09-28T12:21:33.036Z · LW · GW

The Podcast/Interview format is less well suited for critical text analysis, compared to a formal article or a LessWrong post, for 3 reasons:

Lack of precision. It is a difficult skill to place each qualifier carefully and deliberately when speaking, and at several points I was uncertain if I was parsing Ben's sentences correctly.

Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments that are being criticized.

Expansiveness. There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.

tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in "Superintelligence" and "The AI Foom Debate". (This summary is incomplete.)

Hi Soren,

I agree that podcasts/interviews have some major disadvantages, though they also have several advantages. 

Just wanted to link to Ben's written versions of some (but not all) of these arguments in case you haven't seen them. I don't know whether they address the specific things you're concerned about. We linked to these in the show notes and if we didn't explicitly flag that these existed during the episode, we should have.[1] 

  1. On Classic Arguments for AI Discontinuities
  2. Imagining the Future of AI: Some Incomplete but Overlong Notes
  3. Slide deck: Unpacking Classic AI Risk Arguments
  4. Slide deck: Potential Existential Risks from Artificial Intelligence

(1) and (3) are most relevant to the things we talked about on the podcast. My memory's hazy but I think (2) and (4) also have some relevant sections. 

Unfortunately, I probably won't have time to watch your videos though I'd really like to.[2] If you happen to have any easy-to-write-down thoughts on how I could've made the interview better (including, for example, parts of the interview where I should've pushed back more), I'd find that helpful. 

[1] JTBC, I think we should expect that most listeners are going to absorb whatever's said on the show and not do any additional reading.

[2] ETA: Oh - I just noticed that youtube now has an 'open transcript' feature which makes it possible I'll be able to get to this.

Comment by Howie Lempel (howie-lempel) on Jimrandomh's Shortform · 2020-04-23T07:17:54.545Z · LW · GW

Do you still think there's a >80% chance that this was a lab release?

Comment by Howie Lempel (howie-lempel) on Jimrandomh's Shortform · 2020-04-15T15:29:42.417Z · LW · GW

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.

---

Some evidence:

Comment by Howie Lempel (howie-lempel) on How to have a happy quarantine · 2020-03-18T12:29:37.479Z · LW · GW

I used to play Innovation online here - dunno if it still works. https://innovation.isotropic.org/

Also looks like you can play here: https://en.boardgamearena.com/gamepanel?game=innovation

Comment by Howie Lempel (howie-lempel) on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-04T12:04:00.519Z · LW · GW

Thanks for confirming!

How ill do they have to be? If a contact is feeling under the weather in a nonspecific way and has a cough, is that enough for them to get tested?

Do you feel like you have any insight into whether there's underreporting of mild/minimally symptomatic/asymptomatic cases?

Comment by Howie Lempel (howie-lempel) on How to fly safely right now? · 2020-03-03T20:56:03.591Z · LW · GW

I was able to buy hand sanitizer after going through security at JFK on Sunday but I wouldn't count on that. Fwiw, Purell bottles small enough to take through security seem pretty common.

Comment by Howie Lempel (howie-lempel) on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-03T19:37:19.691Z · LW · GW

Seems possible but I don't really understand where China's claims about asymptomatic cases are coming from so I've been hesitant about putting too much weight on them. Copying some thoughts on this over from a FB comment I wrote (apologies that some of it doesn't make total sense w/o context).

tl;dr I'm pretty unsure whether China actually has so few minimally symptomatic/asymptomatic cases.
---
Those 320,000 people were at fever clinics, so I think none of them should be asymptomatic.
The report does say "Asymptomatic infection has been reported, but the majority of the relatively rare cases who are asymptomatic on the date of identification/report went on to develop disease. The proportion of truly asymptomatic infections is unclear but appears to be relatively rare and does not appear to be a major driver of transmission."
But from a quick skim, I don't think the basis for that finding is mentioned anywhere in the report. My guess is that Chinese officials told them that there were very few asymptomatic cases among people who were tested through contact tracing (which theoretically should test cases whether or not they're symptomatic).
I haven't really read anything from experts on this but my speculative guess is that we shouldn't rely too heavily on claims about data from China's contact tracing. The report claims that 100% of contacts were successfully traced in Shenzhen and Guangdong and 99% in Sichuan. "100%" is a bit of a red flag coming from that regime.
Fwiw, here's Anthony Fauci (head of infectious disease for the NIH) assuming* (w/o data afaict) that "the number of asymptomatic or minimally symptomatic cases is several times as high as the number of reported cases." https://www.nejm.org/doi/full/10.1056/NEJMe2002387
Link via Divia Caroline Eden

https://www.facebook.com/permalink.php?story_fbid=1073098183053785&id=100010608396052&comment_id=1073152789714991&reply_comment_id=1073889599641310

Comment by Howie Lempel (howie-lempel) on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-03T18:33:51.233Z · LW · GW

Some more suggestive evidence that Singapore might not be testing asymptomatic/minimally symptomatic people:

The COVID-19 swab test kit deployed at [travel] checkpoints allows us to test beyond persons who are referred to hospitals, and extend testing to lower-risk symptomatic travellers as an added precautionary measure. This additional testing capability deployed upfront at checkpoints further increases our likelihood of detecting imported cases at the point of entry. As with any test, a negative result does not completely rule out the possibility of infection. As such, symptomatic travellers with a negative test result should continue to minimise social contact and seek medical attention should symptoms not improve over the next three days. 

https://www.moh.gov.sg/news-highlights/details/additional-precautionary-measures-in-response-to-escalating-global-situation

If they were already testing lots of asymptomatic cases, it would be odd to say testing *symptomatic travelers* is allowing them to test beyond people referred to hospitals.

I wonder if people are assuming that intense contact tracing means that contacts will be tested by default even if asymptomatic. I'm not an expert but my understanding is that this isn't necessarily the default (and particularly not in a situation where they presumably don't have an infinite supply of kits or healthcare workers to do the diagnostics). Depends on how close the contact was, the specific disease, etc., but I think the default is to call the contact every day to check if they've developed symptoms. Would be great if an actual doctor/epidemiologist chimed in.

Singapore's description of their contact tracing is vague but consistent with my understanding:

Once identified, MOH will closely monitor all close contacts. As a precautionary measure, they will be quarantined for 14 days from their last exposure to the patient. In addition, all other identified contacts who have a low risk of being infected will be under active surveillance, and will be contacted daily to monitor their health status.

14. As of 3 March 2020, 12pm, MOH has identified 3,173 close contacts who have been quarantined. Of these, 336 are currently quarantined, and 2,837 have completed their quarantine.

https://www.moh.gov.sg/news-highlights/details/two-new-cases-of-covid-19-infection-confirmed

If they were administering tests to asymptomatic contacts, I think it's likely they'd have said so here.

Comment by Howie Lempel (howie-lempel) on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-03T18:22:11.914Z · LW · GW
Connect these dots, along with the fact that Singapore has been doing extremely aggressive contact tracing and has been successful enough to almost stop the spread, I think Singapore can't have many uncounted mild or asymptomatic cases, and their severely ill rate is still 10% to 20%.

Do you have a citation for the claim that Singapore can't have many mild or asymptomatic cases? The article you cite says:

Close contacts are identified and those individuals without symptoms are quarantined for 14 days from last exposure. As of February 19, a total of 2593 close contacts have been identified. Of these, 1172 are currently quarantined and 1421 have completed their quarantine.5 Contacts with symptoms are tested for COVID-19 using RT-PCR.

The bold bit suggests that asymptomatic [or, I suspect, minimally symptomatic] people aren't being tested.

Comment by Howie Lempel (howie-lempel) on Maybe Lying Doesn't Exist · 2019-12-26T04:05:23.822Z · LW · GW

[I'm not a lawyer and it's been a long time since law school. Also apologies for length]

Sorry - I was unclear. All I meant was that civil cases don't require *criminal intent.* You're right that they'll both usually have some intent component, which will vary by the claim and the jurisdiction (which makes it hard to give a simple answer).

---

tl;dr: It's complicated. Often reckless disregard for the truth or deliberate ignorance is enough to make a fraud case. Sometimes a "negligent misrepresentation" is enough for a civil suit. But overall both criminal and civil cases usually have some kind of intent/reckless indifference/deliberate ignorance requirement. Securities fraud in NY is an important exception.

Also I can't emphasize enough that there are 50 versions in 50 states and also securities fraud, mail fraud, wire fraud, etc can all be defined differently in each state.

----

After a quick Google, it looks to me like the criminal and civil standards are usually pretty similar.

It looks like criminal fraud typically (but not always) requires "fraudulent intent" or "knowledge that the fraudulent claim was false." However, it seems "reckless indifference to the truth" is enough to satisfy this in many jurisdictions.[1]

New York is famous for the Martin Act, which outlaws both criminal and civil securities fraud without having any intent requirement at all.[2] (This is actually quite important because a high percentage of all securities transactions go through New York at some point, so NY gets to use this law to prosecute transactions that occur basically anywhere).

The action most equivalent to civil fraud is misrepresentation of material facts/fraudulent misrepresentation. This seems a bit more likely than criminal law to accept "reckless indifference" as a substitute for actually knowing that the relevant claim was false.[3] For example, the Federal False Claims Act makes you liable if you display "deliberate ignorance" or "reckless disregard of the truth" even if you don't knowingly make a false claim.[4]

However, in at least some jurisdictions you can bring a civil claim for negligent misrepresentation of material facts, which seems to basically amount to fraud but with a negligence standard, not an intent standard.[5]


P.S. Note that we seem to be discussing the aspect of "intent" pertaining to whether the defendant knew the relevant statement was false. There's also often a required intent to deceive or harm in both the criminal and civil context (I'd guess the requirement is a bit weaker in civil law).

------

[1] "Fraudulent intent is shown if a representation is made with reckless indifference to its truth or falsity." https://www.justice.gov/jm/criminal-resource-manual-949-proof-fraudulent-intent

[2] "In some instances, particularly those involving civil actions for fraud and securities cases, the intent requirement is met if the prosecution or plaintiff is able to show that the false statements were made recklessly—that is, with complete disregard for truth or falsity."

[3] https://en.wikipedia.org/wiki/False_Claims_Act#1986_changes

[4] "Notably, in order to secure a conviction, the state is not required to prove scienter (except in connection with felonies) or an actual purchase or sale or damages resulting from the fraud.[2]

***

.In 1926, the New York Court of Appeals held in People v. Federated Radio Corp. that proof of fraudulent intent was unnecessary for prosecution under the Act.[8] In 1930, the court elaborated that the Act should "be liberally and sympathetically construed in order that its beneficial purpose may, so far as possible, be attained."[9]

https://en.wikipedia.org/wiki/Martin_Act#Investigative_Powers

[5] "Although a misrepresentation fraud case may not be based on negligent or accidental misrepresentations, in some instances a civil action may be filed for negligent misrepresentation. This tort action is appropriate if a defendant suffered a loss because of the carelessness or negligence of another party upon which the defendant was entitled to rely. Examples would be negligent false statements to a prospective purchaser regarding the value of a closely held company’s stock or the accuracy of its financial statements." https://www.acfe.com/uploadedFiles/Shared_Content/Products/Self-Study_CPE/Fraud-Trial-2011-Chapter-Excerpt.pdf

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T18:16:58.981Z · LW · GW

Thanks! Forgot about that post.

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T17:51:20.547Z · LW · GW

I'm not sure I understand what you mean by "something to protect." Can you give an example?

[Answered by habryka]

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T17:48:57.723Z · LW · GW

[Possibly digging a bit too far into the specifics so no worries if you'd rather bow out.]

Do you think these confusions[1] are fairly evenly dispersed throughout the community (besides what you already mentioned: "People semi-frequently have them at the beginning and then get over them.")?

Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisations than in other people who "take singularity scenarios seriously."[2] (B) I'm very uncertain but they also seem less prevalent to me in the EA community than the rationalist community (to the extent the communities can be separated).[3] [4]

Do A and B sound right to you? If so, do you have a take on why that is?

If A or B *are* true, do you think this is in any part caused by the relative groups taking the singularity [/x-risk/the future/the stakes] less seriously? If so, are there important costs from this?


[1] Using your word while withholding my own judgment as to whether every one of these is actually a confusion.

[2] If you're right that a lot of people have them at the beginning and then get over them, a simple potential explanation would be that by the time you're working at one of these orgs, that's already happened.

Other hypotheses: (a) selection effects; (b) working FT in the community gives you additional social supports and makes it more likely others will notice if you start spiraling; (c) the cognitive dissonance with the rest of society is a lot of what's doing the damage. It's easier to handle this stuff psychologically if the coworkers you see every day also take the singularity seriously.[i]

[3] For example perhaps less common at Open Phil, GPI, 80k, and CEA than CFAR and MIRI but I also think this holds outside of professional organisations.

[4] One potential reason for this is that a lot of EA ideas are more "in the air" than rationalist/singularity ones. So a lot of EAs may have had their 'crisis of faith' before arriving in the community. (For example, I know plenty of EAs (myself included) who did some damage to themselves in their teens or early twenties by "taking Peter Singer really seriously.")

[i] I've seen this kind of dissonance offered as a (partial) explanation of why PTSD has become so common among veterans & why it's so hard for them to reintegrate after serving a combat tour. No clue if the source is reliable/widely held/true. It's been years but I think I got it from Odysseus in America or perhaps its predecessor, Achilles in Vietnam.

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T17:16:58.935Z · LW · GW
My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.”

[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.]

Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision for (alternatively, how long have you been playing with this vision for)? Is it possible to describe what the most recent previous vision was?

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T17:15:02.891Z · LW · GW

This seemed really useful. I suspect you're planning to write up something like this at some point down the line but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.)

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T16:32:22.905Z · LW · GW
The need to coordinate in this way holds just as much for consequentialists or anyone else.

I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T16:16:38.196Z · LW · GW

Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.

Things like PJ Eby's excellent ebook.

FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to?

The best links I could find quickly were:

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-25T16:06:19.471Z · LW · GW
I think I also damaged something psychologically, which took 6 months to repair.

I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.

I expect, though, that this is too sensitive/personal so please feel free to ignore.

Comment by Howie Lempel (howie-lempel) on Maybe Lying Doesn't Exist · 2019-12-25T15:26:49.951Z · LW · GW

Note that criminal intent is *not* required for a civil fraud suit which could be brought simultaneously with or after a criminal proceeding.

Comment by Howie Lempel (howie-lempel) on We run the Center for Applied Rationality, AMA · 2019-12-22T15:08:02.915Z · LW · GW

"For example, we spent a bunch of time circling for a while"

Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.

Comment by Howie Lempel (howie-lempel) on Ben Hoffman's donor recommendations · 2018-06-25T22:42:54.238Z · LW · GW

This doesn't look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it's odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn't seem like there's a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?

I guess I interpreted Rob's statement that "the EA Funds are usually a better fallback option than GiveWell" as shorthand for "the EA Fund relevant to your values is in expectation a better fallback option than GiveWell." "The EA Fund relevant to your values" does seem like a useful abstraction to me.

Comment by Howie Lempel (howie-lempel) on Ben Hoffman's donor recommendations · 2018-06-22T20:44:04.660Z · LW · GW

Here's a potentially more specific way to get at what I mean.

Let's say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let's say she believes that (absent funging) giving $1 to GW recommended charities would be as valuable as giving $100 to the EA Long Term Future Fund.

You're saying that she should reduce her estimate because Open Phil may change its strategy or the blog post may be an imprecise guide to Open Phil's strategy so there's some probability that giving $1 to GW recommended charities could cause Open Phil to reallocate some money from GW recommended charities toward the orgs funded by the Long Term Future Fund.

In expectation, how much money do you think is reallocated from GW recommended charities toward orgs like those funded by the Long Term Future Fund for every $1 given to GW recommended charities? In other words, by what percent should this person adjust down their estimate of the difference in effectiveness?

Personally, I'd guess it's lower than 15% and I'd be quite surprised to hear you say you think it's as high as 33%. This would still leave a difference that easily clears the bar for "large enough to pay attention to."
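To make that adjustment concrete, here's a rough sketch of the arithmetic under the hypothetical above (the simple linear model and the "GW-dollar" value units are just my illustration; 15% and 33% are the funging rates I mentioned as guesses):

```python
# Rough sketch of the funging adjustment in the hypothetical above.
# Units: value of $1 to GW recommended charities = 1; value of $1 to the
# Long Term Future Fund = 100 (the donor's stated belief, absent funging).
GW_VALUE, LTFF_VALUE = 1.0, 100.0

def value_per_dollar_to_gw(funging_rate):
    """If a fraction `funging_rate` of each marginal GW dollar ends up
    reallocated toward LTFF-like orgs, value of $1 given to GW charities."""
    return (1 - funging_rate) * GW_VALUE + funging_rate * LTFF_VALUE

for f in (0.15, 0.33):
    v = value_per_dollar_to_gw(f)
    print(f, round(v, 2), round(LTFF_VALUE / v, 1))
# f = 0.15 -> ~15.9 vs 100: the Fund still looks ~6x better per dollar
# f = 0.33 -> ~33.7 vs 100: still ~3x better
```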

Fwiw, to the extent that donors to GW are getting funged, I think it's much more likely that they are funging with other developing world interventions (e.g. one recommended org hits diminishing returns and so funding already targeted toward developing world interventions goes to a different developing world health org instead).

I'm guessing that you have other objections to EA Funds (some of which I think are expressed in the posts you linked although I haven't had a chance to reread them). Is it possible that funging with GW top charities isn't really your true objection?

Comment by Howie Lempel (howie-lempel) on Ben Hoffman's donor recommendations · 2018-06-22T18:05:59.796Z · LW · GW

I see you as arguing that GW/Open Phil might change its strategic outlook in the future and that their disclosures aren't high precision so we can't rule out that (at some point in the future or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.

That doesn't strike me as sufficient to argue that GW recommended charities funge so heavily against EA funds that it's "odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies."

Comment by Howie Lempel (howie-lempel) on Ben Hoffman's donor recommendations · 2018-06-22T01:18:17.378Z · LW · GW

What's the reason to think EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities? My guess would have been that increased donations to GiveWell's recommended charities would not cause many other donors (including Open Phil or Good Ventures) to give instead to orgs like those supported by the Long-Term Future, EA Community, or Animal Welfare EA Funds.

In particular, to me this seems in tension with Open Phil's last public writing on its current thinking about how much to give to GW recommendations versus these other cause areas ("world views" in Holden's terminology). In his January "Update on Cause Prioritization at Open Philanthropy," Holden wrote:

"We will probably recommend that a cluster of 'long-termist' buckets collectively receive the largest allocation: at least 50% of all available capital. . . .
We will likely recommend allocating something like 10% of available capital to a “straightforward charity” bucket (described more below), which will likely correspond to supporting GiveWell recommendations for the near future."

There are some slight complications here but overall it doesn't seem to me that Open Phil/GV's giving to long-termist areas is very sensitive to other donors' decisions about giving to GW's recommended charities. Contra Ben H, I therefore think it does currently make sense for donors to spend attention distinguishing between EA Funds and GW's recommendations.

For what it's worth, there might be a stronger case that EA Funds funges against long-termist/EA community/Animal welfare grants that Open Phil would otherwise make but I think that's actually an effect with substantially different consequences.

[Disclosure - I formerly worked at GiveWell and Open Phil but haven't worked there for over a year and I don't think anything in this comment is based on any specific inside information.]

[Edited to make my disclosure slightly more specific/nuanced.]