CFAR fundraiser far from filled; 4 days remaining

post by AnnaSalamon · 2015-01-27T07:26:36.878Z · LW · GW · Legacy · 48 comments

We're 4 days from the end of our matching fundraiser, and still only about a third of the way to our target (and to the point where pledged funds would cease being matched).

If you'd like to support the growth of rationality in the world, do please consider donating, or asking me any questions/etc. you may have. I'd love to talk. I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.

As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.

48 comments

Comments sorted by top scores.

comment by Arran_Stirton · 2015-01-28T13:56:23.690Z · LW(p) · GW(p)

Donated $180.

I was planning on donating this money, my yearly 'charity donation' budget (it's meager - I'm an undergraduate), to a typical EA charity such as the Against Malaria Foundation; a cash transaction for the utilons, warm fuzzies, and general EA cred. However, the above has forced me to reconsider this course of action in light of the following:

  • The possibility that CFAR may not receive sufficient future funding. CFAR's expenditure last year was $510k (ignoring non-staff workshop costs that are offset by workshop revenue) and their current balance is around $130k. Without knowing the details, a similarly sized operation this year might therefore require something like $380k in donations (a ballpark guesstimate, don't quote me on that). The winter matching fundraiser has the potential to fund $240k of that, so a significant undershoot would put the organization in a precarious position.

  • A world that has access to a well-written rationality curriculum over the next decade has a significant advantage over one that doesn't. I already accept that 80,000 Hours is a high-impact organization, and they also work by acting as an impact multiplier for individuals. Given that rationality is an exceptionally good impact multiplier, I must accept that CFAR existing is much better than it not existing.

  • While donations to a sufficiently-funded CFAR are most likely much lower utility than donations to AMF, donations that ensure CFAR's continued existence are exceptionally high utility. For comparison (as great as AMF is), diverting all donations from Wikipedia to AMF would be a terrible idea, as would overfunding Wikipedia itself. The world gets a large amount of utility out of the existence of at least one Wikipedia, but not a great deal of marginal utility from an overfunded Wikipedia. By my judgement the same applies to CFAR.

  • CFAR isn't a typical EA cause. This means that if I don't donate to keep AMF going, another EA will; however, if I don't donate to keep CFAR going, there's a reasonable chance that no one else will either. In other words, my donations to CFAR aren't replaceable.

  • To put my utilons where my mouth is: it looks like the funding gap for CFAR is something like ~$400k a year. GiveWell reckons that you can save a life for $5k by donating to the right charity, so CFAR costs the equivalent of 80 lives a year to run. That raises the question: do I think CFAR will save more than 80 lives in the next year? The answer to that might be no, even though CFAR seems to be instigating high-impact good. But if I ask myself "do I think CFAR's work over the next decade will save more than 800 lives?" the answer becomes a definite yes.
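The back-of-envelope arithmetic in the last bullet can be written out explicitly. This is just a sketch of the commenter's own estimates (the ~$400k funding gap and the $5k-per-life figure are their rough numbers, not audited figures):

```python
# Commenter's rough estimates, in USD.
FUNDING_GAP_PER_YEAR = 400_000   # guesstimated annual CFAR funding gap
COST_PER_LIFE_SAVED = 5_000      # GiveWell-style figure for a top charity

# Opportunity cost: lives a top charity could save with the same money.
lives_per_year = FUNDING_GAP_PER_YEAR / COST_PER_LIFE_SAVED
lives_per_decade = 10 * lives_per_year

print(lives_per_year, lives_per_decade)  # 80.0 800.0
```

So the bar the commenter sets is whether CFAR's impact over a decade exceeds roughly 800 lives' worth of counterfactual top-charity donations.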

comment by AnnaSalamon · 2015-01-28T04:07:50.831Z · LW(p) · GW(p)

Re: CFAR's impact: Max Tegmark of the Future of Life Institute emails (and offers for us to publicly quote him):

"CFAR was instrumental in the birth of the Future of Life Institute: 4 of our 5 co-founders are CFAR alumni, and seeing so many talented idealistic people motivated to make the world more rational gave me confidence that we could succeed with our audacious goals."

(FLI is the group that recently organized the Puerto Rico conference, and seems in general to be doing loads of high-impact good.)

comment by Danny_Hintze · 2015-01-28T00:42:29.054Z · LW(p) · GW(p)

Donated $200

comment by James_Miller · 2015-01-27T16:50:03.500Z · LW(p) · GW(p)

Donated $100.

comment by Daniel_Burfoot · 2015-01-27T20:34:30.009Z · LW(p) · GW(p)

Donated $100.

comment by berekuk · 2015-01-27T23:27:42.853Z · LW(p) · GW(p)

Donated $100.

comment by philh · 2015-01-28T22:29:12.346Z · LW(p) · GW(p)

Donated £400.

comment by Furcas · 2015-01-29T02:21:56.086Z · LW(p) · GW(p)

Donated $400.

comment by John_Maxwell (John_Maxwell_IV) · 2015-01-29T06:52:45.498Z · LW(p) · GW(p)

Donated $400.

comment by plex (ete) · 2015-01-29T03:06:53.353Z · LW(p) · GW(p)

Donated £100.

comment by Dr_Manhattan · 2015-01-29T20:32:12.766Z · LW(p) · GW(p)

Donated $100 second time. Let's raise that sanity waterline!

ETA: I like the random bonus idea. Suspecting one of the techniques is variable reinforcement? :)

comment by oge · 2015-01-31T03:50:28.046Z · LW(p) · GW(p)

Donated $800. Good luck, CFAR!

comment by LyleN · 2015-01-30T15:33:12.141Z · LW(p) · GW(p)

Donated $125.

comment by Davorak · 2015-01-29T07:13:32.877Z · LW(p) · GW(p)

Donated.

I would recommend making the donate link larger; currently it is the smaller link on the page and is harder to notice. "Donate" or "Donate here" in the link text would also make it more noticeable.* Putting a donate link at the top of the fundraising page, http://lesswrong.com/lw/lfg/cfar_in_2014_continuing_to_climb_out_of_the/, would also make it more noticeable and more likely to capture visitors, and therefore donations.

  • These things are so common that I look for them by default. Some might argue that putting the link at the top or making it larger might be distasteful or communicate a spammy signal; I would argue that these techniques (and more) are so standard that many people expect them and notice when they're absent.

comment by Benya (Benja) · 2015-01-31T02:01:11.483Z · LW(p) · GW(p)

Donated $300.

comment by aime15 · 2015-01-30T16:42:47.921Z · LW(p) · GW(p)

Donated $50.

comment by Larks · 2015-01-30T01:06:00.853Z · LW(p) · GW(p)

Donated. Go CFAR!

comment by folkTheory · 2015-01-31T19:27:54.842Z · LW(p) · GW(p)

Donated $1000

comment by MalcolmOcean (malcolmocean) · 2015-02-01T04:07:15.690Z · LW(p) · GW(p)

Donated $200!

comment by AnnaSalamon · 2015-02-01T08:23:39.698Z · LW(p) · GW(p)

Closed for the year at $119,269, which is basically awesome. Thanks, everyone!

Replies from: RobbBB
comment by AlexSchell · 2015-02-01T04:40:23.640Z · LW(p) · GW(p)

Donated $100.

comment by [deleted] · 2015-01-27T21:49:30.317Z · LW(p) · GW(p)

Is there an explanation or argument for why a CFAR donation is at least in the same ballpark of effectiveness as, say, a donation to MIRI or GiveWell?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2015-01-27T22:00:36.879Z · LW(p) · GW(p)

Yes; in very brief: thinking skill has been one of the biggest historical drivers of positive progress, and seems liable to be even more important in the coming decades if humanity ends up facing existential risks of unprecedented trickiness. CFAR seems well-positioned to make some progress on this problem. See details of where we're at here: http://lesswrong.com/lw/lfg/cfar_in_2014_continuing_to_climb_out_of_the/

See a bigger-picture take on our situation (if a bit outdated) in our previous year-end post: http://lesswrong.com/lw/jej/why_cfar/

Also happy to talk.

comment by cursed · 2015-01-28T02:42:01.144Z · LW(p) · GW(p)

On CFAR's front page:

In the process, we’re breaking new ground in studying the long-term effects of rationality training on life outcomes using randomized controlled trials.

Despite CFAR's 2-3 years of existence (and probably longer informally), they have yet to publish a single paper on these "randomized controlled trials". I would advise not donating until they make good on their claims.

edit: I've also made some notes on CFAR and their use of science as an applause light in previous comments.

Replies from: AnnaSalamon, JoshuaZ
comment by AnnaSalamon · 2015-01-28T04:04:31.760Z · LW(p) · GW(p)

Our vision page sure is out of date there, which I agree reflects badly on us. We should not make that claim at this time.

I do suspect we're a good use of donation, though, for reasons discussed in the links above; happy to engage on specifics.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-01-28T13:50:57.916Z · LW(p) · GW(p)

Are there any plans to try randomized controlled trials in the future?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2015-01-28T20:40:33.557Z · LW(p) · GW(p)

Yes, small ones -- but not peer-reviewed or published (which is much harder). We did a tiny randomized admissions experiment early on (with the admissions in summer 2012, the follow-up surveys one year later), and a small randomized online experiment; we will probably run another randomized admissions set sometime in 2015, w/ results coming in in 2016. (In some years, if/when we get to a point where funding is more secure, we will likely do more.)

Replies from: JoshuaZ
comment by JoshuaZ · 2015-01-29T00:20:41.158Z · LW(p) · GW(p)

Is there a chance that a written up version of these small results will get posted on the CFAR website?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2015-01-29T00:51:26.514Z · LW(p) · GW(p)

Yep; that would be good to do, and I suspect we'll get to it; I'm a bit embarrassed that we haven't done it, but we are in fact getting better with time at doing the things that it's embarrassing not to do, while also prioritizing the few things that most yield compound progress.

comment by JoshuaZ · 2015-01-28T02:48:32.423Z · LW(p) · GW(p)

2-3 years seems like a reasonable time span to not have published if one is trying to measure some sort of long-term effect.

Replies from: cursed
comment by cursed · 2015-01-28T02:54:41.148Z · LW(p) · GW(p)

As noted in http://lesswrong.com/lw/lfg/cfar_in_2014_continuing_to_climb_out_of_the/, they haven't even started yet. Also, just replicating a study they cite in their rationality training would be a good step.

One of the future premises of CFAR is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.) -- we aim to eventually be an evidence-based organization. In our present state this continues to be a lot harder than we would like; and our 2014 workshop, for example, was done via crude "what do you feel you learnt?" surveys and our own gut impressions.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-01-28T02:55:33.708Z · LW(p) · GW(p)

Ok. That's a little more worrisome. So how much of that situation is itself caused by lack of funding and the currently small nature of the organization?

Replies from: cursed
comment by cursed · 2015-01-28T03:57:37.195Z · LW(p) · GW(p)

I'm not sure if this response was directed towards me, because I don't know what their reasonings are.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-01-28T13:19:05.603Z · LW(p) · GW(p)

The response was directed towards you; I should perhaps have phrased it by adding something like "We should then ask" before the question. If this is caused by a lack of funding, then it isn't by itself that much of a worry.

comment by AnnaSalamon · 2015-02-01T04:23:18.808Z · LW(p) · GW(p)

Fundraiser is at $100.8k (~$19k of matching and of target remaining); three hours and 39 minutes left in the matching window. A couple of people emailed asking if monthly pledges counted, and then told me I should mention it on here: if you pledge $n/month, and send me a message/email/etc. noting that you intend to keep the pledge up across 2015, your full $12n can be counted toward the matching drive. (Corporate matching can also be counted, if you've donated and have an employer who will match your donation; that, too, should be emailed to anna@rationality.org.)

comment by brazil84 · 2015-01-31T23:08:24.682Z · LW(p) · GW(p)

What percentage of alumni make donations and what is their average donation?

comment by Dorikka · 2015-01-27T18:27:11.757Z · LW(p) · GW(p)

Looks like CFAR met the $150k target last year. I'm currently treating the apparent (quite large) decrease in donations as evidence of decreasing perceived effectiveness. Any reason why this conclusion is likely to be incorrect? I am not sure whether there are some monthly pledge matches that are not counted until the very end, etc.

ETA: Donated.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2015-01-27T20:23:58.386Z · LW(p) · GW(p)

Donations are lumpy, and much support last year came from a few large gifts; and much donation often comes in at the end. Also, last year we were younger and more vulnerable as an organization than we are now, which probably led to some giving us extra help once.

I suspect donation is high-impact; I'd be glad to talk with you about the details if you're interested. I suspect talking with me, with folks who've been to workshops, or with others who've had contact with CFAR would be better for hugging the query than trying to reason from "CFAR hasn't received donations yet" to "CFAR would not use donations well".

comment by AnnaSalamon · 2015-02-01T07:42:36.350Z · LW(p) · GW(p)

18 minutes remaining; $16k of matching funds left unfilled (out of $120k).

comment by Username · 2015-01-27T17:50:41.661Z · LW(p) · GW(p)

I'll throw out that I strongly dislike CFAR having a privileged spot as one of the nonprofits that it's OK to advertise for on LW. Just because it's in-tribe doesn't mean it's not spammy. I'd much rather have a blanket ban on advertising than allow anyone to proselytize for their favorite charity.

Replies from: Luke_A_Somers, Larks, RobbBB, John_Maxwell_IV, JoshuaZ
comment by Luke_A_Somers · 2015-01-27T18:35:02.778Z · LW(p) · GW(p)

Strongly disagree. At the top of the page, you see 3 logos: CFAR, MIRI, and FHI. This is not a 'hey, we like these organizations' deal. It's a 'These are our babies' deal.

comment by Larks · 2015-01-28T00:09:03.170Z · LW(p) · GW(p)

It's a (perhaps the only) rationality charity, founded by core members of this website, which is a website dedicated to rationality!

comment by Rob Bensinger (RobbBB) · 2015-01-27T18:37:51.182Z · LW(p) · GW(p)

Do you feel the same way about MIRI and FHI advertising here? (Or CEA or GiveWell?)

CFAR strikes me as the organization that's uniquely closely-aligned with LW, or at least with LW as it describes itself. (A blog about "refining the art of human rationality" and "improving your reasoning and decision-making skills" where people collaborate and network about personal obstacles and challenging high-impact problems.)

Replies from: Username
comment by Username · 2015-01-28T09:26:59.601Z · LW(p) · GW(p)

Do you feel the same way about MIRI and FHI advertising here? (Or CEA or GiveWell?)

Yes, to the extent that I don't think there should be posts on here asking people for funds for these orgs. It's perfectly fine to have discussion about their merits in the context of EA or in their own discussion threads, but again, I do not like that this site is used as a platform for soliciting donations.

Clearly, the community consensus is against my opinion, if these vote spreads are any indication.

comment by John_Maxwell (John_Maxwell_IV) · 2015-01-28T04:15:30.975Z · LW(p) · GW(p)

Just because it's in-tribe doesn't mean it's not spammy.

Taboo "spammy"?

Consider the position that users should upvote material they see as high-value and downvote material they see as low-value, regardless of whether the material is an advertisement. If an ad is good and it's delivering value to people, seems silly to censor it. (In this case, this post's high score suggests that on balance many think it's helping them achieve their values.) If it's a bad advertisement, downvote it like you would downvote any post you dislike. If it's actual spam (a non-targeted advertisement from a non-community-member that's highly unlikely to create significant value and creates perverse incentives for further such ads if not censored), then yeah, censor it.

I'd much rather have a blanket ban on advertising than allow anyone to proselytize for their favorite charity.

Hm, personally I see effective altruism as a core LW topic, and discussion of which charities to donate to (including persuasive writing) seems very on topic to me.

comment by JoshuaZ · 2015-01-27T18:12:08.680Z · LW(p) · GW(p)

CFAR seems to have interests which are directly aligned with LW.

I'd much rather have a blanket ban on advertising than allow anyone to prosthelytize for their favorite charity.

Why? If people keep doing so and the better ones float to the top and the non-effective charities get downvoted or ignored, isn't that still a net-win?

Replies from: brazil84
comment by brazil84 · 2015-02-01T09:54:50.677Z · LW(p) · GW(p)

CFAR seems to have interests which are directly aligned with LW.

Is it the alignment of interests, or is it overlapping management and control?

If I started my own rationality organization with basically the same stated goals as CFAR, would it get the same kind of billing? I pretty much doubt it. Actually, I think a lot of people would resent it as a competitor.