CFAR’s Inaugural Fundraising Drive

post by Dr_Manhattan · 2012-12-18T01:19:00.272Z · LW · GW · Legacy · 3 comments

(interested in hearing how other donors frame allocation between SI and CFAR)


Comments sorted by top scores.

comment by VNKKET · 2012-12-18T02:53:55.742Z · LW(p) · GW(p)

(interested in hearing how other donors frame allocation between SI and CFAR)

I still only donate to SI. It's great that, with the pivot towards research, we can now supposedly aim the money directly at FAI.

But I would also love to see EY's appeal to MoR readers succeed:

I don’t work for the Center for Applied Rationality and they don’t pay me, but their work is sufficiently important that the Singularity Institute (which does pay me) has allowed me to offer to work on Methods full-time until the story is finished if HPMOR readers donate a total of $1M to CFAR.

comment by pcm · 2012-12-24T02:02:47.945Z · LW(p) · GW(p)

I'm donating to CFAR but not SI because CFAR would help in a wider variety of scenarios.

If AGI is developed by a single person or a very small team, it seems likely that it won't be done by someone we recognize in advance as likely to do it (think, for example, of the inventions of the airplane or the web). CFAR is oriented toward influencing a large enough number of smart people that it is more likely to reach such a developer.

Single-person AGI development seems like a low-probability scenario to me, but the more people needed to create an AGI, the less plausible it seems that intelligence will be intelligible enough to go foom. So I imagine that a relatively high fraction of scenarios in which UFAI takes over the world come from very small development teams.

Plus, it's quite possible that we're all asking the wrong questions about existential risks. CFAR seems more likely than SI to help in those scenarios.

comment by palladias · 2012-12-18T16:43:37.397Z · LW(p) · GW(p)

I was a July minicamp attendee; ask me anything that will help inform your donation decisions.