Donating to MIRI vs. FHI vs. CEA vs. CFAR
post by ChrisHallquist · 2013-12-27T03:43:04.752Z · LW · GW · Legacy · 45 comments
In a discussion a couple months ago, Luke said, "I think it's hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR." So I want to have a thread to discuss that.
My own very rudimentary thoughts: I think the research MIRI does is probably valuable, but I don't think it's likely to lead to MIRI itself building FAI. I'm convinced AGI is much more likely to be built by a government or major corporation, which makes me more inclined to think movement-building activities are likely to be valuable, to increase the odds of the people at that government or corporation being conscious of AI safety issues; that kind of movement-building is something MIRI isn't doing.
It seems like FHI is the obvious organization to donate to for that purpose, but Luke seems to think CEA (the Centre for Effective Altruism) and CFAR could also be good for that, and I'm not entirely clear on why. I sometimes get the impression that some of CFAR's work ends up being covert movement-building for AI-risk issues, but I'm not sure to what extent that's true. I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.
This has some immediate real-world relevance to me: I'm currently in the middle of a coding bootcamp and not making any money, but today my mom offered to make a donation to a charity of my choice for Christmas. So any input on what to tell her would be greatly appreciated, as would more information on CFAR and CEA, which I'm sorely lacking in.
45 comments
Comments sorted by top scores.
comment by JGWeissman · 2013-12-27T06:08:14.543Z · LW(p) · GW(p)
I'm convinced AGI is much more likely to be built by a government or major corporation, which makes me more inclined to think movement-building activities are likely to be valuable, to increase the odds of the people at that government or corporation being conscious of AI safety issues; that kind of movement-building is something MIRI isn't doing.
MIRI's AI workshops get outside mathematicians and AI researchers involved in FAI research, which is good for movement building within the population of people likely to be involved in creating an AGI.
comment by Kaj_Sotala · 2013-12-27T07:44:00.876Z · LW(p) · GW(p)
There seems to have been a very lopsided flow of funds into the MIRI and CFAR fundraisers. The balance is tilted sufficiently that I'm now willing to call it for the next marginal dollar being more valuable at CFAR than at MIRI, at least until CFAR's fundraiser completes.
(Of course that only applies to the current fundraisers, not to the question in general.)
comment by John_Maxwell (John_Maxwell_IV) · 2013-12-27T07:19:21.398Z · LW(p) · GW(p)
I've heard that CFAR is already trying to move in the direction of being self-sustaining by charging higher fees and stuff. I went to a 4-day CFAR workshop and was relatively unimpressed; my feeling about CFAR is that they are providing a service to individuals for money and it's probably not a terrible idea to let the market determine if their services are worth the amount they charge. (In other words, if they're not able to make a sustainable business or at least a university-style alum donor base out of what they're doing, I'm skeptical that propping them up as a non-alum is an optimal use of your funds.)
FHI states that they are interested in using marginal donations to increase the amount of public outreach they do. It seems like FHI would have a comparative advantage over MIRI in doing outreach, given that they are guys with PhDs from Oxford and thus would have a higher level of baseline credibility with the media, etc. So it's kind of disappointing that MIRI seems to be the more outreach-focused of the two, but it seems like the fact that FHI gets most of its funding from grants means they're restricted in what they can spend money on. FHI strikes me as more underfunded than MIRI, given that they are having to do a collaboration with an insurance company to stay afloat, whereas MIRI has maxed out all of their fundraisers to date. (Hence my decision to give to FHI this year.)
If you do want to donate to MIRI, it seems like the obvious thing to do would be to email them and tell them that you want to be a matching funds provider for one of their fundraisers, since they're so good at maxing those out. (I think Malo would be the person to contact; you can find his email on this page.)
↑ comment by JGWeissman · 2013-12-27T13:42:13.971Z · LW(p) · GW(p)
my feeling about CFAR is that they are providing a service to individuals for money and it's probably not a terrible idea to let the market determine if their services are worth the amount they charge.
I think that CFAR's workshops are self-funding and contribute to paying for organizational overhead. Donated funds allow them to offer scholarships to their workshops to budding Effective Altruists (like college students) and run the SPARC program (targeting mathematically gifted children who may be future AI researchers). So, while CFAR does provide a service to individuals for money, donated money buys more services targeted at making altruistic people more effective and getting qualified people working on important hard problems.
↑ comment by AnnaSalamon · 2013-12-27T17:05:01.884Z · LW(p) · GW(p)
Yes; this is correct. The workshops pay for themselves, and partially subsidize the rest of our activities; but SPARC, scholarships for EA folk, and running CFAR training at the effective altruism summit don't; nor does much desirable research. We should have a longer explanation of this up later today.
Edited to add: Posted.
↑ comment by Sean_o_h · 2013-12-28T16:28:48.866Z · LW(p) · GW(p)
"FHI strikes me as more underfunded than MIRI, given that they are having to do a collaboration with an insurance company to stay afloat, whereas MIRI has maxed out all of their fundraisers to date. (Hence my decision to give to FHI this year.)"
It's true that, due to a grant ending and a failure to secure a follow-on grant, we were heavily dependent on success with the insurance collaboration in order to hold onto key researchers and core staff needed to run the FHI. However, I would add that this is not my/our only motivation for heavily pursuing this area.
Wherever possible, I think there is a high value in FHI (and CSER) pursuing large pots of funding that are not available to other Xrisk/EA organisations (i.e. not diverting funds from other worthy organisations), and I think this type of funding is a prime example of this. If the current collaboration goes well, I believe there's the possibility of substantial further funding in this area.
It is true that some staff time is being diverted towards systemic risk research, but it still represents a substantial increase in FHI's overall Xrisk research output (and depending on availability of unconstrained funds and the development of the project, we may be in a position to "sub out" people as the project goes on, in favour of their doing 100% Xrisk research).
Lastly, I believe there is a value to us producing high-quality research on a more "mainstream" risk topic that gains us additional academic prestige and credibility, and thus lends credibility to the more speculative existential risk work we do. It also introduces new, potentially influential, audiences to our existential risk work.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-12-29T00:13:35.500Z · LW(p) · GW(p)
Thanks for the clarification! I'm glad to hear about these side benefits from the insurance project.
↑ comment by ChrisHallquist · 2013-12-27T15:52:16.877Z · LW(p) · GW(p)
Can anyone confirm the claim about FHI underfunding?
↑ comment by Sean_o_h · 2013-12-30T16:51:28.255Z · LW(p) · GW(p)
Ways in which we're underfunded:
Most of our funding comes from academic sources, which for the most part are quite constrained and not suitable for funding the full range of activities of a centre like the FHI. In order to be successful, we need to produce "lean and tight" funding applications that typically cover researchers' salaries, perhaps some workshop/conference funds and a bare minimum of admin support. Most academic institutes within philosophy do not do nearly as much public outreach, projects of various types, visitor hosting and policy engagement as FHI does.
The result of this is that:
We're typically underfunded in "core staff". E.g. last year I was doubling up for the FHI as staff manager, fundraiser, grantwriter/coordinator, communications+media manager, administrator, researcher, workshop/conference organiser, research strategist, research networks person, and general purpose project manager. It also means that a lot of Nick Bostrom's time ends up spent on this stuff (as well as on admin for himself that a PA could be doing), whereas ideally he would be able to focus on FHI research strategy and his own research. (In academia the expectation appears to be that a centre Director is a manager/administrator/networker/research strategist, whereas ideally we would like to free up Nick from as much of the admin/management/fundraising/networking burden as possible to allow research time.)

This year, our acquisition of industry funding has allowed us to get two new core staff members: an office administrator+PA for Nick Bostrom, and a general purpose academic project manager, which will help a lot. However, our overall research staff has also expanded (we currently have 13 staff members + 3 active collaborators, whereas we had 4 or 5 when I arrived in 2011). Therefore I believe core operations funding still represents a bottleneck (though less severely than before), i.e. a minimal (in the overall scheme) increase in our funding here would represent a very cost-effective way to increase our overall output of various types (research, outreach, fundraising, policy engagement, networking). While our "core" to "pure research" staffing ratio is large compared to a traditional academic research unit (e.g. a group in a room researching religious epistemology), it's still very small compared to the organisations I know that are more directly similar to the FHI.
Many of the projects we would like to do are ones that are difficult to fund through academic means - e.g. the thesis prize competitions, currently-shelved plans for a "top young talent" summer school, some public communications/outreach projects. Also, having a reserve fund to hold onto key researchers between grants is extremely valuable (I'm trying to rebuild this at present after we depleted it last year holding onto Stuart Armstrong and Anders Sandberg. Several current researchers' funds will run out in late 2014, so this fund may need to be put to use again!). This fund can also double up as funds FHI (as host institute) can commit towards supporting aspects of larger grant applications (e.g. support a conference/academic visitor programme to complement the research programme), which apparently "looks" very good to reviewers and may increase their likelihood of success.
On the research front:
There are several areas of research that we think it would be valuable to expand into while maintaining our AI risk research - some examples include surveillance technology and synthetic biology/genomics/biotech - and the in-house consensus is that doing more work on "core" existential risk concepts would also be very valuable. We are currently writing grants to support work in this area, but given that success is not guaranteed, philanthropic support of additional researchers in these areas would be very valuable.
As mentioned elsewhere, at all times there appears to be at least one researcher available whom we would love to hire, but whom we have not been able to afford.
As mentioned elsewhere, there are several researchers who have to spend quite a bit of their time suboptimally (e.g. doing some direct X-risk research, but also producing less X-risk-relevant work in order to satisfy a funder) whom it would be valuable to "buy out" to work full-time on X-risk, were the funds available.
Having funds to hire a research assistant to support the research work of key researchers - e.g. Nick Bostrom, Anders Sandberg, Stuart Armstrong, Toby Ord, Nick Beckstead - would be a very cost-effective way to increase their output and productivity (we did this to good effect for Nick's Superintelligence book, but don't currently have the funds to hire someone to do this full-time).
Different amounts of funding allow us to tackle different items on this list. I have a rough priority list based on what I think most cost-effectively improves our output/situation at a given time (and what I think we may not be able to get academic funding for) that currently goes: reserve fund -> "hard to fund normally" projects -> core activities -> research assistant -> hire more researchers/sub out researchers (depending on what talent is available at a given time). However, we're constantly reassessing this as our research and situation evolve. It's also the case that larger donations can be used in a different way than smaller ones (e.g. a new researcher may need to be funded 100% or not at all, depending on the situation), and that if a donor specifically wants to fund desirable thing X, then that's what we use the funds for.
Sorry for the essay, hope this is the kind of information you wanted.
↑ comment by ChrisHallquist · 2013-12-30T16:53:13.225Z · LW(p) · GW(p)
Don't apologize! This is great info!
↑ comment by eggman · 2014-07-15T08:37:00.568Z · LW(p) · GW(p)
If anything, I could use more information from the CEA, the FHI, and the GPP. Within effective altruism, there's something of a norm of expecting transparency from the purportedly effective organizations being supported. In terms of financial support, this would mean openly publishing budgets. Based upon Mr. Ó hÉigeartaigh's report above, the FHI itself might be too strapped for time, among all its other core activities, to provide this sort of insight.
comment by Sean_o_h · 2013-12-28T17:55:29.640Z · LW(p) · GW(p)
I think it's worth considering the funding landscape and the funding situations of the various organisations - "how much good" is obviously not static, and will depend on factors including this one. I've commented in other places in this thread that FHI is underfunded relative to its needs as an organisation and the research we think is important to do. I believe this is true.
I also believe that FHI has funding opportunities available to it that aren't available to CEA/CFAR/MIRI. For example, at present ~80% of our overall funding comes from a combination of industry and academic grants.
This has its downsides - while this allows us scope to do a large amount of pure existential risk research, some of our research has to be tailored towards satisfying our funders, and so from a pure X-risk reduction point of view we are somewhat constrained. There are also a number of valuable uses of money that we can't use these funds for.
It means the ~20% that we gain from philanthropic sources has very high utility. These are the funds that allow us to do public outreach, thesis prize competitions, further philanthropic/industry fundraising, extensive networking and visitor programmes, and other important work that doesn't fit into a traditional academic research project - thus these funds add utility to the other ~80% of funds. Also, we have one researcher currently funded from a philanthropic donation, and he has the freedom to work purely on the Xrisk work he (or Nick Bostrom) considers most important, rather than having to devote some time to e.g. framing research output to fit an academic funder. This unconstrained research capability is of obvious value.
However, it also means that we can (at least at present) get the bulk of the funds needed to run the FHI from sources that may not be available to these other valuable organisations. While we can't usually get grants to do exactly what we want to do, we can usually get grants to do something reasonably close - enough so that we can use them to get useful work done (even if we lose some efficiency compared to unconstrained funding). Hence, while I think donations sent to FHI will do quite a lot of good, I would be very reluctant to see too much funding diverted away from organisations that may not have the same breadth of funding options available to them if at all possible. (Frankly, I prefer to get funding from companies that sponsor rugby teams when they're not sponsoring FHI ;) ) This is also a reason CSER's been targeting academic councils in the first instance.
tl;dr: We get quite a lot of our funding from other sources, and so may not always need additional funds as much as organisations with a less broad funding profile. That said, philanthropic funds that come to us play an important role in our running and have high utility.
↑ comment by joaolkf · 2013-12-28T22:43:41.927Z · LW(p) · GW(p)
Frankly, I prefer to get funding from companies that sponsor rugby teams when they're not sponsoring FHI
Given that, if all goes really well, I will be part-time researching for FHI for three years funded by the Brazilian government, I can share that sentiment. The money will be going from researching Descartes's lost letters to Spinoza to X-risks. It will feel like I've just used a cheat code in utility space.
EDIT: After stating the above my motivation increased about omega-fold. I will print it and affix it on my fridge!
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-01-01T05:01:53.768Z · LW(p) · GW(p)
So it sounds as though while FHI values funds a lot in its current somewhat underfunded state, your utility function has steeply diminishing marginal returns for additional dollars past a certain point. Maybe you could start making annual announcements regarding the rough dollar amount beyond which you think FHI changes from a "great" giving opportunity to an "ok" giving opportunity, and also keep us up to date on what progress donors are making towards this dollar amount? This seems like it could be valuable for donors to better coordinate their giving and maximize overall x-risk reduction.
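A minimal sketch of what such an announcement could let donors compute, assuming a purely hypothetical $500k "great -> ok" threshold and an invented decay rate (FHI has announced no such numbers):

```python
import math

# Toy model of the marginal value of a donated dollar when an org has
# steeply diminishing returns past an announced funding threshold.
# Both constants are invented for illustration only.
ANNOUNCED_THRESHOLD = 500_000   # dollars; hypothetical "great -> ok" point
DECAY = 1 / 100_000             # how fast marginal value falls past it

def marginal_value(dollars_raised_so_far: float) -> float:
    """Value of one more dollar, normalized to 1.0 while underfunded."""
    excess = dollars_raised_so_far - ANNOUNCED_THRESHOLD
    if excess <= 0:
        return 1.0                    # below threshold: full value
    return math.exp(-DECAY * excess)  # past threshold: steep drop-off

print(marginal_value(200_000))  # 1.0   -> still a "great" opportunity
print(marginal_value(600_000))  # ~0.37 -> returns have fallen off
```

With public progress updates, each donor could plug in the current total and route money elsewhere once the marginal value of a dollar to FHI drops below that of the next-best organization.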
↑ comment by Sean_o_h · 2014-01-02T10:55:05.364Z · LW(p) · GW(p)
This sounds like a good idea. I'll either try to do so, or ask a member of CEA if they'd be willing to talk to me and then do so. I think I prefer the latter, as they may be able to give an assessment that is less biased and more informed by how "great" FHI is as a giving opportunity relative to other opportunities.
comment by joaolkf · 2013-12-27T15:57:31.136Z · LW(p) · GW(p)
Nick, Anders and Toby have been consulted by government agencies in the past; Nick in particular has done that several times (even by Thailand's government, apparently). If your concern is influence over government, FHI wins, given that I don't think movement building would get us as far as having a Prime Minister meet with FHI staff. It would have to be one serious movement to match just one or two meetings. It's likely that there aren't even enough eligible brains for an "AI-risks movement" of such scale.
However, it is not the case that "influence over the government" should be the most important criterion, mainly because right now we wouldn't even know exactly what to tell them, and it might take decades until we do. Hence, the most important criterion is the level of basic research. The mere fact that your question has no clear answer means we need more basic research, and thus that MIRI/FHI take precedence. I couldn't say whether FHI or MIRI would be better. As a wild guess, I would say FHI does more research, but that it somehow feeds off MIRI's non-academic flexibility and movement building. Likely, whichever one took precedence for resources would lose that precedence relatively quickly as it outgrew the other.
On the other hand, I have heard MIRI/FHI/CEA staff claiming they are much more in need of qualified people than money. So, if CFAR is increasing the supply of qualified people, then they ought to have priority. But it's not clear whether they are really doing that yet.
↑ comment by Sean_o_h · 2013-12-28T09:49:17.283Z · LW(p) · GW(p)
"On the other hand, I have heard MIRI/FHI/CEA staff claiming they are much more in need of qualified people than money"
"Getting the right people is the most important thing" is a general principle at FHI. However, in my 2.5 years managing FHI there have consistently been people we haven't been able to hire and research areas we haven't been able to address, although we've been successful in expanding and improving our funding situation quite a lot. If you presented us with another qualified person right now, I don't see how we would be able to hire them at present (although that may not be the case in some months, depending on grant successes, etc). We've also consistently been understaffed and underfunded on core operations, and thus have only been able to avail of a fraction of the opportunities available to us.
↑ comment by Benjamin_Todd · 2013-12-30T20:26:52.167Z · LW(p) · GW(p)
Note that Toby is a trustee of CEA and did most of his government consulting through GWWC, not the FHI, so it's not clear that FHI wins out in terms of influence over government.
Moreover, if your concern is influence over government, CEA could still beat FHI (even if FHI is doing very high level advocacy) by acting as a multiplier on the FHI's efforts (and similar orgs): $1 donated to CEA could lead to more than $1 of financial or human capital delivered to the FHI or similar. I'm not claiming this is happening, but just pointing out that it's too simple to say FHI wins out just because they're doing some really good advocacy.
Disclaimer: I'm the Executive Director of 80,000 Hours, which is part of CEA.
↑ comment by Sean_o_h · 2013-12-31T00:16:42.134Z · LW(p) · GW(p)
Re: point 1: The bulk of our policy consultations to date have actually been done by Nick Bostrom, although Anders Sandberg has done quite a bit, Toby has been regularly consulting with the UK government recently, and I've been doing some lately (mostly wearing my CSER hat, but drawing on my FHI expertise, so I would give FHI credit there ;) ), and others have also done bits and pieces.
↑ comment by joaolkf · 2013-12-30T21:18:12.136Z · LW(p) · GW(p)
I don't have the numbers off the top of my head, but the bulk of the consultations in my list are due to Nick. I believe he did many more before FHI even existed, back in the 90s. Nonetheless, I would guess he is probably very much willing to transfer the advocacy to CEA and similar organizations, as seems to be already happening. In my opinion, that isn't FHI's main role at all, even though they've been doing it a lot. As a wild guess, I would be inclined to say he probably actively declines a few consultations by now. As I said, we need research. Influence over the government is useless - and perhaps harmful - without it.
While they work together, I'm not sure advocacy and influence over the government are quite the same. I think advocacy here might be seen as closer to advertising and movement building, which in turn creates political pressure. Quite another thing is to be asked by the government to offer one's opinion.
↑ comment by Benjamin_Todd · 2013-12-30T22:31:14.959Z · LW(p) · GW(p)
I think both research and advocacy (both to governments and among individuals) are highly important, and it's very unclear which is more important at the margin.
It's too simple to say basic research is more important, because advocacy could lead to hugely increased funding for basic research.
comment by Peter Wildeford (peter_hurford) · 2013-12-27T15:05:09.675Z · LW(p) · GW(p)
movement-building activities are likely to be valuable, to increase the odds of the people at that government or corporation being conscious of AI safety issues
CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways. FHI might be the strongest opportunity here because of their academic associations, which give them more credibility in PR. I remember Luke citing FHI's and CSER's academic ties as the reason why they - and not MIRI - are better suited to do publicity.
Therefore, while I disagree with you that the most important thing is to increase the odds of the people at that government or corporation being conscious of AI safety issues, I think that given what values you have told me, FHI is the most likely to maximize them.
Replies from: wdmacaskill, Larks↑ comment by wdmacaskill · 2013-12-27T20:43:36.705Z · LW(p) · GW(p)
CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.
People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.
↑ comment by Larks · 2013-12-28T21:49:09.820Z · LW(p) · GW(p)
I remember Luke saying that FHI and CSER's academic ties as the reason why they're better suited to do publicity than FHI.
I assume you mean "than CEA", but you should probably clarify as it is important.
↑ comment by Peter Wildeford (peter_hurford) · 2013-12-28T22:31:56.107Z · LW(p) · GW(p)
I actually meant compared to MIRI. I edited it to make that clear. Thanks!
comment by Benjamin_Todd · 2013-12-30T21:07:26.149Z · LW(p) · GW(p)
We've collated a list of all the approaches that seem to be on the table in the effective altruism community for improving the long-run future. There are some other options, including funding GiveWell and GCRI. This doc also explains a little more of the reasoning behind the approaches. If you'd like more detail on how 80k might help reduce the risk of extinction, drop me an email at ben@80000hours.org.
In general, I think the question of how best to improve the long-run future is highly uncertain, but has high value of information, so the most important activities are: (i) more prioritisation research (ii) building flexible capacity which can act on whatever turns out to be best in the future.
MIRI, FHI, GW, 80k, CEA, CFAR, GCRI all aim to further these causes, and are mutually supporting, so are particularly hard to disentangle. My guess is that if you buy the basic picture, the key issues will be things like 'which organisation has the most pressing room for more funding at the moment?' rather than questions about the value of the particular strategies.
Another option would be to fund research into which org can best use donations. There's a chance this could be commissioned through CEA, though we'll need to think of some ways to reduce bias!
Disclaimer: I'm the Executive Director of 80,000 Hours, which is part of CEA.
comment by Adele_L · 2013-12-27T18:30:38.771Z · LW(p) · GW(p)
Seems like a case of harder choices matter less.
↑ comment by ChrisHallquist · 2013-12-27T18:44:13.614Z · LW(p) · GW(p)
Yes. Except this is one of those cases where more information could help, which I'm trying to gather by starting this thread.
↑ comment by Rob Bensinger (RobbBB) · 2013-12-29T08:53:24.343Z · LW(p) · GW(p)
That article is half-joking, and still qualifies its advice a lot:
This is a bit of a tongue-in-cheek suggestion, obviously - more appropriate for choosing from a restaurant menu than choosing a major in college. [...]
Does this have any moral for larger dilemmas, like choosing a major in college? Here, it's more likely that you're in a state of ignorance, than that you would have no real preference over outcomes. Then if you're agonizing, the obvious choice is "gather more information" - get a couple of part-time jobs that let you see the environment you would be working in. And, logically, you can defer the agonizing until after that. [...]
I do think there's something to be said for agonizing over important decisions, but only so long as the agonization process is currently going somewhere, not stuck.
We can actually try to quantify 'Is this process going somewhere?', by calculating the expected value for 'donate to MIRI', 'donate to FHI', etc., thinking about it some more, and then re-calculating (say, a week later). If after thinking about it and researching it a lot, your estimates are approximately the same (in absolute terms, and correcting for e.g. anchoring), then the process hasn't been useful, and this may be a case where agonizing is a waste of time.
If you're weighing Cause A against Cause B, and in March you expect 38 utilons from A and 41 from B, and in April you expect 42 utilons from A and 40 from B, then you'll have a hard time making up your mind, but the decision probably isn't very important.
On the other hand, if in March you expect 38 from A and 41 from B, and in April you expect 1500 from A and 90 from B, and in May you expect 150 from A and 190 from B.... then your decision is still difficult, but now it's probably reasonable for you to continue agonizing about it, where by 'agonizing' we mean 'acquiring more information and processing it more rigorously'.
This isn't an absolute rule, though. If a lot of value is at stake, and you're rigorous enough to estimate value to a lot of significant digits, then even if your preferences keep switching by proportionally small amounts, your decision may matter a lot in absolute terms. E.g., the choice between $5,000,010,000 and $5,000,000,000 matters a lot in a world where $10,000 can save lives.
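One way to operationalize this check, as a minimal sketch (the utilon figures are the made-up ones from above, and the 10% stability threshold is purely an assumption):

```python
# Sketch of the "is my agonizing going anywhere?" test: compare each
# cause's latest expected-value estimate against the previous one, and
# keep deliberating only if new information is still moving the answer.
estimates = {
    "Cause A": [38, 42, 150],  # March, April, May (utilons, made up)
    "Cause B": [41, 40, 190],
}

def still_worth_agonizing(estimates, rel_threshold=0.10):
    """True if any cause's estimate moved by more than rel_threshold
    (proportionally) in the latest re-evaluation."""
    for history in estimates.values():
        prev, latest = history[-2], history[-1]
        if abs(latest - prev) / max(abs(prev), 1e-9) > rel_threshold:
            return True  # estimates still swinging; more research pays
    return False

if still_worth_agonizing(estimates):
    print("Estimates are still moving: gather more information.")
else:
    print("Estimates have stabilized: pick the current leader.")
```

As the last paragraph notes, a purely proportional test like this can mislead when the absolute stakes are huge, so a fuller version would also weigh the absolute gap between options against the cost of further deliberation.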
comment by Mestroyer · 2013-12-27T04:40:38.138Z · LW(p) · GW(p)
Perhaps Luke Muehlhauser thinks CFAR is universe-scale useful for the same reason Eliezer Yudkowsky does?
And remember: To be a PC, you’ve got to involve yourself in the Plot of the Story. Which from the standpoint of a hundred million years from now, is much more likely to involve the creation of Artificial Intelligence or the next great advance in human rationality (e.g. Science) than… than all that other stuff. Sometimes I don’t really understand why so few people try to get involved in the Plot. But if there’s one thing I’ve learned in life, it’s that the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.
(If this is indeed the whole reason EY thinks so, besides movement-building).
comment by wdmacaskill · 2013-12-28T12:07:57.342Z · LW(p) · GW(p)
Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:
First point:
I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.
CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff
Reason -> donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It's plausible that $1 to CEA generates significantly more than $1's worth of x-risk-value [note: I'm a trustee and founder of CEA].
Second point:
Don't forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I'd defer to Sean_o_h if he disagreed). Reason: marginal donations will be used to fund program management + grantwriting, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background (high profile people on the board; an already written previous grant proposal that very narrowly missed out on being successful). High leverage!
↑ comment by Sean_o_h · 2013-12-28T17:07:59.989Z · LW(p) · GW(p)
On point 1: I can confirm that members of CEA have done quite a lot of awareness-spreading about existential risks and long-run considerations, as well as bringing FHI, MIRI and other organisations to the attention of potential donors who have concerns in this area. I generally agree with Will's point, and I think it's very plausible that CEA's work will result in more philanthropic funding coming FHI's way in the future.
On point 2: I also agree. I need to have some discussion with the founders to confirm some points on strategy going forward as soon as the Christmas period's over, but it's likely that additional funds could play a big role in CSER's progress in gaining larger funding streams. I'll be posting on this shortly.
↑ comment by lukeprog · 2013-12-31T08:51:36.744Z · LW(p) · GW(p)
I should clarify that when I said "I think it's hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR," I didn't mean that to be an exhaustive list. CSER could also be on the list, precisely because it's not just some random organization talking about how they want to reduce x-risk, but in fact is tightly connected to FHI.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-01-01T05:34:53.487Z · LW(p) · GW(p)
CSER could also be on the list, precisely because it's not just some random organization talking about how they want to reduce x-risk, but in fact is tightly connected to FHI.
For bystanders, note that opinions differ on whether CSER being tightly affiliated with FHI is a pro or a con. (Personally, I'm inclined to think that the Great Filter is either behind us or impossible to subvert, given bonobos.)
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-01-01T05:10:26.179Z · LW(p) · GW(p)
I looked but didn't see any donation info for CSER. Are they soliciting donations?
comment by hyporational · 2013-12-27T20:25:25.224Z · LW(p) · GW(p)
If I were a talented but broke EA, I would try to convince my mom to invest the money in me, since that would make me a more effective philanthropist faster.
A lot of people here argue that microdonations add up to a large whole, but I'm not sure they're applying probabilistic reasoning to the chances of that huge number of donations actually happening. Or maybe this donation is huge, what do I know.
↑ comment by ChristianKl · 2014-01-01T15:33:40.121Z · LW(p) · GW(p)
He's not making any money, so his mother probably already finances him in a way where he doesn't need to earn money to pay for living expenses.
Beyond that point it's not always easy to buy further effectiveness. The most effective route might be for him to focus most of his energy on his coding bootcamp instead of spending money to pursue other roads.
↑ comment by hyporational · 2014-01-01T23:42:07.670Z · LW(p) · GW(p)
Either that or he has savings. Talent and coding skills plus money sounds like a startup to me.
comment by eggman · 2014-07-15T08:47:39.150Z · LW(p) · GW(p)
Is there an update on this issue? Representatives from nearly all the relevant organizations have stepped in, but what's been reported has done little to resolve my confusion, and I find myself as divided on it as Mr. Hallquist originally was. Dr. MacAskill, Mr. Ó hÉigeartaigh, and Ms. Salamon have all explained why they believe the organizations they're attached to are the most deserving of funding. The problem is that this has done little to assuage my uncertainty about which organization is in the most need of funds, and which will have the greatest impact from a donation made now, relative to each of the others.
Thinking about it as I write this comment, it strikes me as unfortunate that organizations that genuinely want to cooperate towards the same ends are put in the awkward position of making competing(?) appeals to the same base of philanthropists. This might have been mentioned elsewhere in the comments, but donations to which organization do any of you believe would lead to the biggest return on investment in terms of attracting more donors, and talent, towards existential risk reduction as a whole? Which organization will most increase the base of effective altruists, and like-minded individuals, who would support this cause?