What is the most effective way to donate to AGI XRisk mitigation?
post by JoshuaFox · 2021-05-30T11:08:10.446Z · LW · GW · 11 comments
There are now many organizations in the field of Existential Risk from Artificial General Intelligence. I wonder which can make the most effective use of small donations.
My priority is mathematical or engineering research aimed at mitigating XRisk from superhuman AGI.
My donations go to MIRI, which for now looks best to me, but I would appreciate thoughtful assessment.
- Machine Intelligence Research Institute pioneered AGI XRisk mitigation (alongside FHI, below) and does foundational research. Their approach aims to avoid a rush to implementing an AGI with unknown failure modes.
- Alignment Research Center: Paul Christiano's new organization. He has done impressive research and has worked with both academia and MIRI.
- Center for Human-Compatible Artificial Intelligence at Berkeley: If you're looking to sponsor academia rather than an independent organization, this one does research that combines mainstream AI methods with serious consideration of XRisk.
- Future of Humanity Institute at Oxford is a powerhouse in multiple relevant areas, including AGI XRisk Research.
- The Centre for the Study of Existential Risk at Cambridge. Looks promising. I haven't seen much on AGI XRisk from them.
- Leverhulme Centre for the Future of Intelligence, also at Cambridge and linked to CSER.
- Smaller organizations whose scope goes beyond AGI XRisk. I don't know much about them otherwise.
- Donating money to a grant-disbursing organization makes sense if you believe they are better able to determine effectiveness than you are. Alternatively, you might be guided by their decisions as you make your own donations.
- Open Philanthropy Project
- Long-Term Future Fund
- Survival and Flourishing Fund / Survival and Flourishing.org
- Solenum Foundation (see here). Recently, Jaan Tallinn discussed a new initiative for an assessment pipeline that will aggregate expert opinion on the most effective organizations.
- Berkeley Existential Risk Initiative
- Future of Life Institute: It's not clear whether they still actively contribute to AI XRisk research, but they did disburse grants a few years ago.
Are there others?
11 comments
Comments sorted by top scores.
comment by adamShimi · 2021-05-30T17:34:14.195Z · LW(p) · GW(p)
Quick thought: I expect that the most effective donation would be to organizations funding independent researchers, notably the LTFF.
Note that I'm an independent researcher funded by the LTFF (and Beth Barnes), but even if you told me that the money would never go to me, I would still think that.
- Grants from organizations like that have a good track record of producing valuable research: at least two people I consider among the most interesting thinkers on the topic (John S. Wentworth and Steve Byrnes) have received such grants (Steve is technically funded by Beth Barnes with money from the donor lottery), and others I'm really excited about (like Alex Turner) were helped by LTFF grants.
- Such grants allow researchers both to bootstrap their careers and to explore less-incentivized subjects related to alignment early in their careers.
- They are cheaper than funding a hire at an organization like MIRI, ARC, or CHAI.
↑ comment by JoshuaFox · 2021-05-31T10:04:03.938Z · LW(p) · GW(p)
Thank you. Can you link to some of the better publications by Wentworth, Turner, and yourself? I've found mentions of each of you online but I'm not finding a canonical source for the recommended items.
- I found this about Steve Byrnes
- This about Beth Barnes
↑ comment by adamShimi · 2021-05-31T11:20:25.010Z · LW(p) · GW(p)
Sure.
- For Alex Turner, his main sequence [? · GW] is the place to start.
- For John Wentworth, he has a sequence on abstraction [? · GW], and a lot of [AF · GW] great [AF · GW] content [AF · GW] around it.
- For Steve Byrnes, most of his work is on brain-based (or brain-inspired) AGI; see here [AF · GW], here [AF · GW], and here [AF · GW] for example.
- Personally, I feel like my best work is stuff I'm working on right now, but you can look at my sequence on goal-directedness [? · GW] and my sequence of distillations [? · GW].
For a bit more funding information:
comment by Rafael Harth (sil-ver) · 2021-05-30T14:44:13.507Z · LW(p) · GW(p)
You may be interested in Larks' AI Alignment charity reviews [EA · GW]. The only organization I would add is the Qualia Research Institute, which is my personal speculative pick for the highest-impact organization, even though they don't do alignment research. (They're trying to develop a mathematical theory of consciousness and qualia.)
↑ comment by JoshuaFox · 2021-05-30T15:38:54.313Z · LW(p) · GW(p)
Thank you! That is valuable. I'd also love to get educated opinions on the quality of the research at some of these, with a focus on foundational or engineering research aimed at superhuman-AGI XRisk (done mostly, I think, at MIRI, at FHI, and by Christiano), but that article is great.
comment by Ofer (ofer) · 2021-05-31T12:28:40.664Z · LW(p) · GW(p)
There may be many people working for top orgs (in the donor's judgment) who are able to convert additional money into productivity effectively. This seems especially likely in academic orgs, which probably face strict restrictions on salaries (but I wouldn't be surprised if it's similarly the case for other orgs). So a private donor could solicit applications (with minimal form-filling) from such people, and then distribute the donation among those who applied.
comment by Ben Pace (Benito) · 2021-06-11T20:37:42.228Z · LW(p) · GW(p)
Gonna +1 the other comments that name the LTFF and Larks' annual reviews. Though if I were to donate myself I'd probably go with a donor lottery. (The CEA donor lottery is not currently up alas.)
comment by Charlie Steiner · 2021-05-30T16:08:42.256Z · LW(p) · GW(p)
I agree that there are multiple types of basic research we might want to see, and maybe not all of them are getting done. I therefore actually put a somewhat decent effect size on traditional academic grants from places like FLI, even though most of their grants aren't useful, because it seems like a way to actually get engineers to work on problems we haven't thought of yet. This is the grant-disbursing process as an active ingredient, not just as filler. I am skeptical that this effect is bigger on the margin than just increasing CHAI's funding, but presumably we want some amount of diversification.
↑ comment by JoshuaFox · 2021-05-31T10:05:05.562Z · LW(p) · GW(p)
Thank you. Can you point me to a page on FLI's latest grants? What I found was from a few years back. Are there other organizations whose grants are worthy of attention?
↑ comment by Charlie Steiner · 2021-05-31T18:21:57.231Z · LW(p) · GW(p)
I actually haven't heard anything out of them in the last few years either. My knowledge of grantmaking organizations is limited - I think similar organizations like Berkeley Existential Risk Initiative, or the Long-Term Future Fund, tend to be less about academic grantmaking and more about funding individuals and existing organizations (not that this isn't also valuable).
↑ comment by Charlie Steiner · 2021-06-11T19:12:47.982Z · LW(p) · GW(p)
Right on time, it turns out there are more grants - but now I'm not sure whether these are academic-style or not (I guess we might see the recipients later). https://futureoflife.org/fli-announces-grants-program-for-existential-risk-reduction/