An Overview of the AI Safety Funding Situation
post by Stephen McAleese (stephen-mcaleese) · 2023-07-12T14:54:36.732Z · LW · GW · 9 comments
Comments sorted by top scores.
comment by NunoSempere (Radamantis) · 2024-05-24T11:34:24.503Z · LW(p) · GW(p)
I found this post super valuable but I found the presentation confusing. Here is a table, provided as is, that I made based on this post & a few other sources:
| Source | Amount for 2024 | Note |
|---|---|---|
| Open Philanthropy | $80M | Projected from past amounts |
| Foundation Model Taskforce | $20M | £100M, but unclear over how many years? |
| FLI | $30M | $600M donation in crypto; say you can get $300M out of it, distributed over 10 years |
| AI labs | $30M | |
| Jaan Tallinn | $20M | See [here](https://jaan.info/philanthropy/) |
| NSF | $5M | |
| LTFF (not OpenPhil) | $2M | |
| Nonlinear fund and donors | $1M | |
| Academia | Considered separately | |
| GWWC | $1M | |
| Total | $189M | Does not consider uncertainty! |
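For convenience, here is a minimal back-of-the-envelope check of the total in Python, using the figures straight from the table (the FLI line uses the rough estimate from the note: a $600M crypto donation, of which maybe $300M is realizable, spread over 10 years):

```python
# Back-of-the-envelope check of the table's total (rough 2024 estimates, in $ millions).
amounts_m = {
    "Open Philanthropy": 80,
    "Foundation Model Taskforce": 20,
    "FLI": 300 / 10,              # ~$300M realizable from the ~$600M crypto donation, over 10 years
    "AI labs": 30,
    "Jaan Tallinn": 20,
    "NSF": 5,
    "LTFF (not OpenPhil)": 2,
    "Nonlinear fund and donors": 1,
    "GWWC": 1,
}

print(f"Total: ${sum(amounts_m.values()):.0f}M")  # -> Total: $189M
```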
↑ comment by Zach Stein-Perlman · 2024-08-06T20:45:10.453Z · LW(p) · GW(p)
My impression is that FLI doesn't know how to spend their money and will do <<$30M in 2024; please correct me if I'm wrong.
↑ comment by habryka (habryka4) · 2024-08-06T21:14:08.829Z · LW(p) · GW(p)
FLI has in the past contributed to SFF grant rounds. I think if they do that again, they could potentially distribute ~$20M that way.
↑ comment by BrianTan · 2024-08-06T09:35:46.877Z · LW(p) · GW(p)
Thanks for making this! This is minor, but I think the total should be $189M and not $169M?
↑ comment by NunoSempere (Radamantis) · 2024-08-06T20:32:30.769Z · LW(p) · GW(p)
You're right, changed
↑ comment by Stephen McAleese (stephen-mcaleese) · 2024-05-26T09:52:33.818Z · LW(p) · GW(p)
Thanks for the table, it provides a good summary of the post's findings. It might also be worthwhile to add it to the EA Forum post as well.
I think the table should include the $10 million in OpenAI Superalignment fast grants as well.
comment by Nicholas / Heather Kross (NicholasKross) · 2023-07-12T23:20:22.961Z · LW(p) · GW(p)
I appreciate the analysis of talent-vs-funding constraints. I think the bar-for-useful-contribution is so high that we loop back around to "we need to spend more money (and effort) on finding (and making) more talent", and the programs to do those may be more funding-constrained than talent-constrained.
Like, the 20th century had some really good mathematicians and physicists, and the US government spared little expense towards getting them what they needed, finding them, and so forth. Top basketball teams will "check up on anyone over 7 feet that’s breathing".
Consider how huge Von Neumann's expense account must've been, between all the consulting and flight tickets and car accidents. Now consider that we don't seem to have Von Neumanns anymore. There are caveats [LW · GW] to at least that second point, but the overall problem still hasn't been "fixed".
Things an entity with absurdly-greater funding (e.g. the US Department of Defense) could probably do, with their absurdly-greater funding and probably coordination power:
- Indefinitely-long-timespan basic minimum income for everyone who works on (or is seriously trying to work on) AI alignment.
- Coordinating [LW · GW], possibly by force, every AI alignment researcher and aspiring alignment researcher on Earth to move to one place that doesn't have high rents like the Bay. Possibly up to and including creating that place and making it rent-free for those who are accepted in.
- Enforce a global large-ML-training shutdown.
- An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
- Genetic [LW · GW] engineering, focused-training-from-a-young-age, or other extreme "talent development" setups.
- All of these at once.
I think the big logistical barrier here is something like "LTFF is not the US government", or more precisely "nothing cool like this can be done 'on-the-margin' or with any less than the full funding". However, I think some of these could be scaled down into mere megaprojects [? · GW].
comment by Nicholas / Heather Kross (NicholasKross) · 2023-07-12T22:58:03.332Z · LW(p) · GW(p)
> The argument is that academia is huge and does a lot of AI safety-adjacent research on topics such as transparency, robustness, and safe RL. Therefore, even if this work is strongly discounted because it’s only tangentially related to AGI safety, the discounted contribution is still large.
I and others would argue that at least some "prosaic" safety research, such as interpretability, may actually be increasing P(doom) from AI, even if some of the work involved turns out to have been essential later on. This is partly because more-useful AI needs a modicum of steering (which is why AI labs fund this work at all).
My main worry is that having steering in place before [LW · GW] a goal could be a strategically bad order of events [LW · GW]. Even lots of shared structure between "steering" and "what to steer towards" [LW · GW] does not guarantee good outcomes.
comment by Roman Leventov · 2023-07-13T17:44:43.916Z · LW(p) · GW(p)
> AI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity.
This is a bad definition of "AI safety" as a field, and it muddies the waters somewhat. I would say that AI safety is one particular R&D branch (plus the meta and proxy activities around that R&D, such as AI safety fieldbuilding, education, outreach and marketing among students, grantmaking, and platform development like what apartresearch.com are doing) within the wider gamut of activity that strives to "prevent the negative result of civilisational AI transition".
There are also other sorts of activity that strive for that goal more or less directly. Some of them are also R&D, such as governance R&D (cip.org) and R&D in cryptography, infosec, and internet decentralisation (trustoverip.org). Others are not R&D: good old activism and outreach to the general public (StopAI, PauseAI), good old governance (policy development, the UK foundation model taskforce [LW · GW]), various "mitigation" or "differential development" projects and startups such as Optic, Digital Gaia, and Ought, social innovations, and innovations in education and psychological training of people (I don't know of any good examples of the latter two as of yet). See more details and ideas in this comment [LW(p) · GW(p)].
It's misleading to call this whole gamut of activities "AI safety"; "AI risk mitigation" would be a better label. Incidentally, 80,000 Hours, despite properly naming the cause area "Preventing an AI-related catastrophe", also suggests that the only two ways to apply one's efforts to it are "technical AI safety research" and "governance research and implementation", which is wrong, as I argued above.
Somebody may ask: isn't technical AI safety research a more direct and more effective way to tackle this cause area? I suspect that it might not be for people who don't work at AGI labs. That is, I suspect that independent or academic AI safety research might be inefficient enough (at least for most people attempting it) that they would do better to apply themselves to the various other activities and "mitigation" or "differential development" projects described above. (I will publish a post detailing the reasoning behind this suspicion later; for now, this comment [LW(p) · GW(p)] has the beginning of it.)