Comment by Birk Källberg on If I have some money, whom should I donate it to in order to reduce expected P(doom) the most? · 2025-01-03T22:58:14.139Z

Wanting to answer a very similar question, I’ve just done about a day of donation research into x-risk funds. There are three that have caught my interest:

  • Long Term Future Fund (LTFF) from EA Funds
    • in 2024, LTFF grants went mostly to individual TAIS researchers (plus some policy folks and very small orgs) working on promising projects. Most are 3- to 12-month stipends between $10k and $100k.
    • see their Grants Database for details
  • Emerging Challenges Fund (ECF) - Longview Philanthropy
    • gives grants to orgs in AIS, biorisk, and nuclear risk. Funds both policy work (diplomacy, legislation, advocacy) and technical work (TAIS research, technical biosafety)
    • see their 2024 Report for details
  • Global Catastrophic Risks Fund (GCR Fund) - Founders Pledge
    • focuses on prevention of great power conflicts
    • their grants cover things like US-China diplomacy efforts on nuclear, AI, and autonomous-weapons issues, as well as biorisk strategy and policy work.


A very rough estimate of LTFF's effectiveness (by how much does $1 reduce p(doom)?):

The article Microdooms averted by working on AI Safety uses a simple quantitative model to estimate that one extra AIS researcher will avert 49 microdooms on average at current margins.
Considering only humanity's current 8B people, this would mean roughly 400,000 current people saved in expectation by each additional researcher. Note that depending on parameter choices, the model's result could easily go up or down by an order of magnitude.
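
Spelling out that conversion (a microdoom is a 10⁻⁶ chance of doom, so this is just the probability reduction multiplied by the current population):

$$49 \times 10^{-6} \times 8 \times 10^{9}\ \text{people} \approx 3.9 \times 10^{5} \approx 400{,}000\ \text{people saved in expectation}$$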

The rest are my own calculations (a quick back-of-envelope code sketch follows this list):

  • optimistic case: the researcher has all their impact in the first year and only requires a yearly salary of $80k. This would imply 0.6 nanodooms / $ or 5 current people saved / $.
  • pessimistic case: the researcher takes 40 years (a full career) to have that impact, and large compute and organizational staffing costs mean their career costs 10x their salary. This implies 400x lower effectiveness, i.e. 1.5 picodooms / $ or 0.012 current people saved / $, or $80 to save a person.
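
A minimal sketch of these back-of-envelope numbers in code, assuming the 49-microdooms figure from the article and the cost assumptions in the two cases above (the variable names are mine):

```python
# Back-of-envelope: cost-effectiveness of funding one extra AIS researcher,
# assuming the 49-microdooms-per-researcher estimate from the linked article.

MICRODOOMS_PER_RESEARCHER = 49        # from "Microdooms averted by working on AI Safety"
CURRENT_POPULATION = 8e9              # only counting currently living people
PEOPLE_SAVED = MICRODOOMS_PER_RESEARCHER * 1e-6 * CURRENT_POPULATION  # ~392,000

def effectiveness(total_cost_usd: float) -> tuple[float, float]:
    """Return (nanodooms averted per $, current people saved per $) for a given total cost."""
    dooms_per_dollar = MICRODOOMS_PER_RESEARCHER * 1e-6 / total_cost_usd
    return dooms_per_dollar * 1e9, PEOPLE_SAVED / total_cost_usd

# Optimistic case: all impact in the first year, total cost = one year's salary ($80k).
opt_nanodooms, opt_people = effectiveness(80_000)

# Pessimistic case: 40-year career, total costs 10x salary -> 400x the optimistic cost.
pes_nanodooms, pes_people = effectiveness(80_000 * 40 * 10)

print(f"optimistic:  {opt_nanodooms:.2f} nanodooms/$, {opt_people:.1f} people/$")
print(f"pessimistic: {pes_nanodooms * 1000:.2f} picodooms/$, {pes_people:.3f} people/$, "
      f"~${1 / pes_people:.0f} per person")
```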

For me at least, these actually look like quite promising results! I now think of "funding an extra AIS researcher" as a baseline against which to compare other x-risk interventions.

One can do better than that: Finding and supporting especially talented researchers or ones working on especially promising avenues should be a lot more effective than funding the average AIS researcher. This is exactly what LTFF is doing right now.

The other two funds seem to focus more on finding and supporting especially promising policy efforts at the organizational level. Their picks seem to me potentially even more promising than LTFF's, but I currently have no way to model this, so that's just my intuition.

I intend to start donating to one of these three funds as a consequence of these findings.