It looks like there are some good funding opportunities in AI safety right now

post by Benjamin_Todd · 2024-12-22T12:41:02.151Z

This is a link post for https://benjamintodd.substack.com/p/looks-like-there-are-some-good-funding

The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.

However, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed as a recommender in the most recent SFF grant round.[1]

Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures. But they’ve recently stopped funding several categories of work (my own categories, not theirs):

In addition, they are currently not funding (or not fully funding):

This means many of the organisations in these categories have only been able to access a minority of the available philanthropic capital (in recent history, I’d guess ~25%). In the recent SFF grant round, I estimate they faced a funding bar 1.5 to 3 times higher.

This creates a lot of opportunities for other donors: if you’re into one of these categories, focus on finding gaps there.

In addition, even among organisations that can receive funding from Good Ventures, receiving what’s often 80% of funding from one donor is an extreme degree of centralisation. By helping to diversify the funding base, you can probably achieve effectiveness somewhat above that of Good Ventures itself (which is kinda cool, given they’re a foundation with 20+ extremely smart people figuring out where to donate).

Open Philanthropy (who advise Good Ventures on what grants to make) is also large and capacity-constrained, which makes it relatively easy to miss small, new organisations (<$250k), individual grants, or grants that require speed. So smaller donors can play a valuable role by acting as “angel donors” who identify promising new organisations and then pass them on to OP to scale up.

In response to the attractive landscape, SFF allocated over $19m of grants, compared to an initial target of $5m to $15m. However, that wasn’t enough to fill all the gaps.

SFF published a list of the organisations that would have received more funding if they’d allocated another $5m or $10m. This list isn’t super reliable, because less effort was put into thinking about this margin, but it’s a source of ideas.

Some more concrete ideas that stand out to me as worth thinking about are as follows (in no particular order):

I’m not making a blanket recommendation to fund these organisations, but they seem worthy of consideration, and also hopefully illustrate a rough lower bound for what you could do with $10m of marginal funds. With some work, you can probably find stuff that’s even better.

I’m pretty uncertain how this situation is going to evolve. I’ve heard there are some new donors starting to make larger grants (e.g. Jed McCaleb’s Navigation Fund). And as AI safety becomes more mainstream, I expect more donors to enter. Probably the most pressing gaps will be better covered in a couple of years. If that’s true, giving now could be an especially impactful choice.

In the future, there may also be opportunities to invest large amounts of capital in scalable AI alignment efforts, so it’s possible future opportunities will be even better. But there are concrete reasons to believe there are good opportunities around right now.

If you’re interested in these opportunities:

  1. I'm writing this in an individual capacity and don't speak for SFF or Jaan Tallinn.
