Suppose $1 billion is given to AI Safety. How should it be spent?

post by hunterglenn · 2021-05-15T23:24:26.927Z · LW · GW · 1 comment

This is a question post.


What are the current bottlenecks in AI Safety progress, and if they were solved with money, what would be the next bottlenecks? 

Do current researchers need more money? Do we need to catalyze the creation of more researchers? Do we need to alter public opinion? Something else?

If you had $1 billion to spend on AI Safety, what would you do with it all, from start to finish?

Answers

answer by Charlie Steiner · 2021-05-16T10:18:26.850Z · LW(p) · GW(p)

The biggest problem is taste. A hypothetical billion-dollar donor needs to somehow turn their grant into money spent on useful things for current and future people actually working on AI safety, rather than on relatively useless projects marketed by people who are skilled at collecting grant money. This is more or less the problem Open Philanthropy has been trying to solve, and they're doing an okay job, so if I were a non-expert billionaire I would try to do something meta like OpenPhil.

But if I personally had a billion dollars to spend, and had to spend it with a 10-year horizon...

Things to plausibly do:

  • Grants for current entities. Giving them more than they currently need is just a sneaky way of spreading around the allocation process. Might be better to just give them a small amount (~2M/yr, i.e. ~2% of the total), but partner with them through my meta-level grantmaking organization (see below).

  • Money to move adjacent workers, or experts not currently working on AI alignment, into full-time work. Might also be related to the next item:

  • Found my own object-level AI alignment organization. Budget depends on how big it is. Probably can't scale past 50 people or 5M/yr very well, given my current sense of the pool of people worth hiring.

  • Securing computing resources. Currently unimportant (except secondarily for reputation and/or readiness), but it might become very important very suddenly sometime in the future. Spend ~0.4M/yr on preparing, but set aside 100M-200M for compute?

  • Found or partner with a meta-level organization to search for opportunities I'm not aware of now or don't have the expertise to carry out, and to do meta-level work as a method of promoting AI safety (e.g. searching for opportunities to promote AI alignment work in China, or lobbying other organizations such as Google by providing research on how they can contribute to AI safety). (3M/yr on the org, setting aside ~30M/yr for opportunities)

  • Found a meta-level organization (may be part of another organization) focused on influencing the education of students interested in AI. Maybe try to get textbooks written that have a LW perspective, partner with professors to develop courses on alignment-related topics, and also make some grants to places we think are doing it right and for students excelling at those places. (Say 2M/yr on background, 4M/yr on grants)

That's only six things. Could I spend ~160M on each of them (16M per year for 10 years)? Looking at the estimates above, maybe not. This suggests that to spend money at the billion-dollar level, we might only be able to put part of it (my estimate says about 60%) into things with good marginal returns from where we're currently standing, and the rest might have to go into some large hierarchical nonprofit that tries to turn ordinary philosophers, mathematicians, and software engineers into AI alignment workers by paying them or otherwise making it a glamorous problem for the best and brightest to work on. But I'm worried that bad ideas could become popular in that kind of ecosystem. Some iron-fisted control over the purse strings may be necessary.
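A rough tally of the per-year figures above bears this out, assuming the unspecified "adjacent workers" item is folded into the others and taking the midpoint of the 100M-200M compute reserve:

```python
# Rough tally of the per-year line items listed above (figures in $M/yr).
# The "adjacent workers" item has no stated figure and is omitted here;
# the compute reserve uses the midpoint of the 100M-200M range.
yearly_items_musd = {
    "grants to current entities": 2.0,
    "object-level alignment org": 5.0,
    "compute readiness": 0.4,
    "meta-level org": 3.0,
    "opportunities fund": 30.0,
    "education org (background)": 2.0,
    "education grants": 4.0,
}
horizon_years = 10
compute_reserve_musd = 150.0  # midpoint of the 100M-200M set-aside

total_musd = sum(yearly_items_musd.values()) * horizon_years + compute_reserve_musd
print(f"Ten-year total: ~{total_musd:.0f}M, i.e. ~{total_musd / 1000:.0%} of $1B")
# -> Ten-year total: ~614M, i.e. ~61% of $1B
```

So the listed items soak up roughly 600M of the billion over ten years, which is consistent with the ~60% estimate.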

I'm not sure if this is "talent-limited" or not. To some extent yes, it is. But we're also limited by the scale of social trust, and by what one might call the "surface area" of the problem, which determines how fast the returns diminish when just adding more people, even if they were clones of known people.

1 comment


comment by Adam Zerner (adamzerner) · 2021-05-16T00:59:47.429Z · LW(p) · GW(p)

MIRI's 2017 Fundraiser might provide some insight even though the amount of money discussed is off by 2-3 orders of magnitude.