Funding case: AI Safety Camp

post by Remmelt (remmelt-ellen), Linda Linsefors · 2023-12-12T09:08:18.911Z

This is a link post for https://manifund.org/projects/10th-edition-of-ai-safety-camp

Contents

  Project summary
  What are this project's goals and how will you achieve them?
  How will this funding be used?
    Virtual AISC - Budget version 
    Virtual AISC - Normal version 
  Who is on your team and what's your track record on similar projects?
    Organisers
    Track record
  What are the most likely causes and outcomes if this project fails? (premortem)
  What other funding are you or your project getting?

Project summary

AI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety.

We support up-and-coming researchers outside the Bay Area and London hubs.

We are out of funding. To make the 10th edition happen, fund our stipends and salaries.

 

What are this project's goals and how will you achieve them?

AI Safety Camp is a program for inquiring into how to ensure future AI is safe, and for trying out concrete work on that question in a team.

For the 9th edition of AI Safety Camp, we opened applications for 29 projects.

We are the first to host a special area supporting “Pause AI” work. With funding, we can scale from 4 projects on restricting corporate AI development to 15 projects next edition.

We are excited about our new research lead format, since it combines:

 

How will this funding be used?

We are fundraising to pay for:


Whether we run the tenth edition or put AISC on hold indefinitely depends on your donations.

Last June, we had to freeze a year's worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers.

AISC has previously received grants funded with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends – but nothing for salaries, and nothing for future AISCs.

If we have enough money, we might also restart the in-person version of AISC. This decision will also depend on an ongoing external evaluation of AISC, which, among other things, is assessing the difference in impact between the virtual and in-person AISCs.

By default we’ll decide what to prioritise with the funding we get. But if you want to have a say, we can discuss that. We can earmark your money for whatever you want.


Potential budgets for various versions of AISC

These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we’ll do something in between.

Virtual AISC - Budget version 

Software etc.: $2K
Organiser salaries (2 people, 4 months): $56K
Stipends for participants: $0
Total: $58K

In the Budget version, the organisers do the minimum needed to get the program started, with no continuous support for AISC teams during their projects and no time for evaluating and improving future editions of the program.

Salaries are calculated based on $7K per person per month.
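For reference, both salary lines follow directly from this rate (the Normal version budget appears below):

$$2 \text{ people} \times 4 \text{ months} \times \$7\text{K} = \$56\text{K}, \qquad 3 \text{ people} \times 6 \text{ months} \times \$7\text{K} = \$126\text{K}$$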

Virtual AISC - Normal version 

Software etc.: $2K
Organiser salaries (3 people, 6 months): $126K
Stipends for participants: $185K
Total: $313K

For the Normal version, we have one more staff member and more paid hours per person, which means we can provide more support all round.

Stipends estimate based on: $185K = $1.5K/research lead × 40 + $1K/team member × 120
The numbers of research leads (40) and team members (120) are guesses based on how much we expect AISC to grow.
 

 

Who is on your team and what's your track record on similar projects?

We have run AI Safety Camp over five years, covering 8 editions, 74 teams, and 251 participants.

We iterated a lot, based on participant feedback. We converged on a research lead format we are excited about. We will carefully scale this format with your support.

As researchers ourselves, we can meet potential research leads where they are. We can provide useful guidance and feedback in almost every area of AI Safety research.

We are particularly well-positioned to support epistemically diverse bets.
 

Organisers

Remmelt – coordinator of "do not build uncontrollable AI"


Linda – coordinator of "everything else"

 

Track record

AI Safety Camp is primarily a learning-by-doing training program. People get to try a role and explore directions in AI safety, by collaborating on a concrete project.

Multiple alumni have told us that AI Safety Camp was how they got started in AI Safety.
AISC topped the ‘average usefulness’ list in Daniel Filan’s survey.

Papers that came out of the camp include:

Projects started at AI Safety Camp went on to receive a total of $613K in grants:

AISC 1: Bounded Rationality team – $30K from Paul
AISC 3: Modelling Cooperation – $24K from CLT, $50K from SFF, $83K from SFF, $83K from SFF
AISC 4: Survey – $5K from LTFF
AISC 5: Pessimistic Agents – $3K from LTFF
AISC 5: Multi-Objective Alignment – $20K from EV
AISC 6: LMs as Tools for Alignment – $10K from LTFF
AISC 6: Modularity – $125K from LTFF
AISC 7: AGI Inherent Non-Safety – $170K from SFF
AISC 8: Policy Proposals for High-Risk AI – $10K from NL

Organizations launched out of camp conversations include:

Alumni went on to take positions at:

For statistics of previous editions, see here. We also recently commissioned Arb Research to run alumni surveys and interviews to carefully evaluate AI Safety Camp's impact.

 

What are the most likely causes and outcomes if this project fails? (premortem)

 

What other funding are you or your project getting?

No other funding sources.

5 comments


comment by JuliaHP · 2023-12-12T10:57:19.627Z · LW(p) · GW(p)

Many more are engaged in AI Safety in other ways, e.g. as PhD students or independent researchers. These are just the positions we know about; we have not yet done a comprehensive survey.


Worth mentioning that most of the Cyborgism community founders came out of or did related projects in AISC beforehand.

comment by Remmelt (remmelt-ellen) · 2023-12-12T11:03:22.208Z · LW(p) · GW(p)

Oh yeah, I totally forgot to mention that.

Thank you!

comment by Jonathan Claybrough (lelapin) · 2023-12-12T11:02:42.882Z · LW(p) · GW(p)

I participated in the previous edition of AISC and found it very valuable to my involvement in AI Safety. I acquired knowledge (on standards and the standards process), gained experience, and made contacts. I appreciate how much coordination AISC enables, with groups forming that give many people their first hands-on experience and a way to step up their involvement.

comment by Remmelt (remmelt-ellen) · 2023-12-16T10:23:55.397Z · LW(p) · GW(p)

Thank you for sharing, Jonathan. 
We welcome any comments here (including things that went less well, so we can do better next time!).

comment by zeshen · 2023-12-23T10:37:36.358Z · LW(p) · GW(p)

Strong upvoted. I was a participant in AISC8, on the team that went on to launch AI Standards Lab, which I think counterfactually would not have been launched if not for AISC.