Where might I direct promising-to-me researchers to apply for alignment jobs/grants?
post by abramdemski · 2023-09-18T16:20:03.452Z · LW · GW
This is a question post.
People I've talked to or worked with sometimes ask me where they should go for financial support for their research. I haven't developed a standard list of answers to this question. It seems to me like there are a lot of new orgs recently, and I'm losing track!
If you are looking for such applicants or know someone who is looking, consider replying as an answer (or sending me a PM if that makes more sense for whatever reason).
Answers
answer by mruwnik
There is a Stampy answer to this question, which should stay up to date, here.
↑ comment by Keenan Pepper (keenan-pepper) · 2023-09-21T00:11:00.513Z · LW(p) · GW(p)
Upvoted because this mentions Nonlinear Network.
answer by LawrenceC
The main funders are the LTFF (Long-Term Future Fund), SFF/Lightspeed/other S-process funding from Jaan Tallinn, and Open Phil. The LTFF is the main one that solicits grant applications from independent researchers.
There are a lot of orgs. Off the top of my head, there's Anthropic/OpenAI/GDM as the scaling labs with decent-sized alignment teams, and then a bunch of smaller/independent orgs:
- Alignment Research Center
- Apollo Research
- CAIS
- CLR
- Conjecture
- FAR
- Orthogonal
- Redwood Research
And there's always academia.
(I'm sure I'm missing a few though!)
(EDIT: added in RR and CLR)
↑ comment by DanielFilan · 2023-09-19T01:33:17.874Z · LW(p) · GW(p)
Redwood Research?
↑ comment by LawrenceC (LawChan) · 2023-09-19T15:56:15.874Z · LW(p) · GW(p)
I don't think they're hiring, but added.
↑ comment by Martín Soto (martinsq) · 2023-09-19T11:42:05.035Z · LW(p) · GW(p)
Center on Long-term Risk (CLR)
↑ comment by momom2 (amaury-lorin) · 2023-09-19T21:14:36.421Z · LW(p) · GW(p)
In France, EffiSciences is looking for new members and interns.
answer by plex
https://aisafety.world/tiles/ has a bunch.
answer by Chipmonk
Very surprised that you don't have a regranting budget! I don't know which funder I would expect to do that, but I would've expected this to be more common.
I guess Jaan Tallinn does this, and Manifund does too. Hmm.
answer by stavros
Depending on the kind of support they're looking for, https://ceealar.org could be an option. At any one time there are a handful of people staying there, working independently on AI Safety stuff.