Best Ways to Try to Get Funding for Alignment Research?

post by RGRGRG · 2023-04-04T06:35:05.356Z · LW · GW · 6 comments

This is a question post.


Hey Everyone! I recently left my FAANG job to split my time between doing Alignment Research (70%) and investigating start-up ideas (30%).

If I decide to fully commit to Alignment Research, what is the best way to go about applying for and/or getting funding?  (In a perfect world, this funding would cover compute and SF living expenses).

Thanks! RGRGRG

Answers

6 comments

Comments sorted by top scores.

comment by the gears to ascension (lahwran) · 2023-04-04T06:41:02.446Z · LW(p) · GW(p)

Generally, to get funding you'll need to show you have a plausible plan - as with all funding situations, there are a great many folks who just want funding and have no real interest in the key problems. Showing you have a plausible plan means showing your work to other researchers who can check it.

There are funds to apply to, but I'm not super familiar with that level of it - mostly people I know seem to get funding by sharing their plans and demonstrating that they have some grip on the problem.

Replies from: RGRGRG
comment by RGRGRG · 2023-04-04T06:46:14.140Z · LW(p) · GW(p)

Thanks

> key problems 

Is there a blog post listing the key problems?

> sharing their plans

Where is the best place to share? Once I come up with a plan I'm happy with, is there value in posting it on this site?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-04T07:20:21.701Z · LW(p) · GW(p)

There are a number of intro posts floating around. https://stampy.ai/ is one major project to organize the key topics into a list. I keep meaning to look up more but keep getting distracted, so I'm sending this instead of nothing.

edit: here are some more https://www.lesswrong.com/tag/ai-alignment-intro-materials [? · GW]

Replies from: RGRGRG
comment by RGRGRG · 2023-04-04T17:27:45.407Z · LW(p) · GW(p)

Thanks!

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-04T19:19:14.913Z · LW(p) · GW(p)

I see on your profile that you have already done a PhD in machine learning, which gives you a kind of context I'm happy to see. I would love to talk synchronously at some point; do you have time for a voice call or focused text chat about your research background? I'm just an independent researcher and am not asking because of any ability to fund you, but I'm interested in talking with experienced people to exchange perspectives.

I'm most interested in fixing the QACI plan and in fully understanding what the word "agency" should mean in all edge cases; some great progress has been made on that, e.g. check out the two papers mentioned in the comments of this post: https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency [LW · GW]

A lot of what we're worried about is AI reaching simulation-grade agency over us before we've reached simulation-grade agency in return; this could happen for any of a variety of reasons. Curious about your thoughts!

Replies from: RGRGRG
comment by RGRGRG · 2023-04-05T04:11:56.897Z · LW(p) · GW(p)

DM'd