Jobs: Help scale up LM alignment research at NYU
post by Sam Bowman (sbowman) · 2022-05-09T14:12:22.938Z · LW · GW
NYU is hiring alignment-interested researchers!
- I'm setting up a CHAI-inspired research center at NYU that will focus on empirical alignment work, primarily on large language models, and I'm looking for researchers to join and help build it.
- The alignment group at NYU is still small, but should be growing quickly over the next year. I'll also be less of a hands-on mentor next year than in future years, because I'll simultaneously be holding a visiting position at Anthropic. So, for the first few hires, I'm looking for people who are relatively independent and have some track record of doing alignment-relevant work.
- That said, I'm not necessarily looking for a lot of experience, as long as you think you're in a position to work productively on some relevant topic with a few peer collaborators. For the pre-PhD position, a few thoughtful forum posts or research write-ups can be a sufficient qualification. We're looking for either ML experimental skills or conceptual alignment knowledge relevant to empirical work, not necessarily both.
- Our initial funding is coming from Open Philanthropy, for a starting project inspired by AI Safety Via Debate. Very early results from us (and a generally encouraging note from Paul C.) are here [AF · GW].
- Pay and benefits really are negotiable, and we're willing to match industry offers if there's a great fit. Don't let compensation concerns stop you from applying.
1 comment
comment by RHollerith (rhollerith_dot_com) · 2022-05-10T16:36:43.194Z · LW(p) · GW(p)
The LM in the title means "language model".