Finally Entering Alignment
post by Ulisse Mini (ulisse-mini) · 2022-04-10T17:01:36.898Z · LW · GW · 8 comments
Background
Reading Death With Dignity [LW · GW], combined with recent advancements shortening my timelines, has finally made me understand on a gut level that nature is allowed to kill me [LW · GW]. Because of this, alignment has gone from "an interesting career path I might pursue after I finish studying" to "if I don't do something, I'll die while there was something else I could have done."
A big part of writing this is to get tailored advice, so here's a bit about my current skills that could be useful to the cause:
- Intermediate programmer. I haven't done much in AI, but I'm confident I can learn quickly. I'm good with Linux and can do sysadmin-y things.
- I know a good amount of math; I've read textbooks on real analysis and linear algebra. With some effort, I think I can understand technical alignment papers, though I'm far from writing them.
- I'm 17 and unschooled, meaning I have nearly infinite free time and no financial obligations.
Babble
Inspired by Entering at the 11th Hour [LW · GW], here's a numbered list of things I could do that might contribute (not quite "babble," but close enough):
1. Learn ML engineering and apply to AI safety orgs. I believe I can get to the level described in AI Safety Needs Great Engineers [AF · GW] in a few months of focused practice.
2. Deal with alignment research debt by summarizing results, writing expositions, making quiz games, etc.
3. Help engineers learn math: hold study groups, tutor, etc.
4. Help researchers learn engineering. (I'm not an amazing programmer, but a lot of research code is questionable, to say the least.)
5. Donate money to safety orgs. (I'll wait until I'm familiar with all the options so I can weight donations by likelihood of success.)
6. Host an AI/EA/Rationality meetup. I live in the middle of nowhere, so it would be the only one around.
7. Try to convince some young math prodigies that alignment is important. (I've run into a few in math groups before.)
8. Make a website/YouTube channel/podcast debating AGI with people in order to convince them and raise awareness. (Changing even one person's career path is worth a lot.)
9. Lobby local politicians; see if anyone I know has connections and can put in a word.
10. Become active on LessWrong and EleutherAI in order to find friends who'll help along the way. This is hard for me right now because of impostor syndrome (you don't want to know how long writing this post took).
Reflection
I most like (1), (2-4), and (6). (7) is something I'll do the next time I have the chance.
I'm going to spend my working time studying the engineering needed to get hired at a safety org. If anyone here is good at programming and bad at math (or the converse), please contact me; I'd love to help. (Teaching helps me learn a subject too, so don't be shy.)
Updates
I applied to Atlas after prompting in the comments and got accepted, which led to me becoming more and more involved in the Berkeley rationalist scene (e.g., I stayed in an experimental group house for a month in October-November 2022), and now I'm doing SERI MATS under Alex Turner [LW · GW] until March 2023.
I still have a long way to go before aligning the AI, but I'm making progress :)
8 comments
comment by johnswentworth · 2022-04-10T18:54:03.689Z · LW(p) · GW(p)
If you're planning to study/teach math anyway, I've found that framing exercises [? · GW] are a really good 80/20 for getting people able to use mathematical concepts. However, it takes a fair bit of work to create a good framing exercise. So if you could create a bunch of those, I expect they'd be a fairly powerful tool for creating more competent researchers.
(Also, I have a post [LW · GW] with a big list of useful-for-alignment math, almost all of which would benefit from lots of framing exercises.)
↑ comment by Ulisse Mini (ulisse-mini) · 2022-04-10T21:10:50.366Z · LW(p) · GW(p)
Thanks! I will definitely read those!
↑ comment by Ulisse Mini (ulisse-mini) · 2022-04-11T00:13:25.523Z · LW(p) · GW(p)
Read it; that study guide is really good. It really motivates me to branch out, since I've definitely over-focused on depth before and not done enough applications/"generalizing".
This also reminds me of Miyamoto Musashi's 3rd principle: "Become acquainted with every art."
comment by Chris_Leong · 2022-04-10T19:33:22.638Z · LW(p) · GW(p)
You may want to apply for the Atlas Fellowship. There's also the AGI Safety Fundamentals course (you may need to be 18).
Regarding (9), I'd suggest reaching out to CEA before engaging in any lobbying, as it can be counterproductive if done poorly.
↑ comment by Ulisse Mini (ulisse-mini) · 2022-04-10T23:30:33.499Z · LW(p) · GW(p)
Thanks! Some other people recommended the Atlas Fellowship and I've applied. Regarding (9), I think I worded it badly; I meant reaching out to local politicians (I thought the terms were interchangeable).
↑ comment by Chris_Leong · 2022-04-11T03:38:58.542Z · LW(p) · GW(p)
Even so, it's recommended to check in for advice before contacting anyone important.
↑ comment by Ulisse Mini (ulisse-mini) · 2022-04-11T12:58:38.665Z · LW(p) · GW(p)
Noted
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-04-10T18:17:34.279Z · LW(p) · GW(p)
My thought, as a researcher who is pretty good at roughshod programming but not so good at rock-solid, tested-everything programming, is that programming/engineering is a big field. Focusing on a specific aspect that's both needed and interesting to you might be advantageous, like supercomputing / running big Spark clusters, or security / cryptography.