What are the best non-LW places to read on alignment progress?

post by Raemon · 2023-07-07T00:57:21.417Z · LW · GW · 14 comments


I've lately been thinking I should put a bit more priority on keeping up with alignment-relevant progress outside of LessWrong/Alignment Forum.

I'm curious if people have recommendations that stand out as reliably valuable, and/or have tips for finding "the good stuff" on places where the signal/noise ratio isn't very good. (Seems fine to also apply this to LW/AF.)

Some places I've looked into somewhat (though haven't made major habits around so far) include:

I generally struggle with figuring out how much to keep up with stuff – it seems like there's more than one full-time job's worth of material to keep up with, and it's potentially overanchoring to think about "the stuff people have worked on" as opposed to "stuff that hasn't been worked on yet."

I'm personally coming at this from a lens of "understand the field well enough to think about how to make useful infrastructural advances", but I'm interested in hearing thoughts about various ways people keep-up-with-stuff and how they gain value from it.

14 comments

Comments sorted by top scores.

comment by maxnadeau · 2023-07-07T01:26:10.692Z · LW(p) · GW(p)

"Follow the right people on twitter" is probably the best option. People will often post twitter threads explaining new papers they put out. There's also stuff like:

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-07-07T03:24:59.368Z · LW(p) · GW(p)

Can you and others please reply with lists of people you find notable for their high signal-to-noise ratio, especially given Twitter's sharp decline in quality lately?

Replies from: Quadratic Reciprocity, interstice
comment by Quadratic Reciprocity · 2023-07-07T09:07:47.327Z · LW(p) · GW(p)

Here are some Twitter accounts I've found useful to follow (in no particular order): Quintin Pope, Janus @repligate, Neel Nanda, Chris Olah, Jack Clark, Yo Shavit @yonashav, Oliver Habryka, Eliezer Yudkowsky, alex lawsen, David Krueger, Stella Rose Biderman, Michael Nielsen, Ajeya Cotra, Joshua Achiam, Séb Krier, Ian Hogarth, Alex Turner, Nora Belrose, Dan Hendrycks, Daniel Paleka, Lauro Langosco, Epoch AI Research, davidad, Zvi Mowshowitz, Rob Miles

comment by interstice · 2023-07-08T17:15:01.021Z · LW(p) · GW(p)

For tracking ML theory progress I like @TheGregYang, @typedfemale, @SebastienBubeck, @deepcohen, @SuryaGanguli.

comment by Chris_Leong · 2023-07-07T02:57:19.182Z · LW(p) · GW(p)

Podcasts are another possibility, with less of a time trade-off.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-07-07T03:04:52.875Z · LW(p) · GW(p)

I listen to these podcasts, which often have content related to AI alignment or AI risk. Any other suggestions?

Replies from: Meiren, Quadratic Reciprocity
comment by Meiren · 2023-07-07T03:44:14.571Z · LW(p) · GW(p)

https://theinsideview.ai/ is also quite good.

comment by Quadratic Reciprocity · 2023-07-07T09:12:14.352Z · LW(p) · GW(p)

Other podcasts that have at least some relevant episodes: Hear This Idea, Towards Data Science, The Lunar Society, The Inside View, Machine Learning Street Talk

comment by Jemal Young (ghostwheel) · 2023-07-07T17:20:22.626Z · LW(p) · GW(p)

Here are some resources I use to keep track of technical research that might be alignment-relevant:

  • Podcasts: Machine Learning Street Talk, The Robot Brains Podcast
  • Substacks: Davis Summarizes Papers, AK's Substack

How I gain value: These resources help me notice where my understanding breaks down, i.e., what I might want to study, and they get thought-provoking research on my radar.

comment by Howie Lempel (howie-lempel) · 2023-07-07T14:15:37.691Z · LW(p) · GW(p)

I haven't kept up with it, so I can't really vouch for it, but Rohin's alignment newsletter should also be on your radar. https://rohinshah.com/alignment-newsletter/

Replies from: Alan E Dunne
comment by Alan E Dunne · 2023-07-07T16:23:07.214Z · LW(p) · GW(p)

This seems to have stopped in July 2022.

Replies from: howie-lempel
comment by Howie Lempel (howie-lempel) · 2023-07-07T17:04:34.368Z · LW(p) · GW(p)

Whoops - thanks!

comment by Writer · 2023-07-07T08:51:22.857Z · LW(p) · GW(p)

This is probably not the most efficient way to keep up with new stuff, but aisafety.info is shaping up to be a good repository of alignment concepts.

comment by Patodesu · 2023-07-07T02:05:29.854Z · LW(p) · GW(p)

Some people post about AI safety on the EA Forum without crossposting here.