A daily routine I do for my AI safety research work
post by scasper · 2022-07-19T21:58:24.511Z
I’m a Ph.D. student at MIT working in AI safety--mostly interpretability in deep networks. I thought I would very briefly share the list of links that I have bookmarked and go through every weekday. It usually takes me <15 minutes. Since I started doing this, I’ve been much more up to date on current work.
First is Twitter. For a long time, I resisted using Twitter because it’s terrible. But since I started following almost exclusively AI safety people and orgs, I’ve found a lot of good papers and ideas there.
Second, I check LessWrong and the AI Alignment Forum :)
Finally, I look over new titles on arXiv for AI, ML, Computation and Language, and Computer Vision and Pattern Recognition. More often than not, I find a new paper that I want to click on and check out.
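For anyone who wants to script this step, here is a minimal sketch. It assumes the public arXiv RSS feeds at export.arxiv.org/rss/&lt;category&gt; and the third-party feedparser package; the category codes cs.AI, cs.LG, cs.CL, and cs.CV are my mapping of the listings named above.

```python
# Minimal sketch: print the newest titles from the arXiv listings mentioned above.
# Assumes the public RSS feeds at http://export.arxiv.org/rss/<category> and the
# third-party `feedparser` package (pip install feedparser).
import feedparser

# cs.AI = Artificial Intelligence, cs.LG = Machine Learning,
# cs.CL = Computation and Language, cs.CV = Computer Vision and Pattern Recognition
CATEGORIES = ["cs.AI", "cs.LG", "cs.CL", "cs.CV"]

for cat in CATEGORIES:
    feed = feedparser.parse(f"http://export.arxiv.org/rss/{cat}")
    print(f"\n=== {cat} ({len(feed.entries)} new entries) ===")
    for entry in feed.entries:
        print(f"- {entry.title}\n  {entry.link}")
```

Run each morning (or from a cron job), it prints the new titles and abstract links for each area.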
And as a bonus, I’d of course also recommend these three newsletters.
7 comments, sorted by top scores.
comment by TW123 (ThomasWoodside) · 2022-07-19T23:19:40.626Z
I made this link, which combines the last day's arXiv listings for AI, ML, Computation and Language, Computer Vision, and Computers and Society into a single view. Since some papers are cross-listed under multiple areas, I prefer this view so I don't skim over the same paper twice. If you bookmark it, it's just one click per day!
↑ comment by DavidHolmes · 2022-07-20T09:04:17.253Z
If you get the daily arXiv email feeds for multiple areas, they automatically remove duplicates (i.e., each paper appears exactly once, regardless of cross-listing). The email is not to everyone's taste, of course, but this is a nice aspect of it.
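A minimal sketch of the de-duplication idea from this thread: merge several category feeds and keep each cross-listed paper only once, keyed by the identifier at the end of its abstract link. This assumes the same public RSS feeds and third-party feedparser package as the sketch in the post above.

```python
# Sketch of de-duplicating cross-listed papers when merging several arXiv feeds.
# Assumes entry links look like http://arxiv.org/abs/<id>, so the last URL
# component can serve as a unique key across categories.
import feedparser

CATEGORIES = ["cs.AI", "cs.LG", "cs.CL", "cs.CV", "cs.CY"]  # cs.CY = Computers and Society

seen = set()
for cat in CATEGORIES:
    feed = feedparser.parse(f"http://export.arxiv.org/rss/{cat}")
    for entry in feed.entries:
        arxiv_id = entry.link.rsplit("/", 1)[-1]
        if arxiv_id in seen:
            continue  # already printed under an earlier category
        seen.add(arxiv_id)
        print(f"[{cat}] {entry.title}")
```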
comment by Hoagy · 2022-07-19T22:18:57.839Z
Are there any Twitter lists you'd recommend with a high proportion of good AI (safety) content?
↑ comment by Aay17ush · 2022-07-20T09:38:41.348Z
AGI Safety core by JJ (from AI Safety Support): https://mobile.twitter.com/i/lists/1185207859728076800