Announcing the Alignment Newsletter

post by rohinmshah · 2018-04-09T21:16:54.274Z · score: 72 (20 votes) · LW · GW · 3 comments

I've been writing weekly emails for the Center for Human-Compatible AI (CHAI) summarizing the content from the last week that's relevant to AI alignment. These have been useful enough that I'm now making them public! You can sign up to receive the newsletter by email, or browse past issues in the archive.

Comments sorted by top scores.

comment by rohinmshah · 2018-04-11T23:06:37.399Z · score: 19 (4 votes) · LW · GW

Since people seem to be finding it useful, I just updated the archive with public versions of the 5 emails I wrote for CHAI summarizing ~2 months of content.

comment by Benito · 2018-04-12T06:21:31.485Z · score: 12 (2 votes) · LW · GW

Huh, this makes me much more excited for the email - having your brief personal reviews of whether it's useful to read and why is great!

comment by Benito · 2018-04-09T21:21:20.910Z · score: 15 (3 votes) · LW · GW

Signed up! Thanks.