Call for contributors to the Alignment Newsletter
post by Rohin Shah (rohinmshah) · 2019-08-21
TL;DR: I am looking for (possibly paid) contributors to write summaries and opinions for the Alignment Newsletter. This is currently experimental, but I estimate ~80% chance that it will become long-term, and so I’m looking for people who are likely to contribute at least 20 summaries over the course of their tenure at the newsletter (see caveats in the post). To apply, read this doc, write an example summary + opinion, and fill out this form by Friday, September 6. I am also looking for someone to take over the work of publishing the newsletter (~1-3 hours per week); please send me an email if you’d be interested in this.
ETA: I now have enough expressions of interest in the publisher role that I would be shocked if none of them worked out. Feel free to continue expressing interest if you think you'd particularly benefit from doing the work, or if you think you'd be particularly good at it.
Roles I am looking for
Publisher: Once all of the summaries and opinions are written, you would turn them into an actual newsletter, send it out for proofreading, fix any typos found, update the database, etc. This currently takes me around half an hour per newsletter. Ideally, you would also take on some tasks that I haven’t found the time for: improving the visual design of the newsletter, A/B testing different versions to see what people engage with, publicity, and so on, for a total of ~1-3 hours per week.
Since I don’t yet have the setup to pay people to help with the newsletter, I am only looking for expressions of interest. If you think you’d be interested in this role, click this link to email me at rohinmshah@berkeley.edu with the subject line “Interested in publisher role for Alignment Newsletter EOM”. If I do end up hiring for the publisher role I’ll reach out to you with more details.
The rest of this doc will be focused on the more substantial role:
Content creator: You would choose articles that you're interested in and write summaries and opinions for them, which would then be published in the newsletter.
Why am I looking for content creators?
In the past few months, I haven't been allocating as much time to the newsletter (you may have noticed that issues are coming out every other week now). There have been many other things that seemed more important to do, both because I'm more optimistic about the other work I'm doing, and because I no longer find it as useful to read papers as I did when I started the newsletter. As a result, I now have over 100 articles that I would probably want to send out, but haven't gotten around to yet. This is also partly because there's just more stuff coming out now. (I mentioned some of these points in the retrospective.)
Another reason for more content creators is that, as I have learned more since starting the newsletter, I have developed my own idiosyncratic beliefs, and I think I have become worse at intuitively interpreting other posts from the author's perspective rather than my own. (In other words, I would perform worse at an Ideological Turing Test of their position than I would have in the past, unless I put a lot of effort into it.) I expect that with more writers the newsletter will better reflect a diversity of opinions.
Why should you do it?
It’s impactful. See the retrospective for more on this point. I’m not currently able to get a (normal-length) newsletter out every week; you’d likely be causally responsible for getting back to weekly newsletters.
You will improve your analytical writing skills. Hopefully clear.
You’ll learn more about safety by reading papers. You could do this by yourself, but by summarizing the papers, you’re also providing a valuable service for everyone else.
You might learn more about AI safety by getting feedback from me. This is a “might” because I don’t know how much of the feedback I give you on your summaries and opinions will actually be about key ideas in AI safety (as opposed to feedback about the writing itself).
You might build career capital. I certainly have built career capital by creating this newsletter -- it has made me well known in some communities. I don’t know to what extent this will transfer to you.
You might be paid. Currently this is experimental, so I haven’t actually thought much about payment. I expect that I could get a grant to pay you if I ended up deciding that it would be worth it. However, it might be that dealing with all of the paperwork + tax implications cancels out any time savings, though I think this is unlikely. If this is an important factor to you, please do let me know when you apply.
Qualifications
- Likely to contribute at least 20 summaries to the newsletter over time, at least 4 of which are in the first month (for onboarding purposes). Alternatively, if you have deep expertise in a topic that the newsletter covers infrequently, such as formal verification, you should expect to summarize relevant papers for at least the next 6 months.
- Basic familiarity with AI safety arguments
- Medium familiarity with the topic that you want to write summaries about
- Good writing skills (though I recommend just applying regardless and letting me evaluate based on your example summary)
Application process
Fill out this form. The main part of the application is to write an example summary and opinion for an article (which I may send out in the newsletter, if you give me permission to). Ideally you would write a summary of one of the articles from the list below, but if there isn’t an article in the subarea you’d like to write about, you can choose some other article (that hasn’t already been summarized in the newsletter) and summarize that. The whole process should take 1-4 hours, depending on how much time you put into the summary and opinion.
List of articles:
- Aligning a toy model of optimization
- Four Ways An Impact Measure Could Help Alignment
- AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence
- Learning to Interactively Learn and Assist
- Natural Adversarial Examples
- On Inductive Biases in Deep Reinforcement Learning