Suggestions of posts on the AF to review

post by adamShimi · 2021-02-16T12:40:52.520Z · LW · GW · 1 comment

This is a question post.


How does one write a good and useful review of a technical post on the Alignment Forum?

I don’t know. Like many people, I tend to comment and give feedback on posts closely related to my own research, or to write down my own ideas when reading a post. Yet this is quite different from the quality peer review you can get (if you’re lucky) in more established fields. And from experience, such quality reviews can improve the research dramatically, lend it some prestige, and help people navigate the field.

In an attempt to understand what makes a good review for the Alignment Forum, Joe Collman, Jérémy Perret (Gyrodiot on LW) and I are launching a project to review many posts in depth. The goal is to actually write reviews of various posts, get feedback on their usefulness from authors and readers alike, and try to extract from them some knowledge about how to do such reviews for the field. We hope to gather enough insights to eventually write guidelines that could be used in an official AF review process.

On that note, despite the support of members of the LW team, this project isn’t official. It’s just the three of us trying out something.

Now, the reason for this post (and why it is a question) is that we’re looking for posts to review. We already have some in mind, but they are necessarily biased towards what we’re most comfortable with. This is where you come in: suggest a more varied range of posts.

Anything posted on the AF goes, although we will not take into account things that are clearly not “research outputs” (like transcripts of podcasts or pointers to surveys). This means that posts about specific risks, about timelines, about deconfusion, about alignment schemes, and more, are all welcome.

We would definitely appreciate it if you add a reason to your suggestion, to help us decide whether to include the post in our selection. Here is a (non-exhaustive) list of possible reasons:

Thanks in advance, and we’re excited about reading your suggestions!

Answers

answer by johnswentworth · 2021-02-16T17:45:57.251Z · LW(p) · GW(p)

Related to the role of peer review: a lot of stuff on LW/AF is relatively exploratory, feeling out concepts, trying to figure out the right frames, etc. We need to be generally willing to discuss incomplete ideas, stuff that hasn't yet had the details ironed out. For that to succeed, we need community discussion standards which tolerate a high level of imperfect details or incomplete ideas. I think we do pretty well with this today.

But sometimes, you want to be like "come at me bro". You've got something that you're pretty highly confident is right, and you want people to really try to shoot it down (partly as a social mechanism to demonstrate that the idea is in fact as solid and useful as you think it is). This isn't something I'd want to be the default kind of feedback, but I'd like for authors to be able to say "come at me bro" when they're ready for it, and I'd like for posts which survive such a review to be perceived as more epistemically-solid/useful.

With that in mind, here are a few of my own AF posts which I'd submit for a "come at me bro" review:

For all of these, things like "this frame is wrong" or "this seems true but not useful" are valid objections. I'm not just claiming that the proofs hold.

comment by adamShimi · 2021-02-16T18:14:58.695Z · LW(p) · GW(p)

But sometimes, you want to be like "come at me bro". You've got something that you're pretty highly confident is right, and you want people to really try to shoot it down (partly as a social mechanism to demonstrate that the idea is in fact as solid and useful as you think it is). This isn't something I'd want to be the default kind of feedback, but I'd like for authors to be able to say "come at me bro" when they're ready for it, and I'd like for posts which survive such a review to be perceived as more epistemically-solid/useful.

Yeah, when I think about implementing a review process for the Alignment Forum, I'm definitely thinking of something you could request for more polished research, in order to get external feedback and a tag saying it has been peer-reviewed (for prestige and reference).

Thanks for the suggestions! We'll consider them. :)

answer by Charlie Steiner · 2021-02-21T05:37:01.598Z · LW(p) · GW(p)

Steve's big thoughts on alignment in the brain probably deserve a review. Component posts include https://www.lesswrong.com/posts/diruo47z32eprenTg/my-computational-framework-for-the-brain [LW · GW], https://www.lesswrong.com/posts/DWFx2Cmsvd4uCKkZ4/inner-alignment-in-the-brain [LW · GW], and https://www.lesswrong.com/posts/jNrDzyc8PJ9HXtGFm/supervised-learning-of-outputs-in-the-brain [LW · GW].

Interestingly, I don't think there are any of my posts I should recommend, since basically all of them are speculative. However, I do have a post called Gricean communication and meta-preferences that I think is still fairly interesting speculation, and I've never gotten any feedback on it at all, so I'll happily ask for some: https://www.lesswrong.com/posts/8NpwfjFuEPMjTdriJ/gricean-communication-and-meta-preferences [LW · GW].

answer by Jsevillamol · 2021-02-18T12:55:46.559Z · LW(p) · GW(p)

Suggestion 1: Utility != reward [AF · GW] by Vladimir Mikulik. This post attempts to distill the core ideas of mesa alignment. This kind of distillation increases the surface area of AI Alignment, which addresses one of the key bottlenecks of the field (that is, getting people familiar with the field, motivated to work on it, and equipped with some open questions to work on). I would like an in-depth review because it might help us learn how to do it better!

Suggestion 2: my coauthor Pablo Moreno and I would be interested in feedback on our post [AF · GW] about quantum computing and AI alignment. We do not think the ideas in the paper are useful in the sense of getting us closer to AI alignment, but I think it is useful to have signposts explaining why avenues that might seem attractive to people coming into the field are not worth exploring, while introducing them to the field in a familiar way (in this case, our audience is quantum computing experts). One thing that confuses me is that some people have approached me since the post was published asking why I think quantum computing is useful for AI alignment, so I'd be interested in feedback on what went wrong in the communication process, given the deflationary nature of the article.

answer by Daniel Kokotajlo · 2021-02-16T14:45:56.904Z · LW(p) · GW(p)

Great idea! Thanks for doing this!

Unsurprisingly, I'd love it if you reviewed any of my posts.

Since you said "technical," I suggest this one in particular [AF · GW]. It's a big deal IMO because Armstrong & Mindermann's argument has been cited approvingly by many people and seems to be still widely regarded as correct, but if I'm right, it's actually a bad argument. I'd love a third perspective on this that helps me figure out what's going on.

More generally I'd recommend sorting all AF posts by karma and reviewing the ones that got the most, since presumably karma correlates with how much people here like the post and thus it's extra important to find flaws in high-karma posts.

comment by adamShimi · 2021-02-16T18:16:25.841Z · LW(p) · GW(p)

I was indeed expecting you to suggest one of your posts. But that's one of the valid reasons I listed, and I didn't know about this one, so that's great!

We'll consider it. :)

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-03-12T15:02:25.404Z · LW(p) · GW(p)

Insofar as you want to do others of mine, my top recommendation would be this one [LW · GW] since it got less feedback than I expected and is my most important timelines-related post of all time IMO.

comment by adamShimi · 2021-03-12T15:35:32.373Z · LW(p) · GW(p)

If we do only one, which one do you think matters the most?

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-03-12T21:56:17.118Z · LW(p) · GW(p)

I'm more interested in feedback on the +12 OOMs one because it's more decision-relevant. It's more of a fuzzy thing, not crunchy logic like the first one I recommended, and therefore less suitable for your purposes (or so I thought when I first answered your question; now I am not sure).

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2021-02-16T14:35:20.230Z · LW(p) · GW(p)

I wrote this post as a summary of a paper I published. It didn't get much attention, so I'd be interested in having you all review it.

 https://www.lesswrong.com/posts/JYdGCrD55FhS4iHvY/robustness-to-fundamental-uncertainty-in-agi-alignment-1 [LW · GW]

To say a little more, I think the general approach to safety work that I lay out here is worth considering more deeply, and it points towards a better process for choosing interventions when attempting to build aligned AI. What's more important than the specific examples where I apply the method is the method itself, but thus far, as best I can tell, folks have not engaged much with that, so it's unclear to me whether that's because they disagree, think it's too obvious, or something else.

comment by adamShimi · 2021-02-16T18:21:56.371Z · LW(p) · GW(p)

Thanks for the suggestion! It's great to have some methodological posts!

We'll consider it. :)

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2021-02-16T14:48:11.655Z · LW(p) · GW(p)

I think the generalized insight from Armstrong's no-free-lunch paper is still underappreciated: I sometimes see papers that, to me, seem to run up against this and fail to realize there's a free variable in their mechanisms that needs to be fixed if they don't want them to go off in random directions.

https://www.lesswrong.com/posts/LRYwpq8i9ym7Wuyoc/other-versions-of-no-free-lunch-in-value-learning [LW · GW]

comment by adamShimi · 2021-02-16T18:20:01.264Z · LW(p) · GW(p)

Thanks for the suggestion!

I didn't know about this post. We'll consider it. :)

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2021-02-16T14:44:36.042Z · LW(p) · GW(p)

Another post of mine I'll recommend to you:

https://www.lesswrong.com/posts/k8F8TBzuZtLheJt47/deconfusing-human-values-research-agenda-v1 [LW · GW]

This is the culmination of a series of posts on "formal alignment", where I start out by asking what it would mean to formally state what it means to build aligned AI, and then from that try to figure out what we'd have to figure out in order to achieve it.

Over the last year I've gotten pulled in other directions, so I haven't pushed this line of research forward much; I also reached a point where it was clear that making additional progress would require specialization different from mine. But I still think it presents a different approach from what others are doing in the space of work towards AI alignment, and you might find it interesting to review (along with the preceding posts in the series) for that reason.

comment by adamShimi · 2021-02-16T18:19:24.763Z · LW(p) · GW(p)

Thanks for the suggestion!

We want to go through the different research agendas (and I already knew about yours), as they give different views/paradigms on AI Alignment. Yet I'm not sure how relevant a review of such posts is. In a sense, the "reviewable" part is the actual research that underlies the agenda, right?

comment by Joe Collman (Joe_Collman) · 2021-02-18T18:54:24.664Z · LW(p) · GW(p)

I don't see a good reason to exclude agenda-style posts, but I do think it'd be important to treat them differently from more here-is-a-specific-technical-result posts.

Broadly, we'd want to be improving the top-level collective AI alignment research 'algorithm'. With that in mind, I don't see an area where more feedback/clarification/critique of some kind wouldn't be helpful.
The questions seem to be:
- What form should feedback/review... take in a given context?
- Where is it most efficient to focus our efforts?

Productive feedback/clarification on high-level agendas seems potentially quite efficient. My worry would be about creating excessive selection pressure towards paths that are clear and simply justified. However, where an agenda does use specific assumptions and arguments to motivate its direction, early 'review' seems useful.

answer by SoerenMind · 2021-06-08T07:18:02.601Z · LW(p) · GW(p)

This seems useful. But do you ask the authors for permission to review and give them an easy way out? Academic peer review is for good reasons usually non-public. The prospect of having one's work reviewed in public seems likely to be extremely emotionally uncomfortable for some authors and may discourage them from writing.

comment by adamShimi · 2021-06-08T08:22:48.683Z · LW(p) · GW(p)

Putting aside how people feel for the moment (I'll come back to it), I don't think peer review should be private, and I think anyone publishing work in an openly readable forum, where other researchers are expected to interact with it, would value a thoughtful review of their work.

That being said, you're probably right that at least notifying the authors before publication is a good policy. We sort of did that for the first two reviews, in the sense of literally asking people what they wanted to get reviews for, but we should make it a habit.

Thanks for the suggestion.

comment by SoerenMind · 2021-06-08T10:44:29.385Z · LW(p) · GW(p)

Thanks - I agree there's value to public peer review. Personally I'd go further than notifying authors and instead ask for permission. We already have a problem where many people (including, notably, highly accomplished authors) feel discouraged from posting due to the fear of losing reputation. Worse, your friends will actually read reviews of your work here, unlike on OpenReview. I wouldn't want to make this worse by implicitly making authors opt into a public peer review, if that makes sense.

There are also some differences between forums and academia. Forums allow people to share unpolished work and see how the community reacts. I worry that highly visible public reviews may discourage some authors from posting this kind of work, unless it's obvious that they won't get a highly visible negative review of their off-the-cuff thoughts without opting into it, which seems doable within your (very useful) approach. I agree there's a fine line here; I just want to point out that not everyone is emotionally ready for this.

1 comment


comment by Gyrodiot · 2021-02-16T21:27:46.887Z · LW(p) · GW(p)

My gratitude for the already posted suggestions (keep them coming!) - I'm looking forward to working on the reviews. My personal motivation resonates a lot with the "help people navigate the field" part; in-depth reviews are a precious resource for that task.