AI Alignment: A Comprehensive Survey

post by Stephen McAleer (stephen-mcaleer) · 2023-11-01T17:35:34.583Z · LW · GW · 1 comment

This is a link post for https://arxiv.org/abs/2310.19852

We have just released an academic survey of AI alignment.

We identify four main categories of alignment research:

  1. Learning from feedback (e.g. scalable oversight)
  2. Learning under distribution shift
  3. Assurance (e.g. interpretability)
  4. Governance

We mainly focused on academic references but also included some posts from LessWrong and other forums. We would love to hear from the community about any references we missed or anything that was unclear or misstated. We hope that this can be a good starting point for AI researchers who might be unfamiliar with current efforts in AI alignment. 

1 comment


comment by wassname · 2024-04-21T11:08:44.351Z · LW(p) · GW(p)

This is pretty good. It covers a lot of ground, being a grab bag of things. I particularly enjoyed the scalable oversight sections, which succinctly explain debate, recursive reward modelling, etc. There were also some gems I hadn't encountered before, like the concept of training out agentic behaviour by penalizing side effects.
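
For anyone who hasn't seen the side-effect idea before, here is a toy sketch (my own illustration, not from the paper): the agent's task reward is reduced in proportion to changes it makes that the task doesn't require, measured relative to a "do nothing" baseline, so low-impact plans score higher. All names, state features, and penalty weights below are made up for illustration.

```python
# Toy sketch of a side-effect penalty: task reward minus a weighted measure
# of how much the agent disturbed things the task never asked it to touch
# (in the spirit of relative-reachability / low-impact penalties).
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    agent_pos: tuple      # where the agent is
    boxes_moved: int      # task-irrelevant objects displaced
    vases_broken: int     # irreversible changes

def task_reward(state: State, goal: tuple) -> float:
    """Reward for reaching the goal cell."""
    return 1.0 if state.agent_pos == goal else 0.0

def side_effect_penalty(state: State, baseline: State) -> float:
    """Penalty grows with deviations from the 'do nothing' baseline;
    irreversible changes (broken vases) are weighted more heavily."""
    return (abs(state.boxes_moved - baseline.boxes_moved)
            + 2.0 * abs(state.vases_broken - baseline.vases_broken))

def shaped_reward(state: State, baseline: State, goal: tuple,
                  beta: float = 0.5) -> float:
    """Task reward minus a weighted side-effect penalty."""
    return task_reward(state, goal) - beta * side_effect_penalty(state, baseline)

# Reaching the goal while breaking a vase scores worse than reaching it
# cleanly, so the optimizer is pushed toward low-impact behaviour.
baseline = State(agent_pos=(0, 0), boxes_moved=0, vases_broken=0)
clean = State(agent_pos=(3, 3), boxes_moved=0, vases_broken=0)
messy = State(agent_pos=(3, 3), boxes_moved=1, vases_broken=1)
print(shaped_reward(clean, baseline, goal=(3, 3)))   # 1.0
print(shaped_reward(messy, baseline, goal=(3, 3)))   # 1.0 - 0.5*(1 + 2) = -0.5
```

The real methods in the survey define the penalty over reachable or attainable future states rather than hand-picked features, but the shaping structure is the same.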

If anyone wants the HTML version of the paper, it is here.