Improving Mathematical Reasoning with Process Supervision

post by p.b. · 2023-05-31T19:00:31.352Z · LW · GW · 3 comments

Alignment impact

Process supervision has several alignment advantages over outcome supervision. It directly rewards the model for following an aligned chain-of-thought, since each step in the process receives precise supervision. Process supervision is also more likely to produce interpretable reasoning, since it encourages the model to follow a human-approved process. In contrast, outcome supervision may reward an unaligned process, and it is generally harder to scrutinize.
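The distinction above can be made concrete with a toy sketch (not from the paper; the step labels and reward values are purely illustrative): outcome supervision assigns a single reward based on the final answer, while process supervision scores every reasoning step, so a flawed intermediate step that happens to reach the right answer is still penalized.

```python
# Toy illustration of outcome vs process supervision for a chain-of-thought
# solution. The step labels below are hypothetical annotations that a human
# rater or reward model might provide.

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: one reward signal, based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_rewards(step_labels: list[bool]) -> list[float]:
    """Process supervision: each reasoning step receives its own reward."""
    return [1.0 if ok else 0.0 for ok in step_labels]

# A solution whose middle step is wrong but whose final answer is right.
steps = ["2x = 6", "x = 4", "x = 3"]   # second step is incorrect
step_labels = [True, False, True]

print(outcome_reward("x = 3", "x = 3"))   # 1.0 -- the flawed step goes unpenalized
print(process_rewards(step_labels))       # [1.0, 0.0, 1.0] -- the flaw is flagged
```

Under outcome supervision the unaligned middle step is invisible to the training signal; under process supervision it receives zero reward, which is the sense in which each step gets "precise supervision."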

In some cases, safer methods for AI systems can lead to reduced performance, a cost which is known as an alignment tax. In general, any alignment tax may hinder the adoption of alignment methods, due to pressure to deploy the most capable model. Our results below show that process supervision in fact incurs a negative alignment tax, at least in the math domain. This could increase the adoption of process supervision, which we believe would have positive alignment side-effects.


Paper here.

Related Prediction Market.



comment by harfe · 2023-05-31T19:47:19.469Z · LW(p) · GW(p)

From just reading your excerpt (and not the whole paper), it is hard to determine how much alignment washing is going on here.

  • what is an aligned chain-of-thought? What would an unaligned chain-of-thought look like?
  • what exactly does alignment mean in the context of solving math problems?

But maybe these worries can be answered from reading the full paper...

comment by Vladimir_Nesov · 2023-06-01T02:23:02.185Z · LW(p) · GW(p)

Our results below show that process supervision in fact incurs a negative alignment tax

Some compelling arguments are given that alignment tax would be negative when this method is used to improve safety. The actual experimental results are about improving/eliciting capabilities and don't explore application of the method for safety, except by drawing an analogy.

comment by Charlie Steiner · 2023-06-01T14:47:19.261Z · LW(p) · GW(p)

I am shocked that higher quality training data based on more effortful human feedback produced a better result.