The effect of horizon length on scaling laws
post by Jacob_Hilton · 2023-02-01T03:59:26.074Z · 2 comments
This is a link post for https://arxiv.org/abs/2301.13442
The scaling of optimal model size with compute is a key input into the biological anchors framework for forecasting transformative AI. In particular, the "effective horizon length" introduces a multiplier into this scaling law that can have a big effect on forecasts.
This paper studies this scaling law for several RL environments: Procgen, Dota 2 and a toy MNIST-based environment. The last of these is used to study the effect of the task horizon length in a toy setting. There are a number of takeaways for the biological anchors framework, which are summarized in Section 5.4.
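As a rough sketch of where that multiplier enters (the notation here is illustrative, simplified from the biological anchors framework rather than taken from the paper): training compute is roughly compute per sample times the number of samples, and the effective horizon length multiplies the number of samples,

$$ C_{\text{train}} \;\approx\; C_{\text{sample}}(N) \times H_{\text{eff}} \cdot D(N), $$

where $N$ is model size, $C_{\text{sample}}(N)$ is the compute per training sample, $D(N)$ is the horizon-independent data requirement, and $H_{\text{eff}}$ is the effective horizon length. Under this simplification, doubling $H_{\text{eff}}$ roughly doubles the estimated training compute.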
2 comments
Comments sorted by top scores.
comment by cfoster0 · 2023-02-01T05:45:15.402Z
It would be great if you're able to comment on more directional takeaways for the biological anchors framework. Reading Section 5.4 it's hard to tell at a glance whether each of the points weighs toward an upward revision of long-horizon anchor estimates or a downward one.
Reply by Jacob_Hilton · 2023-02-01T17:42:02.875Z
I think the direction depends on what your expectations were – I'll try to explain.
First, some terminology: the term "horizon length" is used in the paper to refer to the number of timesteps over which the algorithm pays attention to rewards, as governed by the discount rate. In the biological anchors framework, the term "effective horizon length" refers to a multiplier on the number of samples required to train the model, which is influenced by the horizon length and other factors. For clarity, I'll use the term "scaling multiplier" instead of "effective horizon length" in this comment. The paper studies the effect of the horizon length on the scaling multiplier in a toy MNIST setting.
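For concreteness, the usual convention (and, as an assumption here, the one I'll have in mind) is that a discount rate $\gamma$ corresponds to a horizon length

$$ H \;=\; \frac{1}{1 - \gamma}, $$

so, for example, $\gamma = 0.99$ corresponds to $H = 100$ timesteps.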
One key takeaway is that the scaling multiplier is not simply proportional to the horizon length, as one might have naively expected. Instead, the number of samples required is the sum of two components, one that is inherent to the task and independent of the horizon length, and one that is proportional to the horizon length. Compared to the naive expectation, this means that training compute requirements are lower. On the other hand, this ignores reward sparsity, so you might expect training compute requirements to be higher once both horizon length and reward sparsity are accounted for.
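To put that in symbols (illustrative notation, not necessarily the paper's): if $S(H)$ is the number of samples needed to reach a fixed level of performance at horizon length $H$, then the naive expectation and the observed behavior in the toy setting look roughly like

$$ S_{\text{naive}}(H) \;\propto\; H, \qquad S(H) \;\approx\; S_{\text{task}} + H \cdot S_{\text{horizon}}, $$

where $S_{\text{task}}$ is inherent to the task and independent of $H$. When $S_{\text{task}}$ dominates, lengthening the horizon increases sample requirements much less than proportionality would suggest.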
The paper also lends some support to the modeling assumptions of the neural network anchor, by validating the hypotheses that (a) training compute requirements still scale as a power law in model size for reinforcement learning, with a similar exponent, and (b) the scaling multiplier can indeed vary a lot between environments. This might make you put more weight on the neural network anchor, which could again shift the estimate in either direction.
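In symbols (again, illustrative notation rather than the paper's): the assumption being supported is roughly

$$ C(N) \;\approx\; k_{\text{env}} \cdot N^{\alpha}, $$

where $C(N)$ is the training compute required at model size $N$, the exponent $\alpha$ is similar across settings, and the environment-dependent prefactor $k_{\text{env}}$, which absorbs the scaling multiplier, can differ by large factors between environments.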
The other takeaways are more methodological and I don't think have much of a directional effect.