What happens to variance as neural network training is scaled? What does it imply about "lottery tickets"?

post by abramdemski · 2020-07-28T20:22:14.066Z · LW · GW · 1 comment

This is a question post.


Daniel Kokotajlo asks [LW · GW] whether the lottery ticket hypothesis implies the scaling hypothesis.

The way I see it, this depends on the distribution the "lottery tickets" are being drawn from. If that distribution is long-tailed, then a bigger network buys more draws, and more draws keep turning up substantially better tickets -- which would support the scaling hypothesis.

However, a long tail also suggests to me that variance in results would continue to be relatively high as a network is scaled: bigger networks are hitting bigger jackpots, but since even bigger jackpots are within reach, the payoff of scaling remains chaotic.

(This could all benefit from a more mathematical treatment.)
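As a small step in that direction, here is a toy sketch. Every modeling choice in it -- i.i.d. tickets, the particular distributions -- is an assumption for illustration, not a claim about real networks: treat a network with n candidate subnetworks as n independent draws of ticket quality, and its trained performance as the best draw.

```python
# Toy model: a width-n network is n independent draws ("tickets") of
# subnetwork quality, and its trained performance is the best draw.
# The choice of distributions below is an illustrative assumption,
# not a claim about real networks.
import numpy as np

rng = np.random.default_rng(0)

def best_ticket_stats(sampler, n_tickets, n_trials=10_000):
    """Mean and std of the best of n_tickets draws, across n_trials runs."""
    best = sampler((n_trials, n_tickets)).max(axis=1)
    return best.mean(), best.std()

for n in [10, 100, 1_000, 10_000]:
    # Mediocristan: thin-tailed (normal) ticket quality.
    thin = best_ticket_stats(lambda size: rng.normal(size=size), n)
    # Extremistan: long-tailed (lognormal) ticket quality.
    heavy = best_ticket_stats(lambda size: rng.lognormal(sigma=2.0, size=size), n)
    print(f"n={n:>6}  thin tail: mean={thin[0]:5.2f} std={thin[1]:4.2f}   "
          f"long tail: mean={heavy[0]:10.1f} std={heavy[1]:10.1f}")
```

On the thin-tailed side, the standard deviation of the best ticket shrinks relative to its mean as n grows; on the long-tailed side it stays comparable to (or larger than) the mean -- the "payoff remains chaotic" regime.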

So: what do we know about NN training? Does it suggest we are living in extremistan or mediocristan?

Note: a major conceptual difficulty in answering this question is representing NN quality in the right units. For example, an accuracy metric -- which necessarily falls between 0% and 100% -- must eventually yield "diminishing returns", and cannot host a "long-tailed distribution". Send that same metric through an inverse sigmoid, and now you might not have diminishing returns, and could have a long-tailed distribution. But we can transform data all day, and the analysis shouldn't be too ad hoc. So it's not immediately clear how to measure this.
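To make the units problem concrete, here is a minimal sketch -- the accuracy numbers are made up, and the logit is just one of many possible transforms:

```python
# The same "diminishing returns" accuracy curve, viewed on the raw scale
# vs. after an inverse sigmoid (logit). Accuracies are made up for illustration.
import math

def logit(p):
    """Inverse sigmoid: maps (0, 1) onto the whole real line."""
    return math.log(p / (1 - p))

accuracies = [0.90, 0.99, 0.999, 0.9999]  # hypothetical results at 4 scales

for acc in accuracies:
    print(f"accuracy={acc:<6}  logit={logit(acc):5.2f}")

# Raw gains shrink (0.09, 0.009, 0.0009): a bounded metric must saturate.
# Logit gains stay roughly constant (~2.3 per step), so whether returns are
# "diminishing" depends entirely on the chosen units.
```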

Answers

answer by evhub (Evan Hubinger) · 2021-02-23T23:49:51.072Z · LW(p) · GW(p)

This paper seems to argue that variance initially increases with network width, then starts decreasing for very large networks -- suggesting that overall variance is likely to decrease as networks get very large and we approach more advanced AI systems.
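For concreteness, here is a sketch of the kind of measurement at issue -- seed-to-seed variance of test accuracy as a function of width. The dataset, widths, and hyperparameters are placeholder choices, not those of the paper:

```python
# Sketch: seed-to-seed variance of test accuracy as a function of width.
# Dataset, widths, and hyperparameters are placeholders, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for width in [4, 16, 64, 256]:
    accs = [
        MLPClassifier(hidden_layer_sizes=(width,), max_iter=500,
                      random_state=seed).fit(X_tr, y_tr).score(X_te, y_te)
        for seed in range(10)  # runs differing only in init / data order
    ]
    print(f"width={width:>4}  mean acc={np.mean(accs):.3f}  "
          f"std across seeds={np.std(accs):.4f}")
```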

comment by gwern · 2021-02-24T03:08:08.390Z · LW(p) · GW(p)

'Variance' is used in an amusing number of ways in these discussions. You use 'variance' in one sense (the bias-variance tradeoff), but "Explaining Neural Scaling Laws", Bahri et al 2021, talks about a different kind of variance limit in scaling, while the toy model in "Learning Curve Theory", Hutter 2021, provides statements on yet other kinds of variance in the scaling curves themselves (and I think you could easily dig up a paper from the neural tangent kernel people about scaling approximating infinite-width models which only need to make infinitesimally small linear updates, or something like that, because variance in yet another sense goes down...).

Meanwhile, my original observation was about the difficulty of connecting benchmarks to practical real-world capabilities: regardless of whether the 'variance of increases in practical real-world capabilities' goes up or down with additional scaling, we still have no good way to say that an X% increase on benchmarks ought to yield qualitatively new capability Y. Almost a year later, still no one has shown how you would have predicted in advance that pushing GPT-3 to a particular likelihood loss would yield all these cool new things. As we cannot predict that at all, it would not be of terribly much use to know whether the variance increases or decreases as we continue scaling (since either way, we may wind up being surprised).

answer by dsj · 2020-07-28T22:59:14.970Z · LW(p) · GW(p)

One assumption that I think might be implicit in your question is that the number of lottery tickets grows linearly with model size. But it seems plausible to me that it's exponential in network depth.
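A quick back-of-the-envelope count suggests why. This sketch assumes one particular way of counting tickets -- as distinct unit-paths through a fully connected stack -- which is just one possible formalization, not a definition from the answer above:

```python
# Back-of-the-envelope count behind the "exponential in depth" intuition.
# Counting candidate "tickets" as distinct unit-paths through a fully
# connected stack of `depth` hidden layers of width `width` (an assumption
# for illustration): there are width**depth such paths, while the weight
# count grows only linearly in depth.

def n_paths(width, depth):
    return width ** depth

def n_weights(width, depth, n_in=1, n_out=1):
    layers = [n_in] + [width] * depth + [n_out]
    return sum(a * b for a, b in zip(layers, layers[1:]))  # biases ignored

for depth in [2, 4, 8, 16]:
    print(f"depth={depth:>2}  weights={n_weights(64, depth):>8,}  "
          f"paths={n_paths(64, depth):.1e}")
```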

1 comment


comment by romeostevensit · 2020-07-29T02:05:57.756Z · LW(p) · GW(p)

One related question is which sub-tasks of GPT-3 showed surprise jackpots vs. GPT-2.