Capability Phase Transition Examples
post by gwern · 2022-02-08T03:32:54.551Z · LW · GW · 1 comment
This is a link post for https://www.reddit.com/r/mlscaling/comments/sjzvl0/d_instances_of_nonlog_capability_spikes_or/
Comments sorted by top scores.
comment by A Ray (alex-ray) · 2022-02-08T05:13:57.951Z · LW(p) · GW(p)
It's probably worth going through the current deep learning theories that propose parts of gears-level models and seeing how they fit with this. The first one that comes to mind is the Lottery Ticket Hypothesis. It seems intuitive to me that certain tasks correspond to "tickets" that are harder to find.
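For concreteness, here's a minimal numpy sketch of the iterative magnitude-pruning loop the Lottery Ticket Hypothesis is usually studied with (the toy "training" step, constants, and pruning fraction are illustrative stand-ins, not anything from the paper):

```python
import numpy as np

def magnitude_prune(weights, mask, frac):
    """Deactivate the smallest-magnitude `frac` of the still-active weights."""
    threshold = np.quantile(np.abs(weights[mask]), frac)
    return mask & (np.abs(weights) > threshold)

rng = np.random.default_rng(0)
w_init = rng.normal(size=1000)            # initial weights (candidate "tickets")
mask = np.ones_like(w_init, dtype=bool)   # all weights start active

for round_ in range(5):
    # Placeholder for actually training the masked subnetwork:
    w_trained = np.where(mask, w_init + rng.normal(scale=0.1, size=w_init.shape), 0.0)
    mask = magnitude_prune(w_trained, mask, frac=0.2)   # prune 20% of the survivors
    # "Rewind": the surviving subnetwork restarts from its original initialization.
    print(f"round {round_}: {mask.mean():.1%} of weights remain")
```

The intuition above would then be that the subnetworks needed for some tasks are rarer, so this kind of search only turns them up once the network is large enough.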
I like the taxonomy in Viering and Loog, which also links to a bunch of other interesting approaches.
This paper shows phase transitions in data quality as opposed to data size, which is an angle I hadn't considered before.
There's the Google paper explaining neural scaling laws, which describes two regimes a model can transition between: variance-limited and resolution-limited. Their theory seems to predict that the behavior between the two resembles a phase boundary.
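A rough sketch of how I picture that crossover (the exponents, constants, and the max-of-two-power-laws form are made up for illustration, not the paper's fitted values):

```python
import numpy as np

# Toy picture of the two regimes as a function of dataset size D:
#   resolution-limited: loss ~ C_r * D**(-alpha)   (large-model limit)
#   variance-limited:   loss ~ L_floor + C_v / D   (approach to a fixed model's floor)
C_r, alpha = 1.0, 0.3
L_floor, C_v = 0.1, 1.0

def toy_loss(D):
    # The observed curve roughly tracks whichever mechanism is binding,
    # so it kinks near the crossover -- loosely analogous to a phase boundary.
    return np.maximum(C_r * D**(-alpha), L_floor + C_v / D)

for D in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"D={D:>9,}: loss ~ {toy_loss(D):.3f}")
```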
I also think there should be a bit of a null hypothesis. There seem to be simple functional maps where, even if internal improvement on "what matters" (e.g. feature learning) is proceeding smoothly, our metric of performance is "sharp" in a way that hides the improvement until some transition point, after which it doesn't.
Accuracy metrics seem like an example of this -- you get 1 point if the correct answer has the highest probability, and 0 points otherwise. It's easy to see why this produces a sharp transition in complex domains.
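A quick numpy illustration of that null hypothesis: per-step correctness improves smoothly with scale, but an exact-match/accuracy-style score on a multi-step task sits near zero and then shoots up (the functional form, step count, and numbers here are invented for illustration):

```python
import numpy as np

# The model's per-step success probability improves smoothly with scale,
# but the task is only scored correct if all k steps are right.
k = 20                                   # number of steps the task chains together
scale = np.array([1, 4, 16, 64, 256, 1024], dtype=float)
p_step = 1 - 0.5 * scale**(-0.5)         # smooth power-law improvement per step
accuracy = p_step**k                     # probability that every step is right

for s, p, a in zip(scale, p_step, accuracy):
    print(f"scale={s:>6.0f}: per-step p={p:.3f}, exact-match accuracy={a:.3f}")
```

The per-step quantity crawls from 0.5 to ~0.98, while the scored accuracy looks flat at ~0 and then jumps -- a "sharp" metric over a smooth underlying improvement.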
Personal take: I've been spending more and more time thinking about modularity [AF · GW], and it seems like modularity in learning could drive sharp transitions (e.g. "breakthroughs").