The Theory Behind Loss Curves
post by James Camacho (james-camacho) · 2025-05-06T22:22:16.359Z
This is a link post for https://github.com/programjames/theoretical-loss-curve
Contents
- Solomonoff's Lightsaber
- Counting Explanations
Or, why GAN training looks so funky.
Solomonoff's Lightsaber
The simplest explanation is exponentially more important.
Suppose I give you a pattern, and ask you to explain what is going on:
Several explanations might come to mind:
- "The powers of two,"
- "Moser's circle problem,"
- An explicit formula,
- "The counting numbers in an alien script,"
- "Fine-structure constants."
Some of these explanations are better than others, but they could all be the "correct" one. Rather than picking one underlying truth, we should assign a weight to each explanation, with the ones more likely to produce the pattern we see getting a heavier weight.
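A standard way to make this concrete, and the one that matches the bit-counting below, is a Solomonoff-style prior that weights an explanation $e$ of bit-length $\ell(e)$ by its simplicity (this is the usual construction, not necessarily the post's exact normalization):

$$w(e) \propto 2^{-\ell(e)}, \qquad P(\text{pattern}) = \sum_{e} w(e)\, P(\text{pattern} \mid e).$$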
Now, what exactly is meant by the word "explanation"? Between humans, our explanations are usually verbal signals or written symbols, with a brain to interpret the meaning. If we want more precision, we can program an explanation into a computer, e.g. `fn pattern(n) {2^n}`. If we are training a neural network, an explanation describes what region of weight-space produces the pattern we're looking for, with a few error-correction bits since the neural network is imperfect. See, for example, the paper "ARC-AGI Without Pretraining" (Liao & Gu).
Let's take the view that explanations are simply a string of bits, and our interpreter does the rest of the work to turn it into words, programs, or neural networks. This means there are exactly $2^n$ explanations of $n$ bits, and the average weight for each of them is less than $2^{-n}$. Now, most explanations—even the short ones—have hardly any weight, but there are still exponentially more longer explanations that are "good"[1]. This means, if we take the $n$ most prominent explanations, we would expect the remaining explanations to have weight on the order of $2^{-n}$.
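As a sanity check on the orders of magnitude, here is a toy computation under my own normalization (each of the $2^n$ explanations of length $n$ gets weight $4^{-n}$, so all weights sum to one); it illustrates the counting argument and is not code from the linked repository:

```python
# Toy check of the tail-weight claim, under an assumed normalization:
# give each of the 2**n explanations of length n the weight 4**-n, so the
# total weight at length n is 2**-n and all weights sum to (almost) 1.

MAX_LEN = 20

def weight_at_length(n: int) -> float:
    """Total weight of all 2**n explanations of length n."""
    return (2 ** n) * (4.0 ** -n)  # = 2**-n

total = sum(weight_at_length(n) for n in range(1, MAX_LEN + 1))
tail = sum(weight_at_length(n) for n in range(11, MAX_LEN + 1))

print(f"total weight over lengths 1..{MAX_LEN}: {total:.6f}")  # ~ 1
print(f"weight remaining past the first 10 bits: {tail:.6f}")  # ~ 2**-10 ~ 0.001
```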
Counting Explanations
What you can count, you can measure.
Suppose we are training a neural network, and we want to count how many explanations it has learned. Empirically, we know the loss comes from all the missing explanations, so the loss is roughly the total weight of everything the network has not yet learned: on the order of $2^{-n}$ once the $n$ most prominent explanations have been learned.
However, wouldn't it be more useful to go the other way? To estimate the loss, at the beginning of a training run, by counting how many concepts we expect our neural network to learn? That is our goal for today.
If we assume our optimizers are perfect, we should be able to use every bit of training data, and the proportion of the time a neural net has learned any particular concept will be some $q$ that grows with the training iteration $t$ and shrinks with the bit-length $\ell$ of the concept. The proportion it learns the concept twice is $q^2$, thrice is $q^3$, and so on. We can use the partition function

$$Z(q) = 1 + q + q^2 + q^3 + \cdots$$

to keep track of how many times the network has learned a concept. To track multiple concepts, say $A$ and $B$, we would just multiply their partition functions:

$$Z_{AB} = Z_A \cdot Z_B.$$
It's actually more useful to look at the logarithm; that way we can add density functions instead of multiplying partition functions[2].
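For reference (and per footnote 2), the standard plethystic exponential and plethystic logarithm that convert between a density $f$ and a partition function $Z$ are

$$Z(q) = \operatorname{PE}[f](q) = \exp\!\left(\sum_{k \ge 1} \frac{f(q^k)}{k}\right), \qquad f(q) = \operatorname{PL}[Z](q) = \sum_{k \ge 1} \frac{\mu(k)}{k} \log Z(q^k),$$

where $\mu$ is the Möbius function; for a single concept, $\operatorname{PL}\!\left[\tfrac{1}{1-q}\right] = q$.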
Now, not every model learns by simply memorizing the training distribution. We'll look at three kinds of learning dynamics:
- $\mathbb{Z}$—The network memorizes a concept, and continues to overfit on that concept. This is your typical training run, such as with classifying MNIST digits.
- $\mathbb{Z}/2\mathbb{Z}$—The network can only learn a concept once. Equivalently, we can pretend that the network alternates between learning and forgetting a concept. This happens for extremely small models, or with grokking in larger training runs.
- $\mathbb{Z}/3\mathbb{Z}$—One network is trying to learn and imitate a concept, while another network is trying to discriminate what is real and what is an imitation. Any time you add adversarial loss—such as with GANs or the information bottleneck—you'll get this learning dynamic.
In general, a learning dynamic can be described by some group $G$. It's possible to go through several steps at once, so every group element creates a sub-dynamic. Also, we could begin at any step in the dynamic, at $g^0$, $g^1$, and so on, up to $g^{|G|-1}$, where $|G|$ is the order of $G$. So, for a particular sub-dynamic $g^k$, our density function keeps only the terms whose order is congruent to $k$ modulo $|G|$, which is exactly what a roots-of-unity filter picks out[3].
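For reference, the standard roots-of-unity filter extracts one residue class of exponents from a power series. With $\omega = e^{2\pi i/m}$ and $f(q) = \sum_{n \ge 0} a_n q^n$ (here $m = |G|$):

$$\frac{1}{m}\sum_{j=0}^{m-1} \omega^{-jk}\, f(\omega^{j} q) \;=\; \sum_{n \,\equiv\, k \ (\mathrm{mod}\ m)} a_n q^n.$$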
To capture the entire group of dynamics, we project onto the fundamental representation of our group, then exponentiate to get back the partition function. Each of the three groups in question ends up with its own partition function.
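These are easiest to sanity-check against physics. As the comments below point out, the $\mathbb{Z}$ and $\mathbb{Z}/2\mathbb{Z}$ cases correspond to the single-mode boson and fermion partition functions; writing $q$ for the proportion above, those standard forms are

$$Z_{\mathbb{Z}}(q) = 1 + q + q^2 + \cdots = \frac{1}{1-q}, \qquad Z_{\mathbb{Z}/2\mathbb{Z}}(q) = 1 + q,$$

while the $\mathbb{Z}/3\mathbb{Z}$ case is the analogous three-step construction (its series is what the first comment below discusses).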
To recover the average number of times a concept has been learned, note that taking a derivative pulls down the exponents keeping track of this, e.g. $q\,\frac{d}{dq} q^n = n\,q^n$, so the expected number of times a concept has been learned is

$$\langle n \rangle = \frac{q\, Z'(q)}{Z(q)} = q\,\frac{d}{dq} \log Z(q).$$
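For example, with the standard boson and fermion partition functions above (a routine statistical-mechanics computation), this gives the familiar occupation numbers

$$\langle n \rangle_{\mathbb{Z}} = \frac{q}{1-q}, \qquad \langle n \rangle_{\mathbb{Z}/2\mathbb{Z}} = \frac{q}{1+q},$$

so the memorizing dynamic blows up as $q \to 1$, while the learn-once dynamic never exceeds one.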
Putting it all together, we get a theoretical loss curve for each learning dynamic, plotted with theory on the left and experiment on the right.
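To see the overall shape for the memorizing dynamic without running the linked repository, here is a toy sketch under my own simplifying assumptions (each concept of bit-length $\ell$ carries weight $4^{-\ell}$ and has been learned by iteration $t$ with probability $\min(1, 2^{t-\ell})$); it illustrates the counting argument, not the post's exact model:

```python
# Toy loss curve for the "memorize" dynamic: loss = total weight of concepts
# the network has not yet learned. Assumptions (mine): 2**L concepts of
# bit-length L, each with weight 4**-L, each learned by iteration t with
# probability min(1, 2**(t - L)).

def expected_loss(t: int, max_len: int = 40) -> float:
    """Expected total weight of concepts still unlearned at iteration t."""
    loss = 0.0
    for length in range(1, max_len + 1):
        p_learned = min(1.0, 2.0 ** (t - length))
        loss += (2 ** length) * (4.0 ** -length) * (1.0 - p_learned)
    return loss

for t in range(0, 21, 2):
    print(f"iteration {t:2d}: expected loss ~ {expected_loss(t):.6f}")
```

The printed values decay roughly like $2^{-t}$, which is the exponential tail the counting argument predicts.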
- As in, one that needs very few error-correcting bits after interpretation. The explanation "fine-structure constants" needs many error-correcting bits, such as "your brain spasmed and misinterpreted the text," while "Moser's circle problem" produces the pattern without any need for error correction. ↩︎
- This is known as the plethystic logarithm. ↩︎
- This is the same idea as roots of unity filters. ↩︎
3 comments
comment by James Camacho (james-camacho) · 2025-05-06T22:55:07.018Z
A couple things to add that don't deserve to be in the main text:
- The Taylor series for the $\mathbb{Z}/3\mathbb{Z}$ partition function has negative second- and third-order coefficients, which means it actively learns "not this" the second and third times around. This is why we see a dip, followed by a steep rise in loss, and then a tapering out.
- The $\mathbb{Z}$ and $\mathbb{Z}/2\mathbb{Z}$ partition functions correspond to bosons (e.g. photons) and fermions (e.g. electrons) in physics. Perhaps $\mathbb{Z}/3\mathbb{Z}$ corresponds to an exotic particle the theorists have yet to classify.
comment by RogerDearnaley (roger-d-1) · 2025-05-13T05:18:07.366Z
Pretty sure that the 'exotic particle' in question for the last sentence would be a spin-1/6 anyon. So '…have already classified'.
comment by James Camacho (james-camacho) · 2025-05-13T16:50:12.748Z
I haven't been able to find the spin-1/6 anyon's partition function, so mine could be wrong.