Posts

Stagewise Development in Neural Networks 2024-03-20T19:54:06.181Z
Timaeus's First Four Months 2024-02-28T17:01:53.437Z
Generalization, from thermodynamics to statistical physics 2023-11-30T21:28:50.089Z
Announcing Timaeus 2023-10-22T11:59:03.938Z
You’re Measuring Model Complexity Wrong 2023-10-11T11:46:12.466Z
Open Call for Research Assistants in Developmental Interpretability 2023-08-30T09:02:59.781Z
Apply for the 2023 Developmental Interpretability Conference! 2023-08-25T07:12:36.097Z
Towards Developmental Interpretability 2023-07-12T19:33:44.788Z
Approximation is expensive, but the lunch is cheap 2023-04-19T14:19:12.570Z
Jesse Hoogland's Shortform 2023-04-12T09:32:54.009Z
Singularities against the Singularity: Announcing Workshop on Singular Learning Theory and Alignment 2023-04-01T09:58:22.764Z
Empirical risk minimization is fundamentally confused 2023-03-22T16:58:41.113Z
The shallow reality of 'deep learning theory' 2023-02-22T04:16:11.216Z
Gradient surfing: the hidden role of regularization 2023-02-06T03:50:39.649Z
Spooky action at a distance in the loss landscape 2023-01-28T00:22:46.506Z
Neural networks generalize because of this one weird trick 2023-01-18T00:10:36.998Z
No, human brains are not (much) more efficient than computers 2022-09-06T13:53:07.519Z
A Starter-kit for Rationality Space 2022-09-01T13:04:48.618Z

Comments

Comment by Jesse Hoogland (jhoogland) on Examples of Highly Counterfactual Discoveries? · 2024-04-24T04:17:02.290Z · LW · GW

If you'll allow linguistics, Pāṇini was two and a half thousand years ahead of modern descriptive linguists.

Comment by Jesse Hoogland (jhoogland) on Many arguments for AI x-risk are wrong · 2024-03-12T01:50:51.285Z · LW · GW

Right. SLT tells us how to operationalize and measure (via the LLC) basin volume in general for DL. It tells us about the relation between the LLC and meaningful inductive biases in the particular setting described in this post. I expect future SLT to give us meaningful predictions about inductive biases in DL in particular. 

Comment by Jesse Hoogland (jhoogland) on Many arguments for AI x-risk are wrong · 2024-03-11T13:38:54.682Z · LW · GW

The post is live here.

Comment by Jesse Hoogland (jhoogland) on Many arguments for AI x-risk are wrong · 2024-03-11T02:09:34.300Z · LW · GW

If we actually had the precision and maturity of understanding to predict this "volume" question, we'd probably (but not definitely) be able to make fundamental contributions to DL generalization theory + inductive bias research. 

 

Obligatory singular learning theory plug: SLT can and does make predictions about the "volume" question. There will be a post soon by @Daniel Murfet that provides a clear example of this. 

Comment by Jesse Hoogland (jhoogland) on Timaeus's First Four Months · 2024-02-28T18:55:34.085Z · LW · GW

You can find a v0 of an SLT/devinterp reading list here. Expect an updated reading list soon (which we will cross-post to LW). 

Comment by Jesse Hoogland (jhoogland) on You’re Measuring Model Complexity Wrong · 2024-02-09T18:49:54.909Z · LW · GW

Our work on the induction bump is now out. We find several additional "hidden" transitions, including one that splits the induction bump in two: a first part where previous-token heads start forming, and a second part where the rest of the induction circuit finishes forming. 

The first substage is a type-B transition (loss changing only slightly, complexity decreasing). The second substage is a more typical type-A transition (loss decreasing, complexity increasing). We're still unclear about how to understand this type-B transition structurally. How is the model simplifying? E.g., is there some link between attention heads composing and the basin broadening? 

Comment by Jesse Hoogland (jhoogland) on Generalization, from thermodynamics to statistical physics · 2023-12-05T10:17:23.581Z · LW · GW

As a historical note / broader context, the worry about model class over-expressivity has been there since the early days of Machine Learning. There was a mistrust of large blackbox models like random forests and SVMs and their unusually low test or even cross-validation loss, citing the ability of these models to fit noise. Breiman's frank commentary back in 2001, "Statistical Modeling: The Two Cultures", touches on this among other worries about ML models. The success of ML has turned this worry into the generalisation puzzle, with Zhang et al. 2017 being a call to arms when DL greatly exacerbated the scale and urgency of the problem.

Yeah, it surprises me that Zhang et al. (2017) has had the impact it did when, like you point out, the ideas have been around for so long. Deep learning theorists like Telgarsky point to it as a clear turning point.

Naive optimism: hopefully progress towards a strong resolution to the generalisation puzzle give us understanding enough to gain control on what kind of solutions are learned. And one day we can ask for more than generalisation, like "generalise and be safe".

This I can stand behind.

Comment by Jesse Hoogland (jhoogland) on Generalization, from thermodynamics to statistical physics · 2023-12-04T21:17:32.868Z · LW · GW

Thanks for raising that, it's a good point. I'd appreciate it if you also cross-posted this to the approximation post here.

Comment by Jesse Hoogland (jhoogland) on Generalization, from thermodynamics to statistical physics · 2023-12-04T21:14:25.394Z · LW · GW

I think this mostly has to do with the fact that learning theory grew up in/next to computer science where the focus is usually worst-case performance (esp. in algorithmic complexity theory). This naturally led to the mindset of uniform bounds. That and there's a bit of historical contingency: people started doing it this way, and early approaches have a habit of sticking.

Comment by Jesse Hoogland (jhoogland) on My Criticism of Singular Learning Theory · 2023-11-22T21:50:59.183Z · LW · GW

This is probably true for neural networks in particular, but mathematically speaking, it completely depends on how you parameterise the functions. You can create a parameterisation in which this is not true.

Agreed. So maybe what I'm actually trying to get at is a statement about what "universality" means in the context of neural networks. Just as the microscopic details of physical theories don't matter much to their macroscopic properties in the vicinity of critical points ("universality" in statistical physics), and just as the microscopic details of random matrices don't seem to matter for their bulk and edge statistics ("universality" in random matrix theory), many of the particular choices of neural network architecture don't seem to matter for learned representations ("universality" in DL). 

What physics and random matrix theory tell us is that a given system's universality class is determined by its symmetries. (This starts to get at why we SLT enthusiasts are so obsessed with neural network symmetries.) In the case of learning machines, those symmetries are fixed by the parameter-function map, so I totally agree that you need to understand the parameter-function map. 

However, focusing on symmetries is already a pretty major restriction. If a universality statement like the above holds for neural networks, it would tell us that most of the details of the parameter-function map are irrelevant.  

There's another important observation, which is that neural network symmetries leave geometric traces. Even if the RLCT on its own does not "solve" generalization, the SLT-inspired geometric perspective might still hold the answer: it should be possible to distinguish neural networks from the polynomial example you provided by understanding the geometry of the loss landscape. The ambitious statement here might be that all the relevant information you might care about (in terms of understanding universality) is already contained in the loss landscape.

If that's the case, my concern about focusing on the parameter-function map is that it would pose a distraction. It could miss the forest for the trees if you're trying to understand the structure that develops and phenomena like generalization. I expect the more fruitful perspective to remain anchored in geometry.

Is this not satisfied trivially due to the fact that the RLCT has a certain maximum and minimum value within each model class? (If we stick to the assumption that  is compact, etc.)

Hmm, maybe restrict  so it has to range over 

Comment by Jesse Hoogland (jhoogland) on My Criticism of Singular Learning Theory · 2023-11-21T22:35:16.962Z · LW · GW

The easiest way to explain why this is the case will probably be to provide an example. Suppose we have a Bayesian learning machine with 15 parameters, whose parameter-function map is given by

and whose loss function is the KL divergence. This learning machine will learn 4-degree polynomials.

I'm not sure, but I think this example is pathological. One possible reason for this to be the case is that the symmetries in this model are entirely "generic" or "global." The more interesting kinds of symmetry are "nongeneric" or "local." 

What I mean by "global" is that each point  in the parameter space  has the same set of symmetries (specifically, the product of a bunch of hyperboloids ). In neural networks there are additional symmetries that are only present for a subset of the weights. My favorite example of this is the decision boundary annihilation (see below). 

For the sake of simplicity, consider a ReLU network learning a 1D function (which is just piecewise linear approximation). Consider what happens when you rotate two adjacent pieces so they end up sitting on the same line, thereby "annihilating" the decision boundary between them, so this now-hidden decision boundary no longer contributes to your function. You can move this decision boundary along the composite linear piece without changing the learned function, but this only holds until you reach the next decision boundary over. I.e.: this symmetry is local. (Note that real-world networks actually seem to take advantage of this property.)
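To make this concrete, here's a minimal numerical sketch of the annihilation symmetry in the simplest parameterization I can think of, f(x) = c + Σᵢ aᵢ·relu(x − bᵢ). Everything below (the toy network, the specific weights) is illustrative rather than code from any of our experiments:

```python
# A minimal sketch (not from our experiments) of decision boundary annihilation in a
# 1D ReLU network f(x) = c + sum_i a_i * relu(x - b_i). The slope change at the i-th
# kink is exactly a_i, so a_i = 0 means the adjacent pieces lie on one line and the
# kink position b_i can be moved without changing the learned function.
import numpy as np

def relu_net(x, a, b, c=0.0):
    return c + sum(a_i * np.maximum(x - b_i, 0.0) for a_i, b_i in zip(a, b))

x = np.linspace(-2.0, 2.0, 1001)
a = [1.0, 0.0, -0.5]          # the middle unit's outgoing weight has been annihilated
b_before = [-1.0, 0.2, 1.0]   # its hidden kink sits at 0.2 ...
b_after = [-1.0, 0.7, 1.0]    # ... or at 0.7: the function is identical either way

print(np.max(np.abs(relu_net(x, a, b_before) - relu_net(x, a, b_after))))  # 0.0
```

In this toy parameterization the freed kink can slide anywhere; in a general two-layer network the freedom is local in the way described above, which is exactly what makes it a "nongeneric" symmetry.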

This is the more relevant and interesting kind of symmetry, and it's easier to see what this kind of symmetry has to do with functional simplicity: simpler functions have more local degeneracies. We expect this to be true much more generally — that algorithmic primitives like conditional statements, for loops, composition, etc. have clear geometric traces in the loss landscape. 

So what we're really interested in is something more like the relative RLCT (to the model class's maximum RLCT). This is also the relevant quantity from a dynamical perspective: it's relative loss and complexity that dictate transitions, not absolute loss or complexity. 

This gets at another point you raised:

2. It is a type error to describe a function as having low RLCT. A given function may have a high RLCT or a low RLCT, depending on the architecture of the learning machine. 

You can make the same critique of Kolmogorov complexity. Kolmogorov complexity is defined relative to some base UTM, and changing the UTM shifts complexities by an arbitrary constant correction. What's really interesting is the relative Kolmogorov complexity. 

In the case of NNs, the model class is akin to your UTM, and, as you show, you can engineer the model class (by setting generic symmetries) to achieve any constant correction to the model complexity. But those constant corrections are not the interesting bit. The interesting bit is the question of relative complexities. I expect that you can make a statement similar to the equivalence-up-to-a-constant of Kolmogorov complexity for RLCTs. Wild conjecture: given two model classes  and  and some true distribution , their RLCTs obey:

where  is some monotonic function.

Comment by Jesse Hoogland (jhoogland) on You’re Measuring Model Complexity Wrong · 2023-11-16T09:17:48.748Z · LW · GW

I think there's some chance of models executing treacherous turns in response to a particular input, and I'd rather not trigger those if the model hasn't been sufficiently sandboxed.

Comment by Jesse Hoogland (jhoogland) on You’re Measuring Model Complexity Wrong · 2023-11-16T09:15:05.747Z · LW · GW

One would really want to know if the complexity measure can predict 'emergence' of capabilities like inner-monologue, particularly if you can spot previously-unknown capabilities emerging which may not be covered in any of your existing benchmarks.

That's our hope as well. Early ongoing work on toy transformers trained to perform linear regression seems to bear out that lambdahat can reveal transitions where the loss can't. 
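For readers wondering what goes into computing lambdahat in practice, here is a rough sketch of the kind of estimator involved: sample from a tempered, localized posterior around the trained weights with SGLD and compare the average sampled loss to the loss at the trained point. The function name, hyperparameters, and localization term below are illustrative assumptions, not the exact settings from our experiments:

```python
# Rough sketch of an SGLD-based lambdahat estimate around trained weights w*:
#   lambdahat = n * beta * (E_w[L_n(w)] - L_n(w*)),  with inverse temperature beta = 1/log(n).
# Names and hyperparameters are illustrative, not the exact procedure used in the post.
import copy
import math
import torch

def estimate_llc(model, loss_fn, loader, n, steps=1000, lr=1e-5, gamma=100.0):
    anchor = copy.deepcopy(model)          # frozen copy of the trained weights w*
    beta = 1.0 / math.log(n)
    sampled_losses = []
    data_iter = iter(loader)
    for _ in range(steps):
        try:
            xb, yb = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            xb, yb = next(data_iter)
        loss = loss_fn(model(xb), yb)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), anchor.parameters()):
                # Langevin step on the tempered loss, with a pull back toward w*
                drift = n * beta * p.grad + gamma * (p - p0)
                p.add_(-0.5 * lr * drift + math.sqrt(lr) * torch.randn_like(p))
        sampled_losses.append(loss.item())
    with torch.no_grad():
        ref = [loss_fn(anchor(xb), yb).item() for xb, yb in loader]
    return n * beta * (sum(sampled_losses) / len(sampled_losses) - sum(ref) / len(ref))
```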

But this type of 'emergence' tends to happen with such expensive models that the available checkpoints are too separated to be informative (if you get an emergence going from 1b vs 10b vs 100b, what does it mean to compute a complexity measure there? You'd really want to compare them at wherever the emergence actually really happens, like 73.5b vs 74b, or whatever.) 

The kind of emergence we're currently most interested in is emergence over training time, which makes studying these transitions much more tractable (the main cost you're paying is storage for checkpoints, and storage is cheap). It's still a hurdle in that we have to start training large models ourselves (or setting up collaborations with other labs). 

But the induction bump happens at pretty small (i.e., cheap) model sizes, so it could be replicated many times and in many ways within-training-run and across training runs, and one could see how the complexity metric reflects or predicts the induction bump. Is that one of the 'hidden' transitions you plan to test? And if not, why not?

The induction bump is one of the main things we're looking into now. 

Comment by Jesse Hoogland (jhoogland) on No, human brains are not (much) more efficient than computers · 2023-11-14T20:54:31.639Z · LW · GW

Oops yes this is a typo. Thanks for pointing it out. 

Comment by Jesse Hoogland (jhoogland) on Towards Developmental Interpretability · 2023-11-04T08:54:37.512Z · LW · GW

Should be fixed now, thank you!

Comment by Jesse Hoogland (jhoogland) on Announcing Timaeus · 2023-10-27T02:26:54.488Z · LW · GW

To be clear, our policy is not publish-by-default. Our current assessment is that the projects we're prioritizing do not pose a significant risk of capabilities externalities. We will continue to make these decisions on a per-project basis. 

Comment by Jesse Hoogland (jhoogland) on You’re Measuring Model Complexity Wrong · 2023-10-12T00:20:54.584Z · LW · GW

We don’t necessarily expect all dangerous capabilities to exhibit phase transitions. The ones that do are more dangerous because we can’t anticipate them, so this just seems like the most important place to start.

It's an open question to what extent the lottery-ticket style story of a subnetwork being continually upweighted contradicts (or supports) the phase transition perspective. Just because a subnetwork's strength is growing constantly doesn't mean its effect on the overall computation is. Rather than grokking, which is a very specific kind of phase transition, it's probably better to have in mind the emergence of in-context learning in tandem with induction heads, which seems to us more like the typical case we're interested in when we speak about structure in neural networks developing across training.

We expect there to be a deeper relation between degeneracy and structure. As an intuition pump, think of a code base where you have two modules communicating across some API. Often, you can change the interface between these two modules without changing the information content being passed between them and without changing their internal structure. Degeneracy — the ways in which you can change your interfaces — tells you something about the structure of these circuits, the boundaries between them, and maybe more. We'll have more to say about this in the future. 
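A throwaway illustration of that intuition pump (purely hypothetical code, nothing from our experiments): two modules communicating through an interface, where the interface can be rewired without touching what either module computes or what information flows between them.

```python
# Toy version of the API analogy: re-wiring the interface between two "modules"
# changes nothing about their internals or the information passed between them;
# that freedom to re-wire is the "degeneracy".
def producer():
    return {"mean": 3.0, "scale": 2.0}                 # module A's internal computation

def consumer_v1(mean, scale):
    return mean + scale                                # module B's internal computation

def consumer_v2(stats):                                # same module B, different interface
    return stats["mean"] + stats["scale"]

stats = producer()
assert consumer_v1(stats["mean"], stats["scale"]) == consumer_v2(stats)
```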

Comment by Jesse Hoogland (jhoogland) on Open Call for Research Assistants in Developmental Interpretability · 2023-09-16T07:32:03.249Z · LW · GW

Now that the deadline has arrived, I wanted to share some general feedback for the applicants and some general impressions for everyone in the space about the job market:

  • My number one recommendation for everyone is to work on more legible projects and outputs. A super low-hanging fruit for >50% of the applications would be to clean up your GitHub profiles or to create a personal site. Make it really clear to us which projects you're proud of, so we don't have to navigate through a bunch of old and out-of-use repos from classes you took years ago. We don't have much time to spend on every individual application, so you want to make it really easy for us to become interested in you. I realize most people don't even know how to create a GitHub profile page, so check out this guide.
  • We got 70 responses and will send out 10 invitations for interviews. 
  • We rejected a reasonable number of decent candidates outright because they were looking for part-time work. If this is you, don't feel dissuaded.
  • There were quite a few really bad applications (...as always): poor punctuation/capitalization, much too informal, not answering the questions, totally unrelated background, etc. Two suggestions: (1) If you're the kind of person who is trying to application-max, make sure you actually fill in the application. A shitty application is actually worse than no application, and I don't know why I have to say that. (2) If English is not your first language, run your answers through ChatGPT. GPT-3.5 is free. (Actually, this advice is for everyone). 
  • Between 5 and 10 people expressed interest in an internship option. We're going to think about this some more. If this includes you, and you didn't mention it in your application, please reach out. 
  • Quite a few people came from a data science / analytics background. Using ML techniques is actually pretty different from researching ML techniques, so for many of these people I'd recommend you work on some kind of project in interpretability or related areas to demonstrate that you're well-suited to this kind of research. 
  • Remember that job applications are always noisy. We almost certainly made mistakes, so don't feel discouraged!

Comment by Jesse Hoogland (jhoogland) on Open Call for Research Assistants in Developmental Interpretability · 2023-08-30T21:04:03.712Z · LW · GW

Hey Thomas, I wrote about our reasoning for this in response to Winston:

All in all, we're expecting most of our hires to come from outside the US where the cost of living is substantially lower. If lower wages are a deal-breaker for anyone but you're still interested in this kind of work, please flag this in the form. The application should be low-effort enough that it's still worth applying.

Comment by Jesse Hoogland (jhoogland) on Open Call for Research Assistants in Developmental Interpretability · 2023-08-30T16:36:21.075Z · LW · GW

Hey Winston, thanks for writing this out. This is something we talked a lot about internally. Here are a few thoughts: 

Comparisons: At 35k a year, it seems it might be considerably lower than industry equivalent even when compared to other programs 

I think the more relevant comparison is academia, not industry. In academia, $35k is (unfortunately) well within the normal range for RAs and PhD students. This is especially true outside the US, where wages are easily 2x - 4x lower.

Often academics justify this on the grounds that you're receiving more than just monetary benefits: you're receiving mentorship and training. We think the same will be true for these positions. 

The actual reason is that you have to be somewhat crazy to even want to go into research. We're looking for somewhat crazy.

If I were applying to this, I'd feel confused and slightly underappreciated if I had the right set of ML/Software Engineering skills but was paid barely at subsistence level for my full-time work (in NY).

If it helps, we're paying ourselves even less. As much as we'd like to pay the RAs (and ourselves) more, we have to work with what we have.  

Of course... money is tight: the grant constraint is well acknowledged here. But potentially the number of RAs you expect to hire could be adjusted downward while increasing the submission rate of candidates who truly fit the requirements of the research program.

For exceptional talent, we're willing to pay higher wages. 

The important thing is that both funding and open positions are exceptionally scarce. We expect there to be enough strong candidates who are willing to take the pay cut.

All in all, we're expecting most of our hires to come from outside the US where the cost of living is substantially lower. If lower wages are a deal-breaker for anyone but you're still interested in this kind of work, please flag this in the form. The application should be low-effort enough that it's still worth applying. 

Comment by Jesse Hoogland (jhoogland) on Towards Developmental Interpretability · 2023-07-12T21:16:59.106Z · LW · GW

Yes (see footnote 1)! The main place where devinterp diverges from Naomi's proposal is the emphasis on phase transitions as described by SLT. During the first phase of the plan, simply studying how behaviors develop over different checkpoints is one of the main things we'll be doing to establish whether these transitions exist in the way we expect.

Comment by Jesse Hoogland (jhoogland) on Approximation is expensive, but the lunch is cheap · 2023-04-20T20:55:44.946Z · LW · GW

The No Free Lunch Theorem says "that any two optimization algorithms are equivalent when their performance is averaged across all possible problems."

So if the class of target functions (=the set of possible problems you would want to solve) is very large, then it's harder for a random model class (=set of solutions) to do much better than any other model class. You can't obtain strong guarantees for why you should expect good approximation.

If the target function class is smaller and your model class is big enough you might have better luck.
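Here's a tiny simulation of that averaging argument (an illustrative sketch with made-up toy "learners"): on a four-element domain, averaging off-training-set accuracy over every possible labeling gives exactly chance performance, no matter which learner you pick.

```python
# No Free Lunch intuition: average any learner's off-training-set accuracy over *all*
# boolean target functions on a tiny domain, and it comes out to chance.
import itertools

domain = list(range(4))            # four possible inputs
train_x = [0, 1]                   # the learner sees labels for these
test_x = [2, 3]                    # and is evaluated on these

def learner_constant_zero(train_pairs, x):
    return 0                       # always predict 0

def learner_majority(train_pairs, x):
    labels = [y for _, y in train_pairs]
    return int(sum(labels) >= len(labels) / 2)  # predict the majority training label

for learner in (learner_constant_zero, learner_majority):
    total, count = 0.0, 0
    # enumerate every possible labeling of the domain (every "problem")
    for labels in itertools.product([0, 1], repeat=len(domain)):
        target = dict(zip(domain, labels))
        train_pairs = [(x, target[x]) for x in train_x]
        acc = sum(learner(train_pairs, x) == target[x] for x in test_x) / len(test_x)
        total += acc
        count += 1
    print(learner.__name__, total / count)   # both print 0.5
```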

Comment by Jesse Hoogland (jhoogland) on Jesse Hoogland's Shortform · 2023-04-12T09:32:54.322Z · LW · GW

When people complain about LLMs doing nothing more than interpolation, they're mixing up two very different ideas: interpolation as intersecting every point in the training data, and interpolation as predicting behavior in-domain rather than out-of-domain.

With language, interpolation-as-intersecting isn't inherently good or bad—it's all about how you do it. Just compare polynomial interpolation to piecewise-linear interpolation (the thing that ReLUs do).
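A quick numerical sketch of that comparison (the data and values are illustrative, not tied to any experiment): both interpolants pass through every training point, but the high-degree polynomial oscillates between them while the piecewise-linear fit stays tame.

```python
# Interpolate the same noisy points with a degree-9 polynomial vs. a piecewise-linear
# interpolant. Both hit every training point; only the polynomial misbehaves between them.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 10)
y_train = np.sin(3 * x_train) + 0.1 * rng.standard_normal(10)
x_test = np.linspace(-1, 1, 200)

poly_coeffs = np.polyfit(x_train, y_train, deg=9)      # exact fit through all 10 points
y_poly = np.polyval(poly_coeffs, x_test)

y_pwl = np.interp(x_test, x_train, y_train)            # what a 1D ReLU net effectively does

true_y = np.sin(3 * x_test)
print("polynomial RMSE:      ", np.sqrt(np.mean((y_poly - true_y) ** 2)))
print("piecewise-linear RMSE:", np.sqrt(np.mean((y_pwl - true_y) ** 2)))
```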

Neural networks (NNs) are biased towards fitting simple piecewise functions, which is (locally) the least biased way to interpolate. The simplest function that intersects two points is the straight line.

In reality, we don't even train LLMs long enough to hit that intersecting threshold. In this under-interpolated sweet spot, NNs seem to learn features from coarse to fine with increasing model size. E.g.: https://arxiv.org/abs/1903.03488

 

Bonus: this is what's happening with double descent: Test loss goes down, then up, until you reach the interpolation threshold. At this point there's only one interpolating solution, and it's a bad fit. But as you increase model capacity further, you end up with many interpolating solutions, some of which generalize better than others.

Meanwhile, on the interpolation-not-extrapolation reading: NNs can and do extrapolate outside the convex hull of training samples. Again, the bias towards simple linear extrapolations is locally the least biased option. There's no beating the polytopes.

Here I've presented the visuals in terms of regression, but the story is pretty similar for classification, where the function being fit is a classification boundary. In this case, there's extra pressure to maximize margins, which further encourages generalization.

The next time you feel like dunking on interpolation, remember that you just don't have the imagination to deal with high-dimensional interpolation. Maybe keep it to yourself and go interpolate somewhere else.

Comment by Jesse Hoogland (jhoogland) on Empirical risk minimization is fundamentally confused · 2023-03-23T22:07:50.295Z · LW · GW

Thank you. Pedantic is good (I fixed the root)!

Comment by Jesse Hoogland (jhoogland) on Empirical risk minimization is fundamentally confused · 2023-03-23T18:58:59.414Z · LW · GW

You're right, thanks for pointing that out! I fixed the notation. Like you say, the difference of risks doesn't even qualify as a metric (the other choices mentioned do, however).

Comment by Jesse Hoogland (jhoogland) on The shallow reality of 'deep learning theory' · 2023-02-22T21:48:03.632Z · LW · GW

I agree with you that I haven't presented enough evidence! Which is why this is the first part in a six-part sequence.

Comment by Jesse Hoogland (jhoogland) on The shallow reality of 'deep learning theory' · 2023-02-22T16:53:10.279Z · LW · GW

There's more than enough math! (And only a bit of philosophy.)

Comment by Jesse Hoogland (jhoogland) on The shallow reality of 'deep learning theory' · 2023-02-22T16:51:49.994Z · LW · GW

Yes, I'm planning to adapt this sequence into a more technical and diplomatic version after the first pass.

To give the ML theorists credit, there is genuinely interesting new non-"classical" work going on (but then "classical" is doing a lot of the heavy lifting). Still, some of these old-style tendrils of classical learning theory are lingering around, and it's time to rip off the bandaid.

Comment by Jesse Hoogland (jhoogland) on The shallow reality of 'deep learning theory' · 2023-02-22T16:44:16.529Z · LW · GW

Thanks for pointing this out!

Comment by Jesse Hoogland (jhoogland) on Gradient surfing: the hidden role of regularization · 2023-02-06T16:53:03.180Z · LW · GW

Thanks Lawrence! I had missed the slingshot mechanism paper, so this is great!

(As an aside, I also think grokking is not very interesting to study -- if you want a generalization phenomena to study, I'd just study a task without grokking, and where you can get immediately generalization or memorization depending on hyperparameters.)

I totally agree that there are much more interesting tasks than grokking with modular arithmetic, but it seemed like an easy way to test the premise.

Also worth noting that grokking is pretty hyperparameter sensitive -- it's possible you just haven't found the right size/form of noise yet!

I will continue the exploration!

Comment by Jesse Hoogland (jhoogland) on Gradient surfing: the hidden role of regularization · 2023-02-06T16:46:07.602Z · LW · GW

I think Omnigrok looked at enough tasks (MNIST, group composition, IMDb reviews, molecule polarizability) to suggest that the weight norm is an important ingredient and not just a special case / cherry-picking.

That said, I still think there's a good chance it isn't the whole story. I'd love to explore a task that generalizes at large weight norms, but it isn't obvious to me that you can straightforwardly construct such a task.

Comment by Jesse Hoogland (jhoogland) on Spooky action at a distance in the loss landscape · 2023-02-05T20:03:11.989Z · LW · GW

When you start to include broad basins (where a small perturbation changes your loss but only slightly), and other results like Mingard et al.'s and Valle Pérez et al.'s, I'm inclined to think this picture is more representative. But even in this picture, we can probably end up making statements about singularities in the blobby regions having disproportionate effect over all points within those regions.

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-31T00:28:12.212Z · LW · GW

Let me add some more views on SLT and capabilities/alignment. 

Quoting Dan Murfet:

(Dan Murfet’s personal views here) First some caveats: although we are optimistic SLT can be developed into theory of deep learning, it is not currently such a theory and it remains possible that there are fundamental obstacles. Putting those aside for a moment, it is plausible that phenomena like scaling laws and the related emergence of capabilities like in-context learning can be understood from first principles within a framework like SLT. This could contribute both to capabilities research and safety research.

Contribution to capabilities. Right now it is not understood why Transformers obey scaling laws, and how capabilities like in-context learning relate to scaling in the loss; improved theoretical understanding could increase scaling exponents or allow them to be engineered in smaller systems. For example, some empirical work already suggests that certain data distributions lead to in-context learning. It is possible that theoretical work could inspire new ideas. Thermodynamics wasn’t necessary to build steam engines, but it helped to push the technology to new levels of capability once the systems became too big and complex for tinkering.

Contribution to alignment. Broadly speaking it is hard to align what you do not understand. Either the aspects of intelligence relevant for alignment are universal, or they are not. If they are not, we have to get lucky (and stay lucky as the systems scale). If the relevant aspects are universal (in the sense that they arise for fundamental reasons in sufficiently intelligent systems across multiple different substrates) we can try to understand them and use them to control/align advanced systems (or at least be forewarned about their dangers) and be reasonably certain that the phenomena continue to behave as predicted across scales. This is one motivation behind the work on properties of optimal agents, such as Logical Inductors. SLT is a theory of universal aspects of learning machines, it could perhaps be developed in similar directions.

Does understanding scaling laws contribute to safety? It depends on what is causing scaling laws. If, as we suspect, it is about phases and phase transitions then it is related to the nature of the structures that emerge during training which are responsible for these phase transitions (e.g. concepts). A theory of interpretability scalable enough to align advanced systems may need to develop a fundamental theory of abstractions, especially if these are related to the phenomena around scaling laws and emergent capabilities.

Our take on this has been partly spelled out in the Abstraction seminar. We’re trying to develop existing links in mathematical physics between renormalisation group flow and resolution of singularities, which applied in the context of SLT might give a fundamental understanding of how abstractions emerge in learning machines. One best case scenario of the application of SLT to alignment is that this line of research gives a theoretical framework in which to understand more empirical interpretability work.

The utility of theory in general and SLT in particular depends on your mental model of the problem landscape between here and AGI. To return to the thermodynamics analogy: a working theory of thermodynamics isn’t necessary to build train engines, but it’s probably necessary to build rockets. If you think the engineering-driven approach that has driven deep learning so far will plateau before AGI, probably theory research is bad in expected value. If you think theory isn’t necessary to get to AGI, then it may be a risk that we have to take.

Summary: In my view we seem to know enough to get to AGI. We do not know enough to get to alignment. Ergo we have to take some risks.

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-30T22:23:16.503Z · LW · GW

First of all, I really like the images, they made things easier to understand and are pretty. Good work with that!

Thank you!

My biggest problem with this is the unclear applicability of this to alignment. Why do we want to predict scaling laws? Doesn't that mostly promote AI capabilities, and not alignment very much?

This is also my biggest source of uncertainty on the whole agenda. There's definitely a capabilities risk, but I think the benefits to understanding NNs currently much outweigh the benefits to improving them. 

In particular, I think that understanding generalization is pretty key to making sense of outer and inner alignment. If "singularities = generalization" holds up, then our task seems to become quite a lot easier: we only have to understand a few isolated points of the loss landscape instead of the full exponential hell that is a billions-dimensional system.

In a similar vein, I think that this is one of the most promising paths to understanding what's going on during training. When we talk about phase changes / sharp left turns / etc., what we may really be talking about are discrete changes in the local singularity structure of the loss landscape. Understanding singularities seems key to predicting and anticipating these changes just as understanding critical points is key to predicting and anticipating phase transitions in physical systems.

  • We care about the generalization error  with respect to some prior , but the latter doesn't have any effect on the dynamics of SGD or on what the singularity is
  • The Watanabe limit ( as ) and the restricted free energy  are all presented as results which rely on the singularities and somehow predict generalization. But all of these depend on the prior , and earlier we've defined the singularities to be of the likelihood function; plus SGD actually only uses the likelihood function for its dynamics.

As long as your prior has non-zero support on the singularities, the results hold up (because we're taking this large-N limit where the prior becomes less important). Like I mention in the objections, linking this to SGD is going to require more work. To first order, when your prior has support over only a compact subset of weight space, your behavior is dominated by the singularities in that set (this is another way to view the comments on phase transitions).
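For reference, the asymptotics being referred to are the following (a sketch in what I take to be Watanabe's standard notation, with λ the RLCT and m its multiplicity; see the book for precise conditions):

```latex
% Bayes generalization error and free energy in the large-sample limit
\mathbb{E}\left[G_n\right] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),
\qquad
F_n = n L_n(w_0) + \lambda \log n - (m - 1) \log \log n + O_p(1).
```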

It's also unclear what the takeaway from this post is. How can we predict generalization or dynamics from these things? Are there any empirical results on this?

This is very much a work in progress. 

In statistical physics, much of our analysis is built on the assumption that we can replace temporal averages with phase-space averages. This is justified on grounds of the ergodic hypothesis. In singular learning theory, we've jumped to parameter (phase)-space averages without doing the important translation work from training (temporal) averages. SGD is not ergodic, so this will require care. That the exact asymptotic forms may look different in the case of SGD seems probable. That the asymptotic forms for SGD make no reference to singularities seems unlikely. The basic takeaway is that singularities matter disproportionately, and if we're going to try to develop a theory of DNNs, they will likely form an important component.

For (early) empirical results, I'd check out the theses mentioned here.

 is not a KL divergence, the terms of the sum should be multiplied by  or .

is an empirical KL divergence. It's multiplied by the empirical distribution, , which just puts 1/n probability on each observed sample (and 0 elsewhere).

the Hamiltonian is a random process given by the log likelihood ratio function

Also given by the prior, if we go by the equation just above that. Also where does "ratio" come from?

Yes, also the prior, thanks for the correction. The ratio comes from doing the normalization ("log likelihood ratio" is just another one of Watanabe's names for the empirical KL divergence). In the following definition,

the likelihood ratio is 

But that just gives us the KL divergence.
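For reference, the definitions this exchange is circling around (a hedged reconstruction in what I take to be Watanabe's usual notation):

```latex
% Log likelihood ratio function and its empirical average (the "empirical KL divergence")
f(x, w) = \log \frac{q(x)}{p(x \mid w)},
\qquad
K_n(w) = \frac{1}{n} \sum_{i=1}^{n} f(X_i, w),
\qquad
K(w) = \mathbb{E}_{X \sim q}\left[f(X, w)\right] = \mathrm{KL}\!\left(q(x) \,\|\, p(x \mid w)\right).
```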

I'm not sure where you get this. Is it from the fact that predicting p(x | w) = q(x) is optimal, because the actual probability of a data point is q(x)? If not, it'd be nice to specify.

the minima of the term in the exponent, K(w), are equal to 0.

This is only true for the global minima, but for the dynamics of learning we also care about local minima (that may be higher than 0). Are we implicitly assuming that most local minima are also global? Is this true of actual NNs?

This is the comment in footnote 3. Like you say, it relies on the assumption of realizability (there being a global minimum where K(w) = 0), which is not very realistic! As I point out in the objections, we can sometimes fix this, but not always (yet). 

the asymptotic form of the free energy as 

This is only true when the weights  are close to the singularity, right? 

That's the crazy thing. You do the integral over all the weights to get the model evidence, and it's totally dominated by just these few weights. Again, when we're making the change to SGD, this probably changes. 

Also, what is ? It seems like it's the RLCT, but this isn't stated.

Yes, I've made an edit. Thanks!

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-30T21:43:14.910Z · LW · GW

Yep, regularization tends to break these symmetries.

I think the best way to think of this is that it causes the valleys to become curved — i.e., regularization helps the neural network navigate the loss landscape. In its absence, moving across these valleys depends on the stochasticity of SGD which grows very slowly with the square root of time.

That said, regularization is only a convex change to the landscape that doesn't change the important geometrical features. In its presence, we should still expect the singularities of the corresponding regularization-free landscape to have a major macroscopic effect.

There are also continuous zero-loss deformations in the loss landscape that are not affected by regularization because they aren't a feature of the architecture but of the "truth". (See the thread with tgb for a discussion of this, where we call these "Type B".)

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-30T21:31:28.138Z · LW · GW

This is a toy example (I didn't come up with it with anything in particular in mind).

I think the important thing is that the distinction does not have much of a difference in practice. Both correspond to lower-effective dimensionality (type A very explicitly, and type B less directly). Both are able to "trap" random motion. And it seems like both somehow help make the loss landscape more navigable.

If you're interested in interpreting the energy landscape as a loss landscape,  and  would be the parameters (and  and  would be hyperparameters related to things like the learning rate and batch size).

Comment by Jesse Hoogland (jhoogland) on Spooky action at a distance in the loss landscape · 2023-01-30T05:34:01.467Z · LW · GW

Instead of simulating Brownian motion, you could run SGD with momentum. That would be closer to what actually happens with NNs, and just as easy to simulate.

Hey I need a reason to write a follow-up to this, right?

I also take issue with the way the conclusion is phrased. "Singularities work because they transform random motion into useful search for generalization". This is only true if you assume that points nearer a singularity generalize better. Maybe I'd phrase it as, "SGD works because it's more likely to end up near a singularity than the potential alone would predict, and singularities generalize better (see my [Jesse's] other post)". Would you agree with this phrasing?

I was trying to be intentionally provocative, but you're right — it's too much. Thanks for the suggestion!

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-28T03:41:13.618Z · LW · GW

I wrote a follow-up that should be helpful to see an example in more detail. The example I mention is the loss function (=potential energy) . There's a singularity at the origin. 

This does seem like an important point to emphasize: symmetries in the model  (or  if you're making deterministic predictions) and the true distribution  lead to singularities in the loss landscape . There's an important distinction between  and .
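As a representative toy potential (not necessarily the exact one from the follow-up post), take

```latex
L(w_1, w_2) = w_1^2 w_2^2 .
```

Its zero set is the union of the two coordinate axes; away from the origin these are smooth one-dimensional valleys, but at the origin, where they cross, the gradient and the Hessian both vanish, so the origin is a degenerate (singular) critical point of the kind discussed here.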

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-26T18:19:22.774Z · LW · GW

To take a step back, the idea of a Taylor expansion is that we can express any  function as an (infinite) polynomial. If you're close enough to the point you're expanding around, then a finite polynomial can be an arbitrarily good fit.

The central challenge here is that  is pretty much never a polynomial. So the idea is to find a mapping, , that lets us re-express  in terms of a new coordinate system, . If we do this right, then we can express  (locally) as a polynomial in terms of the new coordinates, .

What we're doing here is we're "fixing" the non-differentiable singularities in  so that we can do a kind of Taylor expansion over the new coordinates. That's why we have to introduce this new manifold, , and mapping .
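Concretely, the form one is after (a sketch of the standard statement, in what I believe is Watanabe's notation) is that in each local chart the resolution map g puts K into normal crossing form:

```latex
K\!\left(g(u)\right) = u_1^{2k_1} u_2^{2k_2} \cdots u_d^{2k_d},
\qquad
\varphi\!\left(g(u)\right) \left| g'(u) \right| = b(u)\, \left| u_1^{h_1} \cdots u_d^{h_d} \right|, \quad b(u) > 0,
```

so that in the new coordinates everything in sight is a monomial and the integrals defining the free energy can be handled chart by chart.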

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-23T18:19:16.806Z · LW · GW

I’m confused now too. Let’s see if I got it right:

A: You have two models with perfect train loss but different test loss. You can swap between them with respect to train loss but they may have different generalization performance.

B: You have two models whose layers are permutations of each other and so perform the exact same calculation (and therefore have the same generalization performance).

The claim is that the “simplest” models (largest singularities) dominate our expected learning behavior. Large singularities mean fewer effective parameters. The reason that simplicity (with respect to either type) translates to generalization is Occam’s razor: simple functions are compatible with more possible continuations of the dataset.

Not all type A redundant models are the same with respect to simplicity and therefore they’re not treated the same by learning.

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-20T17:11:57.702Z · LW · GW

I'm confused by the setup. Let's consider the simplest case: fitting points in the plane, y as a function of x. If I have three datapoints and I fit a quadratic to it, I have a dimension 0 space of minimizers of the loss function: the unique parabola through those three points (assume they're not on top of each other). Since I have three parameters in a quadratic, I assume that this means the effective degrees of freedom of the model is 3 according to this post. If I instead fit a quartic, I now have a dimension 1 space of minimizers and 4 parameters, so I think you're saying degrees of freedom is still 3. And so the DoF would be 3 for all degrees of polynomial models above linear. But I certainly think that we expect that quadratic models will generalize better than 19th degree polynomials when fit to just three points.

On its own the quartic has 4 degrees of freedom (and the 19th degree polynomial 19 DOFs).

It's not until I introduce additional constraints (independent equations) that the effective dimensionality goes down. E.g., a quartic + a linear equation = 3 degrees of freedom.

It's these kinds of constraints/relations/symmetries that reduce the effective dimensionality.
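A minimal numerical version of this counting (the coefficient counts below are just illustrative): the dimension of the space of exact fits is the number of coefficients minus the rank of the constraints imposed by the data.

```python
# A polynomial model with p coefficients fit exactly through 3 generic points leaves a
# (p - 3)-dimensional space of zero-loss parameters: the null space of the Vandermonde system.
import numpy as np

x = np.array([-1.0, 0.0, 1.0])        # three generic data points

for p in (3, 4, 8):                   # number of polynomial coefficients
    V = np.vander(x, p, increasing=True)          # rows: data points; columns: 1, x, x^2, ...
    flat_directions = p - np.linalg.matrix_rank(V)
    print(f"{p} coefficients -> {flat_directions}-dimensional space of exact fits")
```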

This video has a good example of a more realistic case:

I think the objection to this example is that the relevant function to minimize is not loss on the training data but something else? The loss it would have on 'real data'? That seems to make more sense of the post to me, but if that were the case, then I think any minimizer of that function would be equally good at generalizing by definition. Another candidate would be the parameter-function map you describe, which seems to be the relevant map whose singularities we are studying, but it's not well defined to ask for minimums (or level-sets) of that at all. So I don't think that's right either.

We don't have access to the "true loss." We only have access to the training loss (for this case, ). Of course the true distribution is sneakily behind the empirical distribution and so has after-effects in the training loss, but it doesn't show up explicitly in  (the thing we're trying to maximize).  

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-20T00:21:55.590Z · LW · GW

Yes, so an example of this would be the ReLU scaling symmetry discussed in "Neural networks are freaks of symmetries." You're right that regularization often breaks this kind of symmetry. 

But even when there are no local symmetries, having other points that have the same posterior means this assumption of asymptotic normality doesn't hold.

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-20T00:15:55.041Z · LW · GW

Not at all stupid!

A singularity here is defined as where the tangent is ill-defined. Is this just saying where the lines cross? In other words, that places where loss valleys intersect tend to generalize?

Yep, crossings are singularities, as are things like cusps and weirder things like tacnodes.

It's not necessarily saying that these places tend to generalize. It's that these singularities have a disproportionate impact on the overall tendency of models learning in that landscape to generalize. So these points can impact nearby (and even distant) points. 

If true, what is a good intuition to have around loss valleys? Is it reasonable to think of loss valleys kind of as their own heuristic functions? 

I still find the intuition difficult

For example, if you have a dataset with height and weight and are trying to predict life expectancy, one heuristic might be that if weight/height > X then predict lower life expectancy.  My intuition reading is that all sets of weights that implement this heuristic would correspond to one loss valley. 

If we think about some other loss valley, maybe one that captures underweight people where weight/height < Z, then the place where these loss valleys intersect would correspond to a neural network that predicts lower life expectancy for both overweight and underweight people. Intuitively it makes sense that this would correspond to better model generalization, is that on the right track? 

But to me it seems like these valleys would be additive, i.e. the place where they intersect should be lower loss than the basin of either valley on its own. This is because our crossing point should create good predictions for both overweight and underweight people, whereas either valley on its own should only create good predictions for one of those two sets. However, in the post the crossing points are depicted as having the same loss as either valley has on its own, is this intentional or do you think there ought to be a dip where valleys meet?

I like this example! If your model is  then the w-h space is split into lines of constant lifespan (top-left figure). If you have a loss which compares predicted lifespan to true lifespan, this will be constant on those lines as well. The lower overweight and underweight lifespans will be two valleys that intersect at the origin. The loss landscape could, however, be very different because it's measuring how good your prediction is, so there could be one loss valley, or two, or several. 

Suppose you have a different function  also with two valleys (top-right). Yes, if you add the two functions, the minima of the result will be at the intersections. But adding isn't actually representative of the kinds of operations we perform in networks.

For example, compare taking their min: now they cross and form part of the same level sets. It depends very much on the kind of composition. The symmetries I mention can cooperate very well.


Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-19T17:52:04.313Z · LW · GW

Let me see if I understand your question correctly. Are you asking: does the effective dimensionality / complexity / RLCT () actually tell us something different from the number of non-zero weights? And if the optimization method we're currently using already finds low-complexity solutions, why do we need to worry about it anyway?

So the RLCT tells us the "effective dimensionality" at the largest singularity. This is different from the number of non-zero weights because there are other symmetries that the network can take advantage of. The claim currently is more descriptive than prescriptive. It says that if you are doing Bayesian inference, then, in the limiting case of large datasets, this RLCT (which is a local thing) ends up having a global effect on your expected behavior. This is true even if your model is not actually at the RLCT. 

So this isn't currently proposing a new kind of optimization technique. Rather, it's making a claim about which features of the loss landscape end up having most influence on the training dynamics you see. This is exact for the case of Bayesian inference but still conjectural for real NNs (though there is early supporting evidence from experiments).
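For reference, the standard way the RLCT is pinned down (a sketch, in what I take to be the usual notation): it is minus the largest pole of the zeta function associated to K, or equivalently the exponent governing how the prior volume of near-optimal parameters shrinks.

```latex
\zeta(z) = \int_W K(w)^{z} \,\varphi(w)\, dw
\quad\text{has largest pole at } z = -\lambda \text{ (of order } m\text{)},
\qquad
V(\epsilon) = \int_{K(w) < \epsilon} \varphi(w)\, dw \;\sim\; c\,\epsilon^{\lambda} \left(-\log \epsilon\right)^{m-1}.
```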

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-19T17:39:10.340Z · LW · GW

This appears to be a high-quality book report. Thanks. I didn't see anywhere the 'because' is demonstrated. Is it proved in the citations or do we just have 'plausibly because'?

The because ends up taking a few dozen pages to establish in Watanabe 2009 (and only after introducing algebraic geometry, empirical processes, and a bit of complex analysis).  Anyway, I thought it best to leave the proof as an exercise for the reader. 

Physics experiences in optimizing free energy have long inspired ML optimization uses. Did physicists playing with free energy lead to new optimization methods or is it just something people like to talk about?

I'm not quite sure what you're asking. Like you say, physics has a long history of inspiring ML optimization techniques (e.g., momentum/acceleration and simulated annealing). Has this particular line of investigation inspired new optimization techniques? I don't think so. It seems like the current approaches work quite well, and the bigger question is: can we extend this line of investigation to the optimization techniques we're currently using?

Comment by Jesse Hoogland (jhoogland) on Neural networks generalize because of this one weird trick · 2023-01-18T16:05:47.381Z · LW · GW

What's going on here? Are you claiming that you get better generalization if you have a large complexity gap between the local singularities you start out with and the local singularities you end up with? 

The claim behind figure 7.6 in Watanabe is more conjectural than much of the rest of the book, but the basic point is that adding new samples changes the geometry of your loss landscape. ( is different for each .) As you add more samples the free-energy-minimizing tradeoff starts favoring a more accurate fit and a smaller singularity. This would lead to progressively more complex functions (which seems to match up to observations for SGD).
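The tradeoff in question is visible in the leading terms of the free energy (a sketch, with λ the RLCT of the relevant singularity):

```latex
F_n \;\approx\; n L_n(w_0) \;+\; \lambda \log n ,
```

so for small n the λ log n term dominates and highly degenerate (simple) solutions win, while for large n the n L_n(w_0) term dominates and more accurate, less degenerate solutions take over.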

But smoothness is nice. 

Smoothness is nice, but hey we use swishes anyway.

I thought the original scaling laws paper was based on techniques from statistical mechanics? Anyway, that does sound exciting. Do you know if anyone has a plausible model for the Chinchilla scaling laws? Also, I'd like to see if anyone has tried predicting scaling laws for systems with active learning. 

The scaling analysis borrows from the empirical side. In terms of predicting the actual coefficients behind these curves, we're still in the dark. Well, mostly. (There are some ideas.) 

I may have given the sense that this scaling-laws program is farther along than it actually is. As far as I know, we're not there yet with Chinchilla, active learning, etc.

Comment by Jesse Hoogland (jhoogland) on Humans can be assigned any values whatsoever… · 2022-09-15T20:49:06.449Z · LW · GW

There are three natural reward functions that are plausible:

  • , which is linear in the number of times  is pressed.
  • , which is linear in the number of times  is pressed.
  • , where  is the indicator function for  being pressed an even number of times,  being the indicator function for  being pressed an odd number of times.

 

Why are these reward functions "natural" or more plausible than  (some constant, independent of button presses),  (the total number of button presses), etc.?

Comment by Jesse Hoogland (jhoogland) on No, human brains are not (much) more efficient than computers · 2022-09-06T18:19:42.110Z · LW · GW

Let me take a slightly different example: echolocation.

Bats can detect differences in period as short as 10 nanoseconds. Neuronal spiking maxes out around 100 Hz. So the solution can't just be as simple as "throw more energy and compute at it". It's a question of "design clever circuitry that's as close as possible to theoretical limits on optimality". 

Similarly, the brain being very efficient increases the probability I assign to "it is doing something non-(practically-)isomorphic to feed-forward ANNs". Maybe it's hijacking recurrency in a way that scales far more effectively with parameter size than we can ever hope to create with transformers. 

But I notice I am confused and will continue to think on it. 

Comment by Jesse Hoogland (jhoogland) on No, human brains are not (much) more efficient than computers · 2022-09-06T18:03:39.346Z · LW · GW

I think it is pretty clear right now that 2022-AI is less sample efficient than humans. I think other forms of efficiency (e.g., power efficiency, efficiency of SGD vs. evolution) are less relevant to this.

To me this isn't clear. Yes, we're better one-shot learners, but I'd say the most likely explanation is that the human training set is larger and that much of that training set is hidden away in our evolutionary past.

It's one thing to estimate evolution FLOP (and as Nuño points out, even that is questionable). It strikes me as much more difficult (and even more dubious) to estimate the "number of samples" or "total training signal (bytes)" over one's lifetime / evolution.

Comment by Jesse Hoogland (jhoogland) on No, human brains are not (much) more efficient than computers · 2022-09-06T14:46:14.237Z · LW · GW

I think a better counterargument is that if a computer running a human-brain-like algorithm consumes a whopping 10,000× more power than does a human brain, who cares? The electricity costs would still be below my local minimum wage!

I agree (as counterargument to skepticism)! Right now though, "brains being much more efficient than computers" would update me towards "AGI is further away / more theoretical breakthroughs are needed". Would love to hear counterarguments to this model.

I argue here that a much better analogy is between training an ML model versus within-lifetime learning, i.e. multiply Joe Carlsmith’s FLOP/s estimates by roughly 1 billion seconds (≈31 years, or pick a different length of time as you wish) to get training FLOP. See the “Genome = ML code” analogy table in that post.

Great point. Copying from my response to Slider: "Cotra's lifetime anchor is  FLOPs (so 4-5 OOMs above gradient descent). That's still quite a chasm."

I didn’t check just now, but I vaguely recall that there’s several (maybe 3??)-orders-of-magnitude difference between FLOP/J of a supercomputer versus FLOP/J of a GPU.

This paper suggests 100 GFLOPs/W in 2020 (within an OOM of Fugaku). I don't know how much progress there's been in the last two years.

I think that’s a bad tradeoff. FLOP reads just fine. Clear communication is more important!! :)

Good point! I've updated the text.