silentbob's Shortform

post by silentbob · 2024-06-25T10:30:10.166Z · LW · GW · 8 comments

8 comments

Comments sorted by top scores.

comment by silentbob · 2024-06-25T10:30:10.375Z · LW(p) · GW(p)

One crucial question in understanding and predicting the learning process, and ultimately the behavior, of modern neural networks is that of the shape of their loss landscapes. What does this extremely high dimensional landscape look like? Does training generally tend to find minima? Do minima even exist? Is it predictable what type of minima (or regions of lower loss) are found during training? What role does initial randomization play? Are there specific types of basins in the landscape that are qualitatively different from others, that we might care about for safety reasons?

First, let’s just briefly think about very high dimensional spaces. One somewhat obvious observation is that they are absolutely vast. With each added dimension, the volume of the available space increases exponentially. Intuitively we tend to think of 3-dimensional spaces, and often apply this visual/spatial intuition to our understanding of loss landscapes. But this can be extremely misleading. Parameter spaces are vast to a degree that our brains can hardly fathom. Take GPT-3 for instance. It has 175 billion parameters, or dimensions. Let’s assume somewhat arbitrarily that all parameters end up in the range [-0.5, 0.5], i.e. live in a 175-billion-dimensional unit cube around the origin of that space (since this is not actually the case, the real parameter space is even much, much larger, but bear with me). Every single axis only varies by 1 – let’s, just for the sake of it, interpret this as “1 meter” – yet the diagonal from one corner of this high-dimensional cube to the opposite one has a length of ~420 km. So if, hypothetically, you were sitting in the middle of this high-dimensional unit cube, you could easily touch every single wall with your hand. But nonetheless, all the corners would be more than 200 km away from you.
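
As a quick sanity check of that arithmetic, here is a minimal sketch (the only input is GPT-3's parameter count; everything else follows from the geometry of a cube with side length 1):

    // The diagonal of an n-dimensional cube with side length 1 ("1 meter")
    // has length sqrt(n) meters.
    const n = 175e9;                      // GPT-3's parameter count
    const diagonalMeters = Math.sqrt(n);  // ≈ 418,330 m
    console.log(diagonalMeters / 1000);   // ≈ 418 km from corner to corner
    console.log(diagonalMeters / 2000);   // ≈ 209 km from the center to a corner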

This may be mind boggling, but is it relevant? I think it is. Take this realization for instance: if you have two minima in this high dimensional space, but one is just a tiny bit “flatter” than the other (meaning its second derivatives are overall a bit closer to 0), then the attractor basin of this flatter minimum is vastly larger than that of the other minimum. This is because the flatness implies a larger radius, and the volume scales with that radius raised to the power of the number of dimensions. So, at 175 billion dimensions, even a microscopically larger radius means an overwhelmingly larger volume. If, for instance, one minimum’s attractor basin has a radius that is just 0.00000001% larger than that of the other minimum, then its volume will be roughly 40 million times larger (if my Javascript code to calculate this is accurate enough, that is). And this is only for GPT-3, which is almost 4 years old by now.
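
For the curious, a sketch of that calculation (the number is just (1 + 1e-10) raised to the power of the dimension count, where 1e-10 is the 0.00000001% radius difference):

    // The volume of an n-dimensional ball scales as r^n, so the ratio of two
    // basin volumes is (r2/r1)^n. Computed in log space to be safe:
    const dims = 175e9;                  // GPT-3's parameter count
    const radiusIncrease = 1e-10;        // a 0.00000001% larger radius
    const volumeRatio = Math.exp(dims * Math.log1p(radiusIncrease));
    console.log(volumeRatio);            // ≈ 3.98e7, i.e. roughly 40 million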

The parameter space is just ridiculously large, so it becomes really crucial how the search process through it works and where it lands. It may be that somewhere in this vast space, there are indeed attractor basins that correspond to minima that we find extremely undesirable – certain capable optimizers [LW · GW] perhaps, that have situational awareness [? · GW] and deceptive tendencies [? · GW]. If they do exist, what could we possibly tell about them? Maybe these minima have huge attractor basins that are reliably found eventually (maybe once we switch to a different network architecture, or find some adjustment to gradient descent, or reach a certain model size, or whatever), which would of course be bad news. Or maybe these attractor basins are so vanishingly small that we basically don’t have to care about them at all, because all the compute & search capacity of humanity over the next million years would have an almost 0 chance of ever stumbling onto these regions. Maybe they are even so small that they are numerically unstable, and even if your search process through some incredible cosmic coincidence happens to start right in such a basin, the first SGD step would immediately jump out of it due to the limitations of numerical accuracy on the hardware we’re using.

 

So, what can we actually tell at this point about the nature of high dimensional loss landscapes? While reading up on this topic, one thing that constantly came up is that the more dimensions you have, the lower the number of minima becomes relative to saddle points. This means that whenever the training process appears to slow down and it looks like it has found some local minimum, it is overwhelmingly likely that what it actually found is a saddle point, hence the training process never halts but keeps moving through parameter space, even if the loss doesn't change that much. Do local minima exist at all? I guess it depends on the function the neural network is learning to approximate. Maybe some loss landscapes exist where the loss can just get asymptotically closer to some minimum (such as 0), without ever reaching it. And probably other loss landscapes exist where you actually have a global minimum, as well as several local ones.
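
One crude way to build intuition for the minima-vs-saddle-points claim (a toy model, not a result from the literature I reviewed): treat the Hessian at a random critical point as a random symmetric matrix and check how often all of its eigenvalues are positive, i.e. how often the critical point would be a minimum rather than a saddle. The fraction collapses rapidly as the dimension grows:

    // Toy model: sample random symmetric matrices as stand-ins for Hessians at
    // critical points, and check how often they are positive definite.
    // Positive definiteness is tested via Cholesky decomposition (it succeeds
    // with positive pivots exactly when all eigenvalues are positive).
    function isPositiveDefinite(A) {
      const n = A.length;
      const L = Array.from({ length: n }, () => new Array(n).fill(0));
      for (let i = 0; i < n; i++) {
        for (let j = 0; j <= i; j++) {
          let sum = A[i][j];
          for (let k = 0; k < j; k++) sum -= L[i][k] * L[j][k];
          if (i === j) {
            if (sum <= 0) return false; // non-positive pivot => not positive definite
            L[i][i] = Math.sqrt(sum);
          } else {
            L[i][j] = sum / L[j][j];
          }
        }
      }
      return true;
    }

    // Random symmetric matrix with (roughly) standard normal entries.
    function randomSymmetric(n) {
      const gauss = () =>
        Math.sqrt(-2 * Math.log(Math.random())) * Math.cos(2 * Math.PI * Math.random());
      const A = Array.from({ length: n }, () => new Array(n).fill(0));
      for (let i = 0; i < n; i++)
        for (let j = 0; j <= i; j++) A[i][j] = A[j][i] = gauss();
      return A;
    }

    for (const dim of [1, 2, 4, 8]) {
      const trials = 20000;
      let minima = 0;
      for (let t = 0; t < trials; t++) {
        if (isPositiveDefinite(randomSymmetric(dim))) minima++;
      }
      console.log(`dim ${dim}: fraction of "minima" ≈ ${minima / trials}`);
    }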

Some people argue that you probably have no minima at all, because with each added dimension it becomes less and less likely that a given point is a minimum (for a point to be a minimum, not only does the gradient there have to be 0, but the Hessian also needs to be positive definite, i.e. all of its eigenvalues have to be positive). This sounds compelling, but given that the space itself also grows exponentially with each dimension, we also have overwhelmingly more points to choose from. If you look at n-dimensional Perlin noise, for example, its absolute number of local minima within an n-dimensional cube of constant side length actually increases with each added dimension. However, the number of local minima relative to the available space still decreases, so it becomes harder and harder to find them.
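
Here is a rough sketch of that kind of experiment. It uses i.i.d. uniform noise on a grid rather than actual Perlin noise (an assumption on my part to keep the code short; the exact counts differ, but the qualitative trend is the same): the fraction of grid points that are local minima shrinks with dimension, while the absolute count over a cube of fixed side length still grows.

    // Count grid points that are lower than all of their axis-neighbors, in an
    // n-dimensional grid of side length `side`, filled with i.i.d. random noise.
    function countLocalMinima(n, side) {
      const total = side ** n;
      const values = Float64Array.from({ length: total }, () => Math.random());

      let minima = 0;
      for (let i = 0; i < total; i++) {
        let isMin = true;
        for (let d = 0; d < n && isMin; d++) {
          const coord = Math.floor(i / side ** d) % side; // coordinate along dimension d
          for (const step of [-1, 1]) {
            if (coord + step < 0 || coord + step >= side) continue; // cube boundary
            if (values[i + step * side ** d] <= values[i]) { isMin = false; break; }
          }
        }
        if (isMin) minima++;
      }
      return { minima, fraction: minima / total };
    }

    for (const n of [1, 2, 3, 4, 5, 6]) {
      console.log(n, countLocalMinima(n, 10)); // absolute count grows, fraction shrinks
    }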

 

I’ll leave it at that. This is already not much of a "quick" take. Basically, more research is needed: my literature review on this subject yielded way more questions than answers, and many of the claims people made in their blog posts, articles and sometimes even papers seemed to be based more on intuition and common sense, or generalized from research that may not be that easy to validly generalize from.

One thing I’m sure about, however, is that almost any explanation of how (stochastic) gradient descent works that uses 3D landscapes for intuitive visualizations is misleading in many ways. Maybe it is the best we have, but imho all such explainers should come with huge asterisks, explaining that the rules in very high dimensional spaces may look very different from our naive “oh look at that nice valley over there, let’s walk down to its minimum!” understanding, which happens to work well in three dimensions.

Replies from: jhoogland, avturchin, joel-burget
comment by Jesse Hoogland (jhoogland) · 2024-06-25T14:53:59.732Z · LW(p) · GW(p)

I'd like to point out that for neural networks, isolated critical points (whether minima, maxima, or saddle points) basically do not exist. Instead, it's valleys and ridges all the way down. So the word "basin" (which suggests the geometry is parabolic) is misleading. [LW · GW]

Because critical points are non-isolated, there are more important kinds of "flatness" than having small second derivatives. Neural networks have degenerate loss landscapes: their Hessians have zero-valued eigenvalues [? · GW], which means there are directions you can walk along that don't change the loss (or that change the loss by a cubic or higher power rather than a quadratic power). The dominant contribution to how volume scales in the loss landscape comes from the behavior of the loss in those degenerate directions. This is much more significant than the behavior of the quadratic directions. The amount of degeneracy is quantified by singular learning theory's local learning coefficient (LLC) [LW · GW]. 
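
A minimal toy illustration of such degenerate directions (a hand-picked two-parameter loss, nothing specific to real networks): take L(w1, w2) = (w1 · w2)², whose global minima form two crossing lines rather than an isolated point.

    // Toy degenerate loss: every point with w1 = 0 or w2 = 0 is a global
    // minimum, so the set of minima is a pair of crossing lines (a valley).
    const loss = (w1, w2) => (w1 * w2) ** 2;

    // Numerical Hessian via central differences.
    function hessian(f, w1, w2, h = 1e-4) {
      const f11 = (f(w1 + h, w2) - 2 * f(w1, w2) + f(w1 - h, w2)) / (h * h);
      const f22 = (f(w1, w2 + h) - 2 * f(w1, w2) + f(w1, w2 - h)) / (h * h);
      const f12 = (f(w1 + h, w2 + h) - f(w1 + h, w2 - h)
                 - f(w1 - h, w2 + h) + f(w1 - h, w2 - h)) / (4 * h * h);
      return [[f11, f12], [f12, f22]];
    }

    console.log(hessian(loss, 1, 0)); // ≈ [[0, 0], [0, 2]]: one zero eigenvalue,
                                      // i.e. a flat direction along the valley
    console.log(hessian(loss, 0, 0)); // ≈ [[0, 0], [0, 0]]: fully degenerate; the
                                      // loss grows only at quartic order here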

In the Bayesian setting, the relationship between geometric degeneracy and inductive biases is well understood through Watanabe's free energy formula [? · GW]. There's an inductive bias towards more degenerate parts of parameter space that's especially strong earlier in the learning process.
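
Roughly (a sketch that suppresses lower-order terms), the free energy of a region of parameter space around a parameter w* scales like

    F_n ≈ n L_n(w*) + λ(w*) log n

where n is the number of samples, L_n is the empirical loss, and λ is the local learning coefficient. More degenerate regions have a smaller λ and therefore a lower free energy, and the λ log n term matters relatively more when n is still small, which is where the especially strong early bias towards degeneracy comes from.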

comment by avturchin · 2024-06-26T16:21:27.740Z · LW(p) · GW(p)

I heard that there are no local minima in high-dimensional spaces, because there will almost always be paths to the global minimum.

comment by Joel Burget (joel-burget) · 2024-06-25T14:03:35.158Z · LW(p) · GW(p)

If, for instance, one minimum’s attractor basin has a radius that is just 0.00000001% larger than that of the other minimum, then its volume will be roughly 40 million times larger (if my Javascript code to calculate this is accurate enough, that is).

Could you share this code? I'd like to take a look.

Replies from: silentbob
comment by silentbob · 2024-06-25T15:31:05.776Z · LW(p) · GW(p)

Maybe I accidentally overpromised here :D the code is just a single expression, namely 1.0000000001 ** 175000000000, which, as WolframAlpha agrees, yields ~3.98e7.

comment by silentbob · 2025-04-13T12:10:40.716Z · LW(p) · GW(p)

For a long time, I used to wonder what causes people to consistently mispronounce certain words even when they are exposed to many people pronouncing them correctly (which mostly applies to people speaking in a non-native language, e.g. people from continental Europe speaking English).

Some examples that I’ve heard from different people around me over the years:

  • Saying “rectangel” instead of “rectangle”
  • Saying “pre-purr” (like prefer, but with a p) instead of “prepare”
  • Saying something like, uhh, “devil-oupaw” instead of “developer”
  • Saying “leech” instead of “league”
  • Saying “immu-table” instead of “immutable”
  • Saying "cyurrently" instead of "currently"

I did, of course, understand that if you only read a word, particularly in English where pronunciations are all over the place and often unpredictable, you may end up with a wrong assumption of how it's pronounced. This happened to me quite a lot[1]. But then, once I did hear someone pronounce it, I usually quickly learned my lesson and adopted the correct way of saying it. But still I've seen all these other people stick to their very unusual pronunciations anyway. What's up with that?[2] Naturally, it was always too awkward for me to ask them directly, so I never found out.

Recently, however, I got a rather uncomfortable insight into how this happens when a friend pointed out that I was pronouncing "dude" incorrectly, and have apparently done so for all my life, without anyone ever informing me about it, and without me noticing it.

So, as I learned now, "dude" is pronounced "dood" or "dewd". Whereas I used to say "dyood" (similar to duke). And while I found some evidence that dyood is not completely made up, it still seems to be very unusual, and something people notice when I say it.

Hence I now have the, or at least one, answer to my age-old question of how this happens. So, how did I never realize? Basically, I did realize that some people said "dood", and just took that as one of two possible ways of pronouncing that word. Kind of, like, the overly American way, or something a super chill surfer bro might say. Whenever people said "dood" (which, in my defense, didn't happen all that often in my presence[3]) I had this subtle internal reaction of wondering why they suddenly saw the need to switch to such a heavy accent for a single word.

I never quite realized that practically everyone said "dood" and I was the only "dyood" person.

So, yeah, I guess it was a bit of a trapped prior and it took some well-directed evidence to lift me out of that valley. And maybe the same is the case for many of the other people out there who are consistently mispronouncing very particular words. 

But, admittedly, I still don't wanna be the one to point it out to them.

And when I lie awake at night, I wonder which other words I may be mispronouncing with nobody daring to tell me about it.

  1. ^

    e.g., for some time I thought "biased" was pronounced "bee-ased". Or that "sesame" was pronounced "see-same". Whoops. And to this day I have a hard time remembering how "suite" is pronounced.

  2. ^

    Of course one part of the explanation is survivorship bias. I'm much less likely to witness the cases where someone quickly corrects their wrong pronunciation upon hearing it correctly. Maybe 95% of cases end up in this bucket that remains invisible to me. But still, I found the remaining 5% rather mysterious. 

  3. ^

    Maybe they were intimidated by my confident "dyood"s I threw left and right.

Replies from: Viliam
comment by Viliam · 2025-04-15T00:31:52.501Z · LW(p) · GW(p)

I use written English much more than spoken English, so I am probably wrong about the pronunciation of many words. I wonder if it would help to have software that would read each sentence I wrote immediately after I finished it (because that's when I still remember how I imagined it to sound).

EDIT: I put the previous paragraph in Google Translate, and luckily it was just as I imagined. But that probably only means that I am already familiar with frequent words, and may make lots of mistakes with rare ones.

comment by silentbob · 2024-11-23T10:40:25.362Z · LW(p) · GW(p)

For people who like guided meditations: there's a small YouTube channel providing a bunch of secular AI-generated guided meditations of various lengths and topics. More are to come, and the creator (whom I know) is happy about suggestions. Three examples:

They are also available in podcast form here.

I wouldn't say these meditations are necessarily better or worse than any others, but they're free and provide some variety. Personally, I avoid apps like Waking Up and Headspace due to both their imho outrageous pricing models and their surprising degree of monotony. Insight Timer is a good alternative, but the quality varies a lot and I keep running into overly spiritual content there. Plus there are obviously thousands and thousands of guided meditations on YouTube, but there too it's hit and miss. So personally I'm happy about this extra source of a good-enough-for-me standard.

Also, in case you ever wanted to hear a guided meditation on any particular subject or in any particular style, I guess you can contact the YouTube channel directly, or tell me and I'll forward your request.