Posts

Toward A Mathematical Framework for Computation in Superposition 2024-01-18T21:06:57.040Z
Grokking, memorization, and generalization — a discussion 2023-10-29T23:17:30.098Z
Investigating the learning coefficient of modular addition: hackathon project 2023-10-17T19:51:29.720Z
The Low-Hanging Fruit Prior and sloped valleys in the loss landscape 2023-08-23T21:12:58.599Z
Decomposing independent generalizations in neural networks via Hessian analysis 2023-08-14T17:04:40.071Z
Alternative mask materials 2020-03-27T01:22:11.435Z

Comments

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Legionnaire's Shortform · 2024-09-17T20:59:12.622Z · LW · GW

Kinda silly to do this with an idea you actually care about, especially if political (which would just increase the heat:light ratio in politics along the grain for Russian troll factories etc.). But carefully trying to make NN traps with some benign and silly misinformation -- e.g. "whales are fish" or something -- could be a great test to see if weird troll-generated examples on the internet can affect the behavior of trained models

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Does life actually locally *increase* entropy? · 2024-09-16T23:22:21.167Z · LW · GW

Maybe I'll add two addenda:

  1. It's easy to confuse entropy with free energy. Since energy is conserved, globally the two measure the same thing. But locally, the two decouple, and free energy is the more relevant parameter here. Living processes often need to use extra free energy to prevent the work they are interested in doing from getting converted into heat (e.g. when moving we're constantly fighting friction); in this way we're in some sense locally increasing free energy.

  2. I think a reasonable (though imperfect) analogy here is with potential energy. Systems tend to reduce their potential energy, and thus you can make a story that, in order to avoid just melting into a puddle on the ground, life needs to constantly fight the tendency of gravitational potential energy to be converted to kinetic energy (and ultimately heat). And indeed, when we walk upright, fly, build skyscrapers, use hydro power, we're slowing down or modifying the tendency of potential energy to become kinetic. But this is in no sense the fundamental or defining property of life, whether we're looking globally at all matter or locally at living beings. We sometimes burrow into the earth, flatten mountains, etc. While life both (a), can use potential energy of other stuff to power its engines and (b), needs to at least somewhat fight the tendency of gravitational kinetic energy to turn it into a puddle of matter without any internal structure, this is just one of many physical stories about life and isn't "the whole story".

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Does life actually locally *increase* entropy? · 2024-09-16T22:56:26.856Z · LW · GW

I think one shouldn't think of entropy as fundamentally preferred or fundamentally associated with a particular process. Note that it isn't even a well-defined parameter unless you posit some macrostate information and define entropy as a property of a system + the information we have about it.

In particular, life can either increase or decrease appropriate local measurements of entropy. We can burn the hydrocarbons or decay the uranium to increase entropy or we can locally decrease entropy by changing reflectivity properties of earth's atmosphere, etc.

The more fundamental statement, as jessicata explains, is that life uses engines. Engines are trying to locally produce energy that does work rather than just heat, i.e., that has lower entropy compared to what one would expect from a black body. This means that they have to use free energy, which corresponds to tapping into aspects of the surrounding environment where entropy has not yet been maximized (i.e., which are fundamentally thermodynamic rather than thermostatic), and they also have to generate work which is not just heat (i.e., they can't just locally maximize the entropy). Life on earth mostly does this by using the fact that solar radiation is much higher-frequency than the black-body radiation associated to temperatures on Earth, and thus contains free energy (that can be released by breaking it down).

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Singular learning theory: exercises · 2024-08-31T08:11:56.315Z · LW · GW

This is awesome!

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Don't Get Distracted by the Boilerplate · 2024-06-29T19:43:23.385Z · LW · GW

I also wouldn't give this result (if I'm understanding which result you mean) as an example where the assumptions are technicalities / inessential for the "spirit" of the result. Assuming monotonicity or commutativity (either one is sufficient) is crucial here, otherwise you could have some random (commutative) group with the same cardinality as the reals.

Generally, I think math is the wrong comparison here. To be fair, there are other examples of results in math where the assumptions are "inessential for the core idea", which I think is what you're gesturing at. But I think math is different in this dimension from other fields, where often you don't lose much by glossing over technicalities (in fact, the question of how much to fuss over technicalities like playing fast and loose with infinities or being careful about what kinds of functions are allowed in your fields is the main divider between math and theoretical physics).

In my experience in pure math, when you notice that the "boilerplate" assumptions on your result seem inessential, this is usually for one of the following reasons:

  1. In fact, a more general result is true and the proof works with fewer/weaker assumptions, but either for historical reasons or for reasons of some results used (lemmas, etc.) being harder in more generality, it's stated in this form
  2. The result is true in more generality, but proving the more general result is genuinely harder or requires a different technique, and this can sometimes lead to new and useful insights
  3. The result is false (or unknown) in more generality, and the "boilerplate" assumptions are actually essential, and understanding why will give more insight into the proof (despite things seeming inessential at first)
  4. The "boilerplate" assumptions the result uses are weaker than what the theorem is stated with, but it's messy to explain the "minimal" assumptions, and it's easier to compress the result by using a more restrictive but more standard class of objects (in this way a lot of results that are true for some messy class of functions are easier to remember and use for a more restrictive class: most results that use "Schwartz spaces" are of this form; often results that are true for distributions are stated for simplicity for functions, etc.).
  5. Some assumptions are needed for things to "work right," but are kind of "small": i.e., trivial to check or mostly just controlling for degenerate edge cases, and can be safely compressed away in your understanding of the proof if you know what you're doing (a standard example is checking for the identity in group laws: it's usually trivial to check if true, and the "meaty" part of the axiom is generally associativity; another example is assuming rings don't have 0 = 1, i.e., aren't the degenerate ring with one element).
  6. There's some dependence on logical technicalities, or what axioms you assume (especially relevant in physics- or CS/cryptography- adjacent areas, where different additional axioms like P != NP are used, and can have different flavors which interface with proofs in different ways, but often don't change the essentials).

I think you're mostly talking about 6 here, though I'm not sure (and not sure math is the best source of examples for this). I think there's a sort of "opposite" phenomenon also, where a result is true in one context but in fact generalizes well to other contexts. Often the way to generalize is standard, and thus understanding the "essential parts" of the proof in any one context is sufficient to then be able to recreate them in other contexts, with suitably modified constructions/axioms. For example, many results about sets generalize to topoi, many results about finite-dimensional vector spaces generalize to infinite-dimensional vector spaces, etc. This might also be related to what you're talking about. But generally, I think the way you conceptualize "essential vs. boilerplate" is genuinely different in math vs. theoretical physics/CS/etc.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Don't Get Distracted by the Boilerplate · 2024-06-29T19:04:56.135Z · LW · GW

Nitpick, but I don't think the theorem you mention is correct unless you mean something other than what I understand. For the statement I think you want to be true, the function also needs to be a group law, which requires associativity. (In fact, if it's monotonic on the reals, you don't need to enforce commutativity, since all continuous group laws on R are isomorphic.)

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Evidence of Learned Look-Ahead in a Chess-Playing Neural Network · 2024-06-05T07:25:12.057Z · LW · GW

This is very cool!

Comment by Dmitry Vaintrob (dmitry-vaintrob) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-11T19:23:40.203Z · LW · GW

Right - looking at the energy change of the exhaust explains the initial question in the post: why energy is preserved when a rocket accelerates, despite apparently expending the same amount of fuel for every unit of acceleration (assuming small fuel mass compared to rocket). Note that this doesn't depend on a gravity well - this question is well posed, and well answered (by looking at the rocket + exhaust system), in classical physics without gravity. The Oberth phenomenon is related but different, I think.
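The bookkeeping can be made concrete with a toy one-dimensional burn at three different rocket speeds (numbers are made up; the rocket mass is treated as constant since the fuel parcel is small):

```python
# Toy check: a rocket burning identical fuel parcels gains the same dv at any
# speed, yet the total (rocket + exhaust) kinetic-energy change equals the
# same fixed chemical energy each time -- the exhaust's energy change
# balances the books.
M = 1000.0   # rocket mass (kg), approximately constant since dm << M
dm = 1.0     # fuel parcel mass (kg)
u = 3000.0   # exhaust speed relative to the rocket (m/s)

totals = []
for v in (0.0, 1000.0, 5000.0):      # rocket speed before the burn (m/s)
    dv = dm * u / M                  # momentum conservation: M*dv = dm*u
    v_exhaust = v - u                # exhaust velocity in the lab frame
    dKE_rocket = 0.5 * M * ((v + dv) ** 2 - v ** 2)   # grows with v...
    dKE_exhaust = 0.5 * dm * (v_exhaust ** 2 - v ** 2)  # ...this shrinks
    totals.append(dKE_rocket + dKE_exhaust)

print(totals)  # the same total at every speed: energy is conserved
```

The rocket's kinetic-energy gain per parcel grows linearly with speed, but the exhaust's kinetic-energy change decreases by exactly the same amount, so the sum is always the fixed chemical energy dm·u²/2 (plus a tiny M·dv²/2 term).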

Comment by dmitry-vaintrob on [deleted post] 2023-12-26T12:50:04.938Z

Hi! As I commented on your other post: I think this is a question for https://mathoverflow.net/ or https://math.stackexchange.com/ . This question is too technical, and does not explain a connection to alignment. If you think this topic is relevant to alignment and would be interesting to technical people on LW, I would recommend making a non-technical post that explains how you think results in this particular area of analysis are related to alignment.

Comment by dmitry-vaintrob on [deleted post] 2023-12-26T12:46:00.410Z

Hi! I think this is a question for https://mathoverflow.net/ or https://math.stackexchange.com/ . While Lesswrong has become a forum for relatively technical alignment articles, this question is too math-heavy, and it has not been made clear how this is relevant to alignment. The forum would get too crowded if very technical math questions became a part of the standard content.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Mapping the semantic void: Strange goings-on in GPT embedding spaces · 2023-12-15T13:19:19.708Z · LW · GW

I think it's very cool to play with token embeddings in this way! Note that some of what you observe is, I think, a consequence of geometry in high dimensions and can be understood by just modeling token embeddings as random. I recommend generating a bunch of tokens as a Gaussian random variable in a high-dimensional space and playing around with their norms and their norms after taking a random offset.
Some things to keep in mind, that can be fun to check for some random vectors: 

- radii of distributions in high-dimensional space tend to cluster around some fixed value. For a multivariate Gaussian in n-dimensional space, it's because the square radius is a sum of squares of Gaussians (one for each coordinate). This is a random variable with mean O(n) and standard deviation O(√n). In your case, you're also taking a square root (norm vs. square norm) and normalization is different, but the general pattern of this variable becoming narrow around a particular band (with width about 1/√n compared to the radius) will hold.
- a random offset vector will not change the overall behavior (though it will change the radius). 
- Two random vectors in high-dimensional space will be nearly orthogonal. 
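These are easy to check directly by simulating "token embeddings" as random Gaussian vectors (a quick sketch; the dimension and token count here are arbitrary, not any model's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_tokens = 4096, 10_000

# "Token embeddings" as i.i.d. standard Gaussian vectors
emb = rng.standard_normal((n_tokens, n_dim))

# 1. Norms concentrate: relative spread is on the order of 1/sqrt(n_dim)
norms = np.linalg.norm(emb, axis=1)
print(norms.mean(), norms.std() / norms.mean())  # mean near sqrt(n_dim), tiny spread

# 2. A fixed random offset changes the radius but norms still concentrate
offset = 5 * rng.standard_normal(n_dim)
norms_off = np.linalg.norm(emb - offset, axis=1)
print(norms_off.mean(), norms_off.std() / norms_off.mean())

# 3. Two random vectors are nearly orthogonal: |cosine| ~ 1/sqrt(n_dim)
cos = emb[0] @ emb[1] / (norms[0] * norms[1])
print(abs(cos))
```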

On the other hand it's unexpected that the mean is so large (normally you would expect the mean of a bunch of random vectors to be much smaller than the vectors themselves). If this is not an artifact of the training, it may indicate that words learn to be biased in some direction (maybe a direction indicating something like "a concept exists here"). The behavior of tokens near the center-of-mass also seems really interesting. 

Comment by Dmitry Vaintrob (dmitry-vaintrob) on My Criticism of Singular Learning Theory · 2023-11-19T16:31:49.391Z · LW · GW

I think there is some misunderstanding of what SLT says here, and you are identifying two distinct notions of complexity as the same, when in fact they are not. In particular, you have a line  

"The generalisation bound that SLT proves is a kind of Bayesian sleight of hand, which says that the learning machine will have a good expected generalisation relative to the Bayesian prior that is implicit in the learning machine itself."

I think this is precisely what SLT is saying, and this is nontrivial! One can say that a photon will follow a locally fastest route through a medium, even if this is different from saying that it will always follow the "simplest" route. SLT arguments always work relative to a loss landscape, and interpreting their meaning should (ideally) be done relative to the loss landscape. The resulting predictions are, nevertheless, nontrivial, and are sometimes confirmed. For example, we have some work on this with Nina Rimsky.

You point at a different notion of complexity, associated to considering the parameter-function map. This also seems interesting, but is distinct from complexity phenomena in SLT (at least from the more basic concepts like the RLCT), and which is not considered in the basic SLT paradigm. Saying that this is another interesting avenue of study or a potentially useful measure of complexity is valid, but is a priori independent of criticism of SLT (and of course ideally, the two points of view could be combined). 

Note that loss landscape considerations are more important than parameter-function considerations in the context of learning. For example it's not clear in your example why f(x) = 0 is likely to be learned (unless you have weight regularization). Learning bias in a NN should most fundamentally be understood relative to the weights, not higher-order concepts like Kolmogorov complexity (though as you point out, there might be a relationship between the two). 

Also I wanted to point out that in some ways, your "actual solution" is very close to the definition of RLCT from SLT.  The definition of the RLCT is how much entropy you have to pay (in your language, the change in negative log probability of a random sample) to gain an exponential improvement of loss precision; i.e., "bits of specification per bit of loss". See e.g. this article.
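Spelled out (an informal statement of the standard SLT volume-scaling asymptotics; λ is the RLCT, m its multiplicity, and L the population loss with minimum at w_0):

```latex
V(\epsilon) \;=\; \operatorname{Vol}\bigl\{\, w : L(w) - L(w_0) < \epsilon \,\bigr\}
\;\sim\; c\,\epsilon^{\lambda}\,\bigl(\log(1/\epsilon)\bigr)^{m-1},
\qquad\text{so}\qquad
-\log V(\epsilon) \;\approx\; \lambda\,\log(1/\epsilon).
```

That is, each additional bit of loss precision costs about λ bits of specification (negative log probability of a random sample landing in the low-loss region), which is the "bits of specification per bit of loss" reading above.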

The thing is, the "complexity of f" (your K(f)) is not a very meaningful concept from the point of view of a neural net's learning (you can try to make sense of it by looking at something like the entropy of the weight-to-function mapping, but then it won't interact that much with learning dynamics). I think if you follow your intuitions carefully, you're likely to precisely end up arriving at something like the RLCT (or maybe a finite-order approximation of the RLCT, associated to the free energy). 

I have some criticisms of how SLT is understood and communicated, but I don't think that the ones you mention seem that important to me. In particular, my intuition is that for purposes of empirical measurement of SLT parameters, the large-sample limit of realistic networks is quite large enough to see approximate singularities in the learning landscape, and that the SGD-sampling distinction is much more important than many people realize (indeed, there is no way to explain why generalizable networks like modular addition still sometimes memorize without understanding that the two are very distinct). 

My main update in this field is that people should be more guided by empiricism and experiments, and less by competing paradigms of learning, which tend to be oversimplified and to fail to account for messy behaviors of even very simple toy networks. I've been pleasantly surprised by SLT making the same update in recent months.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Grokking, memorization, and generalization — a discussion · 2023-10-31T23:09:23.736Z · LW · GW

Interesting - what SLT prediction do you think is relevant here?

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Grokking, memorization, and generalization — a discussion · 2023-10-31T23:07:57.171Z · LW · GW

Noticed that I didn't answer Kaarel's question there in a satisfactory way. Yeah - "basin" here is meant very informally as a local piece of the loss landscape with lower loss than the rest of the landscape, and surrounding a subspace of weight space corresponding to a circuit being on. Nina and I actually call this a "valley" in our "low-hanging fruit" post.

By "smaller" vs. "larger" basins I roughly mean the same thing as the notion of "efficiency" that we later discuss

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Grokking, memorization, and generalization — a discussion · 2023-10-31T23:01:31.570Z · LW · GW

In particular, in most unregularized models we've seen that generalize (and I think also the ones in omnigrok), grokking happens early, usually before full memorization (so it's "grokking" in the redefinition I gave above).

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Investigating the learning coefficient of modular addition: hackathon project · 2023-10-17T21:30:53.560Z · LW · GW

Oh I can see how this could be confusing. We're sampling at every step in the orthogonal complement to the gradient at that step ("initialization" here refers to the beginning of sampling, i.e., we don't update the normal vector during sampling). And the reason to do this is that we're hoping to prevent the sampler from quickly leaving the unstable point and jumping into a lower-loss basin (by restricting we are guaranteeing that the unstable point is a critical point)
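A minimal sketch of what this restriction looks like, assuming a Langevin-style sampler on a generic loss (the function names and hyperparameters here are illustrative, not our actual code):

```python
import numpy as np

def project_out(step, normal):
    """Remove the component of `step` along the fixed normal direction."""
    n = normal / np.linalg.norm(normal)
    return step - (step @ n) * n

def restricted_sgld(w0, grad_loss, n_steps=1000, eps=1e-3, beta=1.0, seed=0):
    """Langevin-style sampling restricted to the hyperplane through w0
    orthogonal to grad_loss(w0). The normal is computed once at the start of
    sampling and never updated; on this hyperplane the restricted gradient
    at w0 vanishes, so w0 is a critical point of the restricted loss."""
    rng = np.random.default_rng(seed)
    normal = grad_loss(w0)          # fixed at initialization of sampling
    w = w0.astype(float).copy()
    for _ in range(n_steps):
        step = (-eps * beta * grad_loss(w)
                + np.sqrt(2 * eps) * rng.standard_normal(w.shape))
        w = w + project_out(step, normal)   # never leave the hyperplane
    return w
```

Because every update is projected onto the orthogonal complement, the sampler can't ride the gradient direction down into a neighboring lower-loss basin, which is the failure mode we were trying to prevent.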

Comment by Dmitry Vaintrob (dmitry-vaintrob) on The "best predictor is malicious optimiser" problem · 2020-07-29T19:54:26.756Z · LW · GW

Sorry, I misread this. I read your question as O outputting some function T that is most likely to answer some set of questions you want to know the answer to (which would be self-referential as these questions depend on the output of T). I think I understand your question now.

What kind of ability do you have to know the "true value" of your sequence B?

If the paperclip maximizer P is able to control the value of your Turing machine, and if you are a one-boxing AI (and this is known to P), then of course you can make deals/communicate with P. In particular, if the sequence B is generated by some known but slow program, you can try to set up an Arthur-Merlin zero-knowledge proof protocol in exchange for promising to make a few paperclips, which you can then use to keep P honest (after making the paperclips as promised).

To be clear though, this is a strategy for an agent A that somehow has as its goals only the desire to compute B together with some kind of commitment to following through on agreements. If A is genuinely aligned with humans, the rule "don't communicate/make deals with malicious superintelligent entities, at least until you have satisfactorily solved the AI in a box and similar underlying problems" should be a no-brainer.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on The "best predictor is malicious optimiser" problem · 2020-07-29T14:24:26.946Z · LW · GW

Looks like you're making a logical error. Creating a machine that solves the halting problem is prohibited by logic. For many applications, assuming a sufficiently powerful and logically consistent oracle is good enough, but precisely these kinds of games you are playing, where you ask a machine to predict its own output/the output of a system involving itself, are where you get logical inconsistencies. Indeed, imagine asking the oracle to simulate an equivalent version of itself and to output the opposite answer to what its simulation outputs. This may seem like a derived question, but most "interesting" self-referential questions boil down to an instance of this. I think once you fix the logical inconsistency, you're left with a problem equivalent to AI in a box: boxed AI P is stronger than friendly AI A but has an agenda.
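The inconsistency is the standard diagonal argument, which can be sketched in a few lines (here `halts` is the hypothetical oracle; the construction shows that no total oracle can exist, since it must be wrong about this program):

```python
def make_paradox(halts):
    """Diagonal argument: given a claimed halting oracle
    `halts(program, input) -> bool`, build a program the oracle must
    misjudge on its own behavior."""
    def paradox(x):
        if halts(paradox, x):   # oracle predicts "halts"...
            while True:         # ...so loop forever, refuting it
                pass
        return None             # oracle predicts "loops", so halt at once
    return paradox
```

Whatever `halts(paradox, x)` returns, `paradox` does the opposite, so the oracle is wrong on at least one input.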

Alternatively, if you're assuming A is itself unaligned (rather than friendly) and has the goal of getting the right answer at any cost, then it looks like you need some more assumptions on A's structure. For example, if A is sufficiently sophisticated and knows it has access to a much more powerful but untrustworthy oracle, it might know to implement a Merlin-Arthur protocol.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Research on repurposing filter products for masks? · 2020-04-03T21:10:11.536Z · LW · GW

Not sure but doubt it: IIRC, copper kills by catalysing intra-cellular reactions, which are slow (compared to salt, which should have near-instantaneous mechanism of action since it can blow up membranes). Also I would be worried about safety of breathing copper. But I might be wrong about this!

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Research on repurposing filter products for masks? · 2020-04-03T18:09:51.211Z · LW · GW

I've looked at a small amount of data on this question. I think it's a really important one (see a related question of mine), but am extremely not an expert. The most actionable item is this study suggesting that essentially "salting" a surgical mask might make it significantly more protective against flu viruses. The study's in vivo section with mice strikes me as a bit sketchy (small n, and unclear how representative of mask filtration their mouse procedure actually is), but their in vitro section seems legit, and the study is in Scientific Reports (part of the Nature publishing group). If you're making a DIY mask/filter and it's not too thick already, it can't hurt to include a salted layer. Their proposed mechanism of action is the salt killing the virus particles, not filtering them, so it should stack well with particulate filters. The recipe in the paper is to coat a hydrophobic filter in a solution of salt and surfactant (they used polysorbate 20, which is approved for use as a food additive), then let it dry.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on What will happen to supply chains in the era of COVID-19? · 2020-03-31T14:24:56.190Z · LW · GW

What makes you say England did not have looting during WW2? England had more cohesion. But that is just one factor impacting people's behavior. Someone who is desperate or immoral enough to loot in wartime is unlikely to be seriously swayed by the need for patriotic unity. Other factors, which I think are bigger, are severity of need and enforcement. Don't know about enforcement, but it is very hard for me to envision a scenario where meeting basic needs is harder than in WW2 Britain.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on What will happen to supply chains in the era of COVID-19? · 2020-03-31T04:19:26.066Z · LW · GW

I've done a little research about the food supply chain specifically. Presumably certain supply chains will be similar, certain ones will be different. Also note I am very much not an expert. The basic fact is that there is "enough food", but prices may rise and getting food may become harder. I think there are three key parameters, which could go either way:

(1) Hoarding/instability. Worst case scenario: people panic. People stockpile giant supplies of food. Food goes bad. People buy more food. Food gets prohibitively expensive. Best case scenario: supermarket situation stabilizes, panicky people feel like they have enough non-perishables stockpiled, most last-mile (grocery store) product shortages stop.

(2) Protectionism. This will be less dangerous in the US, which exports more food than it imports. But certain countries, especially poorer countries that rely significantly on imports, will suffer if a global panic causes protectionist policies about food (e.g. wheat exporter Kazakhstan apparently stopped exporting grain because of coronavirus fears, see this article). This is understandable, but probably bad. Here the best case according to this article is if big markets actively work to stabilize the market and punish protectionism (but the economics here is above my pay grade).

(3) Worker/driver issues. This mostly depends on "how freaked out blue-collar workers get". Currently most truck drivers, clerks, etc., are risking infection in exchange for a steady job. If things get bad (for example if there are wide-spread hospital bed shortages and fatality goes through the roof) *and younger people become afraid* (a big if), a big proportion of chain workers will take losing their job over getting infected. This would probably raise prices.

It's important to stress that it's *very unlikely* that anything catastrophic happens in developed countries like the US, and the worst-case scenario is government rationing. The example to keep in mind is WW2 Britain (I originally linked the wrong article here, which is also an interesting read). Nevertheless, with rationing people survived basically healthy for several years of war.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-18T19:10:19.720Z · LW · GW

A question I always have about these studies is at what level symptoms are defined and self-reported. E.g. presumably "you have an itchy throat or a mild headache in the morning/mildly increased fever over your baseline" is pre-symptomatic. Self-isolating with mild symptoms is probably hard to measure but can be at least socially enforced.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Reasons why coronavirus mortality of young adults may be underestimated. · 2020-03-16T14:44:01.968Z · LW · GW

DP Cruise didn't have any fatalities under age 70, so not sure where you're getting the under-29 number. Also, since the population is older, the case fatality was over-estimated. This study https://cmmid.github.io/topics/covid19/severity/diamond_cruise_cfr_estimates.html?fbclid=IwAR2jCOZcBGHYBWC_dqSzwvX7T7-DOpwm8L84qqW8k6QtKa05Inv35Pk3Ezs estimates adjusted CFR from DP cruise ship data (assuming treatment!) to be 0.5%, largely in agreement with other numbers I'd heard. Though the sample size is ridiculously small, so the error bounds are terrible.

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Coronavirus: Justified Practical Advice Thread · 2020-03-13T05:00:13.600Z · LW · GW

Advice: drink a mouthful of water every 15 minutes. This is speculative (facebook post from a friend of a friend). The rationale is that if you have virus particles in your mouth, rinsing them into your stomach (where the stomach acid kills them) will prevent them from getting into your respiratory system. [edit: retracted, seems to be downstream from a fake news article. Drinking water is still good, but looks like this pathway is not realistic]

Comment by Dmitry Vaintrob (dmitry-vaintrob) on Coronavirus: Justified Practical Advice Thread · 2020-03-13T02:14:10.907Z · LW · GW

Advice: now may be a good time to learn to meditate. Deaths from coronavirus are due mostly to breathing problems from pneumonia, which is the main explanation for why older people are more likely to die. There is evidence that meditation is good for pneumonia specifically http://www.annfammed.org/content/10/4/337.full and lowers oxygen consumption generally https://journals.sagepub.com/doi/full/10.1177/2156587213492770. I didn't read the studies carefully to see how trustworthy they are, but this conforms well with my understanding and limited experience of meditation. Meditation is also known to be good for mitigating stress, which will obviously be beneficial in the coming months.