Posts

Paper Summary: The Koha Code - A Biological Theory of Memory 2023-12-30T22:37:13.865Z
Kolmogorov Complexity Lays Bare the Soul 2023-12-01T18:29:57.379Z
You Are a Computer, and No, That’s Not a Metaphor 2023-06-11T05:38:28.332Z

Comments

Comment by jakej (jake-jenks) on Kolmogorov Complexity Lays Bare the Soul · 2023-12-04T00:09:10.634Z · LW · GW

Even leaving issues of quantum physics aside, macroscopic physical objects like humans are unlikely to be very compressible (information-wise, that is). The author might feel that the number of lead atoms in their 36 molar tooth is not part of their Kolmogorov string, but I would argue that it is certainly part of a complete description.

I don't know; just how compressible are we? I agree that the lead in my 36 molar is part of my description, but anomalies like these are always going to be the hardest part to compress, since noise is not compressible. So maybe a complete description would look more like "all of the usual teeth, with xyz lead anomalies".
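As a rough sanity check of the "noise is not compressible" point, here is a toy sketch (my own illustration) using zlib as a crude stand-in for an ideal compressor:

```python
# Structured, repetitive descriptions compress well; random "noise" (like the
# stray lead atoms) barely compresses at all. zlib is only a crude proxy for
# Kolmogorov compression, but the qualitative gap is the point.
import os
import zlib

structured = b"all of the usual teeth " * 1000  # highly regular description
noise = os.urandom(len(structured))             # incompressible anomalies

for label, data in [("structured", structured), ("noise", noise)]:
    compressed = zlib.compress(data, level=9)
    print(f"{label}: {len(data)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(data):.1%})")

# Typically the structured text shrinks to a few percent of its original size,
# while the random bytes stay at essentially 100%.
```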

In practice, that fine level of detail is not actually what I care about. Just as I happily listen to lossy-compressed music, I would be fine with being uploaded into a somewhat lossy representation of myself in which I don't have any lead atoms in my teeth.

The "noise" of lead atoms in your teeth are among the least important bits in your Kolmogorov string, and would be the first to be dropped if you decided to allow a lossy representation. This reminds me of overfitting actually. The first thing a model tries to learn are the actual useful bits, and then later on when you train too long it starts to memorize the random noise in the dataset.

Comment by jakej (jake-jenks) on Kolmogorov Complexity Lays Bare the Soul · 2023-12-04T00:00:26.993Z · LW · GW

I imagine there could be two different compression strategies that both happen to produce a result of the same length, but cannot be merged.

I think this is correct, but I see it as something like chirality: multiple symmetric versions of the same essential information. It probably also depends on the description language you use, so something might have several minimal versions in one language but only one in another.
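(For what it's worth, the usual invariance theorem says the choice of language only matters up to an additive constant: for any two universal description languages $L_1$ and $L_2$ there is a constant $c$, independent of the string $x$, such that $|K_{L_1}(x) - K_{L_2}(x)| \le c$. So which minimal descriptions exist, and whether there are several symmetric ones, can differ between languages, while the complexity itself only shifts by a bounded amount.)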

Comment by jakej (jake-jenks) on [Linkpost] Large Language Models Converge on Brain-Like Word Representations · 2023-06-12T17:07:38.394Z · LW · GW

To me, it really looks like brains and LLMs are both using embedding spaces to represent information. Embedding spaces ground symbols by automatically relating all concepts they contain, including the grammar for manipulating these concepts.
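A toy sketch of what I mean by "relating all concepts" (the vectors below are made up for illustration; real LLM or cortical embeddings are learned and much higher-dimensional):

```python
# In an embedding space, relatedness falls out of geometry: nearby vectors are
# related concepts, and consistent offsets encode relations between them.
import numpy as np

embeddings = {  # hypothetical 4-d vectors, not taken from any real model
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # relatively high: related concepts
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated concepts

# Relational structure as vector offsets: king - man is roughly queen - woman.
print(np.round(embeddings["king"] - embeddings["man"]
               - (embeddings["queen"] - embeddings["woman"]), 2))
```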