Small addendum: the padding argument gives a lower bound on the multiplicity. From above, it is bounded by the Kraft-McMillan inequality.
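Explicitly (standard form of the inequality): any uniquely decodable code over an alphabet of size $r$ with codeword lengths $\ell_1, \dots, \ell_n$ satisfies

$$\sum_{i=1}^{n} r^{-\ell_i} \le 1,$$

which puts a hard cap on how many short codewords - and hence how many short descriptions - can coexist.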
Interesting! I think the problem is that dense/compressed information can be represented in ways in which it is not easily retrievable for a given decoder. The Standard Model written in Chinese is a very compressed representation of human knowledge about the universe, and completely inscrutable to me.
Or take some maximally compressed code and pass it through a permutation. The information content is obviously the same, but it is illegible until you reverse the permutation.
In some ways it is uniquely easy to do this to codes with maximal entropy, because by definition it is impossible to detect a pattern and recover a readable explanation.
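A minimal Python sketch of the permutation point (zlib and the fixed seed are arbitrary illustrative choices; the seed plays the role of the key):

```python
import random
import zlib

# Compress text, then scramble the compressed bytes with a keyed
# permutation: the information content is unchanged, but the result
# is illegible until the permutation is reversed.
text = b"the quick brown fox jumps over the lazy dog " * 20
compressed = zlib.compress(text)

# Keyed permutation of byte positions.
rng = random.Random(42)
perm = list(range(len(compressed)))
rng.shuffle(perm)
scrambled = bytes(compressed[p] for p in perm)

# Same length, same byte histogram, same entropy - but the decoder
# can no longer read it.
try:
    zlib.decompress(scrambled)
except zlib.error:
    print("illegible without the key")

# Invert the permutation and the original is recovered exactly.
inverse = [0] * len(perm)
for i, p in enumerate(perm):
    inverse[p] = i
restored = bytes(scrambled[inverse[i]] for i in range(len(scrambled)))
assert zlib.decompress(restored) == text
print("recovered after reversing the permutation")
```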
In some ways the compressibility of NNs is a proof that a simple model exists, without revealing an understandable explanation.
I think we can have an (almost) minimal yet readable model without the exponentially decreasing information density required by LDCs.
Good points! I think we underestimate the role that brute force plays in our brains, though.
Damn! Dark forest vibes, very cool stuff!
Reference for the sub collision: https://en.wikipedia.org/wiki/HMS_Vanguard_and_Le_Triomphant_submarine_collision
And here's another one!
https://en.wikipedia.org/wiki/Submarine_incident_off_Kildin_Island
Might as well start equipping them with fenders at this point.
And 2050 basically means post-AGI at this point. ;)
Great write-up, Alex!
I wonder how well the transparent battlefield translates to the naval setting.
1. Detection and communication through water are significantly harder than through air, working only over much shorter ranges.
2. Surveilling a volume scales worse than surveilling a surface (rough sketch below).
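A rough version of point 2 (purely schematic, my numbers): with sensors of effective range $r$ covering a region of linear size $L$, an area needs on the order of $N_{\text{area}} \sim (L/r)^2$ sensors while a volume needs $N_{\text{volume}} \sim (L/r)^3$ - and point 1 says $r$ itself is much smaller underwater, so the two effects compound.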
Am I missing something, or do you think drones will just scale anyway?
I don't know if that is a meaningful question.
Consider this: a cube is something that is symmetric under the octahedral group - that's what *makes* it a cube. If it weren't symmetric under these transformations, it wouldn't be a cube. So also with spacetime - it's something that transforms according to the Poincaré group (plus some other mathematical structure: a metric, etc.). That's what makes it spacetime.
I'll bet you! ;)
Sadly my claim is somewhat unfalsifiable, because the emergence might always be hiding at some smaller scale, but I would be surprised if we found the theory that the Standard Model emerges from and it contained classical spacetime.
I did a little search, and for what it's worth, Witten and Wheeler agree: https://www.quantamagazine.org/edward-witten-ponders-the-nature-of-reality-20171128/ (just search for 'emergent' in the article)
You're making an interesting connection to symmetry! But scale invariance as discussed here is actually emergent - it arises when theories reach fixed points under coarse-graining, rather than being a fundamental symmetry of space. This is why quantities like electric charge can change with scale, despite spacetime symmetries remaining intact.
And while spacetime symmetries still seem scale invariant, by the above argument they might also break down at small scales. In fact, it seems exceedingly unlikely that they would not: the initial parameters of the theory would have to be chosen just so for it to sit exactly at a fixed point. It seems much more likely that these symmetries emerged through RG flow rather than being fundamental.
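For concreteness on the running charge, the textbook one-loop (leading-log) QED result, with a single charged fermion of mass $m$:

$$\alpha(Q^2) \approx \frac{\alpha(m^2)}{1 - \frac{\alpha(m^2)}{3\pi}\,\ln\frac{Q^2}{m^2}} \qquad (Q^2 \gg m^2),$$

so the effective charge you measure genuinely grows as you probe shorter distances.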
The act of coarse-graining/scaling up (an RG transformation) changes the theory that describes the system, specifically the theory's parameters. If you consider the space of all theories and iterate the coarse-graining, this induces a flow in which each theory is mapped to its coarse-grained version. This flow may possess attractors, i.e. stable fixed points x*, meaning that when you apply the coarse-graining you get the same theory back.
And if f(x*)=x* then obviously f(f(x*))=x*, i.e. any repeated application will still yield the fixed point.
So you can scale up as much as you want - entering a fixed point really is a one-way street: you can check out any time you like, but you can never leave!
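To make this concrete, a toy sketch in Python (the 1D Ising model is my illustrative choice; its decimation step is one of the few RG transformations that is exact):

```python
import math

# Exact decimation RG step for the 1D Ising model: tracing out every
# other spin maps the coupling K to K' with tanh(K') = tanh(K)^2.
def coarse_grain(K: float) -> float:
    return math.atanh(math.tanh(K) ** 2)

K = 2.0  # start from a strongly coupled theory
for step in range(1, 11):
    K = coarse_grain(K)
    print(f"after {step} coarse-grainings: K = {K:.6f}")

# The flow is attracted to the stable fixed point K* = 0 (the trivial
# disordered theory). Once there, coarse_grain(0.0) == 0.0: applying
# the map again returns the same theory - you can never leave.
```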
As a corollary: maybe power laws for AI should not surprise us; they are simply the default outcome of scaling.
Scale invariance is itself an emergent phenomenon.
Imagine scaling something (say a physical law) up - if it changes, it is obviously not scale invariant, and it will keep changing with each further scale-up. If it does not change, it has reached a fixed point and will not change at the next scale-up either!
Scale invariances are just fixed points of coarse-graining.
Therefore, we should expect anything we think of as scale invariant to break down at small scales. For instance, electric charge is not scale invariant at small scales!
In the opposite direction: we should expect our physical laws to continue holding at the macro scale if they are fixed points of scaling. This also explains the ubiquity of power laws in the natural sciences: power laws are the only relations that are scale invariant and are thus preserved!
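A quick way to see the power-law claim: demand that rescaling $x$ changes $f$ only by an overall factor, $f(\lambda x) = g(\lambda)\,f(x)$ for all $\lambda > 0$. Differentiating at $\lambda = 1$ gives

$$x f'(x) = g'(1)\,f(x) \;\Longrightarrow\; f(x) = c\,x^{k}, \quad k = g'(1),$$

and indeed $f(\lambda x) = \lambda^{k} f(x)$: power laws are exactly the relations with no built-in scale.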
All of this may seem tautological but is actually truly strange. To me this indicates that we should expect to be very, very far from the actual substrate of the universe.
Now go forth and study renormalisation group flow! ;)
Epistemic status: Just riffing!