Comments
Thanks for the post! This is fantastic stuff, and IMO should be required MI reading.
For anyone who knows more about this than I do: could SQ dimension be a good formal metric for grounding the concept of explicit vs. tacit representations? It seems to me that the reason you can't reduce such a system further by compressing it into a 'feature' is that it relies on aggregation by default, requiring a bird's-eye view of all the information in the network.
I mention this because I was revisiting some old readings on the inductive biases of NNs today and realised that one reason low-complexity functions can be arbitrarily hard for NNs to learn may be that they have high SQ dimension (the best example being binary parity).
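To make the parity point concrete, here's a toy sketch (my own, not from the post): over uniform inputs, the parity of any strict subset of bits is essentially uncorrelated with the full parity label, so a learner that only sees aggregate statistics of such simple features gets almost no signal, which is the intuition behind parity's high SQ dimension.

```python
# Toy illustration: correlate the n-bit parity label with every 1-bit and
# 2-bit parity feature. All correlations hover near zero, so statistical
# (aggregate) queries built from simple features barely distinguish parity
# from noise.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_bits, n_samples = 10, 100_000

X = rng.integers(0, 2, size=(n_samples, n_bits))   # uniform bit strings
y = X.sum(axis=1) % 2                               # full n-bit parity label

for k in (1, 2):
    corrs = []
    for subset in combinations(range(n_bits), k):
        feat = X[:, subset].sum(axis=1) % 2          # parity of a strict subset
        corrs.append(abs(np.corrcoef(feat, y)[0, 1]))
    print(f"max |corr| with any {k}-bit parity feature: {max(corrs):.4f}")
```

The vanishing correlations are exactly what makes parity hard for SQ-style learners, and plausibly for gradient-based training too, even though the function itself is very low complexity.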
So, I'm mostly referring to trends in e-commerce here. For example, Amazon first put storefronts out of business by allowing drop-shipping of cheaply manufactured goods with no warranty. Now Temu is competing with Amazon by exploiting import-tax loopholes and selling the same items below production cost, many of which contain phthalates and other chemical compounds at several times the safe limits. This is a standard monopolisation trick pulled by large players: once they have a stable user base, they jack prices back up and start making a profit. Uber did this.
The drop in clothing quality is real, though, because fast fashion didn't really exist until the 2000s. Ten years is not far enough back: you need to go about 25-30 years. If I want high-quality, fair-trade clothing, I now have to use specialist websites like Good on You, which compile databases of very niche companies, and pay upwards of $100 for a single item. I can't get something I expect to last by walking into a department store.
Enshittification also exists in apps and services that were established via monopolisation or by acquiring an existing user base.
I do worry we are already seeing this. To use the word exactly, the 'enshittification' of everything we can buy and every service we are provided is real. The best example of this is high-quality clothing, but pretty much everything you can buy on Amazon shows it too. It's important to be able to maintain quality independently of market dynamics, IMO, if only because some people value it (and consumers aren't really voting if there is no choice).
I think this seems to be a very accurate abstraction of what is happening. During sleep, the brain consolidates (compresses and throws away) information. This would be equivalent to summarising the context window and discussion so far and adding it to a running 'knowledge graph'. I would be surprised if someone somewhere has not already tried this with LLMs: summarising the existing context and discussion, formalising it in an external knowledge graph, and letting the LLM do RAG over it during future inference.
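As a sketch of the loop I have in mind (not a pointer to any existing system; `summarise_with_llm` below is a hypothetical stand-in for a real model call, and retrieval is naive keyword overlap rather than proper embedding-based RAG):

```python
# "Sleep consolidation" loop: summarise the current context, fold the summary
# into a persistent store, and retrieve from that store on future turns.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    notes: list[str] = field(default_factory=list)

    def consolidate(self, context_window: list[str]) -> None:
        """'Sleep' step: compress the context window, keep only the summary."""
        self.notes.append(summarise_with_llm(context_window))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Wake step: pull the k stored summaries that best overlap the query."""
        q = set(query.lower().split())
        ranked = sorted(self.notes, key=lambda n: -len(q & set(n.lower().split())))
        return ranked[:k]

def summarise_with_llm(turns: list[str]) -> str:
    # Placeholder: in a real system this would be an LLM call that extracts
    # entities/relations (a knowledge-graph edge list) rather than a crude join.
    return " | ".join(turns)[:500]

# Usage: consolidate after each session, then prepend retrieved notes to the prompt.
memory = MemoryStore()
memory.consolidate(["User prefers concise answers", "Project deadline is Friday"])
print(memory.retrieve("what is the project deadline?"))
```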
That said, I do think LLM hallucinations and brain hallucinations arise via separate mechanisms. In particular, there is evidence that human hallucinations (sensory processing errors) arise from a failure of the brain's top-down inference (the Bayesian 'what I expect to see based on my priors'), with increased reliance on bottom-up processing instead (https://www.neuwritewest.org/blog/why-do-humans-hallucinate-on-little-sleep).
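To gesture at the mechanism, here's a toy precision-weighted Gaussian fusion of a top-down prior with a bottom-up observation (numbers are made up; this is just the textbook conjugate-Gaussian update, not a model from the linked post): when the prior's precision collapses, the 'percept' simply tracks the noisy sensory input.

```python
# Precision-weighted fusion of a Gaussian prior (top-down expectation) and a
# noisy Gaussian observation (bottom-up signal). Illustrative numbers only.

def posterior_mean(prior_mu, prior_prec, obs, obs_prec):
    """Standard conjugate-Gaussian update: precision-weighted average."""
    return (prior_prec * prior_mu + obs_prec * obs) / (prior_prec + obs_prec)

prior_mu, obs, obs_prec = 0.0, 5.0, 1.0   # expectation says 0, noisy input says 5

# Strong top-down prior pulls the percept back toward the expectation (~1.0):
print(posterior_mean(prior_mu, prior_prec=4.0, obs=obs, obs_prec=obs_prec))
# Weak top-down prior: percept is dominated by the bottom-up signal (~4.5):
print(posterior_mean(prior_mu, prior_prec=0.1, obs=obs, obs_prec=obs_prec))
```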