Humans vs LLM, memes as theorems
post by Yaroslav Granowski (yaroslav-granowski) · 2025-05-09T13:26:34.043Z · LW · GW
It is important to remember that although languages and brains evolved alongside each other, they are separate systems.
The human brain has evolved to be a fast learner of whatever common language it is exposed to, and languages themselves have evolved to be as accessible as possible to new speakers.
One may say that a language summarizes the shared experience of its speakers. Let me play with an analogy to arithmetic:
There is an infinite set of numbers with several kinds of properties (negative or positive, rational or irrational) and several kinds of relations between them (addition, multiplication, etc.). To describe them all, you don't need an infinitely large database. You can describe the entire model with a handful of axioms and theorems. Furthermore, you can build on it, inventing new concepts and new theorems ad infinitum.
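A toy sketch of this point, not from the post: a few rules are enough to generate unboundedly many arithmetic facts, so no database of facts is needed — only the rules. The function names here are illustrative, not standard.

```python
def add(a, b):
    # Peano-style definition of addition from two rules:
    #   a + 0    = a
    #   a + S(b) = S(a + b)
    return a if b == 0 else add(a, b - 1) + 1

def theorems(limit):
    # Enumerate derived facts "a + b = c" up to a bound.
    # The rules above license infinitely many such facts;
    # `limit` only bounds what we print.
    return [f"{a} + {b} = {add(a, b)}"
            for a in range(limit) for b in range(limit)]

print(theorems(3))
```

Raising `limit` yields ever more "theorems" from the same two rules, which is the sense in which a compact model can stand in for an infinite collection.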
What if we thought about language as such a model, describing whatever experience its speakers have shared with each other? It is not as unambiguous as arithmetic, of course — more like a fuzzy logic with many redundancies. Perhaps memes play the role of theorems here.
From this point of view, an LLM is just a kind of reasoning system, generating new theorems and proofs. Reasoning systems are not new, and I don't expect LLMs to deliver AGI within a few years, as some fear or hope.
A big part of intelligence lies in priority management. A reasoning system can pursue multiple paths to great depth, and in domains like chess it surpasses the human brain. But real life is more complicated than chess, and goal setting is a big part of successful reasoning. Nature has debugged the human brain for this over millennia of reinforcement learning, yet it is still prone to mental illnesses.
On the other hand, there is an opinion that the evolution of human society is driven by the evolution of memes. When people act as hosts in a distributed computation network, they process memes alongside their own goals, and things stay more or less stable. But if LLMs become a breeding ground for memes, I think they may poison people with crazy but deeply thought-out ideas, like those some schizophrenics have.