Comment by fuckailmao on The issue of meaning in large language models (LLMs) · 2024-03-08T01:17:18.120Z · LW · GW



@Bill Benzon 
Hi Bill Benzon! You are the first person I've come across who has reached the same conclusion I did: LLMs cannot semantically understand the meaning of words. I believe this is because semantic understanding of words and concepts is a form of qualia, and computers cannot feel qualia. We feel the semantic meaning of words as a sensation, as qualia, which requires a consciousness. Only a consciousness can feel qualia.
Meaning has three components:
1) a structural relationship component (how a word structurally relates to other words)
2) intention & adhesion (only a consciousness can have an intention and understand its adhesion to the real world)
3) a qualia component (the part you pointed to when you said meaning takes place 'in the minds' of the people).
Semantic understanding of the meaning of words and concepts is a form of qualia. It is felt in the minds of people as qualia.

An algorithm cannot feel qualia, and can only encode the first component: the structural relationships between words. This is why I think that an algorithm cannot converge on a genuine model of language; any model it derives would be purely structural and would be missing the essential qualia behind the meaning of the language and concepts it hopes to describe.
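To make concrete what I mean by "only structural": here is a toy sketch in Python (my own made-up three-sentence corpus, not how any particular model is actually trained) that encodes nothing except which words sit near which other words.

```python
from collections import Counter
from itertools import combinations

# Count which words appear in the same sentence as which other words.
# The resulting numbers relate words to each other and to nothing else:
# no feeling, no referent, no experience.
corpus = [
    "i love my dog",
    "i love my cat",
    "my dog loves me",
]

cooccurrence = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        cooccurrence[frozenset((a, b))] += 1

print(cooccurrence[frozenset(("love", "my"))])   # 2
print(cooccurrence[frozenset(("love", "dog"))])  # 1
```

Every entry in that table is just a count; "love" is characterized purely by its neighbors.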

Everything that an LLM outputs is meaningless: a string of symbols strung together in the most probabilistically likely way. It feels nothing and understands nothing. "Hollow" or "dead" is a good way to describe its output. Everything it outputs is dead, hollow, empty of the essential qualia that ought to inhabit the concept or word.
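To make concrete what "strung together in the most probabilistically likely way" means, here is a toy sketch of next-token sampling (the probabilities are invented for illustration and are not any real model's numbers):

```python
import random

# Hypothetical probabilities a model might assign to the token after "I".
next_token_probs = {
    "love": 0.42,
    "like": 0.31,
    "miss": 0.17,
    "photosynthesize": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one candidate token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

sentence = ["I"]
sentence.append(sample_next_token(next_token_probs))
print(" ".join(sentence))  # e.g. "I love", chosen by likelihood, not by feeling
```

Nothing in that procedure consults meaning; it only consults numbers.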

The idea that words such as “good” or “love” are vectors parameterized by arrays of rational numbers in an N-dimensional Euclidean space (which is how large language models represent them) is preposterous. As if I could look at the weights of the neural network, find the word “love”, and say “oh, I understand what love is now, it is this array of rational numbers 0.44, 0.223,…”. This is ridiculous. You cannot map the word “love” down from qualia space to a quantitative representation and expect the result to be isomorphic, or even homomorphic, to the real thing. As soon as you map it down to numbers, you have lost information. That’s not love. I don’t know what it is; something ugly, foreign, and alien. There is no correspondence between the two. Love is not a numerical vector. It is not a datapoint. Love is qualia; I feel it, it is real, it is nontrivial.
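To be concrete about what the word becomes inside such a system, here is a rough sketch of looking up the array of numbers that stands in for “love” (this assumes the Hugging Face transformers library and GPT-2’s embedding table; it is only an illustration, and the exact numbers will vary):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

# Look up the row of the embedding table that the model uses for "love".
input_ids = tokenizer("love", return_tensors="pt")["input_ids"]
vector = model.get_input_embeddings()(input_ids)[0, 0]

print(vector.shape)  # torch.Size([768]): 768 floating-point numbers
print(vector[:5])    # the first five of the numbers standing in for "love"
```

That static array of floats is what the model starts from for the word; everything downstream is arithmetic on arrays like it.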

AI is disgusting. It is the ultimate form of nihilism, because it makes a mockery of consciousness, art, writing, language, and meaning. As if consciousness’s creative intellect could be reduced to nothing more than 1s and 0s. There is AI art, but art is supposed to be created by a human consciousness to convey that person’s experience. AI has no experience. AI is just a meaningless statistical structure, a meaningless statistical structure without a soul. When you view AI art you are looking at something hollow. There is no experience inside the art. No pain. No joy. No suffering. Nothing at all. No intention. There is no experience it is trying to convey. When you view AI art, you are looking at empty, hollow nothingness.

Everything that an AI outputs is just a dead echo of things that real people have previously said in its training data.

-Morph