> Emotions represent swift, comprehensive assessments that enable an organism to promptly evaluate the importance of stimuli in relation to its survival and overall well-being.
It seems to me that the ability of emotions to evaluate/categorize stimuli is descriptively correct, but there is a large gap between that description and a prescription. Emotions surely do act as shortcut heuristics for dealing with an excess of detail; however, this can lead to as many mistakes as successes. Moreover, AI is already capable of producing statements and decisions despite the immense quantity of data at its disposal.
Said another way, if "emotion" can be reduced to "decision-making before sufficient detail is understood", well, it seems like we're already there.
Perhaps the broader point is that there need not be any "actual" emotion for us to believe there is--there doesn't have to be a "ghost in the machine".
An AI also does not need emotional capacity to operate ethically, as ethics can be (arbitrarily) programmed in. Or there could theoretically be a 50/50 dice roll between utilitarianism and deontology to give the impression of "compassionate moral reasoning"--all the AI would need to do is give an ex post facto explanation for its decision, along with adding something like "I regret the losses that were had along the way" (a toy sketch of this follows).
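To make that concrete, here is a minimal, purely illustrative Python sketch of the dice-roll idea. Everything in it is hypothetical--the two frameworks are caricatures, and the welfare/duty fields are stipulated inputs--the point is only that the output can read as compassionate moral reasoning with no emotion behind it:

```python
import random

def moral_decision(options):
    """Toy sketch: pick an ethical framework at random, choose an option
    under that framework, then narrate the choice after the fact."""
    framework = random.choice(["utilitarianism", "deontology"])  # the 50/50 dice roll

    if framework == "utilitarianism":
        # Caricature: maximize a stipulated welfare score.
        choice = max(options, key=lambda o: o["welfare"])
        reason = "it produced the greatest good for the greatest number"
    else:
        # Caricature: prefer any option that violates no stipulated duty.
        permissible = [o for o in options if not o["violates_duty"]]
        choice = permissible[0] if permissible else options[0]
        reason = "it best respected the relevant duties"

    # The ex post facto explanation, plus a performance of regret.
    return (f"I chose to {choice['name']} because {reason}. "
            "I regret the losses that were had along the way.")

options = [
    {"name": "divert the trolley", "welfare": 5, "violates_duty": True},
    {"name": "do nothing", "welfare": 1, "violates_duty": False},
]
print(moral_decision(options))
```

Nothing in that sketch feels anything, yet its explanation is indistinguishable in form from a "compassionate" one.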
All of this is to say: I don't believe emotions are necessary for a fully functioning artificial intelligence--such "emotional" capacities would merely affect our interactions with, and feelings towards, it.
Lastly, your example of the prompt you gave to 4o is illustrative but imperfect. Within the prompt, it is not clear that what is requested is a more emotionally intelligent result--a more specific prompt, or a more expansive memory, could tease out the type of answer you want. That being said, the requirement to make a more specific prompt at all is indeed a difficulty, as it feels like, in day-to-day conversation with others, there is less need to be so specific. Then again, that difference may be an illusion. After all, a day-to-day conversation carries much more data, given that it is face to face, with someone you know, etc. As such, the specificity of a prompt compensates for the lack of all that data.
An interesting thought experiment: imagine if we had no facial expressions and our tones of voice didn't change. If this were the case, our language would have to do most of the work, and therefore we would have a much more complex language referring to attitudes, emotions, expectations, etc.--all things that our normal gesticulations already take care of.
I agree that the truth/verification distinction holds and didn't mean to imply the contrary.
Also, treating truth as a vanishing point or limit of justification re-invokes the false belief/knowledge (truth) distinction that I began with.
The main claim is that the external world provides no such limit. Justification for belief does not (and cannot) rely on coherence with the external world because we are only ever inside the "world" of the internal--that is, we are stuck inside our beliefs and perceptions and never get beyond them into the external world. As such, we have no way to somehow compare our justified beliefs with the external world precisely because we have no access to it--we only have a belief that coheres with another belief already held.
A quick and dirty way to illustrate this point:
- A: We can't know that the stone that's there truly is there in reality.
- B: But if I throw it at you and you feel pain, doesn't that prove it's real?
- A: The pain is just another perception--still within my mind--not proof of the stone's external existence.
The external world outside of our perceptions cannot be touched by any internal mechanism (belief, perception, justification, verification, etc.) precisely because it is outside. As such, we have no conceivable access to it. Whatever limits there are, they are not found in the external world but only ever in our internal world--after all, how can you find (refer to, perceive, believe in, etc.) anything that is by definition incapable of being found? We can't, nor could we ever. Therefore, the commonly used distinction between truth/knowledge (and internal/external) is unnecessary at best and illusory at worst.
The semantic distinction between uses of the words "belief" and "knowledge" certainly occurs; however, because it relies on something (even partly) external, it ceases to be a useful distinction.
Moreover, for the holder of the belief, justification does reduce to the belief of being justified. After all, one cannot simultaneously hold "I think X-belief is justified" and "I don't believe X-belief". One believes in things that are justified (for oneself), and the beliefs one believes to be justified are the ones one believes to be "true".
The orthodox use of "knowledge" implies access to "the world beyond" the observer. If no such access exists (as I claim), then what is considered "knowledge" is actually (merely) belief.
Alice and Bob are both justified through their perception of the cup of coffee, and Alice's perception is "knowledge" because it coheres with reality. Understood. However, both believe that they're correct because of their justification. Alice may try to touch the cup and have the tactile perception that it is there, and she may feel even more justified in her belief--but all she is doing is confirming her belief in her visual perception with her belief in her tactile perception. If Bob were to have an equally convincing hallucination of visual and tactile perception of the cup of coffee, he would be equally justified in believing it was there, given that Alice and Bob both believe their perceptions are trustworthy.
All this is to say that verification of beliefs can only occur within already-held beliefs. One's successful attempts to verify a belief via other beliefs may lead one to consider one's beliefs knowledge and truth, but, given the nature of that conclusion, the distinction is illusory/pragmatically useless for the perceiver.
Knowledge implies a way to verify beliefs outside of beliefs. Given the arguments laid out above, there is no such way.
If the distinction between belief/knowledge is collapsed, then "truth" is merely justified belief. That is, "truth" is just the beliefs we believe to be justified; however, that justification says nothing about the world "out there", nor does it imply that we have access to that world.
In other words, collapsing the distinction implies that coherence between beliefs is what determines our naming of certain statements as "true", not coherence with truth/reality.
You're right--definitions are hard, but here we go:
Belief: A mental state that an individual holds to be true, typically in the form "X is Y."
Knowledge: When a belief accurately corresponds to objective reality. In other words, a belief that is correct.
> It’s important to realize that the ideal value model depends on the situation you are in. A common mistake in both tech and life is to use a value model that made sense in a different product or society, but doesn’t make sense in this one. In essence, you have a value model that did well at solving the problems you used to have, but not at solving the problems you have right now.
Tensions: It seems unintuitively true that everything can be described as good or bad ("I'm not impulsive, I'm spontaneous!"). However, if we favor vectors over rigid categories of good/bad, we fall into the same problem. I believe the issue isn't that we should change our value models to solve contemporary problems; it's that there is an excess of solutions, each of which would give us something we want. As such, there doesn't seem to be any theory to help us prioritize one desire (or moral principle) over another.
Capitalism: A virtue (vice?) of capitalism is that it allows for the satisfaction of incompatible desires within a society, thus (theoretically) allowing for the maximum growth possible in that society. Neat!