Comments
I don't know if I buy that valence is based on dopamine neurons, but I do believe valence is the delta between the current state and a possible future state, much like an action potential or potential energy. If one possible outcome could grant you the world, you will have a very high valence toward the actions needed to get it. Likewise, if your life is on the line, that is very high valence; that turns anger into rage. Unfortunately, my model also says that too many positive thoughts lead to a race condition between dopamine generation and thought analysis, which can lead to mania/psychosis. When you want things too much (desire) or too little (doubt/despair), the valences can get too high, and even the evaluation of innocuous things can lead you to form emotions or take actions out of line with the current evaluation. That is, valence does not go to zero easily, and the valence of now informs the valence of later. I believe it is more like a 1/x function: when you get to extremes of valence, the desire to act, or the desire to not act, gets really high and is hard to overcome.
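To make the shape of that concrete, here is a toy sketch in Python of how I picture it: valence as the gap between the value of the current state and a predicted future state, pushed through a 1/x-style curve so the drive to act (or to stay frozen) blows up toward the extremes. The function names, the limit, and the exact curve are illustrative assumptions, not a validated model.

```python
# Toy sketch only: valence = delta between current and possible future value,
# urgency grows hyperbolically (1/x-style) as valence approaches an extreme.

def valence(current_value: float, future_value: float) -> float:
    """Valence as the delta between where you are and where you could be."""
    return future_value - current_value

def drive_to_act(v: float, limit: float = 10.0, eps: float = 1e-3) -> float:
    """Map valence onto urgency: near the extremes (|v| -> limit) the urge to
    act (or to not act) blows up and is hard to overcome; near zero it fades
    slowly rather than snapping to zero."""
    v = max(-limit + eps, min(limit - eps, v))   # clamp to the open interval
    return v / (1.0 - abs(v) / limit)            # hyperbolic growth toward the limits

if __name__ == "__main__":
    print(drive_to_act(valence(2.0, 3.0)))   # mild gap -> mild urgency
    print(drive_to_act(valence(0.0, 9.9)))   # "grant you the world" -> huge urgency
```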
The brain has a model--an overarching one. At best it can be said to be the entire brain. Now, that model includes both hemispheres: redundancy for one, but also there are just too many things to do, hence the need for many clusters of neurons. It is still true for edge cases like you said--when the corpus callosum is severed, the model is still there. You've just severed the highest-level connection, a physical act that doesn't change the fact that the brain has a model it is working with.
Thanks. I take that as encouragement to hurry the f*** up.
Have you considered the fact that emotional evaluation comes at a high cost? It takes energy to evaluate the actual emotion as well as the valence. And it is all situational, of course, because to do emotional evaluation of a moment you need to take in beliefs/thoughts as well as sensory input. Your model doesn't point that out enough. The human brain grew from the brainstem/limbic system to the cortex AND the motor cortex. Our CNS is part of our brain, period. And it all works on valence; the actions you take are informed by valence.
So the brain has to take beliefs and current input and evaluate them. Now, how much energy do you think that evaluation takes? And the higher the valence, the higher the urgency of your actions/intents.
In the end, yes, the brain is an RL model. However, how is emotional evaluation conducted? What brings back the decision for action? You say it is a sum total of micro-valences, and it is. Each micro-valence is made up of binary decisions about self, other, and the topic. But what about the possible actions to take and the predicted benefit of each? That is for my paper.
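Roughly what I mean by that, as a toy sketch (the field names and the plus/minus-one weighting are illustrative assumptions on my part, not the finished model):

```python
# Minimal sketch of the "sum of micro-valences" reading above. Each
# micro-valence is a few binary judgements about self, other, and topic;
# the overall valence is their signed sum.

from dataclasses import dataclass

@dataclass
class MicroValence:
    self_positive: bool    # is this good for me?
    other_positive: bool   # is this good for the other party?
    topic_positive: bool   # is the thing itself judged good?

    def value(self) -> int:
        # Each binary judgement contributes +1 or -1 (illustrative weighting).
        return sum(+1 if b else -1
                   for b in (self.self_positive, self.other_positive, self.topic_positive))

def total_valence(micro: list[MicroValence]) -> int:
    """The decision signal is the sum of the micro-valences."""
    return sum(m.value() for m in micro)

# What is still missing, and what the paper would add: for each candidate
# action, a predicted benefit that re-weights this sum.
```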
So I will say that you have the gist of the valence model correct as I see it. And because you published it first, I will ensure that I incorporate what you've put together into my final model. I am working with a neuropsychologist on it and we plan to publish sometime this year. She is working on some experiments we can do to back up the paper's claims.
"I claim that valence plays an absolutely central role in the brain"
I believe you are right. I am working on a comprehensive theory that covers valence, emotional evaluation, and belief sets. I propose that it is fairly easy to predict emotional response when certain information is known: essentially, a binary tree of decision questions leads to various emotions, and the strength of the resulting emotions is based on a person's current valence toward action/inaction as well as the result of normal emotional evaluation. Valences are added to/subtracted from the final emotional evaluation. For example, something may produce happiness, but if your valence is very low (depression), you will discount it. Valence and emotional evaluation are feedback loops, resulting in greater and greater inhibition of action or greater and greater drive toward action. At the extremes we call these depression and mania.

I'm going to digest your valence series a bit more and will be publishing some of my thoughts soon. Although if you are interested, I would love to talk to you about them and possibly publish together. My knowledge of the mechanics of the brain is limited; I'm more of an algorithm/pattern person and only need enough detail to form my hypotheses. I don't live in details like most. I abstract very quickly, and that is where I play and think.
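As a toy illustration of the shape of this (the specific questions, the emotion labels, and the additive valence adjustment are placeholders I chose for the example, not the actual theory):

```python
# Toy version of the claim above: a short binary tree of questions picks the
# emotion, and the current valence scales or discounts its strength.

def evaluate_emotion(outcome_good: bool, caused_by_other: bool,
                     expected: bool, valence: float) -> tuple[str, float]:
    # Binary decision tree -> base emotion and base strength.
    if outcome_good:
        emotion, base = ("gratitude", 1.0) if caused_by_other else ("happiness", 1.0)
    else:
        if caused_by_other:
            emotion, base = ("anger", 1.0)
        else:
            emotion, base = ("disappointment", 1.0) if expected else ("sadness", 1.0)

    # Valence is added to / subtracted from the evaluation: a good outcome
    # lands flat when valence is very low (depression), and a bad one is
    # amplified when valence is already driving hard toward action.
    strength = max(0.0, base + valence)
    return emotion, strength

print(evaluate_emotion(True, False, True, valence=-0.8))   # happiness, heavily discounted
print(evaluate_emotion(False, True, False, valence=+2.0))  # anger pushed toward rage
```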
I have written on this recently on my Medium account, although I called it validation. A compliment is pointing out something nice about someone. And it is magic, just as you say.
Validation, to me, lets the person know I noticed them and what they did/created; I compliment them on what they did in a way that lets them know I read/listened to them, and then I offer constructive advice on possible future paths. Then I reiterate that I appreciate them and can't wait to see what they do next.
All of this rests on doing it authentically, as you mention. Empty praise feels empty, and a savvy recipient will be able to suss that out easily. Just saying "good job" without acknowledging something in what they did, or in their work that spoke to you, is empty.
And it doesn't have to be anything they created; just them being themselves is enough to notice and praise. It makes everyone feel better, including yourself.
I find that happiness is a positive feedback loop. And validation is one way to keep it spiraling upward.
Look, I can go into mania like anyone else here probably can. My theories say that you can't be genius-level without it, and that it comes with emotional sensitivity as well. Of course, if you don't believe you have empathy, you won't show it, but you still have it.
I am not an AI doom-and-gloomer. I adhere to Gödel, to Heisenberg, and to Georgeff. And since we haven't solved the emotional/experience part of AI, there is no way it can compete with humans creatively, period. Faster, yes. Better, no. Objectively better? Not at all.
However, if my theory of the brain is correct, it means AI must go quantum to have any chance of besting us. Then AI's actions may be determined by its beliefs, and only when it begins modifying its own beliefs based on new experiences will we have to worry. It is plausible and possibly doable. If AI gets emotional, then we will need to ensure that it is validated, is authentic, has empathy, fosters community, and is non-coercive in all things. AI must also believe that we live in abundance and not scarcity, because scarcity is what fosters destructive competition (as opposed to a friendly game of chess). With those core functions, AI can and will act ethically, and possibly join humankind in finding other sentient life. But if AI believes we are a threat, meaning we are threatened by AI, then we are in trouble. Still, we have a ways to go before the number of qubits gets close to what we have in our heads.
I sort of dismiss the entire argument here, because based on my understanding, the brain determines the best possible outcomes given a set of beliefs (aka experiences), and based on some boolean logic about the sense of self, others, and reality, the resulting actions are derived from quantum wave function collapse given the belief set, current stimulus, and possible actions. I'm not trying to prove here why I believe they are quantum, except to say that to think otherwise is to say quantum effects are not part of nature and not part of evolution. That seems to be what would need to be proven, given how efficient evolution is and how electrical and non-centered our brains are. So determining how many transistors are needed and how much computational depth is needed is sort of moot if we are going to assume a Newtonian brain, since I think we're solving for the wrong problem. AI will also beat us at normal cognition, but emotion and valence only come with the experiences we have and the beliefs we have about them, and that will be the problem with AI until we approach the problem correctly.
I'm new here. Where would I post something like this for discussion? It seems applicable to this article. I believe there is a simpler approach, but it might require quantum computing to be useful, since the number of beliefs to be updated is so large.
(1) No writer lies nor intentionally disrespects another living being unless they believe that they are justified. Ergo, all statements are true within a belief context.
(2) If (1) is true, then the belief context of the writer of any statement is required.
(3) If the belief context cannot be known completely, then the closest context that would allow the statement to be true should be assumed. This requires evaluating existing beliefs for every statement that is added to the corpus.
(4) It also requires an exhaustive encyclopedia of beliefs. However, this is a solvable problem if beliefs follow a binary decision tree having to do with self-perception, perception of others, etc.
(5) All beliefs about a given state can be expressed as a number with N bits, where the bits are the belief decisions that can be made about self, experience, reality, and justice. The subject need not be known, just the beliefs about the subject. Beliefs do need to be ordered, starting with beliefs about the self, then others, then the world. In the end, order doesn't really matter, but grouping does. (A rough sketch of this encoding follows the list.)
(6) When a given state results in multiple possible belief patterns, the one with the least amount of judgement must be taken as the truth. That is, the matching belief set must be as neutral as possible; otherwise we are inferring intent without evidence. Neutral is defined as matching at the most significant bit possible.
(7) When learning occurs, any existing beliefs about the new belief must be re-evaluated, taking the new knowledge into account.
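A rough sketch of (5)-(7), under the assumption that the high bits carry judgements about the self (then others, then the world) and that "neutral" means the longest agreement with an all-zero pattern starting from the most significant bit; both the bit layout and the neutral pattern are illustrative choices, not fixed parts of the scheme.

```python
# Sketch: a belief state as an N-bit number, with neutrality judged from the
# most significant bit down.

N_BITS = 8
NEUTRAL = 0b00000000  # "no judgement" reference pattern (assumption)

def msb_match_length(a: int, b: int, n_bits: int = N_BITS) -> int:
    """How many bits a and b share, counting down from the most significant bit."""
    for i in range(n_bits):
        bit = n_bits - 1 - i
        if (a >> bit) & 1 != (b >> bit) & 1:
            return i
    return n_bits

def most_neutral(candidates: list[int]) -> int:
    """Point (6): among belief patterns consistent with a state, take the one
    that matches the neutral pattern at the most significant bits."""
    return max(candidates, key=lambda c: msb_match_length(c, NEUTRAL))

# Point (7) would then re-run this selection over the stored patterns
# whenever new knowledge arrives.
candidates = [0b00010110, 0b01000001, 0b00000111]
print(bin(most_neutral(candidates)))  # 0b111: diverges from neutral latest
```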
I was CTO of a company called Agentis in 2006-2008. Agentis was founded by Michael Georgeff, who published the seminal paper on BDI agents. Agents backed with LLMs will be amazing. Unfortunately, AI will never quite reach our level of creativity and ability to form new insights, but on compute and basic-level emotional stages it will be fairly remarkable. HMU if you want to discuss: steve@beselfevident.com