Spock's Dirty Little Secret
post by pjeby · 2009-03-25T19:07:21.908Z · LW · GW · Legacy · 71 comments
Related on OB: Priming and Contamination
Related on LW: When Truth Isn't Enough
When I was a kid, I wanted to be like Mr. Spock on Star Trek. He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.
And since Spock loved logic, I tried to learn something about it myself. But by the time I was 13 or 14, having grasped the basics of Boolean algebra (from borrowed computer science textbooks) and propositional logic (through a game of "Wff'n'Proof" I picked up at a garage sale), I began to get a little dissatisfied with it.
Spock had made it seem like logic was some sort of "formidable" thing, with which you could do all kinds of awesomeness. But real logic didn't seem to work the same way.
I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.
Logic didn't say, "thou shalt perform this sequence of transformations and thereby produce an Answer". Instead, it said something more like, "do whatever you want, as long as it's well-formed"... and left the very real question of what it was you wanted, as an exercise for the logician.
And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.
Of course, I didn't phrase it exactly that way myself... but I did see that logic could only be used to check things... not to generate them. The ideas to be checked, still had to come from somewhere.
But where?
When I was 17, in college philosophy class, I learned another limitation of logic: or more precisely, of the brains with which we do logic.
Because, although I'd already learned to work with formalisms -- i.e., meaningless symbols -- working with actual syllogisms about Socrates and mortals and whatnot was actually a good bit harder.
We were supposed to determine the validity of the syllogisms, but sometimes an invalid syllogism had a true conclusion, while a valid syllogism might have a false one. And, until I learned to mentally substitute symbols like A and B for the included facts, I found my brain automatically jumping to the wrong conclusions about validity.
So "logic", then -- or rationality -- seemed to require three things to actually work:
- A way to generate possibly-useful ideas
- A way to check the logical validity -- not truth! -- of those ideas, and
- A way to test those ideas against experience.
But it wasn't until my late thirties and early forties -- just in the last couple of years -- that I realized a fourth piece, implicit in the first.
And Spock, ironically enough, is the reason I found it so difficult to grasp that last, vital piece:
That to generate possibly-useful ideas in the first place, you must have some notion of what "useful" is!
And that for humans at least, "useful" can only be defined emotionally.
Sure, Spock was supposed to be immune to emotion -- even though in retrospect, everything he does is clearly motivated by emotion, whether it's his obvious love for Kirk, or his desire to be accepted as a "real" rationalis... er, Vulcan. (In other words, he disdains emotion merely because that's what he's supposed to do, not because he doesn't actually have any.)
And although this is all still fictional evidence, one might compare Spock's version of "unemotional" with the character of the undead assassin Kai, from a different science-fiction series.
Kai, played by Michael McManus, shows us a slightly more accurate version of what true emotionlessness might be like: complete and utter apathy.
Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything", and "the dead do not have opinions". He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.
(He'll sleep in his freezer or go on a killing spree, it's all the same to him, though he'll probably tell you the likely consequences of whatever action you see fit to request of him.)
And scientifically speaking, that's a lot closer to what you actually get, if you don't have any emotions.
Not a "formidable rationalist" and idealist, like Spock or Eliezer...
But an apathetic zombie, like Kai.
As Temple Grandin puts it (in her book, Animals In Translation):
Everyone uses emotion to make decisions. People with brain damage to their emotional systems have a hard time making any decision at all, and when they do make a decision it's usually bad.
She is, of course, summarizing Antonio Damasio's work in relation to the somatic marker hypothesis and decision coherence. From the linked article:
Somatic markers explain how goals can be efficiently prioritized by a cognitive system, without having to evaluate the propositional content of existing goals. After somatic markers are incorporated, what is compared by the deliberator is not the goal as such, but its emotional tag. [Emphasis added]
The biasing function of somatic markers explains how irrelevant information can be excluded from coherence considerations. With Damasio's thesis, choice activation can be seen as involving emotion at the most basic computational level. [Emphasis added]
...
This sketch shows how emotions help to prevent our decision calculations from becoming so complex and cumbersome that decisions would be impossible. Emotions function to reduce and limit our reasoning, and thereby make reasoning possible. [Emphasis added]
Now, we can get into all sorts of argument about what constitutes "emotion", exactly. I personally like the term "somatic marker", though, because it ties in nicely with concepts such as facial micro-expressions and gestural accessing cues. It also emphasizes the fact that an emotion doesn't actually need to be conscious or persistent, in order to act as a decision influencer and a source of bias.
But I didn't find out about somatic markers or emotional decisions because I was trying to find out more about logic or rationalism. I was studying akrasia [1], and writing about it on my blog.
That is, I was trying to find out why I didn't always do what I "decided to do"... and what I could do to fix that.
And in the process, I discovered what somatic markers have to do with akrasia, and with motivated reasoning... long before I read any of the theories about the underlying machinery. (After all, until I knew what they did, I didn't know what papers would've been relevant. And in any case, I was looking for practice, not theory.)
Now, in future posts in this series, I'll tie somatic markers, affective synchrony, and Robin Hanson's "near/far" hypothesis together into something I call the "Akrasian Orchestra"... a fairly ambitious explanation of why/how we "don't do what we decide to", and for that matter, don't even think the way we decide to.
But for this post, I just want to start by introducing the idea of somatic markers in decision-making, and give a little preview of what that means for rationality.
Somatic markers are effectively a kind of cached thought. They are, in essence, the "tiny XML tags of the mind", that label things "good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)
And it's important to understand that you cannot escape this labeling, even if you wanted to. (After all, the only reason you're able to want to, is because this labeling system exists!)
See, it's not even that only strong emotions do this: weak or momentary emotional responses will do just fine for tagging purposes. Even momentary pairing of positive or negative words with nonsense syllables can carry over into the perception of the taste of otherwise-identical sodas, branded with made-up names using the nonsense syllables!
As you can see, this idea ties in rather nicely with things like priming and the IAT: your brain is always, always, always tagging things for later retrieval.
Not only that, but it's also frequently replaying these tags -- in somatic, body movement form -- as you think about things.
For example, let's say that you're working on an equation or a computer program... and you get that feeling that something's not quite right.
As I wrote the preceding sentence, my face twisted into a slight frown, my brow wrinkling slightly as well -- my somatic marker for that feeling of "not quite right-ness". And, if you actually recall a situation like that for yourself, you may feel it too.
Now, some people would claim that this marker isn't "really" an emotion: that they just "logically" or "rationally" decided that something wasn't right with the equation or program or spaceship or whatever.
But if we were to put those same people on a brain scanner and a polygraph, and observe what happens to their brain and body as they "logically" think through various possibilities, we would see somatic markers flying everywhere, as hypotheses are being considered and discarded.
It's simply that, while your conscious attention is focused on your logic, you have little interest in attending directly to the emotions that are guiding you. When you get the "information scent" of a good or a bad hypothesis, you simply direct your attention to either following the hypothesis, or discarding it and finding a replacement.
Then, when you stop reasoning, and experience the frustration or elation of your results (or lack thereof), you finally have attention to spare for the emotion itself... leading to the common illusion that emotion and reasoning don't mix. (When what actually doesn't mix, at least without practice, is reasoning and paying conscious attention to your emotions/somatic markers at the same time.)
Now, some somatic markers are shared by all humans, such as the universal facial expressions, or the salivation and mouth-pursing that happens when you recall (or imagine) eating something sour. Others may be more individual.
Some markers persist for longer periods than others -- that "not quite right" feeling might just flicker for a moment while you're recalling a situation, but persist until you find an answer, when it's a response to the actual situation.
But it's not even necessary for a somatic marker to be expressed, in order for it to influence your thinking, since emotional associations and speed of recall are tightly linked. In effect, recall is prioritized by emotional affect... meaning that your memories are sorted by what makes you feel better.
(Or what makes you feel less bad ... which is not the same thing, as we'll see later in this series!)
What this means is that all reasoning is in some sense "motivated", but it's not always consciously motivated, because your memories are pre-sorted for retrieval in an emotionally biased fashion.
In other words, the search engine of your mind...
Returns paid results first.
This means that, strictly speaking, you don't know your own motivations for thinking or acting as you do, unless you explicitly perform the necessary steps to examine them in the moment. Even if you previously believed yourself to have worked out those motivations, you cannot strictly know that your analysis still stands, since priming and other forms of conditioning can change those motivations on the fly.
This is the real reason it's important to make beliefs pay rent, and to ground your thinking as much as possible in "near" hypotheses: keeping your reasoning tied closely to physical reality represents the only possible "independent fact check" on your biased "search engine".
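(A quick aside for the programmers in the audience: here's a toy sketch of that "paid results first" idea, in Python. It's purely illustrative -- the names, weights, and numbers are all invented for the example, and it's not a claim about how the neural machinery actually works. The point is just that if retrieval is ranked by relevance plus an affect tag, the "feel-good" memories surface first, and your current mood can tilt the ranking further.)

```python
# Toy model of affect-biased recall -- illustrative only, not neuroscience.
# Each "memory" carries an affect tag (-1.0 = very bad ... +1.0 = very good),
# standing in for a somatic marker. Retrieval ranks by relevance PLUS affect,
# so "feel-good" items surface first -- the "paid results" of the mind's
# search engine. A mood-congruence term lets the current emotional state
# (e.g. anger) boost similarly-tagged memories.

from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    relevance: float   # how well it matches the current query/context
    affect: float      # emotional tag: negative = "bad", positive = "good"

def recall(memories, mood=0.0, affect_weight=0.5, congruence_weight=0.5):
    """Return memories sorted with an emotional bias (most accessible first)."""
    def score(m):
        # Relevance alone would be the "unbiased" search engine;
        # the affect term is the hidden sponsorship, and the mood term
        # means an angry mood promotes anger-tagged memories.
        return (m.relevance
                + affect_weight * m.affect
                + congruence_weight * mood * m.affect)
    return sorted(memories, key=score, reverse=True)

if __name__ == "__main__":
    memories = [
        Memory("the time the plan worked perfectly", relevance=0.4, affect=+0.9),
        Memory("the embarrassing failure last spring", relevance=0.7, affect=-0.8),
        Memory("a neutral fact about the problem", relevance=0.8, affect=0.0),
    ]
    for m in recall(memories, mood=0.0):
        print(f"{m.content} (relevance={m.relevance}, affect={m.affect:+.1f})")
```

Run it and the pleasant-but-less-relevant memory outranks the more relevant embarrassing one -- which is exactly why the "independent fact check" above matters: relevance isn't the only thing your built-in ranking function is optimizing for.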
Okay, that's enough of the "emotional decisions are bad and scary" frame. Let's take the opposite side now:
Without emotions, we couldn't reason at all.
Spock's dirty little secret is that logic doesn't go anywhere, without emotion. Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful hypotheses" or "likely to be true" hypotheses...
Nor would you have any reason to do so in the first place!
Because the hidden meaning of the word "reason", is that it doesn't just mean logical, sensible, or rational...
It also means "purpose".
And you can't have a purpose, without an emotion.
If Spock didn't make me feel something good, I might never have studied logic. If stupid people hadn't made me feel something bad, I might never have looked up to Spock for being smart. If procrastination hadn't made me feel bad, I never would've studied it. If writing and finding answers to provocative questions didn't make me feel good, I never would've written as much as I have.
The truth is, we can't do anything -- be it good or bad -- without some emotion playing a key part.
And that fact itself, is neither good nor bad: it's just a fact.
And as Spock himself might say, it's "highly illogical" to worry about it.
No matter what your somatic markers might be telling you.
Footnotes:
1. I actually didn't know I was studying "akrasia"... in fact, I'd never even heard the term akrasia until I saw it in a thread on LessWrong discussing my work. As far as I was concerned, I was working on "procrastination", or "willpower", or maybe even "self-help" or "productivity". But akrasia is a nice catch-all term, so I'll use it here.
71 comments
Comments sorted by top scores.
comment by Cyan · 2009-03-25T19:35:05.039Z · LW(p) · GW(p)
The ideas are interesting, but I'm finding the use of italics and especially bold font somewhat distracting -- I feel like I'm being harangued. (Not as bad as all caps, but still.)
Replies from: gjm, olimay, Cameron_Taylor, Anatoly_Vorobey
↑ comment by gjm · 2009-03-25T23:24:42.719Z · LW(p) · GW(p)
Strongly concur. I have the same reaction to pjeby's blog. I don't think it's only because of the bold; it's the writing style too, which consistently seems to me to be saying "I understand all this stuff, and you are stupid. So stupid I have to write in short sentences. And sentence fragments. Because otherwise ... you won't get it." And I find it offputting. Very offputting.
Which is a pity, because ...
... pjeby has some interesting things to say.
↑ comment by Cameron_Taylor · 2009-03-26T00:29:03.198Z · LW(p) · GW(p)
I do find it somewhat distracting to 'hear' words with the emphasis forced onto us through formatting.
I usually appreciate the use of bold and italics as a way of highlighting key concepts. It helps me navigate to the most interesting parts and gives a framework to the document layout. Randomly bolding random words distracts from this.
↑ comment by Anatoly_Vorobey · 2009-03-25T20:49:43.040Z · LW(p) · GW(p)
I agree that there's too much bolding going on; but let me just add (having just returned from a bout of wiki-reading prompted by your links) that this is a superb post; I'll be thinking and reading about much of this and looking forward to the promised future posts.
comment by Scott Alexander (Yvain) · 2009-03-25T20:15:10.681Z · LW(p) · GW(p)
Very interesting post. If you can do even a fraction of what you say you will, it'll be a spectacular contribution. I already have your blog on my list of things I need to get around to reading, and it just moved up a few places on that list.
You're moving pretty quickly, though, and I have trouble following you in some areas. Maybe in the future break large essays like this into a few blog posts, one for each sub-point.
Replies from: pjeby
↑ comment by pjeby · 2009-03-25T22:08:30.470Z · LW(p) · GW(p)
You're moving pretty quickly, though, and I have trouble following you in some areas. Maybe in the future break large essays like this into a few blog posts, one for each sub-point.
Heh. This is a way scaled-back version of my original planned first post, which was to jump straight from motivated reasoning into either the Speculator/Savant divide, or the Towards/Away distinctions. I went, "crap, this is getting too long" and pulled the plug before I got to anything really "interesting", figuring that this one at least laid a little bit of groundwork and some references to build a foundation for the rest.
I'm accustomed to being able to get further in one sitting, but that's because my usual writing isn't peppered with references to experimental results and tediously building my case point by point; usually I just rely on metaphor and people's personal experiences as evidence. Here, though, I've noticed that people prefer authorities in the form of citations, to looking at their own personal experiences... so it seems to take a hell of a lot longer to build up statements of any substance.
Which is not to say it's not worth it... discussions on LW, and the preparation for this post, have helped me immensely clarify and simplify certain aspects of my knowledge and work, in ways that will help me teach my self-improvement audience better, not just communicate better on LW.
Replies from: Yvain, Vladimir_Nesov
↑ comment by Scott Alexander (Yvain) · 2009-03-25T23:26:05.035Z · LW(p) · GW(p)
I'm glad you're paying attention to experimental results. I wouldn't believe you if you didn't :)
Now that I've read it over a few more times, I'm still not sure if I understand correctly. Tell me if this is the right track: the brain tags thoughts or concepts as good or bad by associating them with certain micro-expressions which are the physiological correlates of emotions. When you're reasoning, you are unconsciously trying to generate pleasant emotions by using only those lines of thinking that lead to the micro-expressions associated with pleasant feelings. Generating these pleasant feelings is, on a preconscious level, the desire motivating reasoning.
Also, are you taking as a premise something like the James-Lange theory of emotions? What about something like Reich's theory of muscular armor? (see about a quarter of the way down this page)
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T00:25:03.401Z · LW(p) · GW(p)
Generating these pleasant feelings is, on a preconscious level, the desire motivating reasoning.
I haven't even talked about actual motivated reasoning in this post... barely touched on it. What I'm talking about here is something you might think of as "pre-biased reasoning" -- that is, before you even consciously perform any reasoning, your brain has to generate hypotheses... and these are based on manipulation of existing memories... which are retrieved in emotion-biased sequences.
This is a hell of a lot more low-level description than an idea like "unconsciously trying to generate pleasant emotions". Also, that description attributes motivation and thinking-process to the unconscious... which is pure projection. The unconscious is not a "mind", in the sense of having intentions of the sort we attribute to ourselves and to other humans.
When I get to the Savant/Speculator distinction, that part will hopefully be a lot clearer.
Also, are you taking as a premise something like the James-Lange theory of emotions? What about something like Reich's theory of muscular armor?
Not as a premise, no, although there may be similarities in our conclusions.
However, I'm not in full agreement with the idea that you can generate emotions through muscular action, partly because I see physical action as being caused by emotion (rather than being emotion as such) and partly because an existing emotion can easily dominate the relatively weak influence of working from the outside in.
I also know that, Reich to the contrary, muscular armor can be dropped through mental work alone -- body awareness is required, at least to be able to tell if you're doing things right -- but you don't necessarily need to do anything particularly physical.
The "efficiency" objection to somatic markers and James-Lange is nonsense, however. If the purpose of an emotions is to prepare the body for action, then it's not "inefficient" to send the information out to the periphery -- it's the purpose!
It's the part where we infer emotions from that information coming back that's the kludge, because we only needed that information once we became social creatures... and even then, we already had the communication taking place via the external action.
Hell, I'm not sure why evolution had any reason for us to know what our own emotions are in the first place, which would certainly explain why we have to learn to interpret them, and people vary widely in their ability to do so.
Whew. I think my next post is going to need to work on demolishing the self-applied mind projection fallacy. A massive amount of popular psychology (and not a small amount of actual psychology) is based on a flawed model of how our minds work, and you have to dismantle that before you can see how it really works. It's about as big of a leap as quantum mechanics is, really, in the sense that it utterly makes no sense to our intuitions about the classical world.
Basically, consciousness and the perception of free-will are actually a side-effect of the same brain functions that allow us to believe in disembodied minds. We believe that we decide things for the simple reason that our brain also believes that other people decide things: it's part of our in-built theory of mind.
Replies from: orthonormal, Cameron_Taylor
↑ comment by orthonormal · 2009-03-26T00:55:19.636Z · LW(p) · GW(p)
Hell, I'm not sure why evolution had any reason for us to know what our own emotions are in the first place, which would certainly explain why we have to learn to interpret them, and people vary widely in their ability to do so.
This point is incisive, has important consequences for rationality, and deserves a post (by somebody).
↑ comment by Cameron_Taylor · 2009-03-26T01:07:32.074Z · LW(p) · GW(p)
Whew. I think my next post is going to need to work on demolishing the self-applied mind projection fallacy. A massive amount of popular psychology (and not a small amount of actual psychology) is based on a flawed model of how our minds work, and you have to dismantle that before you can see how it really works. It's about as big of a leap as quantum mechanics is, really, in the sense that it utterly makes no sense to our intuitions about the classical world.
Do you plan to substitute the flawed model with whatever model the 'Savant/Speculator distinction' comes from? If so, perhaps consider another post explaining and validating said system first? Google seems unwilling to tell me what on earth you are talking about. Book? Paper? Link of some kind?
↑ comment by Vladimir_Nesov · 2009-03-25T22:55:47.188Z · LW(p) · GW(p)
I'm accustomed to being able to get further in one sitting, but that's because my usual writing isn't peppered with references to experimental results and tediously building my case point by point; usually I just rely on metaphor and people's personal experiences as evidence. Here, though, I've noticed that people prefer authorities in the form of citations, to looking at their own personal experiences... so it seems to take a hell of a lot longer to build up statements of any substance.
Now this sounds like some kind of ritual, empty of substance. Authority? Give me a break.
Replies from: pjeby
↑ comment by pjeby · 2009-03-25T23:05:58.850Z · LW(p) · GW(p)
You seem to be ignoring the part where I implied your previous challenge led to me learning some new things about affective synchrony, that explained some of my results more clearly and gave me some new ideas to experiment with.
As I said, it was worth it, at least for me.
comment by Roko · 2009-03-25T23:39:46.678Z · LW(p) · GW(p)
"good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.)
I saw this, and felt a strong urge to walk to work where my laptop is and correct it.
Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way. When I see "rational" mental tags on an agent, I used to see "good" mental tags. They used to be synonymous to me. Then I changed my mind and realized that much of the time, instrumentally rational means "very very dangerous", "powerful optimizing agent present". This is true in the case of the off topic thing, and of humans. Many instrumentally rational humans are dangerous to me. I am lucky to live in a society where I am mostly protected from clever, powerful humans such as the mafia.
Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful hypotheses" or "likely to be true" hypotheses...
This statement is either false or meaningless, depending on how you interpret "emotion". It suffices to say that an agent can single out true hypotheses without having a goal, and an autistic human can distinguish truth from falsehood. Humans with damage to the emotional centres of their brains don't get anything done, but their ability to tell truth from falsehood is unaltered. In fact I suspect that less emotional people are better epistemic rationalists, i.e. they are good at finding "likely to be true" hypotheses.
rationality -- seemed to require three things to actually work:
A way to generate possibly-useful ideas
A way to check the logical validity -- not truth! -- of those ideas, and
A way to test those ideas against experience.
There's more to epistemic rationality than these. Probabilistic reasoning, probabilistic logic, analogy formation, introspection and reflective thinking, domain knowledge, heuristics for which approximations are valid, notions of context all come to mind. In a short amount of time working in AI, I have quickly realised that first order predicate logic plays a very very small part in a mind.
Replies from: pjeby, rhollerith, jimrandomh
↑ comment by pjeby · 2009-03-26T00:00:42.374Z · LW(p) · GW(p)
This statement is either false or meaningless, depending on how you interpret "emotion".
Let's review the statement in question:
Without emotion, you have no way to narrow down the field of "all possible hypotheses" to "potentially useful hypotheses" or "likely to be true" hypotheses...
By "narrow down", I actually meant "narrow down prior to conscious evaluation" -- not consciously evaluate for truth or falsehood. You can consciously evaluate whatever you like, and you can certainly check a statement for factual accuracy without the use of emotion. But that's not what the sentence is talking about... it's referring to the sorting or scoring function of emotion in selecting what memories to retrieve, or hypotheses to consider, before you actually evaluate them.
Replies from: Roko, Cameron_Taylor
↑ comment by Roko · 2009-03-26T00:37:36.940Z · LW(p) · GW(p)
I disagree again: I don't think that any reasonable definition of emotion makes the following statement true:
emotions allow you to (prior to conscious evaluation) narrow down the field of "all possible hypotheses" to "likely to be true" hypotheses.
I think that emotions often do the opposite. They narrow down the field of "all possible hypotheses" to "likely to make me feel good about myself if I believe it" hypotheses and "likely to support my preexisting biases about the world" hypotheses, which is precisely the problem that this site is tackling... if emotions subconsciously selected "likely to be true" hypotheses, we would not be in the somewhat problematic situation we are in.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T00:53:31.394Z · LW(p) · GW(p)
I think that emotions often do the opposite. They narrow down the field of "all possible hypotheses" to "likely to make me feel good about myself if I believe it" hypotheses and "likely to support my preexisting biases about the world" hypotheses, which is precisely the problem that this site is tackling... if emotions subconsciously selected "likely to be true" hypotheses, we would not be in the somewhat problematic situation we are in.
Those are subsets of what you believe to be likely true.
Replies from: Roko
↑ comment by Roko · 2009-03-26T13:51:04.077Z · LW(p) · GW(p)
Great! Hurrah for emotions, they make you believe things that you believe are likely to be true...
epistemic rationality is about believing things that are actually true, rather than believing things that you believe to be true.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T15:15:33.444Z · LW(p) · GW(p)
epistemic rationality is about believing things that are actually true, rather than believing things that you believe to be true.
And that's why it's a good thing to know what you're up against, with respect to the hardware upon which you're trying to do that.
Replies from: Roko
↑ comment by Roko · 2009-03-26T15:27:43.838Z · LW(p) · GW(p)
Right, we agree. But I think that we have overused the word emotion... That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love. We need different names for them. I call the latter emotion, and the former a "hypothesis generating part of your cognitive algorithm". I think and hope that one can separate the two.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T16:42:34.744Z · LW(p) · GW(p)
That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love
No... the former merely sorts those hypotheses based on information from the latter. Or more precisely, the raw data from which those hypotheses are generated, has been stored in such a manner that retrieval is prioritized on emotion, and such that any such emotions are played back as an integral part of retrieval.
One's physio-emotional state at the time of retrieval also has an effect on retrieval priorities... if you're angry, for example, memories tagged "angry" are prioritized.
↑ comment by Cameron_Taylor · 2009-03-26T00:36:00.990Z · LW(p) · GW(p)
By "narrow down", I actually meant "narrow down prior to conscious evaluation" -- not consciously evaluate for truth or falsehood. You can consciously evaluate whatever you like, and you can certainly check a statement for factual accuracy without the use of emotion. But that's not what the sentence is talking about... it's referring to the sorting or scoring function of emotion in selecting what memories to retrieve, or hypotheses to consider, before you actually evaluate them.
Still either false or meaningless, depending on how you interpret 'emotion'. Our brains narrow things down prior to conscious evaluation. It's their speciality. If you hacked out the limbic system you would still be left with a whole bunch of cortex that is good at narrowing things down without conscious evaluation. In fact, if you hacked out the frontal lobes you would end up with tissue that retained the ability to narrow things down without being able to consciously evaluate anything.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T00:58:03.678Z · LW(p) · GW(p)
The point of emotions -- which I see I failed to make sufficiently explicit in this post, from the frequent questions about it -- is that their original purpose was to prepare the body to take some physical, real-world action... and thus they were built in to our memory/prediction systems long before we reused those systems to "think" or "reason" with.
Brains weren't originally built for thinking -- they were built for emoting: motivating co-ordinated physical action.
↑ comment by rhollerith · 2009-03-26T01:37:54.550Z · LW(p) · GW(p)
Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way.
Although these days Roko is probably uninterested in whether I agree with him, I agree with that passage.
According to my definition, "epistemically rational" means "effective at achieving one's goals". If the goals are incompatible with my goals, I'm going to hope that the agent remains epistemically irrational.
(Garcia used "intelligent" and "ethical" for my "epistemically rational" and "has goals compatible with my goals".)
Since 1971, Garcia's been stressing that increasing a person's epistemic rationality increases that person's capacity for good and capacity for evil, so you should try to determine whether the person will do good or do evil before you increase the epistemic rationality of the person. (Of course your definition of "good" might differ from mine.)
The smartest person (Ph. D. in math from a top program, successful entrepreneur) I ever met before I met Eliezer was probably unethical or evil. I say "probably" only to highlight that one cannot be highly confident of one's judgement about someone's ethics or evilness even if one has observed them closely. But most people here would probably agree with me that this person was unethical or evil.
Replies from: Roko
↑ comment by jimrandomh · 2009-03-26T01:45:07.582Z · LW(p) · GW(p)
"good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.) I saw this, and felt a strong urge to walk to work where my laptop is and correct it.
Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way.
Rationality can be bad when it's given to an agent with undesirable goals, but your own goals are always good to you, so where your own thoughts are concerned, being 'rational' means they're good and being 'irrational' means they're bad. I think the article's statement was meant to apply only to thoughts evaluated from the inside.
comment by pre · 2009-03-25T19:52:11.597Z · LW(p) · GW(p)
Even Spock's most famous gesture, that single raised eyebrow of his, is an expression of puzzlement or condescension. Course, he always claimed to have emotions under check rather than wiped out.
I like the idea of Kai as a properly emotionless rationalist. A robot. My friend just called his newborn "Kai" but he's never seen Lexx.
I often figure that if you take emotion away from people you get Abulia rather than rationality anyway.
Replies from: Annoyance, ciphergoth
↑ comment by Annoyance · 2009-03-26T21:02:17.004Z · LW(p) · GW(p)
"Course, he always claimed to have emotions under check rather than wiped out."
Precisely. The few times in which he openly displayed his emotions were those in which they were so strong as to be overwhelming. For example: his exuberance at discovering that Kirk was alive, instead of having been killed by Spock during ritual battle, in "Amok Time".
Spock was generally played as being profoundly controlled and reserved. It's not that he didn't possess emotions, but that they were kept private and prevented from interfering.
The original series is somewhat inconsistent in this, though, as different writers saw things in different ways.
↑ comment by Paul Crowley (ciphergoth) · 2009-03-26T09:02:57.417Z · LW(p) · GW(p)
That is surely only his second-most famous gesture, after the Vulcan salute.
comment by [deleted] · 2009-03-25T22:52:09.240Z · LW(p) · GW(p)
David Hume summed it up well: "Reason is, and ought only to be, the slave of the passions."
Eliezer tells us that "Rationalists should WIN". But you can easily substitute 'win' for 'achieve whatever makes them happy', once again reinforcing the importance of emotions. Our passions are ultimately what drive us; rationality is just taking the best available information into account when trying to achieve them.
Replies from: quiescent
↑ comment by quiescent · 2009-03-25T23:19:46.974Z · LW(p) · GW(p)
Yes, except the belief that the polygraph is accurate. It's almost useless.
http://www.nap.edu/openbook.php?record_id=10420&page=212
Replies from: pjeby
comment by Paul Crowley (ciphergoth) · 2009-03-25T22:17:59.843Z · LW(p) · GW(p)
If you are interested in akrasia, you must read George Ainslie's "Breakdown of Will", which gives an economic account of akrasia based on the strong empirical evidence for hyperbolic discounting and the idea of intertemporal bargaining. See picoeconomics.com
Replies from: pjeby
↑ comment by pjeby · 2009-03-25T22:32:08.579Z · LW(p) · GW(p)
Those ideas are certainly meaningful, but I don't talk about them much any more. For practical purposes, you don't actually need to understand the discounting curve -- it suffices to know that you need to use present-tense representations of experience when making decisions... as long as you consistently act in accord with that knowledge.
And knowing the hyperbolic curve equation doesn't provide any additional motivation for you to take the necessary action.
By the way, here's an example of how to use present-tense representations, combined with positive somatic markers, to create immediate positive motivation:
http://thinkingthingsdone.com/2008/07/thoughts-into-action.html
(There's actually a lot more to "present tense representation" than merely counteracting the discount curve, though, and I'll probably talk more about that in the later posts of this series. For now, I need to get back to the prep work for the workshop I'm doing on Saturday, though I'll still be reading and commenting.)
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-03-26T09:12:37.098Z · LW(p) · GW(p)
I'm confused now, because the way you discuss it here it sounds like you read Ainslie's book some time ago, but in the post you say you only very recently learned the word akrasia. If you haven't read the book, I again urge you to - there's a lot more to it than just presenting the discount curve, there's a whole theory that sets out how our wacky discounting curve leads to all sorts of behaviours like making rules for ourselves. Certainly if you're actively trying to make a theory of akrasia, doing so without making sure you're thoroughly familiar with this work would seem like a great mistake to me.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T16:17:25.870Z · LW(p) · GW(p)
If you haven't read the book, I again urge you to - there's a lot more to it than just presenting the discount curve, there's a whole theory that sets out how our wacky discounting curve leads to all sorts of behavours like making rules for ourselves.
No, I haven't read the book -- I've just encountered discussions of the discount curve before. And I read the precis and a couple articles available on the site you linked, and find his rules and bargaining model to be massively overcomplicated, compared to what you need to know to achieve actual results. From my POV, it's like he's trying to explain a word processor by discussing pixels, instead of fonts and character buffers.
IOW, his model actively distracts one from knowing anything useful about how the human platform generates the results it gets, or how to make the platform DO anything.
It's like trying to build a model of health by discussing how to work around symptoms, instead of actually curing any diseases. And it perpetuates the notion that you need to (and can) work around your "interests" at the conscious level, instead of simply adjusting the interests directly -- i.e., it's a perpetuation of "far" (extrapolative) thinking in a place where "near" (directly-associative) thinking is desperately needed.
That having been said, there are some things he gets right: we do have conflicting interests, and they do more or less interact in the manner described. It's just that knowing that as an isolated fact, doesn't tell you anything: it's like knowing a thing's emergent properties, but not the rules that generate those properties.
(Also, his ideas about appetite moderation and satiation are interesting, so I do intend to study that further, to see if it leads to anything useful. Likewise some of his thoughts on dissociation.)
if you're actively trying to make a theory of akrasia
I'm not making a theory of akrasia; I've been reverse-engineering fixes for it. That means I've been developing a practical model that supports predictions I can test in myself and my clients, to produce quick results.
That's not quite the same thing as developing an accurate theoretical model. You might say I'm making a street map rather than a terrain map of the same territory: it might not be "accurate" in a literal sense, but it gets people where they want to go.
I'm still a rationalist and interested in truth, but I'm seeking navigational truth rather than topographical truth, and the experimental results that count are whether my clients are accomplishing the things they want to.
The reason I linked to that thoughts-into-action video is that it's a concrete and highly repeatable demonstration of the practical results that my model produces... and it doesn't need anything in Ainslie's model (AFAICT from the precis) to explain how to do it. (You can certainly fit Ainslie's model to it, but I think you'd have a hard time getting Ainslie's model to predict it in advance, or to generate the actual steps of the method.)
And that technique is only the tip of the iceberg -- it's something that I deliberately chose to be a quick-and-easy demonstration that could be done inside of YouTube's 10 minute limit, and which would work on most people if they follow the directions precisely.
It's a little bit of a cheat, in that inducing positive motivation (which is what the technique does) is considerably less useful than reducing negative motivation in practical treatment of chronic procrastination. But my techniques for reducing negative motivation are more complex to teach.
Btw, another critique of the temporal bargaining concept is that, at least in the precis, there's little discussion of negative motivation, which in my experience is almost always the dominant factor in undesired behaviors.
Certainly, he dabbles in the idea of "credibility injury", but he appears to miss the fact that it's precisely our desire to avoid painful self-image adjustment that generates our worst failures!
The desire to avoid self-image injury doesn't help us, it actually hurts us... and it's due to a design flaw in the human architecture that I call the "perform-to-prevent bug". (It's not a flaw from evolution's perspective, of course, just from ours!)
To be fair, he does point out that rules and willpower make things worse, but he's missing the self-image injury avoidance as the generator of both the rules and the negative behaviors. (In my work, it's clear that removing the negative motivation and stopping all attempts at using rules and willpower result in eliminating the compulsive behaviors that motivated the desire to use willpower in the first place.)
I suppose, all in all, though, I should say that he's actually doing pretty good for someone who doesn't need to have their model actually fix anyone. ;-)
comment by Unnamed · 2009-03-26T00:38:06.917Z · LW(p) · GW(p)
Based on this, you may want to call it "Trope and Liberman's near/far theory," rather than attributing it to Robin Hanson.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-03-26T09:15:39.920Z · LW(p) · GW(p)
Trope and Liberman's Construal Level Theory.
comment by Vladimir_Nesov · 2009-03-26T00:16:07.630Z · LW(p) · GW(p)
There are some good points in this post. However, you have constructed an unwieldy overloading of the word emotion, forging it into the phlogiston of your theory. Taboo "emotion". When you describe the quite real operations performed by the human mind, consisting in assigning properties to things and priming to see some properties easier or at all, you bless this description with the action of emotion-substance for some reason.
Somatic markers are effectively a kind of cached thought. They are, in essence, the "tiny XML tags of the mind", that label things "good" or "bad", or even "rational" and "irrational". (Which of course are just disguised versions of "good" and "bad", if you're a rationalist.) [...] See, it's not even that only strong emotions do this: weak or momentary emotional responses will do just fine for tagging purposes.
Later, you deny the inference algorithms that don't contain the emotion-substance.
Without emotions, we couldn't reason at all.
It doesn't invalidate the points you are making, but it does make the word somewhat hollow and misleading, especially as it already has a conventional meaning. When you are talking about human brain, the counterfactual situation where you take away the emotions doesn't make much sense, since the architecture depends on all of its parts working properly. (There are broken brains that can be said to exhibit a measure of this, of course, but that won't be about the mysterious emotion-substance.) You need to specify a model that has parts or properties corresponding to your use of the elements of your theory. For example, you talked about good and bad emotions in the past, and it's OK once you specify a model which is driven by good and bad emotions instead of more explicit expected terminal utility computation.
Replies from: pjeby, jimrandomh, Roko
↑ comment by pjeby · 2009-03-26T00:48:10.128Z · LW(p) · GW(p)
Taboo "emotion".
By emotion, I mean, "that which controls the macro-physiological state of the body across multiple control systems, whose effects may be observed through kinesthetic awareness, and which is not a product of direct conscious effort to influence that state."
Or, in simpler words, "feelings". ;-)
Evolutionarily, I propose that the function of emotion is to prepare the body for co-ordinated action of some kind - for example, fear prepares for fight/flight and triggers heightened sensory focus.
Other emotions are more cerebral (e.g. the "aha" sensation), but still can be perceived in physical form, often still having externally visible effects, even to the naked eye.
When you describe the quite real operations performed by human mind, consisting in assigning properties to things and priming to see some properties easier or at all, you bless this description with the action of emotion-substance for some reason.
The reason is that brains were not created for us to perform reasoning, they were created to classify things by emotion -- that is, to prepare the body for responses appropriate to recognized external events. It's important to remember that thinking arose after simple memory-prediction-action chains, and that it's built on top of that legacy system. That's why emotion (using the definition I gave above) is critical: tagging things with emotions and replaying those emotions upon recall is the primary substance and function of brains.
Yes, there are goal subsystems and all that... but that's another system (like "thinking") that's layered on top of the memory-prediction-action chain.
For example, you talked about good and bad emotions in the past, and it's OK once you specify a model which is driven by good and bad emotions instead of more explicit expected terminal utility computation.
Certainly -- that's later in the sequence. It was going to be next, but yours and Yvain's comments make me think that maybe I need to get a bit more explicit about the evolutionary chain here, including the memory-prediction-action core, although maybe I can work that in at the beginning.
Replies from: Cameron_Taylor, Vladimir_Nesov
↑ comment by Cameron_Taylor · 2009-03-26T01:16:10.567Z · LW(p) · GW(p)
The reason is that brains were not created for us to perform reasoning, they were created to classify things by emotion -- that is, to prepare the body for responses appropriate to recognized external events. It's important to remember that thinking arose after simple memory-prediction-action chains, and that it's built on top of that legacy system. That's why emotion (using the definition I gave above) is critical: tagging things with emotions and replaying those emotions upon recall is the primary substance and function of brains.
Really, taboo emotion, taboo feelings, taboo anything that allows "all things that are not conscious reasoning" to be compressed to a single word.
Replies from: pjeby↑ comment by pjeby · 2009-03-26T02:21:15.826Z · LW(p) · GW(p)
I don't understand your request. Do you want me to list every possible somatic marker?
(Note, btw, that different animals have different somatic marker hierarchies, so that would be a pretty extensive list, if I were even able to compile it.)
In my work, I rarely need to distinguish the nature of an emotion in any finer degree than "toward" or "away", "good" or "bad". The difference between (say) somebody feeling "terrible" about their work or "awful" is not important to me, nor do I care what specific somatic markers are involved in marking those concepts either across human beings or even within any given single human being.
However, it is important for the person experiencing that marker to be able to identify the physical components of it, in order to be able to test whether or not an intervention I suggest has actually removed the link between a concept and the marker that gets automatically played back when the concept is thought of. (Since the marker is a preparation for action -- including actions such as "hesitation" -- changing the marker also changes the behavior associated with the concept... but the markers can be tested much more quickly than full-blown behaviors, allowing for faster feedback in cases where more than one technique might be relevant.)
Thus, "emotion" to me is a testable and predictable concept that governs human motivation in a meaningful way. If somebody wants to give me a better word to use to describe the thing upon which my interventions operate and manifest as physical (muscular, visceral, etc.) sensations in the body, then by all means, suggest away.
I am not a psychology researcher -- I help people to fix motivation problems and make personality changes. My work is not to "prove" that a particular hypothesis or physical mechanism is in effect in human beings; it's to identify practical techniques, and to devise useful models for understanding how those techniques operate and by which new techniques can be developed. Of course, as I find out more about evolution or about experimental results, I incorporate that information into my theoretical models to improve my practical results.
It seems to me that you are asking me to stop using a model that actually works for producing practical results, and to substitute something else -- and what that something else is, I'm not sure.
So, I'd appreciate it if you'd explain more specifically what it is you're asking for, in the form of an actionable request, rather than primarily in the form of what you'd like me not to do.
↑ comment by Vladimir_Nesov · 2009-03-26T00:59:44.979Z · LW(p) · GW(p)
The reason is that brains were not created for us to perform reasoning, they were created to classify things by emotion -- that is, to prepare the body for responses appropriate to recognized external events. It's important to remember that thinking arose after simple memory-prediction-action chains, and that it's built on top of that legacy system. That's why emotion (using the definition I gave above) is critical: tagging things with emotions and replaying those emotions upon recall is the primary substance and function of brains.
You have already said this in the article, and I basically agree with this model. But it doesn't follow that the categories/responses/tags are in any sense simple. They have the structure of their own, the structure as powerful as any piece of imagination. The structure of these "tags" has complexity still beyond the reach of any scientific investigation hitherto entered upon. ;-) And for this reason it's an error to write them off as phlogiston, even if you proceed with describing their properties.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T02:34:45.204Z · LW(p) · GW(p)
And for this reason it's an error to write them off as phlogiston, even if you proceed with describing their properties.
I'm treating emotion -- or better, somatic markers -- as a category of thing that is useful to know about. But I have not really needed to have finer distinctions than "good" or "bad", for practical purposes in teaching people how to modify their markers and change their beliefs, motivations, etc. So, if you're saying I have too broad a category, I'm saying that in practice, I haven't needed to have a smaller one.
Frankly, it seems to me that perhaps some people are quibbling about the word "emotion" because they have it labeled "bad", but I'm also using it to describe things they have labeled "good". Ergo, I must be using the word incorrectly. (I'd be happy to be wrong about that supposition, of course.)
From my perspective, though, it's a false dichotomy to split emotion in such a way -- it overcomplicates the necessary model of mind, rather than simplifying it.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-03-26T08:36:24.533Z · LW(p) · GW(p)
Frankly, it seems to me that perhaps some people are quibbling about the word "emotion" because they have it labeled "bad", but I'm also using it to describe things they have labeled "good". Ergo, I must be using the word incorrectly. (I'd be happy to be wrong about that supposition, of course.)
From my perspective, though, it's a false dichotomy to split emotion in such a way -- it overcomplicates the necessary model of mind, rather than simplifying it.
I don't believe anyone is thinking that.
↑ comment by jimrandomh · 2009-03-26T01:26:31.427Z · LW(p) · GW(p)
I agree that the word "emotion" as it's conventionally used is different from how it's used here, and overloading the term serves to confuse things, but there's a relation between the two meanings that's worth exploring.
To summarize pjeby's essay as best I can, we generate propositions, which when we think about them activate concepts like "useful" or "truthy", which are special cases of "good", or else activate concepts like "convoluted" or "absurd", which are special cases of "bad", and whether good or bad markers are active determines whether we continue along the same line or purge it from our working memory.
pjeby treats the goodness or badness of a concept in memory as being emotion, but conventional use of the word emotion refers instead to an aspect of mental state that adjusts the perceived goodness and badness of things as they are retrieved from or recorded to memory. This may be the mechanism by which concepts get tagged in the first place, but the relation between these two meanings is complex enough that assigning them the same word can lead to false conclusions.
Also, there are almost certainly more concepts basic to cognition than just the good/bad spectrum. Other possible tag spectra would be calm/excited, near/far, and certain/uncertain.
Replies from: pjeby
↑ comment by pjeby · 2009-03-26T02:05:29.404Z · LW(p) · GW(p)
To summarize pjeby's essay as best I can, we generate propositions, which when we think about them activate concepts like "useful" or "truthy", which are special cases of "good", or else activate concepts like "convoluted" or "absurd", which are special cases of "bad", and whether good or bad markers are active determines whether we continue along the same line or purge it from our working memory.
Yes, and I'm further arguing that these markers are somatic -- they exist to effect physical changes in the body.
pjeby treats the goodness or badness of a concept in memory as being emotion, but conventional use of the word emotion refers instead to an aspect of mental state that adjusts the perceived goodness and badness of things as they are retrieved from or recorded to memory.
I don't even begin to understand this sentence, since in my view, the goodness or badness is represented by emotion - i.e. a somatic marker. And the markers are somatic because in an evolutionary context, goodness or badness had to do with moving towards or away from things: I can eat that, that will eat me, that's a potential mate, etc.
In a sense, that's more or less the "root" system from which all other markers derive, although it's a mistake to treat it like some sort of logical hierarchy, when in fact it's just a collection of kludges upon kludges (like most everything else that evolution does).
Replies from: jimrandomh
↑ comment by jimrandomh · 2009-03-26T06:25:58.051Z · LW(p) · GW(p)
First I should clarify my rather ambiguous remark about mental states affecting perceived goodness and badness as things are retrieved or recorded from memory. What I mean is, the markers which we assign things depend partially on our state of mind. For example, we think of some things as dangerous (tigers, guns, ninjas), and some things as not-dangerous (puppies, phones, secretaries), but some things could go either way (spiders, bottles, policemen), depending on how they're interpreted. If you're feeling safe, then you'll tend to label the border cases as safe; if you're feeling frightened for unrelated reasons, the border cases will come up as dangerous. In other words, priming applies to somatic markers too, not just semantic ones. Or, as I put it in my previous post, emotional state adjusts the perceived goodness and badness of things as they are retrieved from memory.
If every time you think of something you feel frightened, then you will come to think of that thing as scary, even if the only reason you were frightened at the time was because of some irrelevant other thing. This is what I meant by saying that emotional state affects perceived goodness and badness as it's recorded to memory.
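A rough sketch of the two effects described here, with all numbers invented: mood shifts how ambiguous cases are read at retrieval time, and feeling fear while thinking of a thing nudges its stored tag at recording time:

```python
# Toy sketch of the two effects above (illustrative numbers only):
# (1) retrieval: current mood shifts how ambiguous things get labelled;
# (2) recording: feeling fear while thinking of a thing nudges its
#     stored danger score upward.

stored_danger = {"tiger": 0.9, "puppy": 0.1, "spider": 0.5}  # 0..1

def perceived_danger(thing, fear_level):
    """Retrieval: border cases lean toward the current mood."""
    base = stored_danger[thing]
    # A frightened mood (fear_level near 1) pushes ambiguous items up;
    # clear-cut cases barely move.
    return base + (fear_level - 0.5) * 0.4 * (1 - abs(base - 0.5) * 2)

def record_experience(thing, fear_level, rate=0.2):
    """Recording: repeated fear while thinking of a thing re-tags it."""
    stored_danger[thing] += rate * (fear_level - stored_danger[thing])

print(perceived_danger("spider", fear_level=0.9))  # reads as dangerous
print(perceived_danger("spider", fear_level=0.1))  # reads as safe
record_experience("spider", fear_level=0.9)
print(stored_danger["spider"])  # stored tag drifts upward
```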
Yes, and I'm further arguing that these markers are somatic -- they exist to effect physical changes in the body.
I'm not so sure about this. They certainly effect behaviors, and those behaviors may have physiological ramifications, but many markers have no effect or only indirect effects. Or you could say that each mechanism in the mind exists to support the body, since they co-evolved, but that would be like saying that my liver exists to support my left thumb; all parts of the body are interdependent, the brain included.
↑ comment by Roko · 2009-03-26T00:29:11.315Z · LW(p) · GW(p)
unwieldy overloading of the word emotion,
Agreed. I second the request to either taboo "emotion" or define it more precisely.
the counterfactual situation where you take away the emotions doesn't make much sense
Agreed. The human brain is too much of a mess to imagine subtracting "all traces of emotion" and still have a human brain. Also, the fuzziness of the word emotion makes it hard to decide the truth of such statements.
comment by NancyLebovitz · 2010-10-04T12:47:29.301Z · LW(p) · GW(p)
Are there sequels to this post?
comment by byrnema · 2009-03-26T00:37:48.980Z · LW(p) · GW(p)
I agree with the other posts. I had a distinctly negative somatic marker when I read the word 'emotion' and this discomfort made it impossible for me to carefully read the rest of the post. If I were required to (say, for work), then I would have to wait until the negative response attenuated -- usually it takes half a day or so to willfully erase a somatic marker.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-03-26T00:42:53.002Z · LW(p) · GW(p)
I find it takes effort to distinguish between the dogmatic enforcement of particular uses of brain related concepts like 'emotion' and the actual insights being shared.
comment by Jack · 2009-03-25T22:29:55.212Z · LW(p) · GW(p)
I'm not familiar with the psychological literature on emotions, but it's a little counter-intuitive (I think my brain is tagging it as annoying) to use the word emotions to describe all of these different tags. Maybe the process of tagging something "morally obligatory" is indistinguishable from tagging something "happy" on an fMRI but in common parlance and, I think phenomenologically, the two are different. Different enough to justify using a word other than emotion (which traditionally refers to a much smaller set of experiences). It is worth noting, for example, that we use normative terms to describe emotions -- jealousy bad, love good, etc. -- even though both can motivate decisions. I assume you have it that this is just the brain tagging motivations -- and maybe that's right, but in that case you probably want a different word.
Also, I assume you don't think highly of attempts to derive values from reason? I don't think such attempts have been especially successful, but it's not as if they haven't been tried. Are all such attempts just trying to describe our feelings in logicy-sounding ways?
Lastly, am I the only one who gets nervous when we rely heavily on programming metaphors? Seems like the sort of thing that could steer us terribly wrong.
Replies from: pjeby↑ comment by pjeby · 2009-03-25T22:43:58.804Z · LW(p) · GW(p)
Maybe the process of tagging something "morally obligatory" is indistinguishable from tagging something "happy" on an fMRI but in common parlance and, I think phenomenologically, the two are different.
You bet... but both are going to be tagged with somatic markers that are to some extent universal... and the same term may have both negative and positive markers attached.
I think, though, that you are thinking "morally obligatory" somehow allows you to cheat and pretend that you arrived at an idea through pure reasoning, when in fact, "morally obligatory" is just a word pasted on top of a different set of somatic markers. For example, it may have the same somatic markers that another person would call "righteous indignation", or perhaps "disgust"... or maybe even something representing "elevation" (See Haidt's "Happiness Hypothesis").
The fact that we put different words on the same somatic markers doesn't magically make them pure.
OTOH, if all you meant is that "happy" is likely to be more long-lived than "morally obligatory", I'm inclined to agree, subject to the caution that verbal labels are not somatic markers... and there exist people with negative somatic markers associated with good and happy things -- for example, if they believe those things cannot be attained by them.
I'll talk more about the relationship between somatic markers and toward/away motivation in future posts.
Are all such attempts just trying to describe our feelings in logicy-sounding ways?
I thought Eliezer had already more-or-less established this in his OB posts. In other words, yes. Human values are human values because they're based on human feelings. And our moral reasoning is motivated reasoning... not just because it's influenced by emotion, but also because verbal reasoning itself appears to have been evolved for the specific purpose of manipulating other people's emotions, while defending against others' attempts at manipulation.
But now I'm getting ahead of the series again.
comment by whpearson · 2009-03-25T21:56:24.232Z · LW(p) · GW(p)
Damasio's view of the brain is very interesting stuff. His book Descartes' Error is a fairly easy introduction to it.
This is my view of why the brain and reasoning works off usefulness and emotion.
Consider the genes'-eye view of the brain. You want to control what a very complex and changeable system does so that it will propagate more of you, so you find a way to hook what and how that system behaves into signals from the body such as hunger, discomfort and desire. Because you can directly control those signals, you can get it to do what you want. The genes don't want to directly control what the brain does; its purpose is to adapt on their behalf.
comment by timtyler · 2009-03-25T20:11:20.340Z · LW(p) · GW(p)
So: deep blue has emotions?!?
It seems like a definitional debate over what the term "emotion" means - without actually offering any definitions.
Replies from: pre, pjeby, Kaj_Sotala↑ comment by pre · 2009-03-25T20:19:13.623Z · LW(p) · GW(p)
Does Deep Blue have emotions?
Well, as I understand the way it works it does attach some kinda value to how 'good' any given board position is, then works through the tree of positions and finds the route to the 'best' of those positions.
Is that value an emotion?
Well no.
In a very single-dimensional way it might be a model of one though. I assume it's probably a single real number, maybe even an integer, rather than a complex set of semantic associations.
Deep Blue is seeking just "WIN!" and labelling potential board positions accordingly whereas humans are seeking "Fun" and "Happy" and "Enough sex" and "Intellectually Interesting" and "Not scary" and god knows how many other dimensions too.
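For concreteness, here is a generic sketch of the scheme pre is describing -- a single scalar evaluation per position plus a tree search that picks the route to the best reachable score -- not a claim about how Deep Blue was actually implemented:

```python
# Generic minimax sketch, not Deep Blue's actual code: one scalar
# "goodness" score per position, and a search that picks the move
# leading to the best reachable score.

def evaluate(position):
    """Scalar score for one side; a stand-in for a real evaluation
    function (material count, king safety, etc.)."""
    return position["material"]  # toy: material balance only

def minimax(position, depth, maximizing):
    moves = position["moves"]
    if depth == 0 or not moves:
        return evaluate(position)
    scores = [minimax(child, depth - 1, not maximizing) for child in moves]
    return max(scores) if maximizing else min(scores)

# Tiny hand-built game tree: two candidate moves from the root.
leaf_a = {"material": 1, "moves": []}
leaf_b = {"material": -2, "moves": []}
root = {"material": 0, "moves": [leaf_a, leaf_b]}
print(minimax(root, depth=1, maximizing=True))  # -> 1
```

The contrast pre draws is that this score is a single dimension chosen in advance, whereas the human tags span many dimensions at once.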
Replies from: pjeby↑ comment by pjeby · 2009-03-25T21:47:53.915Z · LW(p) · GW(p)
Most of those dimensions can actually be classified as "towards" or "away", though, which will be part of the subject of the next post in the series.
The important distinction for humans, though, is that emotions are "somatic markers" -- meaning that they are distinctions in the body, for purposes of organizing action responses. They aren't arbitrary scores, but more like "action stances" of varying degrees. So yes, they're multi-dimensional and all of the categories you mention (e.g. "intellectually interesting" and "enough sex") qualify... but they also largely group into (and layer on top of the machinery for) the somewhat-more-fundamental operators of "toward" and "away".
↑ comment by pjeby · 2009-03-25T21:56:52.626Z · LW(p) · GW(p)
So: deep blue has emotions?!?
Sort of. It has hardware support for scoring the value of specific positions... which is actually an awful lot like the brain's sorting and tagging, albeit considerably more crude.
It seems like a definitional debate over what the term "emotion" means - without actually offering any definitions.
Which is one reason I like the "somatic marker" term, when we're talking about this -- it highlights their nature as action postures in the physical body, being used as a scoring and sorting system. The fact that we call some of these markers "emotions" isn't really all that relevant.
(Also, "somatic marker" helps to avoid some rationalists' existing negative emotional tags on the idea of "emotions" being involved in reasoning.)
↑ comment by Kaj_Sotala · 2009-03-25T20:46:51.002Z · LW(p) · GW(p)
So: deep blue has emotions?!?
I think what Fellous says about "emotions" in machines is pretty good. As summarized by Browne:
If robots are to benefit from mechanisms that have a similar role to emotions it is suggested to use internal variables [Michaud et al. 01]. However, Fellous warns that an isolated emotion is simply an engineering hack, i.e. simply describing a single, isolated internal variable as an emotion could be descriptive or anthropomorphic, but not biologically inspired [04]. Instead, interrelated emotions, expressed due to resource mobilisation with context dependent computations dependent on perceived expression is more realistic. A consequence of this is that an artificial system must have limited resources in order to express emotions. These emotions may appear different if expressed externally or internally, but are very related due to their underlying mechanisms.
Thus robot-emotions should be built from the following guidelines [ibid]:
emotions are not a separate centre that computes a value on some predefined dimension
emotions should not be a result of cognitive evaluation (if state then this emotion)
emotions are not combinations of some pre-specified basic emotion (emotions are not independent from each other)
emotions should have temporal dynamics and interact with each other.
System wide control of some of the parameters (of the many ongoing, parallel processes) that determine the robot behaviour.
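One possible reading of these guidelines, sketched as code (my own toy interpretation, not anything from Fellous or Browne): several internal variables with decay and cross-coupling, so that no single value is an isolated "emotion" and the variables have temporal dynamics that interact:

```python
# Rough sketch, not from Fellous or Browne: interrelated internal
# variables with temporal dynamics. Each variable decays over time,
# is driven by events, and leaks into the others, so no single value
# is an isolated "emotion". All names and weights are invented.

state = {"arousal": 0.2, "frustration": 0.0, "confidence": 0.5}

COUPLING = {  # how a change in one variable bleeds into the others
    "frustration": {"arousal": 0.5, "confidence": -0.3},
    "confidence": {"arousal": -0.1},
    "arousal": {},
}

def step(state, inputs, decay=0.9):
    """One time step: decay, external input, then cross-coupling."""
    new = {k: v * decay + inputs.get(k, 0.0) for k, v in state.items()}
    for source, targets in COUPLING.items():
        delta = new[source] - state[source]
        for target, weight in targets.items():
            new[target] += weight * delta
    return new

# A goal keeps failing: frustration rises, arousal follows, confidence drops.
for _ in range(5):
    state = step(state, {"frustration": 0.3})
print(state)
```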
comment by Vanilla_cabs · 2021-05-19T19:55:16.901Z · LW(p) · GW(p)
While I agree with the gist, I'm looking forward to a more detailed vision of emotions. This current post gives the false impression that emotions are neatly symmetrical and one-dimensional (good-bad). In reality there are multiple dimensions to emotions (desirable-undesirable, pleasurable-displeasing), and they're not clearly symmetrical. If fear is the symmetrical opposite of desire, then what is disgust?
Emotions are action triggers and regulators that existed way before cognition did. We might mistakenly believe that they help our cognition by sorting stimuli in good/bad categories, while in reality it's the opposite. Cognition is just a computer that's been added on top of our emotional brain to serve (allow me that one emphasis) as an assisting tool.
comment by Vanilla_cabs · 2021-05-19T19:41:46.441Z · LW(p) · GW(p)
For the embodiment of pure rationality, why not simply a computer? Everyone knows one, we can all see that you put whatever you want on one end depending on your goals and values, and it very rationally obeys those commands to the letter, without taking initiatives. Well, it used to be that way, at least.
comment by SilasBarta · 2009-03-26T03:16:17.149Z · LW(p) · GW(p)
I enjoyed reading this, pjeby. It answered and tied together a lot of the things I'd wondered since I started reading about artificial intelligence. I won't spell out the relationship between your post and the issues (this will be long anyway...), but I'll list the connections I saw and what it brought to mind:
-How evolution's "shards of desire" translate into actions.
-What it would mean to, as evolutionary psychology attempts to do, "explain emotions by the behaviors they induce". And similarly, what feelings an animal would have to get in order to execute complex but innate activities (e.g. cats burying excrement, beavers building dams).
-That paper that got a lot of attention by talking about the five dimensions of moral reasoning, which went into how people in certain cultures feel physically ill at the thought of failing their duties.
-The issue of "men are more rational, women more emotional". I had thought that it would be more accurate to distinguish by reductionist/holist, i.e., what we call "emotional" means basing judgments on a broader array of factors that are aggregated automatically by the brain through useful heuristics.
-The standard model of an "agent" in AI -- whereby it takes in sense data, models its environment, makes predictions, and selects actions that optimize some utility function (a toy sketch of that loop follows below). This had long seemed to me like the wrong way to go about the problem, at least because of all the infinite regress one runs into. I figured that very simple mechanisms go through crude but effective versions of this (e.g. a mass/spring oscillating back to its equilibrium position), and emotions in the sense you mean are another way to build up to a non-regressing agent.
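A minimal sketch of that textbook agent loop, purely illustrative and not a claim about any particular AI system: sense, update a model, predict outcomes of candidate actions, pick the one with the highest expected utility.

```python
# Minimal sense-model-predict-act sketch; all functions are toy
# stand-ins, not any real system's API.

def agent_step(observation, model, actions, utility):
    model = update_model(model, observation)           # model the environment
    scored = [(utility(predict(model, a)), a) for a in actions]
    return max(scored)[1], model                       # greedy action choice

# Toy instantiation: a thermostat-like agent tracking a temperature.
def update_model(model, observation):
    return {"temp": observation}

def predict(model, action):
    return model["temp"] + {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]

def utility(predicted_temp, target=21.0):
    return -abs(predicted_temp - target)

action, model = agent_step(observation=19.0, model={},
                           actions=["heat", "cool", "idle"], utility=utility)
print(action)  # -> "heat"
```

The infinite-regress worry SilasBarta raises is about where the utility function and the modelling machinery themselves come from -- something this loop simply takes as given.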
comment by Annoyance · 2009-03-26T19:46:14.101Z · LW(p) · GW(p)
Read Diane Duane's "Spock's World". It goes to great lengths to correct the error you're making.
Among other things, it suggests that the word usually translated as "suppression of emotion" actually means something closer to "passion's mastery", and that the Vulcan ethos is to recognize and compensate for emotions instead of, as many seem to believe, denying them.
Also, as awesome as Kai is, Data is clearly a better example of a functioning rational being without emotions. Data isn't lacking in preferences, goals, and motivations. But he does lack specific, complex states that humans possess. He is perfectly capable of being wary around a danger, but he lacks fear. He has ethical principles and will kill to enact them if necessary, but he neither becomes angry nor experiences hatred.
Replies from: thomblake↑ comment by thomblake · 2009-03-27T21:12:35.625Z · LW(p) · GW(p)
I completely agree with you about Data. pjeby is begging a question w.r.t. metaethics - he assumes that judgments of 'good' and 'bad' have only emotional content, apparently based solely on the fact that they are correlated with emotions (we call that emotivism - it's not very popular amongst ethicists).
Replies from: pjeby↑ comment by pjeby · 2009-03-27T21:56:06.122Z · LW(p) · GW(p)
I didn't say anything about meta-ethics; I said that human brains require emotion in order to prioritize their thinking... no matter how much you might like the case to be otherwise. The brain with which you seek to devise some sort of extra-emotional calculation requires emotion in order to perform those calculations.
That doesn't say anything about the content of the calculations themselves, however. Your brain needs emotion to learn chess or play it... but that doesn't mean that chess itself is emotional. So there's your escape hatch.
comment by Cameron_Taylor · 2009-03-26T00:19:44.143Z · LW(p) · GW(p)
Kai has no goals or cares of his own, frequently making such comments as "the dead do not want anything", and "the dead do not have opinions". He mostly does as he's asked, but for the most part, he just doesn't care about anything one way or another.
The way Kai is described certainly matches what an unemotional and goalless yet powerfully rational creature would be. Yet somehow, the authors manage to slip in a remarkable amount of goal direction and 'caring'. We just can't help but assume that amoral, inhuman creatures would take on human characteristics if we socialised them enough.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-03-26T00:44:31.311Z · LW(p) · GW(p)
Emotionless (in the normal sense of the word), maybe. Goalless, no. What defines the decisions to follow requests? What defines the specific manner in which they are followed? How is your request to be understood? These all depend on how the agent in question sees the world, and on its preference to act this way and not another. The goal-less agent is not an apathetic zombie servant, but a rock.
Replies from: pjeby