The language of desire (post 2 of 3)

post by torekp · 2015-05-03T21:57:33.050Z · LW · GW

To the extent that desires explain behavior, it is primarily by meshing with beliefs to favor particular actions.  For example, if I desire to lose 5 lbs, and I believe that exercising a half hour per day will cause me to lose 5 lbs, then this belief-desire pair makes it more likely that I will exercise.  Beliefs have semantic content.  In order to explain an action as part of a belief-desire pair, a desire must also have semantic content: one that in some sense "matches" a relevant part of the belief.  In the example, the matching semantic content is "to lose 5 lbs".
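The "meshing" of a belief-desire pair can be pictured with a toy sketch. This is my own illustrative construction, not anything from the post: an action is favored exactly when some belief's expected outcome matches the semantic content of some desire.

```python
# A toy illustration (my construction, not the author's) of how a belief and a
# desire "mesh" via matching semantic content to favor an action.

beliefs = [("exercise 30 min/day", "lose 5 lbs")]  # (action, expected outcome)
desires = ["lose 5 lbs"]

def favored_actions(beliefs, desires):
    # An action is favored when a belief's expected outcome
    # matches the semantic content of some desire.
    return [action for action, outcome in beliefs if outcome in desires]

print(favored_actions(beliefs, desires))  # ['exercise 30 min/day']
```

The point of the sketch is only that the matching happens at the level of content: the desire "to lose 5 lbs" does nothing to favor exercise unless some belief links exercise to that same content.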

Of course, desires can also explain some behaviors without functioning as part of a belief-desire pair.  I might want something so badly, I start trembling.  No beliefs are required to explain the trembling.  Also, notably, desires usually (but not always) feel like something.  We gesture in the vague direction of these feelings by talking about "a burning desire", or "thirst" (for something that is not a drink), etc.  In these ways, "desire" is a richer concept than what I am really after, here.  That's OK, though; I'm not trying to define desire.  Alternatively, we can talk about "values", "goals", or "utility functions" - anything that interacts with beliefs, via semantics, to favor particular actions.  I will mostly stick to the word "desire", but nothing hangs on it.

So how does this "semantics" stuff work?

Let me start by pointing to the Sequences.  EY explains it pretty well, with some help from pragmatist in the comments.  Like EY, I subscribe to the broad class of causal theories of mental content.  For our purposes here, we need not choose among them.

The reference of the concepts involved in desires (or goals) is determined by their causal history.  To take a simple example, suppose Allie has a secret admirer.  It's Bob, but she doesn't know this.  Bob leaves her thoughtful gifts and love letters, which Allie appreciates so much that she falls in love with "my secret admirer".  She tells all her friends, "I can't wait to meet my secret admirer and have a torrid affair!"  Allie's desire refers to Bob, because Bob is the causal source of all the gifts and love letters, which in turn caused Allie's relevant thoughts and desires.

We could have told the story differently, in a way that made the reference of "my secret admirer" doubtful, or even hopeless.  We could have had many secret admirers, or maybe some pranksters, leaving different gifts and notes at different times, with Allie mistakenly attributing them all to a single source.  But that would be mean, and in this context pointless.  Let's not tell that story.

Kaj_Sotala brings up another point about desire, which I'd like to quote at length:


In most artificial RL [reinforcement learning] agents, reward and value are kept strictly separate. In humans (and mammals in general), this doesn't seem to work quite the same way. Rather, if there are things or behaviors which have once given us rewards, we tend to eventually start valuing them for their own sake. If you teach a child to be generous by praising them when they share their toys with others, you don't have to keep doing it all the way to your grave. Eventually they'll internalize the behavior, and start wanting to do it. One might say that the positive feedback actually modifies their reward function, so that they will start getting some amount of pleasure from generous behavior without needing to get external praise for it. In general, behaviors which are learned strongly enough don't need to be reinforced anymore (Pryor 2006).


A desire can form, with a particular referent based on early experience, and remain focused on that object or event-type permanently.  That's a point I will make much hay of in my third and final post in this mini-sequence.  I think it explains why Gasoline Gal's desire is not irrational - and neither are some desires that many people have regarding my real target subject.
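The reward/value distinction Kaj_Sotala describes can be sketched in a toy model. This is my own illustrative construction (the update rules and numbers are assumptions, not from the post or from Pryor 2006): a standard RL learner has a fixed, external reward function, so its learned value decays once the reward stops; an "internalizing" learner shifts its reward function toward the praised behavior, so the behavior persists after the praise ends.

```python
# A toy model (my own construction, not from the post or from Pryor 2006)
# contrasting a standard RL learner, whose reward function is fixed and
# external, with a learner that gradually internalizes reward, as in the
# quoted praise-and-generosity example.

def train(steps, external_reward, internalize=False, lr=0.1):
    """Learn a value estimate for one behavior; optionally internalize reward."""
    value, intrinsic = 0.0, 0.0
    for _ in range(steps):
        reward = external_reward + intrinsic
        value += lr * (reward - value)              # standard value update
        if internalize:
            # Repeated external reward shifts the reward function itself.
            intrinsic += lr * (external_reward - intrinsic)
    return value, intrinsic

def extinguish(value, intrinsic, steps=100, lr=0.1):
    """Continue updating after the external reward (praise) stops."""
    for _ in range(steps):
        value += lr * (intrinsic - value)           # only intrinsic reward remains
    return value

# Phase 1: the behavior is praised (external reward = 1.0).
v_std, i_std = train(100, 1.0, internalize=False)
v_hum, i_hum = train(100, 1.0, internalize=True)

# Phase 2: the praise stops.
print(round(extinguish(v_std, i_std), 3))  # ~0.0: behavior extinguishes
print(round(extinguish(v_hum, i_hum), 3))  # ~1.0: behavior persists unrewarded
```

The design choice doing the work is the single line that lets reinforcement modify the reward function rather than just the value estimate; that is the formal analogue of the child coming to want generosity for its own sake.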

3 comments


comment by Shmi (shminux) · 2015-05-03T22:18:32.727Z · LW(p) · GW(p)

This seems to me a much weaker post than the one before. Or maybe I am confused. Is your point that people can value something that does not exist? Or is an abstract concept? Or is not related to the external world?

comment by torekp · 2015-05-05T01:14:21.762Z · LW(p) · GW(p)

None of those (I guess I'm not sure I understand the last). Those are the hard cases (or the inapplicable cases) for a causal theory of reference. Please confine your attention to valuings of objects/processes that do exist, concretely.

Maybe it was unclear. I'm just trying to make some general remarks about semantics, as applied to desires. I'm suggesting that it typically works the same way the concepts involved in beliefs get their referents - through the causal patterns that underlay the concepts as they were learned.

I'm trying to sneak as many premises as I can under the radar before people see where the argument is going. That way, any objections are more likely to be motivated by the general implausibility of the premises rather than by dislike of the conclusion.

comment by listic · 2015-05-04T19:44:22.199Z · LW(p) · GW(p)

Let's hope part 3 will make sense of all of it.