Goals vs. Rewards

post by icebrand · 2011-01-04T01:43:52.981Z · 11 comments

Related: Terminal Values and Instrumental Values, Applying behavioral psychology on myself

Recently I asked myself, what do I want? My immediate response was that I wanted to be less stressed, particularly for financial reasons. So I started to affirm to myself that my goal was to become wealthy, and also to become less stressed. But then in a fit of cognitive dissonance, I realized that both money and relaxation are most easily considered in terms of being rewards, not goals. I was oddly surprised by the fact that there is a distinction between the two concepts to begin with.

It later occurred to me to wonder if some things work better when framed as goals rather than as rewards. Freedom, long life, good relationships, and productivity seemed like likely candidates. I can't quite see them as rewards because a) I feel everyone innately deserves and should have them (even though they might have to work for them), and b) they don't quite give the kind of fuzzies that motivate immediate action.

These two kinds of positive motivation seem to work in psychologically dissimilar ways. Money, for example, is more like chocolate: something one has an immediate instinctive motive to obtain and consume. Freedom of speech is more along the lines of having enough air to breathe: a person needs, and perhaps inherently deserves, to have at least a little of it all the time, and as a general rule will have a constant background motive to ensure that it stays available. It's a longer-term form of motivation.

A reward seems to be something where you receive immediate fuzzies when you obtain it. Getting paid, getting a pat on the back, getting your posts and comments upvoted... things you might consider more or less optional in the grander scheme, yet which tend to trigger an immediate sense of positive anticipation before the event, reinforced by a sense of satisfaction after. Actually writing a good post or comment, actually doing a good job, being a good spouse or friend -- these are surely related, but are goals in and of themselves. The mental picture for a goal is one of achieving, as opposed to receiving.

One thing that seems likely to me is that shared goals (and the communication thereof) tend to be a good way to generate long-term social bonds. Rewards seem to be more useful for deliberately steering behavior in specific respects. Both are thus important elements of social signaling within a tribe, but they serve different underlying purposes.

As an example I have the transhumanist goal of eliminating the current limitations of the human lifespan, and tend to have an affinity for people who also internalize that goal. But someone who does not embrace that goal on a deep level may still display specific behavior that I consider helpful for that goal, e.g. displaying comprehension of its internal logic or having a tolerant attitude towards actions I think need to be taken. I'm probably somewhat less likely to form a long-term relationship with that person than if they were identifiable as a fellow transhumanist, but I am still likely to upvote their comments or otherwise signal approval in ways that don't demand too much long term commitment.

The distinction I've drawn here between a goal and a reward might not apply directly to non-human intelligences. In fact, in a more generalized context it might be misleading to call a reward something other than a goal (it is at least an implicit goal or value). However, the distinction still seems relevant to instrumental rationality and personal development. Our brains process the two forms of motivational anticipation in different ways. It may be that part of the akrasia problem -- failure to take action towards a goal -- actually stems from a failure to properly categorize a given motive, and hence a failure to process it usefully.


Thanks to the early commenters for their feedback: TheOtherDave, nornagest, endoself, David Gerard, nazgulnarsil, and Normal Anomaly. Hopefully this expanded version is more clear.

11 comments


comment by Nornagest · 2011-01-04T02:12:18.254Z

A lot of this seems to be driven by the terminal goals/instrumental values divide, albeit with different framing.

In that context, money looks to me like an instrumental value common to many goals but not a terminal goal in its own right: beyond the point of financial stability, money can't be expected to carry much intrinsic utility if it remains unused. Same goes for most of the rest of the things you list.

Relaxation's an exception, though; I can't think of any supergoals that "lack of stress" could serve, unless you count values as vague as "happiness".

Replies from: endoself
comment by endoself · 2011-01-04T02:34:22.114Z

I don't think this is the same distinction. Instrumental vs. terminal is not specific to humans, but this seems to be about how different types of motivation affect human psychology. Goals seem to correspond to far mode motivation, abstractly causing something to be planned for in the long term, while rewards are near-mode; they are explicitly caused by certain actions and motivate immediate action. Rewards also seem to be the kind of thing that behavioral psychology describes, and that can be harnessed using the techniques in http://lesswrong.com/lw/2dg/applying_behavioral_psychology_on_myself/ .

comment by David_Gerard · 2011-01-04T14:41:52.277Z

It's the recursive question "what do I want?"/"what would I want if I got that?"

"What do I want?" is the key question to everything. (More or less.)

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-04T15:25:29.369Z

What would you do if you got an answer?

comment by TheOtherDave · 2011-01-04T15:23:52.708Z

Agreed with everybody that "goals vs rewards" as you're using it here maps pretty well to terminal and instrumental values as they've been discussed elsewhere.

Of course, I can intentionally adopt something as a goal even if it's really an instrumental value. E.g., I can adopt the goal of making a comprehensive to-do list, even though the only reason I value to-do lists is because they help me achieve something else.

But a purist would claim that this is actually a "sub-goal"; if it became a goal in its own right, I'd have lost sight of my purpose (e.g., I'd start collecting to-do lists as a hobby).

For my own part, I think "terminal value" is a concept without much real-world application, useful only as an idealization. In practice, I suspect all values are instrumental, and exist in a mutually reinforcing network, and we label as "terminal values" those values we don't want to (or don't have sufficient awareness to) decompose further. (And, by the same token, I think all goals are "sub-goals".)

But that's not a mainstream position as far as I know.

comment by nazgulnarsil · 2011-01-04T02:09:37.627Z

I'm not grokking the distinction.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-01-04T02:24:02.039Z

I'll take a stab at explaining it quickly. If that doesn't do it, read this.

Instrumental values are things done as means to an end. Terminal values are ends in themselves. For instance, buying a coat is an instrumental value: I buy a coat so I do not feel uncomfortable or get sick when outside in cold weather. If there were a way to be outside in cold weather without a coat and suffer no ill effects, I wouldn't bother. On the other hand, good health is a terminal value. I do other things so that I can be healthy. There isn't any thing X that health leads to such that I wouldn't prefer health to sickness+X. Health also helps me do other things, but the fact that I desire it in and of itself makes it a terminal value.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-04T15:29:52.673Z

"There isn't any thing X that health leads to such that I wouldn't prefer health to sickness+X."

Restating: there isn't any X such that

  • You would be willing to risk getting sick in order to achieve X, and

  • Being healthy increases your chances of succeeding at X?

Did I do violence to your meaning in the restatement? (I didn't intend to.)

If not: interesting. I think that is false for a great many people for a great many Xes. The one that comes to mind most readily is bearing and raising children.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-01-04T15:38:08.040Z

I think you did change my meaning. There are tradeoffs between terminal values. There are frequently cases where one would be willing to sacrifice some of one for some of another. What I mean is that there is no X for which I would not prefer health+X to sickness+X. That is, ceteris paribus, I would always rather be healthy than sick, even if the consequences were the same. The thing is, any terminal value has other desirable consequences when achieved. But if I would value the thing even without those consequences (e.g., if I were as productive when sick as when healthy), then it's a terminal value.
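
To make the two readings explicit, here is a minimal formal sketch, assuming a preference relation ≻ ("is preferred to") over outcomes, with H standing for being healthy, S for being sick, and X for some other outcome:

\[
\text{Reading 1 (the literal grammar):}\quad \neg\exists X :\ (S \wedge X) \succ H
\]
\[
\text{Reading 2 (the ceteris paribus intent):}\quad \forall X :\ (H \wedge X) \succ (S \wedge X)
\]

Reading 1 says no X is worth trading health away for; Reading 2 says that with the same X added to both sides, health is still preferred -- the sense intended above.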

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-04T15:50:55.218Z

Ah! I see.

I understood "prefer health to sickness + X" as "prefer (health) to (sickness + X)" rather than as "prefer health to sickness, even if X is added to both sides."

Thanks for the clarification.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-01-04T15:52:56.878Z

I intended the grammar as you understood it, which was an oversimplification. Thanks for helping me clarify my own thoughts.