The Neuroscience of Desire

post by lukeprog · 2011-04-09T19:08:12.446Z · LW · GW · Legacy · 30 comments

Contents

    The birth of neuroeconomics
  Valuation and choice in the brain
  Self-help
    Notes
    References

Who knows what I want to do? Who knows what anyone wants to do? How can you be sure about something like that? Isn’t it all a question of brain chemistry, signals going back and forth, electrical energy in the cortex? How do you know whether something is really what you want to do or just some kind of nerve impulse in the brain? Some minor little activity takes place somewhere in this unimportant place in one of the brain hemispheres and suddenly I want to go to Montana or I don’t want to go to Montana.

- Don DeLillo, White Noise

Winning at life means achieving your goals; that is, satisfying your desires. As such, it will help to understand how our desires work. (I was tempted to title this article The Hidden Complexity of Wishes: Science Edition!)

Previously, I introduced readers to the neuroscience of emotion (affective neuroscience), and explained that the reward system in the brain has three major components: liking, wanting, and learning. That post discussed 'liking' or pleasure. Today we discuss 'wanting' or desire.

 

The birth of neuroeconomics

Much work has been done on the affective neuroscience of desire,1 but I am less interested in desire as an emotion than in desire as a cause of decisions under uncertainty. This latter aspect of desire is mostly studied by neuroeconomics,2 not affective neuroscience.

From about 1880-1960, neoclassical economics proposed simple, axiomatic models of human choice-making focused on the idea that agents make rational decisions aimed at maximizing expected utility. In the 1950s and 60s, however, economists discovered some paradoxes of human behavior that violated the axioms of these models.3 In the 70s and 80s, psychology launched an even broader attack on these models. For example, while economists assumed that choices among objects should not depend on how they are described ('descriptive invariance'), psychologists discovered powerful framing effects.4

In response, the field of behavioral economics began to offer models of human choice-making that fit the experimental data better than the simple models of neoclassical economics did.5 Behavioral economists often proposed models that could be thought of as information-processing algorithms, so neuroscientists began looking for evidence of these algorithms in the human brain, and neuroeconomics was born.

(Warning: the rest of this post assumes some familiarity with microeconomics.)

 

Valuation and choice in the brain

Despite their differences, models of decision-making from neoclassical economics,6 behavioral economics,7 and even computer science8 share a common conclusion:

Decision makers integrate the various dimensions of an option into a single measure of its idiosyncratic subjective value and then choose the option that is most valuable. Comparisons between different kinds of options rely on this abstract measure of subjective value, a kind of 'common currency' for choice. That humans can in fact compare apples to oranges when they buy fruit is evidence for this abstract common scale.9

Though economists tend to claim only that agents act 'as if' they use the axioms of economic theory to make decisions,10 there is now surprising evidence that subjective value and economic choice are encoded by particular neurons in the brain.11

More than a dozen studies show that the subjective utility of different goods or actions is encoded on a common scale by the ventromedial prefrontal cortex and the striatum in primates (including humans),12 as is temporal discounting.13 Moreover, the brain tracks forecasted and experienced value, probably for the purpose of learning.14 Researchers have also shown how modulation of a common value signal could account for loss aversion and ambiguity aversion,15 two psychological discoveries that had threatened standard economic models of decision-making. Finally, subjective value is learned via iterative updating (after experience) in dopaminergic neurons.16
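
To make the "iterative updating" idea concrete, here is a minimal sketch of a Rescorla-Wagner / temporal-difference style value update of the kind the dopamine literature cited above describes. The function names, learning rate, and reward values are my own illustrative choices, not taken from any cited paper.

```python
# Minimal illustrative sketch (not from any cited paper): a value estimate is
# nudged toward experienced reward by a prediction error, the kind of signal
# the dopaminergic neurons cited above appear to carry.

def update_value(value_estimate: float, experienced_reward: float,
                 learning_rate: float = 0.1) -> float:
    """Return an updated value estimate after one experience."""
    prediction_error = experienced_reward - value_estimate  # "better/worse than forecast"
    return value_estimate + learning_rate * prediction_error

# Repeated experiences of a reward worth 1.0 pull an initially naive estimate upward.
estimate = 0.0
for _ in range(20):
    estimate = update_value(estimate, experienced_reward=1.0)
print(round(estimate, 3))  # ~0.878 after 20 trials, converging toward 1.0
```

When the prediction error is zero, i.e. experience matches the forecast, the estimate stops changing, which is one way of reading the claim that the brain tracks forecasted and experienced value for the purpose of learning.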

Once a common-currency valuation of goods and actions has been performed, how is a choice made between them? Evidence implicates (at least) the lateral prefrontal and parietal cortex in a process that includes neurons encoding probabilistic reasoning.17 Interestingly, while valuation structures encode absolute (and thus transitive) subjective value, choice-making structures "rescale these absolute values so as to maximize the differences between the available options before choice is attempted,"18 perhaps via a normalization mechanism like the one discovered in the visual cortex.19
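
As a toy illustration of the rescaling idea, here is a sketch of divisive normalization in the spirit of Heeger (1992), applied to subjective values rather than visual responses. The functional form and the constant sigma are illustrative assumptions, not a model drawn from the cited papers.

```python
# Toy sketch of divisive normalization (in the spirit of Heeger 1992), applied
# here to subjective values: each option's value is rescaled by the total value
# on offer, so the representation depends on the choice set, not just on the
# option itself. The constant sigma is an arbitrary illustrative choice.

def normalize_values(values, sigma=1.0):
    """Rescale absolute values relative to the summed value of the choice set."""
    total = sigma + sum(values)
    return [v / total for v in values]

print(normalize_values([10.0, 9.0]))  # [0.5, 0.45]: close options stay close after rescaling
print(normalize_values([10.0, 1.0]))  # [0.833..., 0.083...]: the better option stands out
```

This toy version only shows context-dependent rescaling relative to the choice set; the cited papers, not this sketch, are the source for how such a mechanism plays out in the choice circuitry.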

Beyond these basic conclusions, many open questions and controversies remain.20 The hottest debate today concerns whether different valuation systems encode inconsistent values for the same actions (leading to different conclusions on which action to take),21 or whether different valuation systems contribute to the same final valuation process (leading to a single, unambiguous conclusion on which action to take).22 I think this race is too close to call, though I lean toward the latter model due to the persuasive case made for it by Glimcher (2010).

Despite these open questions, 15 years of neuroeconomics research suggests that an impressive reduction from economics to psychology to neuroscience may be possible, resulting in something like this23:

[Figure 16.1 from Glimcher (2010): a diagram linking concepts from economics, psychology, and neuroscience.]

Self-help

With this basic framework in place, what can the neuroscience of desire tell us about how to win at life?

  1. Wanting is different from liking, and we don't only want happiness or pleasure.24 Thus, the perfect hedonist might not be fully satisfied. Pay attention to all your desires, not just your desires for pleasure.
  2. In particular, you should subject yourself to novel and challenging activities regularly throughout your life. Doing so keeps your dopamine (motivation) system flowing, because novel and challenging circumstances drive you to act and find solutions, which in turn leads to greater satisfaction than do 'lazy' pleasures like sleeping and eating.25
  3. In particular, doing novel and challenging activities with your significant other will help you experience satisfaction together, and improve bonding and intimacy.26
  4. Your brain generates reward signals when experienced value surpasses forecasted value.14 So: lower your expectations and your brain will be pleasantly surprised when things go well. Things going perfectly according to plan is not the norm, so don't treat it as if it is.
  5. Many of the neurons involved in valuation and choice have stochastic features, meaning that when the subjective utilities of two or more options are similar (represented in the brain by neurons with similar firing rates), we sometimes choose to do something other than the action that has the most subjective utility.27 In other words, we sometimes fail to do what we most want to do, even if standard biases and faults (akrasia, etc.) are considered to be part of the valuation equation. So don't beat yourself up if you have a hard time choosing between options of roughly equal subjective utility, or if you feel you've chosen an option that does not have the greatest subjective utility. (A toy illustration of this point follows the list.)
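
The last point can be made concrete with a toy sketch. The softmax choice rule and temperature below are my own illustrative assumptions, not something claimed by the cited work; the sketch just shows that when two options carry similar value signals, the lower-valued one gets chosen surprisingly often, while large value gaps make such "mistakes" rare.

```python
# Toy sketch (softmax rule and temperature are illustrative assumptions):
# noisy, value-proportional choice picks the slightly worse of two similar
# options quite often, but rarely picks a clearly worse option.
import math
import random

def choose(values, temperature=1.0):
    """Return the index of a chosen option, with probability increasing in value."""
    weights = [math.exp(v / temperature) for v in values]
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(values) - 1

trials = 10_000
close_values = [1.0, 1.1]   # nearly indistinguishable options
far_values = [1.0, 3.0]     # one option clearly better
print(sum(choose(close_values) == 0 for _ in range(trials)) / trials)  # ~0.48: the worse option wins often
print(sum(choose(far_values) == 0 for _ in range(trials)) / trials)    # ~0.12: the worse option rarely wins
```

The exact probabilities depend entirely on the assumed temperature; the qualitative point is just that near-ties get decided noisily.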

The neuroscience of desire is progressing rapidly, and I have no doubt that we will know much more about it in another five years. In the meantime, it has already produced useful results.

And the neuroscience of pleasure and desire is not only relevant to self-help, of course. In later posts, I will examine the implications of recent brain research for meta-ethics and for Friendly AI.

 

 

Notes

1 Berridge (2007); Leyton (2009).

2 Good overviews of neuroeconomics include: Glimcher (2010, 2009); Glimcher et al. (2008); Kable & Glimcher (2009); Glimcher & Rustichini (2004); Camerer et al. (2005); Sanfey et al. (2006); Politser (2008); Montague (2007). Berns (2005) is an overview from a self-help perspective.

3 Most famously, the Allais Paradox (Allais, 1953) and the Ellsberg paradox (Ellsberg, 1961). Eliezer wrote three posts on the Allais paradox.

4 Tversky & Kahneman (1981).

5 The most famous example is Prospect Theory (Kahneman & Tversky, 1979).

6 von Neumann & Morgenstern (1944).

7 Kahneman & Tversky (1979).

8 Sutton & Barto (1998).

9 Kable & Glimcher (2009).

10 Friedman (1953); Gul & Pesendorfer (2008).

11 Kable & Glimcher (2009) is a good overview, as are sections 2 and 3 of Glimcher (2010).

12 Kable & Glimcher (2009); Padoa-Schioppa & Assad (2006, 2008); Takahashi et al. (2009); Lau & Glimcher (2008); Samejima et al. (2005); Plassmann et al. (2007); Hare et al. (2008); Hare et al. (2009).

13 Kable & Glimcher (2007); Louie & Glimcher (2010).

14 Rutledge et al. (2010); Delgado (2007); Knutson & Cooper (2005); O’Doherty (2004).

15 Fox & Poldrack (2008); Tom et al. (2007); Levy et al. (2007); Levy et al. (2010).

16 Niv & Montague (2009); Schultz et al. (1997); Tobler et al. (2003, 2005); Waelti et al. (2001); Bayer & Glimcher (2005); Fiorillo et al. (2003, 2008); Kobayashi & Schultz (2008); Roesch et al. (2007); D'Ardenne et al. (2008); Zaghloul et al. (2009); Pessiglione et al. (2006).

17 For technical reasons, most of this work has been done on the saccadic-control system: Glimcher & Sparks (1992); Basso & Wurtz (1998); Dorris & Munoz (1998); Platt & Glimcher (1999); Yang & Shadlen (2007); Dorris & Glimcher (2004); Sugrue et al. (2004); Shadlen & Newsome (2001); Churchland et al. (2008); Kiani et al. (2008); Wang (2008); Kable & Glimcher (2007); Yu & Dayan (2005). But Glimcher (2010) provides some reasons to think these results will generalize.

18 Kable & Glimcher (2009).

19 Heeger (1992).

20 See Kable & Glimcher (2009), and the final chapter of Glimcher (2010). Neuroeconomists are also beginning to model how game-theoretic calculations occur in the brain: Fehr & Camerer (2007); Lee (2008); Montague & Lohrenz (2007); Singer & Fehr (2005).

21 Balleine et al. (2008); Bossaerts et al. (2009); Daw et al. (2005); Dayan and Balleine (2002); Rangel et al. (2008).

22 Glimcher (2009); Levy et al. (2010).

23 Figure 16.1 from Glimcher (2010).

24 Smith et al. (2009).

25 Berns (2005) provides a popular-level overview of the evidence. Some of the relevant research papers include: Berns et al. (2001); Benjamin et al. (1996); Kempermann et al. (1997).

26 Aron et al. (2000, 2003).

27 See chapters 9 and 10 of Glimcher (2010).

 

References

Allais (1953). Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école Américaine. Econometrica, 21: 503-546.

Aron, Norman, Aron, McKenna, & Heyman (2000). Couples' shared participation in novel and arousing activities and experienced relationship quality. Journal of Personality and Social Psychology, 78: 273-283.

Aron, Norman, Aron, & Lewandowski (2003). Shared participation in self-expanding activities: Positive effects on experienced marital quality. In Noller & Feeney (eds.), Marital interaction (pp. 177-196). Cambridge University Press.

Balleine, Daw, & O’Doherty (2009). Multiple forms of value learning and the function of dopamine. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 367-387). Academic Press.

Basso & Wurtz (1998). Modulation of neuronal activity in superior colliculus by changes in target probability. Journal of Neuroscience, 18: 7519–7534.

Bayer & Glimcher (2005). Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47: 129–141.

Benjamin, Li, Patterson, Greenberg, Murphy, & Hamer (1996). Population and familial association between the D4 dopamine receptor gene and measures of novelty seeking. Nature Genetics, 12: 81-84.

Berns (2005). Satisfaction: the science of finding true fulfillment. Henry Holt and Co.

Berns, McClure, Pagnoni, & Montague (2001). Predictability modulates human brain response to reward. Journal of Neuroscience, 21: 2793-2798.

Berridge (2007). The debate over dopamine's role in reward: the case for incentive salience. Psychopharmacology, 191: 391-431.

Bossaerts, Preuschoff, & Hsu (2009). The neurobiological foundations of valuation in human decision-making under uncertainty. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 353–365). Academic Press.

Camerer, Loewenstein, & Prelec (2005). Neuroeconomics: how neuroscience can inform economics. Journal of Economic Literature, 43: 9–64.

Churchland, Kiani, & Shadlen (2008). Decision-making with multiple alternatives. Nature Neuroscience, 11: 693–702.

D'Ardenne, McClure, Nystrom, & Cohen (2008). BOLD responses reflecting dopaminergic signals in the human Ventral Tegmental Area. Science, 319: 1264–1267.

Daw, Niv, & Dayan (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8: 1704–1711.

Dayan and Balleine (2002). Reward, motivation, and reinforcement learning. Neuron, 36: 285–298.

Delgado (2007). Reward-related responses in the human striatum. Annals of the New York Academy of Sciences, 1104: 70–88.

Dorris & Munoz (1998). Saccadic probability influences motor preparation signals and time to saccadic initiation. Journal of Neuroscience, 18: 7015–7026.

Dorris & Glimcher (2004). Activity in posterior parietal cortex is correlated with the relative subjective desirability of action. Neuron 44: 365–378.

Ellsberg (1961). Risk, Ambiguity, and the Savage Axioms. Quarterly Journal of Economics, 75(4): 643–669.

Fehr & Camerer (2007). Social neuroeconomics: The neural circuitry of social preferences. Trends in Cognitive Science, 11: 419–427.

Fiorillo, Tobler, & Schultz (2003). Discrete coding of reward probability and uncertainty by dopamine neurons. Science, 299: 1898–1902.

Fiorillo, Newsome, & Schultz (2008). The temporal precision of reward prediction in dopamine neurons. Nature Neuroscience, 11: 966–973.

Fox & Poldrack (2008). Prospect theory and the brain. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 145-173). Academic Press.

Friedman (1953). The methodology of positive economics. In Friedman, Essays in Positive Economics. University of Chicago Press.

Glimcher (2009). Neuroscience, Psychology, and Economic Behavior: The Emerging Field of Neuroeconomics. In Tommasi, Peterson, & Nadel (eds.), Cognitive Biology: Evolutionary and Developmental Perspectives on Mind, Brain, and Behavior (pp. 261-287). MIT Press.

Glimcher (2009). Choice: Towards a Standard Back-pocket Model. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 503-521). Academic Press.

Glimcher (2010). Foundations of Neuroeconomic Analysis. Oxford University Press.

Glimcher & Sparks (1992). Movement selection in advance of action in the superior colliculus. Nature, 355: 542–545.

Glimcher & Rustichini (2004). Neuroeconomics: the consilience of brain and decision. Science, 306: 447–452.

Glimcher, Camerer, Fehr, & Poldrack (2008). Introduction: A Brief History of Neuroeconomics. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 1-12). Academic Press.

Gul & Pesendorfer (2008). The case for mindless economics. In Caplan & Schotter (eds.), The Foundations of Positive and Normative Economics (pp. 3–41). Oxford University Press.

Hare, O’Doherty, Camerer, Schultz, & Rangel (2008). Dissociating the role of the orbitofrontal cortex and the striatum in the computation of goal values and prediction errors. Journal of Neuroscience, 28: 5623–5630.

Hare, Camerer, & Rangel (2009). Self-control in decision-making involves modulation of the vmPFC valuation system. Science, 324: 646–648.

Heeger (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9: 181–197.

Kable & Glimcher (2007). The neural correlates of subjective value during intertemporal choice. Nature Neuroscience, 10: 1625–1633.

Kable & Glimcher (2009). The Neurobiology of Decision: Consensus and Controversy. Neuron, 63: 733-745.

Kahneman & Tversky (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47: 263-291.

Kempermann, Kuhn, & Gage (1997). More hippocampal neurons in adult mice living in an enriched environment. Nature, 386: 493-495.

Kiani, Hanks, & Shadlen (2008). Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. Journal of Neuroscience, 28: 3017–3029.

Knutson & Cooper (2005). Functional magnetic resonance imaging of reward prediction. Current Opinion in Neurology, 18: 411–417.

Kobayashi & Schultz (2008). Influence of reward delays on responses of dopamine neurons. Journal of Neuroscience, 28: 7837–7846.

Lau & Glimcher (2008). Value representations in the primate striatum during matching behavior. Neuron, 58: 451–463.

Lee (2008). Game theory and neural basis of social decision making. Nature Neuroscience, 11: 404–409.

Levy, Rustichini & Glimcher (2007). A single system represents subjective value under both risky and ambiguous decision-making in humans. In 37th Annual Society for Neuroscience Meeting, San Diego, California.

Levy, Snell, Nelson, Rustichini, & Glimcher (2010). Neural Representation of Subjective Value Under Risk and Ambiguity. Journal of Neurophysiology, 103: 1036-1047.

Leyton (2009). The neurobiology of desire: Dopamine and the regulation of mood and motivational states in humans. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 222-243). Oxford University Press.

Louie & Glimcher (2010). Separating value from choice: delay discounting activity in the lateral intraparietal area. Journal of Neuroscience, 30(16): 5498-5507.

Montague (2007). Your brain is (almost) perfect: How we make decisions. Plume.

Montague & Lohrenz (2007). To detect and correct: Norm violations and their enforcement. Neuron, 56: 14–18.

Niv & Montague (2008). Theoretical and empirical studies of learning. In Glimcher, Camerer, Fehr, & Poldrack (eds.), Neuroeconomics: Decision Making and the Brain (pp. 331-351). Academic Press.

O’Doherty (2004). Reward representations and reward-related learning in the human brain: insights from neuroimaging. Current Opinion in Neurobiology, 14: 769–776.

Padoa-Schioppa & Assad (2006). Neurons in the orbitofrontal cortex encode economic value. Nature, 441: 223–226.

Padoa-Schioppa & Assad (2008). The representation of economic value in the orbitofrontal cortex is invariant for changes of menu. Nature Neuroscience 11: 95–102.

Pessiglione, Seymour, Flandin, Dolan, & Frith (2006). Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans. Nature, 442: 1042–1045.

Plassmann, O’Doherty, & Rangel (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. Journal of Neuroscience, 27: 9984–9988.

Platt & Glimcher (1999). Neural correlates of decision variables in parietal cortex. Nature, 400: 233–238.

Politser (2008). Neuroeconomics: a guide to the new science of making choices. Oxford University Press.

Rangel, Camerer, & Montague (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9: 545–556.

Roesch, Calu, & Schoenbaum (2007). Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nature Neuroscience, 10: 1615–1624.

Rutledge, Dean, Caplin, & Glimcher (2010). Testing the reward prediction error hypothesis with an axiomatic model. Journal of Neuroscience, 30(40): 13525-13536.

Samejima, Ueda, Doya & Kimura (2005). Representation of action-specific reward values in the striatum. Science, 310: 1337–1340.

Sanfey, Loewenstein, McClure, & Cohen (2006). Neuroeconomics: cross-currents in research on decision-making. Trends in Cognitive Science, 10: 108–116.

Schultz, Dayan, & Montague (1997). A neural substrate of prediction and reward. Science, 275: 1593–1599.

Shadlen & Newsome (2001). Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology, 86: 1916–1936.

Singer & Fehr (2005). The neuroeconomics of mind reading and empathy. American Economic Review, 95: 340–345.

Smith, Mahler, Pecina, & Berridge (2009). Hedonic hotspots: generating sensory pleasure in the brain. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 27-49). Oxford University Press.

Sugrue, Corrado, & Newsome (2004). Matching behavior and the representation of value in the parietal cortex. Science, 304: 1782–1787.

Sutton & Barto (1998). Reinforcement Learning: An Introduction. MIT Press.

Takahashi, Roesch, Stalnaker, Haney, Calu, Taylor, Burke, & Schoenbaum (2009). The orbitofrontal cortex and ventral tegmental area are necessary for learning from unexpected outcomes. Neuron, 62: 269–280.

Tobler, Dickinson, & Schultz (2003). Coding of predicted reward omission by dopamine neurons in a conditioned inhibition paradigm. Journal of Neuroscience, 23: 10402–10410.

Tobler, Fiorillo, & Schultz (2005). Adaptive coding of reward value by dopamine neurons. Science, 307: 1642–1645.

Tom, Fox, Trepel & Poldrack (2007). The neural basis of loss aversion in decision-making under risk. Science, 315: 515–518.

Tversky & Kahneman (1981). The framing of decisions and the psychology of choice. Science, 211(4481): 453–458.

von Neumann & Morgenstern (1944). Theory of Games and Economic Behavior. Princeton University Press.

Waelti, Dickinson, & Schultz (2001). Dopamine responses comply with basic assumptions of formal learning theory. Nature, 412: 43–48.

Wang (2008). Decision making in recurrent neuronal circuits. Neuron, 60: 215–234.

Yang & Shadlen (2007). Probabilistic reasoning by neurons. Nature, 447: 1075–1080.

Yu & Dayan (2005). Uncertainty, neuromodulation and attention. Neuron, 46: 681–692.

Zaghloul, Blanco, Weidemann, McGill, Jaggi, Baltuch, & Kahana (2009). Human substantia nigra neurons encode unexpected financial rewards. Science, 323: 1496–1499.

30 comments


comment by Louie · 2011-04-12T15:18:39.745Z · LW(p) · GW(p)

Wow. This is god damn amazing.

You're starting to spoil us. I feel like reactions to a post like this should be like "Holy shit! Someone just sat down and summarized the 50+ most important research papers on an important FAI-related facet of human cognition! They probably had to read 250 papers they didn't even cite in order to produce this! OMFG this is amazing1!!!"

Instead, we're like, "Oh, lukeprog just wrote something."

I for one, continue to be impressed by your astounding summaries of scientific data on these topics. And even though I'm on a surf vacation in Bali, the neurons in my brain that code for the value of your upcoming meta-ethics series are firing faster than my neurons that code for the value of most everything else in my real life. That's pretty hard to do to me right now. Well done!

Replies from: shokwave, lukeprog
comment by shokwave · 2011-04-12T15:41:53.871Z · LW(p) · GW(p)

Yeah, seconded. Every time you post, instead of burning an hour in the comments, I burn three hours following links and reading abstracts on Google Reader. I feel like I'm building an 800 piece jigsaw in huge, 50-piece chunks.

comment by lukeprog · 2011-04-12T15:49:20.469Z · LW(p) · GW(p)

Heh, thanks.

comment by Vladimir_Nesov · 2011-04-10T13:13:11.976Z · LW(p) · GW(p)

This post is too short for the amount of knowledge it seeks to refer to. It does little more than list the references associated with terse and vague-to-the-reader hints about their topic and interrelation. It looks more like an obligatory survey section of a paper that ought to include a survey section to position itself in the context of a field and less like the introductory survey of survey articles that I expect it was intended to be.

I can only see it being useful for a reader who would follow it by digging into the referenced papers. A good cause, but it feels like there was more low-hanging fruit potential for exposition resulting from your study of these topics.

Replies from: lukeprog
comment by lukeprog · 2011-04-10T15:14:36.169Z · LW(p) · GW(p)

We may have different ideas about how much knowledge I'm trying to refer.

I'm not hoping that people come away from this article with a good understanding of how the primate brain calculates value and uses this data in decision-making. I'm merely hoping to explain that we do actually seem to be doing something like maximizing subjective expected utility - much to the surprise even of economists - and that neuroscientists know a great deal about how this works.

If you want the full story, you need to read a book. I've recommended several.

comment by steven0461 · 2011-04-10T03:58:31.996Z · LW(p) · GW(p)

Winning at life means achieving your goals — that is, satisfying your desires. As such, it will help to understand how our desires work and how to satisfy them.

That sounds like the wirehead fallacy to me. You can't satisfy your desires to a greater degree just by causing yourself to feel like your desires have been satisfied to a greater degree, unless your desire happens to be a desire for your own feeling of desire satisfaction, which is not a given.

(Consider not just the example of someone who is explicitly an altruist, but also the example of someone who is explicitly an egoist because he only wants to do what is in some sense the right thing and mistakenly believes egoism rather than altruism to be in that sense the right thing.)

"Winning" has technical and everyday senses that often don't come apart but sometimes do; the simplest justification for the saying that "rationalists win" uses the technical sense, so it's worth being careful (more so than LW has been) when interpreting the saying in the everyday sense.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2011-04-10T13:17:56.030Z · LW(p) · GW(p)

This paragraph jumped out at me as well. While neuroscience might refer to knowledge useful for figuring out the content of our goals, it's not at all clear in what way it can inform us. The simple "achieving your goals - that is, satisfying your desires" doesn't help, and is outright wrong in the context where "desires" refers to the technical sense from neuroscience.

comment by wedrifid · 2011-04-10T08:38:38.036Z · LW(p) · GW(p)

"Winning" has technical and everyday senses that often don't come apart but sometimes do; the simplest justification for the saying that "rationalists win" uses the technical sense

(And is technically wrong even then.)

Replies from: steven0461
comment by steven0461 · 2011-04-11T19:57:16.096Z · LW(p) · GW(p)

How so?

One way I can see to go wrong even with the technical sense of "winning" is if you're comparing a rational agent to an irrational agent who happens to start out with other, more important advantages. The right comparison is between rational and irrational versions of the same agent.

comment by Cyan · 2011-04-10T03:12:51.180Z · LW(p) · GW(p)

Please excerpt the caption along with the figure. A figure without a caption is like a map without an index and compass rose.

comment by Alex_Altair · 2011-04-09T21:36:55.034Z · LW(p) · GW(p)

This serves as a great collection of references, but the post itself has too much opaque jargon to be a helpful explanation.

Replies from: lukeprog, Swimmer963
comment by lukeprog · 2011-04-09T22:05:42.933Z · LW(p) · GW(p)

Looking back, I see that my post assumes a fair bit of familiarity with microeconomics. I don't have the space to give an economics lesson in this post, but I've noted this dependency in a parenthetical paragraph in the original post now, thanks.

Replies from: handoflixue
comment by handoflixue · 2011-04-11T20:53:07.197Z · LW(p) · GW(p)

As someone with a casual background in micro-economics, I find it entirely readable. I also really appreciated the warning that I might need that background.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-09T22:07:14.047Z · LW(p) · GW(p)

Really? I didn't notice it was jargon-y at all...and I have zero background in economics.

Replies from: Perplexed
comment by Perplexed · 2011-04-09T22:29:09.128Z · LW(p) · GW(p)

In a post referencing a paper claiming "...orbitofrontal cortex and ventral tegmental area are necessary ..." in the title, I somehow doubt that the complaint was about economic jargon. But then I do have a background in economics.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-09T23:17:36.630Z · LW(p) · GW(p)

I suppose that I do have some background in neuroscience (at least the basics covered in our mandatory Anatomy and Physiology courses). I don't know per se what the ventral tegmental area does, but I know it's a part of the brain and, well, I'm relying on the post/article to tell me what it does if that is relevant to the point.

Replies from: lukeprog
comment by lukeprog · 2011-04-10T05:54:06.725Z · LW(p) · GW(p)

Right. You don't need to know what the VTA does or even where it is to get the point that we have these functions mapped to very specific clusters of neurons.

comment by Antisuji · 2011-04-13T02:19:30.528Z · LW(p) · GW(p)

Consider this, from The Neuroscience of Pleasure

3. Anticipation matters. Anticipating future pain is itself painful, and anticipating pleasure is itself pleasant. Spend more time reliving happy memories and anticipating future pleasures, and spend less time anticipating future pains. [emphasis mine]

and this

4. Your brain generates reward signals when experienced value surpasses forecasted value. So: lower your expectations and your brain will be pleasantly surprised when things go well. Things going perfectly according to plan is not the norm, so don't treat it as if it is.

How does one balance these recommendations? In my experience, when I anticipate future pleasures in cases where I am not certain of the outcome I tend to inadvertently boost my estimation of success or "get my hopes up". Is the solution to only actively anticipate pleasure when my estimation of the probability of success is high to begin with? This is not an easy thing to do, and in fact 4. in general seems difficult.

Replies from: lukeprog, Armok_GoB
comment by lukeprog · 2011-04-13T02:47:02.286Z · LW(p) · GW(p)

Good question! One way to achieve both things is to spend time anticipating relatively certain future pleasures and also lower your expectations concerning how future complex (and thus uncertain) events will play out.

Replies from: LukeStebbing
comment by Luke Stebbing (LukeStebbing) · 2011-04-13T05:54:09.342Z · LW(p) · GW(p)

Good point, but since an accurate model of the future is helpful, this may be a case where you should purchase your warm fuzzies separately.

(Since people tend to make overly optimistic plans, the two strategies might be similar in practice.)

comment by Armok_GoB · 2011-04-14T19:56:35.348Z · LW(p) · GW(p)

I have found that this can be hacked in innumerable ways. The simplest one might be the lottery hack: imagine vividly awesome things that'd happen conditional on something nearly impossible happening, and just hide that impossibility from your brain, for example using scope sensitivity. What'd you do if a major bank's computer got a random bitflip giving you 2^20 dollars? What'd you do if you suddenly transformed into a magical unicorn with a purple tentacle?

comment by Goobahman · 2011-04-12T22:25:36.304Z · LW(p) · GW(p)

Thanks for all your hard work Luke.

Been following you for almost two years now, since your earlier days on CSA. I had a hunch you would be worth keeping an eye on, and look at you. And I'm sharing in all the benefits! Posts like this are so well put together, so accessible but so sophisticated and reliable. It's like an art form.

Keep it up. You're an inspiration.

comment by pjeby · 2011-04-09T19:27:44.950Z · LW(p) · GW(p)

Just as an FYI, you should probably tag this under "fun theory" and perhaps cross-reference it in the relevant main sequence.

Replies from: lukeprog
comment by lukeprog · 2011-04-09T19:30:09.221Z · LW(p) · GW(p)

I added a link at the top.

comment by RobinZ · 2011-04-10T04:00:58.786Z · LW(p) · GW(p)

You might footnote the Allais Paradox footnote to Eliezer's three posts on the subject.

Replies from: lukeprog
comment by lukeprog · 2011-04-10T05:51:41.996Z · LW(p) · GW(p)

Done, thanks.

comment by Jonathan_Graehl · 2011-04-11T22:44:53.672Z · LW(p) · GW(p)

About the impressive-looking economy/psychology/neuroscience diagram: what would I need to do in order to understand all the displayed concepts and links between them, and what would the benefit be? It looks like some fun reading, if I had time to kill, but is there anything beyond that?

Also, I like this series of posts.

Replies from: lukeprog
comment by lukeprog · 2011-04-12T14:24:28.481Z · LW(p) · GW(p)

The best solution would be to read the book from which it comes, Glimcher (2010). There would be limited self-help applications, though.

comment by Curiouskid · 2011-08-16T22:49:06.160Z · LW(p) · GW(p)

A good example of the paradox between wanting and liking is tickling. People enjoy being tickled. BUT they don't want to be tickled (at least while they're being tickled). I wonder what neuroscience has to say about that.

comment by James_K · 2011-04-13T11:09:14.453Z · LW(p) · GW(p)

This was really interesting; while I got a decent primer in behavioural economics at university, neuroeconomics was still too cutting-edge.