Are wireheads happy?
post by Scott Alexander (Yvain) · 2010-01-01T16:41:31.102Z · LW · GW · Legacy · 109 comments
Related to: Utilons vs. Hedons, Would Your Real Preferences Please Stand Up
And I don't mean that question in the semantic "but what is happiness?" sense, or in the deep philosophical "but can anyone not facing struggle and adversity truly be happy?" sense. I mean it in the totally literal sense. Are wireheads having fun?
They look like they are. People and animals connected to wireheading devices get upset when the wireheading is taken away and will do anything to get it back. And it's electricity shot directly into the reward center of the brain. What's not to like?
Only now neuroscientists are starting to recognize a difference between "reward" and "pleasure", or call it "wanting" and "liking". The two are usually closely correlated. You want something, you get it, then you feel happy. The simple principle behind our entire consumer culture. But do neuroscience and our own experience really support that?
It would be too easy to point out times when people want things, get them, and then later realize they weren't so great. That could be a simple case of misunderstanding the object's true utility. What about wanting something, getting it, realizing it's not so great, and then wanting it just as much the next day? Or what about not wanting something, getting it, realizing it makes you very happy, and then continuing not to want it?
The first category, "things you do even though you don't like them very much," sounds like many drug addictions. Smokers may enjoy smoking, and they may want to avoid the physiological signs of withdrawal, but neither of those is enough to explain their reluctance to quit. I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would name before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.
Think of the second category as "things you procrastinate even though you like them." I used to think procrastination applied only to things you disliked but did anyway. Then I tried to write a novel. I loved writing. Every second I was writing, I was thinking "This is so much fun". And I never got past the second chapter, because I just couldn't motivate myself to sit down and start writing. Other things in this category for me: going on long walks, doing yoga, reading fiction. I can know with near certainty that I will be happier doing X than Y, and still go and do Y.
Neuroscience provides some basis for this. A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it). When they knocked out the "liking" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expressions, and areas of the brain thought to be correlated with pleasure wouldn't show up on the MRI. Knock out "wanting", and the rats seem to enjoy the food as much when they get it, but aren't especially motivated to seek it out. To quote the science¹:
Pleasure and desire circuitry have intimately connected but distinguishable neural substrates. Some investigators believe that the role of the mesolimbic dopamine system is not primarily to encode pleasure, but "wanting" i.e. incentive-motivation. On this analysis, endomorphins and enkephalins - which activate mu and delta opioid receptors most especially in the ventral pallidum - are most directly implicated in pleasure itself. Mesolimbic dopamine, signalling to the ventral pallidum, mediates desire. Thus "dopamine overdrive", whether natural or drug-induced, promotes a sense of urgency and a motivation to engage with the world, whereas direct activation of mu opioid receptors in the ventral pallidum induces emotionally self-sufficient bliss.
The wanting system is activated by dopamine, and the liking system by opioids. There are enough connections between them that there's a big correlation in their activity, but the correlation isn't one, and in fact opioid activation is less common than dopamine activation. Another quote:
It's relatively hard for a brain to generate pleasure, because it needs to activate different opioid sites together to make you like something more. It's easier to activate desire, because a brain has several 'wanting' pathways available for the task. Sometimes a brain will like the rewards it wants. But other times it just wants them.
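The two-signal picture can be sketched as a toy model (purely illustrative; the gains and noise levels here are made up, not taken from the study):

```python
import random

random.seed(0)  # deterministic for the example

def respond(stimulus, wanting_gain=1.0, liking_gain=1.0):
    """Toy 'wanting'/'liking' responses to a stimulus in [0, 1].

    Both signals share the same input, so they correlate, but each
    runs through its own circuit: zeroing a gain "knocks out" one
    signal without touching the other, as in the rat experiments.
    """
    wanting = wanting_gain * (stimulus + random.gauss(0, 0.05))
    liking = liking_gain * (stimulus + random.gauss(0, 0.05))
    return wanting, liking

# Intact animal: both signals track the food's appeal.
w, l = respond(0.8)

# 'Liking' knocked out: the animal still eats (wanting intact),
# but shows none of the lip-licking pleasure response.
w_ko, l_ko = respond(0.8, liking_gain=0.0)
```

Knocking out one gain leaves the other signal untouched, which is the dissociation the study reports; real circuits are, of course, nothing like this simple.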
So you could go through all that trouble to find a black market brain surgeon who'll wirehead you, and you'll end up not even being happy. You'll just really really want to keep the wirehead circuit running.
Problem: large chunks of philosophy and economics are based upon wanting and liking being the same thing.
By definition, if you choose X over Y, then X is a higher utility option than Y. That means utility represents wanting and not liking. But good utilitarians (and, presumably, artificial intelligences) try to maximize utility (or do they?). This correlates contingently with maximizing happiness, but not necessarily. In a worst-case scenario, it might not correlate at all - two possible such scenarios being wireheading and an AI without the appropriate common sense.
Thus the deep and heavy ramifications. A more down-to-earth example came to mind when I was reading something by Steven Landsburg recently (not recommended). I don't have the exact quote, but it was something along the lines of:
According to a recent poll, two out of three New Yorkers say that, given the choice, they would rather live somewhere else. But all of them have the choice, and none of them live anywhere else. A proper summary of the results of this poll would be: two out of three New Yorkers lie on polls.
This summarizes a common strain of thought in economics, the idea of "revealed preferences". People tend to say they like a lot of things, like family or the environment or a friendly workplace. Many of the same people who say these things then go and ignore their families, pollute, and take high-paying but stressful jobs. The traditional economic explanation is that the people's actions reveal their true preferences, and that all the talk about caring about family and the environment is just stuff people say to look good and gain status. If a person works hard to get lots of money, spends it on an iPhone, and doesn't have time for their family, the economist will say that this proves that they value iPhones more than their family, no matter what they may say to the contrary.
The difference between enjoyment and motivation provides an argument that could rescue these people. It may be that a person really does enjoy spending time with their family more than they enjoy their iPhone, but they're more motivated to work and buy iPhones than they are to spend time with their family. If this were true, people's introspective beliefs and public statements about their values would be true as far as it goes, and their tendency to work overtime for an iPhone would be as much a "hijacking" of their "true preferences" as a revelation of them. This accords better with my introspective experience, with happiness research, and with common sense than the alternative.
Not that the two explanations are necessarily entirely contradictory. One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.
Go too far toward the liking direction, and you risk something different from wireheading only in that the probe is stuck in a different part of the brain. Go too far in the wanting direction, and you risk people getting lots of shiny stuff they thought they wanted but don't actually enjoy. So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?
Sources/Further Reading:
1. Wireheading.com, especially on a particular University of Michigan study
2. New York Times: A Molecule of Motivation, Dopamine Excels at its Task
3. Slate: The Powerful and Mysterious Brain Circuitry...
4. Related journal articles (1, 2, 3)
109 comments
Comments sorted by top scores.
comment by SilasBarta · 2010-01-01T21:07:49.737Z · LW(p) · GW(p)
Interesting article, but these really bugged me:
1) Using the environment as an example of false revealed preference. One person's pollution never ruins "the environment", at least not "their environment". The environment is only ruined by the aggregate effects of many people's pollution; or, the person is massively polluting a different environment.
Environmental solutions require collective agreement and enforcement, not unilateral disarmament. So polluting while claiming to value the environment is not hypocrisy even in the conventional sense of the term (that you criticize here).
And this is at least the second time I've explained this to you. Please stop using it as an example.
2) This phrasing:
There are enough connections between them that there's a big correlation in their activity, but the correlation isn't one ...
That initially reads like you're saying "the correlation isn't a correlation" so I had to re-read it. I recommend using any of the following terms as a replacement for the bolded word: perfect, unity, 1.0, 1, or "equal to one", any of which would have been clearer.
(Btw, I agree with your disrecommendation of Landsburg!)
Replies from: Alexei↑ comment by Alexei · 2011-07-11T05:55:29.214Z · LW(p) · GW(p)
I really like your point about the environment. I am wondering if you can make a broader post discussing that kind of reasoning. For example, could one argue using this logic that an individual voter makes no difference, therefore voting, on the individual level, is pointless? (The solution would be to organize massive groups of people that would vote the same way.) What other examples fall under this reasoning? And what are some examples that seem like they should fall under this reasoning, but don't?
Replies from: SilasBarta↑ comment by SilasBarta · 2011-07-11T16:39:39.857Z · LW(p) · GW(p)
Thanks for bringing that up. I've actually argued the opposite in the case of voting. Using timeless decision theory, you can justify voting (even without causing a bunch of people to go along with you) on the grounds that, if you would make this decision, the like-minded would reason the same way. (See my post "real-world newcomb-like problems".)
I think a crucial difference between the two cases is that non-pollution makes it even more profitable for others to pollute, which would make collective non-pollution (in the absence of a collective agreement) an unstable node. (For example, using less oil bids down the price and extends the scope of profitable uses.)
Replies from: agrajag, elityre, Alexei, Randaly↑ comment by agrajag · 2011-11-14T10:24:26.798Z · LW(p) · GW(p)
Getting this point across is difficult, and it's a common problem. For example, I'm from Norway and favor the system we have here, with comparatively high taxes on high earners and high benefits. When I discuss economics with people from other political systems, say Americans, invariably I get a version of the same argument:
If I'm happy to pay higher taxes, then I can do that in USA too -- I can just donate to charities of my choice. As an added bonus, this would let me pick which charities I care most about.
The problem is the same as with polluting, though: by donating to charities, I reduce the need for government intervention, which in turn reduces the need for taxes, and that mostly benefits the people who pay the most tax.
That is, by donating to charities, I reward those people who earn well and (imho) "should" contribute more to society (by donating themselves) but don't.
So that situation is unstable: the higher the fraction of needed support that is paid for through charitable giving, the larger the reward for not giving.
Replies from: SilasBarta, phob↑ comment by SilasBarta · 2011-11-14T15:20:24.252Z · LW(p) · GW(p)
Glad to hear your take on the issue and know that I'm not alone in having to explain this. Coincidentally, I just recently put up a blog post discussing the unilateral disarmament issue in the context of taxes, making similar points to you (though not endorsing higher tax rates).
↑ comment by Eli Tyre (elityre) · 2019-11-15T04:01:57.823Z · LW(p) · GW(p)
I think a crucial difference between the two cases is that non-pollution makes it even more profitable for others to pollute, which would make collective non-pollution (in the absence of a collective agreement) an unstable node. (For example, using less oil bids down the price and extends the scope of profitable uses.)
Wow. This is a really important point. I'd never realized that.
↑ comment by Alexei · 2011-07-11T23:48:30.766Z · LW(p) · GW(p)
Oh, I see! I missed the key factor that by playing strategy NOT X (not polluting) you make strategy X (polluting) more favorable for others. And, of course, that doesn't apply to voting. This helps draw the line for what kind of problems you can use this reasoning. Thanks for clarifying!
Replies from: asr↑ comment by Randaly · 2012-01-21T04:55:08.807Z · LW(p) · GW(p)
Using timeless decision theory, you can justify voting (even without causing a bunch of people to go along with you) on the grounds that, if you would make this decision, the like-minded would reason the same way.
Given that probably only ~2,000 people know of TDT at all, only ~500 would think of it in this context, these people aren't geographically concentrated, these people aren't overwhelmingly concentrated in any one political party, at least some of the people considering TDT don't believe that it is a strong argument in favor of voting (example: me), and the harms from voting scale up linearly with the number of people voting, it's exceedingly unlikely that TDT serves as a significant justification for voting. (As a bit of context: in 2000, Bush won Florida by over 500 votes.)
comment by [deleted] · 2010-11-14T20:11:05.980Z · LW(p) · GW(p)
I've seen a fair amount of happiness research, and happiness tends towards the "liking" end of the scale. What makes people happy is giving to charity, meditating, long walks, and so on; what makes people unhappy is commuting, work stress, and child-rearing. Religion, old age, and living in Utah also make people happy.
A life designed to maximize happiness, according to happiness researchers, would not be the hedonistic orgy one might imagine. You are actually happier with a fair degree of self-restraint. But it would have a lot more peaceful hobbies and fewer grand, stressful goals (like strenuous careers and parenthood). To me, the happiness-optimized life does not sound fun. It is not something I would look forward to with anticipation and eagerness. Statistically speaking, we'd like such a life, but we wouldn't want it. Myself, I'd rather be given what I want than what would make me happy.
Replies from: diegocaleiro, Jack, NancyLebovitz↑ comment by diegocaleiro · 2011-01-06T01:58:08.083Z · LW(p) · GW(p)
Wow, this is unexpected on so many levels for me. You have access to happiness research, yet you would stick to what you want instead. I don't mean to insult you or suggest there is anything wrong; I'm just genuinely staggered by the fact.
I have read some thousands of pages of happiness research, and have started to follow the advice. I'm more generous, I take long walks, I cherish friendships, I care very little for a long career, I go to evolutionary environments all the time (the park, swimming pools, and beaches), I pursue objectives which really ought to make me say "I was doing something I consider important," and I ignore money, having children, and some parts of familial obligations.
We had the same info, and we took such different paths… this is awesome.
So I suppose I am much happier but am in a constant struggle not to want lots of things that I naturally would. So I'm in a kind of strenuous effort of self-control leading to constant bliss. I suppose that you are less happier (though probably not in any way perceivable from a first person perspective) but way more relaxed, prone to be guided by your desires and wishes, and willing to actually go there and do that thing you feel like doing.....
I wish I was you for two weeks or something, if only that were possible, and then I came back....
Replies from: ramanspectre↑ comment by ramanspectre · 2012-01-17T03:44:13.661Z · LW(p) · GW(p)
"I suppose that you are less happier (though probably not in any way perceivable from a first person perspective) but way more relaxed, prone to be guided by your desires and wishes, and willing to actually go there and do that thing you feel like doing....."
What makes you think that the person you are responding to is more relaxed? You'd think that constantly pursuing wants would make them less relaxed since it takes a lot of energy to pursue worldly things.
And what makes you think that you aren't relaxed?
↑ comment by Jack · 2010-11-14T20:19:59.557Z · LW(p) · GW(p)
Living in Utah does not make people happy.
Replies from: None↑ comment by [deleted] · 2010-11-14T20:27:07.499Z · LW(p) · GW(p)
Sure. But if I wanted to live in the best place to make me happy, and all I knew was the happiness distribution by geographic location, it would be dumb to choose to live somewhere other than the happiest place, right?
Replies from: ata↑ comment by ata · 2010-11-14T20:28:10.601Z · LW(p) · GW(p)
Yes, but happiness distribution by geographic location isn't all you know.
Replies from: diegocaleiro↑ comment by diegocaleiro · 2011-01-06T02:00:26.185Z · LW(p) · GW(p)
Also it is not very relevant, since happiness varies far more with other factors: roughly 50% unchangeable genes, 40% how you deal with the lemons (and strawberries) life gives you, and 10% your life conditions, from marriage to children to how rich you are. Only a tiny part of that 10% is determined by where you live. (Lyubomirsky, 2008)
↑ comment by NancyLebovitz · 2010-11-15T00:31:58.546Z · LW(p) · GW(p)
The other aspect is that the low-intensity hedonic life might suit a majority or a plurality of people, but not optimize happiness for a large minority.
comment by Vladimir_Nesov · 2010-01-01T19:19:31.182Z · LW(p) · GW(p)
Thus wanting (motivation) is near, liking (enjoyment) is far (dopamine is near, opioids are far!). If liking doesn't have the power to make you actually do things, its role is primarily in forming your beliefs about what you want, which leads to presenting good images of yourself to others with sincerity.
So far, this is not a disagreement with "revealed preferences" thought. The disagreement would come in value judgment, where instead of taking the side of wanting (as economists seem to), or the side of liking (naive view, or one of the many varieties of moral ideologies), one carefully considers the virtues on case-to-case basis, being open to discard parts from either category. True preference is neither revealed nor felt.
comment by Pablo (Pablo_Stafforini) · 2010-01-01T17:24:12.343Z · LW(p) · GW(p)
By definition, if you choose X over Y, then X is a higher utility option than Y. That means utility represents wanting and not liking. But good utilitarians (and, presumably, artificial intelligences) try to maximize utility. This correlates contingently with maximizing happiness, but not necessarily
You are equivocating on the term 'utility' here, as have so many other commenters before in this forum. In the first sentence above, 'utility' is used in the sense given to that term by axiomatic utility theory. When the preferences of an individual conform to a set of axioms, they can be represented by a 'utility function'. The 'utilities' of this individual are the values of that function. By contrast, when ethicists discuss utilitarianism, what they mean by 'utility' is either pleasure or good. The empirical studies you cite, therefore, do not pose problems for utility theory or utilitarianism. They only pose problems for the muddled view on which utility functions represent that which hedonistic utilitarians think we ought to maximize.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-01-01T17:45:18.163Z · LW(p) · GW(p)
You are equivocating on the term 'utility' here, as have so many other commenters before in this forum.
That seems to me to be an unfair reading. Nowhere does Yvain say that he's using the axiomatic theory of utility. It's true that he writes, "By definition, if you choose X over Y, then X is a higher utility option than Y." However, this definition can hold in other theoretical frameworks besides axiomatic utility theory. In particular, the definition plausibly holds in the framework used by some ethical utilitarians. Yvain can therefore be read as using the same definition for utility throughout.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2010-01-01T17:56:48.603Z · LW(p) · GW(p)
I accept Benthamite's criticism as valid. It may not be obvious from the text, but in my mind I was definitely equivocating.
If we can't use preference to determine ethical utility, it makes ethical utilitarianism a lot harder, but that might be something we have to live with. I don't remember very much about Coherent Extrapolated Volition, but my vague memories say it makes that a lot harder too.
Replies from: Eliezer_Yudkowsky, Vladimir_Nesov↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T20:12:12.278Z · LW(p) · GW(p)
I observe that you might have caught this mistake earlier via this heuristic: "Using the phrase "by definition", anywhere outside of math, is among the most alarming signals of flawed argument I've ever found. It's right up there with "Hitler", "God", "absolutely certain" and "can't prove that"." I should probably rewrite "math" as "pure math" just to make this clearer.
↑ comment by Vladimir_Nesov · 2010-01-01T19:27:14.356Z · LW(p) · GW(p)
If we can't use preference to determine ethical utility, it makes ethical utilitarianism a lot harder [...]
The way "preference" tends to be used in this community (as a more general word for "utility", communicating the same idea without explicit reference to expected utility maximization), this isn't right either. The actual decisions should be higher in utility than their alternatives, it is preferable if they are higher utility, but the correspondence is far from being factual, let alone "by definition" (Re: "By definition, if you choose X over Y, then X is a higher utility option than Y"). One can go a fair amount from actions to revealed preference, but only modulo human craziness and stupidity.
comment by cousin_it · 2010-01-01T18:07:39.428Z · LW(p) · GW(p)
Great post. It raised a question for me: why did evolution give us the pleasure mechanism at all, if the urge mechanism is sufficient to make us do stuff?
Replies from: timtyler, Matt_Simpson↑ comment by timtyler · 2010-01-01T18:34:40.364Z · LW(p) · GW(p)
The "urge" mechanism does not help us learn to do rewarding things.
Replies from: Furcas, cousin_it↑ comment by Furcas · 2010-01-01T21:10:14.373Z · LW(p) · GW(p)
I agree that pleasure has something to do with learning, but I don't see why the "urge" or "desire" mechanism couldn't help us learn to do rewarding things without the existence of pleasure.
Without pleasure, things could work like this: If X is good for the animal, make the animal do X more often.
With pleasure, like this: If X is good for the animal, make the animal feel pleasure. Make the animal seek pleasure. (Therefore the animal will do X more often.)
So pleasure would seem to be a kind of buffer. My guess is that its purpose is to reduce the number of modifications to the animal's desires, thereby reducing the likelihood of mistaken modifications, which would be impossible to override.
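The two schemes in this comment can be contrasted in a toy sketch (entirely hypothetical numbers and update rules, just to make the "buffer" idea concrete):

```python
# Scheme 1: no pleasure signal; a good outcome rewrites desire directly.
def learn_direct(desires, action, benefit):
    desires[action] = desires.get(action, 0.0) + benefit
    return desires

# Scheme 2: pleasure as a buffer; outcomes first become a pleasure
# signal, and desire updates only through that one channel, so the
# update can be gated and kept small (hence revisable).
def pleasure(benefit):
    return max(0.0, benefit)  # a mistaken 'benefit' can be filtered here

def learn_via_pleasure(desires, action, benefit, rate=0.5):
    desires[action] = desires.get(action, 0.0) + rate * pleasure(benefit)
    return desires

d1 = learn_direct({}, "do X", 1.0)        # desire jumps straight to 1.0
d2 = learn_via_pleasure({}, "do X", 1.0)  # buffered: smaller step, 0.5
```

In the second scheme all desire changes pass through one mediating signal, which is roughly the comment's point about reducing the number (and permanence) of direct modifications to the animal's desires.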
↑ comment by cousin_it · 2010-01-01T18:40:23.591Z · LW(p) · GW(p)
Awesome answer, thanks! So the "urge" mechanism is for things we know how to do, and the "pleasure" mechanism is for things we don't? Now I wonder how to test this idea.
Replies from: whpearson↑ comment by whpearson · 2010-01-01T19:11:15.270Z · LW(p) · GW(p)
If dopamine = urge, you can make dopamine-deficient mice. They don't learn so well...
Replies from: timtyler, Vladimir_Nesov↑ comment by timtyler · 2010-01-01T20:22:01.256Z · LW(p) · GW(p)
I would say the study supports the thesis that they learn, but then aren't motivated to act:
"A retest after 24 h showed that DD mice can learn and remember in the absence of dopamine, leading to the inference that the lack of dopamine results in a performance/motivational decrement that masks their learning competence for this relatively simple task."
Replies from: whpearson↑ comment by whpearson · 2010-01-01T21:01:48.313Z · LW(p) · GW(p)
I really wish I could get into that paper. I'd like to know whether a dopamine precursor was given to the mice before the retest to enable eating. If so, the learning may have been buffered and consolidated through sleep, or there is a different mechanism for learning during sleep. I'll see if I can get to it in the next few days.
I'd agree that some learning did occur without dopamine: the knowledge of where to go was learnt. The brain is too complex to mediate all learning with direct feedback. What we are interested in is learning what should be done. That is, the behaviour was learnt, but that the behaviour should be performed wasn't immediately learnt. In other words, the mouse didn't know it should be motivated.
There is lots of work on dopamine and learning. I'm currently watching another interesting video on the subject.
Do you know of any related to opioids? All I can find is some stuff on fear-response learning.
Replies from: Alicorn↑ comment by Vladimir_Nesov · 2010-01-01T19:42:14.179Z · LW(p) · GW(p)
A bad experiment specification: it only tests that brains don't work so well after you damage them. (That is, more detail is absolutely necessary in this case.)
Replies from: whpearson↑ comment by whpearson · 2010-01-01T19:58:01.469Z · LW(p) · GW(p)
From the link (pretty much the whole content unless you have access)
Dopamine-deficient (DD) mice have selective inactivation of the tyrosine hydroxylase gene in dopaminergic neurons, and they die of starvation and dehydration at 3-4 weeks of age. Daily injections of L-DOPA (50 mg/kg, i.p.) starting approximately 2 weeks after birth allow these animals to eat and drink enough for survival and growth. They are hyperactive for 6-9 h after receiving L-DOPA and become hypoactive thereafter. Because these animals can be tested in the presence or absence of DA, they were used to determine whether DA is necessary for learning to occur. DD mice were tested for learning to swim to an escape platform in a straight alley in the presence (30 min after an L-DOPA injection) or absence (22-24 h after an L-DOPA injection) of dopamine. The groups were split 24 h later and retested 30 min or 22-24 h after their last L-DOPA injection. In the initial test, DD mice without dopamine showed no evidence of learning, whereas those with dopamine had a learning curve similar in slope to controls but significantly slower. A retest after 24 h showed that DD mice can learn and remember in the absence of dopamine, leading to the inference that the lack of dopamine results in a performance/motivational decrement that masks their learning competence for this relatively simple task.
That is: the mice were engineered so they couldn't manufacture dopamine naturally, due to an inability to make a precursor. Some were then given the dopamine precursor supplement they couldn't make themselves, which enabled them to make dopamine. These mice learnt almost as well as mice that could manufacture dopamine on their own. So the authors showed that they could make the mice learn almost as well as undamaged mice by replacing something that was lost.
If this doesn't narrow down things enough, what more do you want?
Replies from: timtyler↑ comment by timtyler · 2010-01-01T20:18:32.228Z · LW(p) · GW(p)
The dopamine and opiate mechanisms are rather tangled together in practice:
The following study tests the hypothesis that dopamine is an essential mediator of various opiate-induced responses:
http://www.nature.com/nature/journal/v438/n7069/full/nature04172.html
↑ comment by Matt_Simpson · 2010-01-01T18:27:57.861Z · LW(p) · GW(p)
My first thought: redundancy. Having multiple circuits for the same task means there is a higher probability that at least one of them is working. However, this doesn't explain the differences between the two circuits.
comment by EvelynM · 2010-01-03T23:29:15.470Z · LW(p) · GW(p)
I noticed the distinction between wanting and liking as a result of my meditation practice. I began to derive great pleasure from very simple things, like the quality of an intake of breath, or the color combination of trees and sky.
And, I began to notice a significant decrease in compulsive wanting, such as for excess food, and for any amount of alcohol.
I also noticed a significant decrease in my startle reflex.
Similar results have been reported from Davidson's lab at the University of Wisconsin. http://psyphz.psych.wisc.edu/
Replies from: bbleeker, ThufirHawat, k3nt↑ comment by Sabiola (bbleeker) · 2011-01-28T14:07:21.928Z · LW(p) · GW(p)
You just convinced me to take up meditation again. :-)
Replies from: EvelynM↑ comment by ThufirHawat · 2013-04-07T16:37:40.981Z · LW(p) · GW(p)
How do you meditate? I've tried to get into meditation before, but never found a variant I was comfortable with.
Replies from: EvelynM
comment by clockbackward · 2010-01-10T17:45:29.500Z · LW(p) · GW(p)
Perhaps it is true that our modest technology for altering brain states (simple wireheading, recreational drugs, magnetic stimulation, etc.) leads only to stimulation of the "wanting" centers of the brain and to simple (though at times intense) pleasurable sensations. On the other hand though, it seems almost inevitable that as the secrets of the brain are progressively unlocked, and as our ability to manipulate the brain grows, it will be possible to generate all sorts of brain states, including those "higher" ones associated with love, accomplishment, fulfillment, joy, religious experiences, insight, bliss, tranquility and so on. Hence, while your analysis appears to be quite relevant with regard to wireheading today, I am skeptical that it is likely to apply much to the brain technology that could exist 50 years from now.
comment by Sebastian_Hagen · 2010-01-04T15:18:14.916Z · LW(p) · GW(p)
The first category, "things you do even though you don't like them very much" sounds like many drug addictions.
It's not limited to drugs, or even to similar physical stimuli like tasty food; in my experience you can get the same effect with computer games. There are games that are plenty of fun in the beginning (while you're learning what works), but stop being fun once you abstract what you've learned into a simple set of rules by which you can usually win, yet stay quite addictive in that latter phase. Whenever I play Dungeon Crawl Stone Soup for more than a few hours, I inevitably reach a point where I don't even need to think verbally about what I'm doing for 95%+ of the wall-clock time spent playing, but that doesn't make it much easier to quit.
Popular vocabulary suggests that this is a fairly common effect.
Replies from: k3nt↑ comment by k3nt · 2010-01-05T05:21:48.665Z · LW(p) · GW(p)
Agree 100%. I just played a flash game last night and then again this morning, because I "just wanted to finish it." The challenge was gone, I had it all figured out, and there was nothing left but the mopping up ... which took three hours of my life. At the end of it, I told myself, "Well, that was a waste of time." But I was also glad to have completed the task.
It's probably a very good thing that I've never tried any drug stronger than alcohol.
comment by Eli Tyre (elityre) · 2019-11-15T04:11:38.197Z · LW(p) · GW(p)
Man, this post is horrifying to me. It seems to imply (or at least suggest) a world where everyone, individually and collectively, is in the grip of mind-controlling parasites, shifting people's choices and redirecting society's resources against our best interests. We're all trapped in mental snares where we can't even choose something different.
I've never really taken akrasia, per se, seriously, because I basically believed the claim about revealed preferences: if you're engaging in some behavior, that's because some part of you wants (and, implicitly, likes) the resulting consequences.
This new view is dystopic.
comment by MatthewB · 2010-01-02T15:12:56.419Z · LW(p) · GW(p)
I will need to go back through this again, but as a DD person, I know that my ability to motivate myself to learn new things was astronomical compared to what it is now, after I destroyed most of the dopaminergic systems in my head with drug abuse.
The largest area where I have noticed this is in painting and sculpting, two areas where I used to spend inordinate amounts of time practicing. I used to have the vast majority of my work-spaces covered with miniatures and sculptures that I was working on. Now I have a hard time getting motivated just to get them out (which is, I think, most of the problem).
I do know that it is possible for me to mechanically activate the motivation to perform these tasks (and I am on medication that is supposed to help, but I get the feeling it isn't), just like the rats were lacking motivation to eat when their "wanting" circuits were knocked out.
Thanks for the article. I will need to dig through some posts on another forum where I recently posted a link to a paper about modifying the brains of people with obsessive compulsions (drug addicts, mostly) who were able to knock out the wanting-to-do-drugs part of their brain. I'll post the title and a link as soon as I can find the name of it. It talks about some of the same things (I think it is a U. of Mich. study as well).
Replies from: spamham
↑ comment by spamham · 2010-01-02T21:11:27.141Z · LW(p) · GW(p)
Sorry to hear about the drug problems, but how can you be sure they "destroyed" your dopamine neurons? Not all drugs that increase these neurons' activity kill them. Psychological changes might be a simpler explanation IMHO (but I don't know you, so that might be far off the mark).
[...] knock out the wanting to do drugs part of their brain...
Sounds draconian. That part isn't just there for drugs...
Replies from: MatthewB
↑ comment by MatthewB · 2010-01-02T21:36:46.031Z · LW(p) · GW(p)
I don't think that they destroyed the dopamine neurons, just destroyed their ability to function properly. From the various scans that have been done of my brain, not only do I have decreased production of dopamine, but I have an increased number of receptor sites (I cannot recall from which area they sampled). Thus, I have a major portion of dopamine sites demanding dopamine, and a shortage of dopamine to go around to satisfy the demand. I've been in so many MRI and NMR machines that I no longer even get claustrophobic.
As for the studies about creating lesions on the brain (knocking out the part of their brain that demands to do drugs)... Obviously it isn't there to want to do drugs.
It is there because it controls various aspects of our survival drives, which have been hijacked and now malfunction due to the use/abuse (to distinguish the two) of various chemicals. The study is about the human trials of a procedure that was first done on rats and monkeys (macaques, I think), in which a portion of the amygdala and thalamus was ablated (I cannot recall how they located it, as this was in the days before high-resolution fMRI or NMRI), and the rats and monkeys went from being junkies (with either single- or poly-substance dependence) to being relatively normal rats and monkeys. In the human trials, they found the same things as in the rat/monkey study, but with changes in some other behaviors in some of the participants (altered motivational drives, for instance). I know that one of the doctors hopes to begin using this method on sexual predators, and also hopes to create a chemical method for altering the part of the brain that is ablated or abraded.
Anyway, I have made it through about six months of posts, and I am pretty sure that it was this year that I posted the link (in another forum... I could have sworn that I bookmarked it as well, but that might have been on my old laptop - I have a new laptop that was for "Christmas" even though I got it in November)
edit: found it:
The Neurosurgical Treatment of Addiction
Replies from: loqi
↑ comment by loqi · 2010-01-02T23:55:32.935Z · LW(p) · GW(p)
From the various scans that have been done of my brain; not only do I have a decreased production of Dopamine, but I have an increase in the number of receptor sites (I cannot recall from which area they sampled ). Thus, I have a major portion of dopamine sites that are demanding dopamine, and a shortage of dopamine to go around to satisfy the demand.
If you're comfortable sharing, what drugs led to this? Cocaine? Amphetamine? Did alcohol tend to be involved?
Replies from: MatthewB
↑ comment by MatthewB · 2010-01-03T07:01:22.275Z · LW(p) · GW(p)
Mostly it was Heroin, but there was a modest amount of Amphetamine usage involved as well (for completely patriotic reasons as well - /rolls eyes), and Cocaine became a problem for a few years, but strangely, I just stopped doing it one day like I would decide to throw out an old pair of underwear.
No alcohol was involved, which is largely how I managed to get my brain into so many fMRI tunnels. I have never had any impairment from alcohol use, nor any dysfunctional usage or abuse of alcohol either. Then, when several doctors found out about my anomalous cessation of cocaine, I got even more attention. That attention helped to free me from heroin without the usual entanglement with a 12-step group or AA/NA (of which, at this point in time, I have a rather low opinion).
I often wonder if I would still be alive if I hadn't started using these drugs though (which is contrary to what most people expect to hear). They do give a person a certain cognitive augmentation for each different drug, each of which can be highly useful depending upon the situation. I happened to be in a situation, during the 80s where amphetamines were indicated. I began to use the heroin because the amphetamines made me a little too shaky, and I liked the calm that the two drugs together gave me when having to do things... eventually though, all hell broke loose when I was no longer in that environment and still had the drug use (which rapidly turned into abuse). Fortunately, I am still alive and past that (well, the drug part of it. My brain still has some getting adjusted to life to do).
Replies from: Kevin
↑ comment by Kevin · 2010-01-03T11:01:17.176Z · LW(p) · GW(p)
Is the current medication you refer to an anti-depressant? Does it do anything for you at all?
Replies from: MatthewB
↑ comment by MatthewB · 2010-01-03T12:20:10.596Z · LW(p) · GW(p)
Ugh.. I just made a huge post addressing an issue that I realized was not the one to which you are probably referring.
I don't think I referred to any current medications in the prior post. I made a reference to the use of the drugs I began to abuse, and how these allowed me to live through situations which would probably have resulted in a poor outcome otherwise (not that I could qualify the outcome as good either, save for the fact that I am alive instead of dead)...
Are you referring to the beginning of the third paragraph???
Replies from: spamham
↑ comment by spamham · 2010-01-03T13:52:53.254Z · LW(p) · GW(p)
I often wonder if I would still be alive...
Kevin means this I suppose?
Replies from: MatthewB, Kevin
I do know that it is possible for me to mechanically activate the motivation to perform these tasks (and I am on medication that is supposed to help, but I get the feeling it isn't)
↑ comment by MatthewB · 2010-01-03T15:06:22.856Z · LW(p) · GW(p)
Ah... That... Yes... from the previous post...
I am referring mostly to anti-depressants and drugs to control ADD, which, ironically, are very much like amphetamines (Provigil, Adderall, or Ritalin; probably Provigil or Ritalin). I did two weeks on Provigil, and I will be doing two weeks on Ritalin to compare the two. It is unlikely that my doctor would prescribe Adderall, but she said it isn't totally out of the question, depending upon how I respond to the others (and the fact that I haven't shown any signs that I would be likely to abuse it at this point).
The current medications I am on work to a degree. I can tell when I am off my anti-depressants, for instance, yet my anti-anxiety drugs do absolutely nothing.
The drugs to control ADD are kind of a fudge by the doctor, as I have not been explicitly diagnosed with ADD (it is something she suspects, yet for which I haven't displayed many of the more common symptoms; if my mother had not been a Christian Scientist when I was a kid, we might have clinical records that could help out a bit more here), yet she feels they will help with some of the motivational and concentration problems I have been having with school (and life).
comment by wedrifid · 2010-01-02T04:26:10.706Z · LW(p) · GW(p)
A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it).
One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.
Rats! The neuroscientists were studying rats. It is troubling how easy it is to come up with these signalling stories to explain whatever observations we encounter.
What explanation can be suggested for a different mechanism for enjoying a food than the one for motivation to get food that doesn't rely on impressing our little rat friends with our culinary sophistication?
Replies from: Nick_Tarleton, Blueberry
↑ comment by Nick_Tarleton · 2010-01-02T05:11:05.096Z · LW(p) · GW(p)
On the other hand, apparent signaling behavior has been observed in cockroaches.
↑ comment by Blueberry · 2010-01-03T08:14:40.539Z · LW(p) · GW(p)
I'm not sure why this comment got upvoted so much. If I understand what you're saying correctly, you have it exactly backwards. The signaling story wasn't intended to explain the two different mechanisms, which evolved long before humans. The signaling story is just one way that the two different mechanisms affect our lives today.
comment by Blueberry · 2010-01-01T18:30:39.158Z · LW(p) · GW(p)
The other problem here is distinguishing pleasure, fun, and happiness.
As I understand wireheading, it's equivalent to experiencing a lengthy orgasm. I would describe an orgasm as pleasurable, but it seems inaccurate to call it "fun", or to call the state of experiencing an orgasm "happiness".
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2019-11-29T23:49:02.391Z · LW(p) · GW(p)
Just as a note, I don't experience orgasms as pleasurable. In fact, orgasms seem to me an excellent case in point of the difference between wanting and liking, at least in my case.
comment by dclayh · 2010-01-01T18:24:07.516Z · LW(p) · GW(p)
For the record, the actual Landsburg quote is
In one recent survey, 39 percent of New Yorkers said they would leave the city "if they could"! Every one of them was in New York on the day of the interview, so we know that at a minimum, 39 percent of New Yorkers lie to pollsters.
page 63 of his latest book The Big Questions.
Although I'm generally a big fan of Landsburg, this seems much more a case of confusion over what "leave the city" and "if you can" mean than one of lying.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2010-01-01T20:06:12.310Z · LW(p) · GW(p)
I was reading More Sex is Safer Sex, so he must like using this anecdote a lot.
comment by Larks · 2010-01-01T23:32:29.121Z · LW(p) · GW(p)
On a related note, it seems people do not use 'happy' and 'unhappy' as opposites, at least when they're referring to a whole life. Rather, happiness involves normative notions (a good life), whereas being unhappy is simply about endorphins.
http://experimentalphilosophy.typepad.com/experimental_philosophy/2009/12/can-.html
Replies from: FrankAdamek
↑ comment by FrankAdamek · 2010-01-02T21:08:20.034Z · LW(p) · GW(p)
Culturally, it may be considered too much of a blow to ever say that someone is unhappy in general, or has an unhappy life. Or it may be too depressing for people themselves to think that another person was unhappy, is unhappy, and will continue to be unhappy, rather than just happening to be unhappy now.
Replies from: Larks
↑ comment by Larks · 2010-01-03T00:22:42.991Z · LW(p) · GW(p)
If that were true, we’d expect to see people more willing to pronounce a ‘happy’ verdict than a ‘sad’ verdict, but the link I posted suggests that people are willing to agree that the wholesome woman is unhappy if she thinks she is, but unwilling to say the superficial woman is happy, even though she thinks she is.
comment by kurt · 2016-12-30T16:16:01.794Z · LW(p) · GW(p)
There is a very simple explanation to the seeming discrepancy between wanting and liking, and that is that a person is always experiencing a tension between wanting a bit of pleasure now, versus a lot of pleasure later. Yes, spending time with your family may give you some pleasure now, but staying in NY and putting money aside will give you a lot of security later on. This may not explain the whole difference but perhaps a good chunk of it.
I think wireheading is dismissed far too quickly here and elsewhere. Like it or not (pun intended), the only reason we do things is to obtain a sensation out of them, and if we can obtain the sensation without going through the motions I see no objective reason why we should, other than "we've always done it that way".
I think we have to seriously consider the possibility that wireheading may be the right thing to do. It seems to trivialize the entire human subjective experience to reduce wireheading to "a needle in the brain". That may be the crude state in which it is now. However it should be uncontroversial that with mastery over physics and matter the only events worthy of notice will be mental events (as in a Dyson sphere for instance). Then, engineering will all be about producing internal worlds, not outcomes on the outside - a very, very complex, multidimensional type of wireheading. Which gives us the sum total of the value that the universe can produce, without the mess.
The difference between being stuck in orgasm mode, and having an infinity of worlds and experiences to explore, is simply in whether we value variety more than intensity or vice versa. Hopefully the AI will know what's best for us :D
Replies from: kurt
↑ comment by kurt · 2016-12-30T16:41:17.612Z · LW(p) · GW(p)
One could also note that 'liking' is something the mammalian and reptile brains are good at, and 'wanting' is often related to deliberation and executive system motivation. Though there are probably different wanting (drive) systems in lower brain layers and in the frontal lobe.
Also, some things we want because they produce pleasure, and some are just interim steps that we carry out "because we decided". Evolutionary history ensures that we get rewarded far more when we obtain actual things than when we make one small step. We can sometimes use willpower, which could be seen as an evolutionarily novel (and therefore weak) mechanism for providing short-term reward for executing steps towards a longer-term benefit: steps that are not rewarded by the opioid system.
I think that the more we hack into the brain, the more we will discover that 'wanting' and 'liking' are umbrella terms completely unsuited to capturing the subtlety and sheer messiness of the spaghetti code of human motivation.
I am also going to comment on the idea that intelligent agents could have a 'crystalline' (ordered, deterministic) utility evaluation system. We went down that road trying to make AI work, i.e. making brittle systems based on IF/THEN - that stuff doesn't work.
So what makes us think that using the same type of approach will work for utility evaluation (which is a hard problem requiring a lot of intelligence)?
Humans are adaptable because they get bored, and try new things, and their utility function can change, and different drives can interact in novel ways as the person matures and grows wiser. That can be very dangerous in an AI.
But can we really avoid that danger? I am skeptical that we will be able to have a completely Bayesian, deterministic utility function. Perhaps we are underestimating how big a chunk of intelligence the evaluation of payoffs really is, and thinking it won't require the same kind of fine-grained, big-data style of messy, uncertain pattern-smashing that we now know is necessary to do anything, like distinguishing cars from trees.
We have insufficient information about the universe to judge the hedonic value of all actions in an accurate way, that's another reason to want the utility evaluation to be as plastic as possible. Chaos must be introduced into the system to avoid getting caught in locally optimal spaces. Dangerous yes, but possibly this necessity is what will allow the AI to eventually bypass human stupidity in constructing it.
comment by Jiro · 2015-09-28T21:26:18.551Z · LW(p) · GW(p)
(Response to old post)
According to a recent poll, two out of three New Yorkers say that, given the choice, they would rather live somewhere else. But all of them have the choice, and none of them live anywhere else. A proper summary of the results of this poll would be: two out of three New Yorkers lie on polls.
Ordinary people do not interpret the statement "given the choice" to mean "under at least one circumstance where it is not physically impossible". That's not an example of revealed preferences or inconsistency--it's an example of real people not acting like Internet geeks.
comment by Dean · 2010-02-03T08:01:04.082Z · LW(p) · GW(p)
I was reading a free online book, "The Authoritarians" by Robert Altemeyer. One of his many findings from studying fundamentalist, authoritarian-follower types is that many deal with the guilt of doing something morally wrong by asking God for forgiveness, after which they report feeling closer to "much less guilty" than to "appreciably less guilty". It may not be a wire in the head, but it should spare one some suffering.
I also very much like Dan Gilbert's TED talk on synthesizing happiness. I use the technique all the time, because "it really is not so bad" and "it turned out for the best".
comment by Utilitarian · 2010-01-04T00:01:31.722Z · LW(p) · GW(p)
Great post! I completely agree with the criticism of revealed preferences in economics.
As a hedonistic utilitarian, I can't quite understand why we would favor anything other than the "liking" response. Converting the universe to utilitronium producing real pleasure is my preferred outcome. (And fortunately, there's enough of a connection between my "wanting" and "liking" systems that I want this to happen!)
Replies from: Pablo_Stafforini
↑ comment by Pablo (Pablo_Stafforini) · 2010-01-04T00:39:52.518Z · LW(p) · GW(p)
I agree that this is a great post. (I'm sorry I didn't make that clear in my previous comment.)
I can't quite understand your parenthetical remark. I thought your position was that you wanted, rather than liked, experiences of liking to be maximized. Since you can want this regardless of whether you like it, I don't see why the connection you note between your 'wanting' and 'liking' systems is actually relevant.
Replies from: Utilitarian
↑ comment by Utilitarian · 2010-01-04T01:02:30.915Z · LW(p) · GW(p)
Actually, you're right -- thanks for the correction! Indeed, in general, I want altruistic equal consideration of the pleasure and pain of all sentient organisms, but this need have little connection with what I like.
As it so happens, I do often feel pleasure in taking utilitarian actions, but from a utilitarian perspective, whether that's the case is basically trivial. A miserable hard-core utilitarian would be much better for the suffering masses than a more happy only-sometimes-utilitarian (like myself).
Replies from: Pablo_Stafforini
↑ comment by Pablo (Pablo_Stafforini) · 2010-01-04T01:17:20.693Z · LW(p) · GW(p)
Thanks for the clarification. :-)
comment by Adam Zerner (adamzerner) · 2014-05-04T15:52:44.206Z · LW(p) · GW(p)
The big point I took away from this article is that wanting and liking are different, and thus we should be skeptical of "revealed preferences".
But the title seemed to imply that the article wanted to address the question of whether or not we should wirehead. The last paragraph seems to argue that we should be really careful with wireheading, because we could get it wrong and not really know that we got it wrong.
Go too far toward the liking direction, and you risk something different from wireheading only in that the probe is stuck in a different part of the brain. Go too far in the wanting direction, and you risk people getting lots of shiny stuff they thought they wanted but don't actually enjoy. So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?
I agree with this, but given that it's a central argument of the article, I think it could use a longer explanation.
comment by Adam Zerner (adamzerner) · 2014-05-04T15:45:57.381Z · LW(p) · GW(p)
I'm a neuroscience major and have known about the different circuits for liking vs. wanting. And it's always been a belief of mine that people's revealed preferences are oftentimes just wrong, and that this is a huge problem with our economy. But somehow I never connected this to the liking/wanting circuits being different. Thanks!
comment by Adam Zerner (adamzerner) · 2014-05-04T15:37:37.349Z · LW(p) · GW(p)
they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it
Could you give a quick summary of this rationale?
comment by AndrewH · 2010-01-25T20:05:37.705Z · LW(p) · GW(p)
I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.
What is missing from this is the effort (which eats up the limited willpower budget) required to get the second Pringle chip. Your motivation for a second Pringle chip would be much lower if you only brought one bag of Pringle chips, and all bags contained one chip. However, your motivation to have another classof(Pringle) = potato chip no doubt rises -- due to the fact that chips are on your thoughts rather than iPhones.
Talking about effort allows us to bring in habits into the discussion, which you might define as sets of actions that, due to their frequent use, are much less effort to perform.
The difference between enjoyment and motivation provides an argument that could rescue these people. It may be that a person really does enjoy spending time with their family more than they enjoy their iPhone, but they're more motivated to work and buy iPhones than they are to spend time with their family.
Alternatively, for potentially good reasons before (working hard to buy a house for said family), work has become habitual while spending time with the family has not. Hence, work is the default set of actions, the default no-effort state, and anything that takes time off work requires effort. Spending time with the family could do this, yet buying an iPhone with the tons of money this person has would not.
One way to summarize the effect of effort is as a function over a particular person's set of no-effort (no-willpower) actions. This function defines how much 'wanting' is required to do each action: lower-effort actions with the same amount of 'wanting' are more 'desirable' and more likely to be done.
Willpower plays a big role here, in that you can spend it to pull yourself out of the default state (a default state such as being in New York), but it only lasts so long.
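The effort-function idea above can be put in code. This is only a minimal sketch with hypothetical names and numbers (the threshold, budget, and effort cost are all made up), not a model anyone in the thread actually proposed:

```python
# Sketch of the "effort function" above: whether an action happens depends on
# its "wanting" minus the effort it costs. Habitual (no-effort) actions cost
# nothing; non-habitual ones drain a limited willpower budget.
# All names and numbers here are hypothetical.

def will_do(action, wanting, habits, willpower_budget, effort_cost=0.5):
    """Return (does_it, remaining_willpower)."""
    cost = 0.0 if action in habits else effort_cost
    if wanting >= cost:
        return True, willpower_budget            # desire alone covers the effort
    if willpower_budget >= cost:
        return True, willpower_budget - cost     # spend willpower instead
    return False, willpower_budget               # neither desire nor willpower suffices

habits = {"work"}   # work has become the default, no-effort action
budget = 1.0

# Habitual action: happens even at low wanting, and costs no willpower.
did_work, budget = will_do("work", 0.1, habits, budget)

# Non-habitual action at the same low wanting: drains the budget each time,
# until it stops happening at all.
did_family1, budget = will_do("family time", 0.1, habits, budget)
did_family2, budget = will_do("family time", 0.1, habits, budget)
did_family3, budget = will_do("family time", 0.1, habits, budget)

print(did_work, did_family1, did_family2, did_family3, budget)
```

With equal 'wanting', the habitual action keeps happening for free while the non-habitual one stops as soon as the willpower budget runs out, which is the asymmetry the comment describes.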
comment by CronoDAS · 2010-01-02T23:59:11.711Z · LW(p) · GW(p)
It's hard to track down specific things from that wireheading.com site, but this seems to be a good overview. Of particular note are a couple of excerpts:
The results of these experiments indicate that reinforcing brain stimulation may have two distinct effects: (a) it activates pathways related to natural drives, and (b) it stimulates reinforcement pathways normally activated by natural rewards. The empirical observations seem to contradict classic "drive-reduction" theories of reinforcement (reinforcement appears to be associated with increased drive in the EBS paradigm). However, it is not difficult to construct a plausible alternate hypothesis: Animals may self-stimulate because the stimulation provides the experience of an intense drive that is instantly reduced due to the concurrent activation of related reward neurons. This interpretation accounts neatly for many of the apparent paradoxes we have already encountered. Priming is necessary, according to this interpretation, because EBS reinforcement not only activates reward pathways but also provides the reason why that should be pleasurable (Deutsch, 1976). (This also accounts for rapid extinction, as well as the decreased efficacy of intermittent reinforcement.) The hypothesis assumes that the reinforcing properties of EBS are determined by the degree of activation of related motivational systems. It therefore accounts readily for the observed interactions between the reinforcing properties of a stimulus and various experimental conditions that affect related primary drives such as hunger. When there is little endogenous activity, for instance immediately after a meal, the stimulation elicits only a small amount of drive-related activity. Concurrent activation of related reward circuits therefore can produce only a small reinforcement effect. When hunger-related neural pathways are already active because of deprivation, the same stimulation elicits more drive and hence more reinforcement. Indeed, it may arouse the drive system sufficiently to elicit consumatory behavior that further potentiates the reinforcing effects of the electrical stimulation.
...
It is interesting to note that while the animal literature suggests that brain stimulation has positive, reinforcing effects, the human literature indicates that relief of anxiety, depression and other unpleasant affective conditions may be the most common "reward" of electrical brain stimulation in humans. Patients with electrodes in the septum, thalamus, and periventricular gray of the midbrain often express euphoria because the stimulation seems to reduce existing negative affective reactions (even intractable pain appears to loose its affective impact). However, many psychiatrists caution that this may not reflect an activation of a basic reward mechanism (Delgado, 1976; Heath et al., 1968). Relief from chronic anxiety has been reported during and even long after stimulation of frontal cortex. Again, the experiential response appears to be relief rather than reward per se (Crow&Cooper, 1972).
In general, it seems as though electrical brain stimulation isn't quite as effective at producing bliss as one might wish (or fear).
comment by mwengler · 2012-06-26T21:34:36.725Z · LW(p) · GW(p)
So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?
Somehow, trying to figure out the policy an FAI or paternalist government should have by examining our addictive reactions strikes me as like doing transportation planning by looking at people's reactions to red, yellow, and green lights. Not that people's reactions to these things are irrelevant to traffic planning, but rather that figuring out where people are trying to go is even more important than figuring out their reactions to traffic signals.
The post talks about signals in the brain that motivate us powerfully. Almost certainly, evolution favored these signalling mechanisms because of where it tended to take us rather than how it tended to direct us there.
Maybe an FAI, rather than figuring out how to make us feel satisfied or happy, would instead figure out how to evoke these responses when we were doing something good for ourselves, and negative responses when we were doing something bad. Maybe an FAI would re-wire us rather than wirehead us.
How do we stop our CEV from bringing us all beyond human, and should we even want to?
comment by A1987dM (army1987) · 2011-11-19T21:24:31.571Z · LW(p) · GW(p)
My grandma always assumes that if I don't want to have [some kind of food] right now, that means I don't like it.
comment by timtyler · 2011-01-26T01:26:22.907Z · LW(p) · GW(p)
Problem: large chunks of philosophy and economics are based upon wanting and liking being the same thing.
I don't think that is true.
"Wanting" maps onto expected utility; "liking" is the reward signal - the actual utility.
That framing surely makes it seem like pretty standard economics.
There are some minor footnotes about how the reward signal can sometimes be self-generated - e.g. when you know you should have got the reward, but were just unlucky.
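The expected-utility framing in this comment can be made concrete with a standard temporal-difference update, where "wanting" is a learned value estimate and "liking" is the actual reward. This is only an illustrative sketch with made-up numbers, not the commenter's own model:

```python
# Sketch of the framing above: "wanting" as a learned expected-utility
# estimate that drives behavior, "liking" as the actual reward signal.
# Learning rate and rewards are hypothetical.

def td_update(estimate, reward, lr=0.1):
    """One temporal-difference step: nudge the estimate toward the reward."""
    return estimate + lr * (reward - estimate)

wanting = 0.0
liking = None

# Phase 1: the stimulus really is rewarding, so "wanting" climbs toward it.
for _ in range(50):
    liking = 1.0                      # actual reward ("liking")
    wanting = td_update(wanting, liking)

# Phase 2: the reward goes away, but the learned expectation decays slowly.
for _ in range(5):
    liking = 0.0
    wanting = td_update(wanting, liking)

# "Wanting" remains high even though "liking" is now zero,
# dissociating the two quantities.
print(round(wanting, 2), liking)
```

The dissociation in the post then corresponds to the two signals being computed by different mechanisms, so the value estimate can stay elevated long after the reward has stopped arriving.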
comment by nazgulnarsil · 2011-01-25T20:55:42.146Z · LW(p) · GW(p)
As a person who plans to wirehead themselves if other positive futures don't work out, I find this very interesting but unconvincing.
comment by Robi Rahman (robirahman) · 2016-05-02T20:47:57.518Z · LW(p) · GW(p)
This summarizes a common strain of thought in economics, the idea of "revealed preferences". People tend to say they like a lot of things, like family or the environment or a friendly workplace. Many of the same people who say these things then go and ignore their families, pollute, and take high-paying but stressful jobs. The traditional economic explanation is that the people's actions reveal their true preferences, and that all the talk about caring about family and the environment is just stuff people say to look good and gain status.
I think you are mischaracterizing the concept of revealed preference. It's not that they claim to care about family or the environment "just for status", but rather, they exaggerate how much they care about one thing relative to another. For example, when I was overweight, I used to say stuff like "I want to be skinny". But I'd keep eating junk food anyway. The reality was that I wanted to eat junk food more than I wanted to be healthy. (Maybe not long-term: hyperbolic discounting can explain this, since people over-weight rewards that come sooner, so even people who will pick an apple instead of a cookie for tomorrow's lunch might be tempted enough to eat the cookie when the choice is right in front of them.) Nowadays, I enjoy being healthy more than I enjoy the taste of ice cream, so I can convince myself to stop eating it if I think about the downsides.
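The apple/cookie point about hyperbolic discounting can be shown numerically: hyperbolic discounting allows preferences to reverse as a reward gets closer, which exponential discounting never does. The values and discount rate below are purely hypothetical:

```python
# Sketch of the hyperbolic-discounting point above. With a hyperbolic
# discount curve, a smaller-sooner reward can overtake a larger-later one
# as both draw near, producing the preference reversal described.
# All values here are hypothetical.

def hyperbolic(value, delay_days, k=1.0):
    """Hyperbolic discounting: subjective value = value / (1 + k * delay)."""
    return value / (1 + k * delay_days)

apple_value, cookie_value = 10.0, 6.0   # the apple is better in the long run

# Choosing a month in advance (apple in 31 days vs. cookie in 30): apple wins.
far_apple = hyperbolic(apple_value, 31)
far_cookie = hyperbolic(cookie_value, 30)

# Choosing right now (cookie immediately vs. apple tomorrow): cookie wins.
near_apple = hyperbolic(apple_value, 1)
near_cookie = hyperbolic(cookie_value, 0)

print(far_apple > far_cookie, near_cookie > near_apple)
```

Both comparisons come out true: the same person who sincerely plans to pick the apple for tomorrow's lunch takes the cookie when it is right in front of them, with no lying about preferences required.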
comment by Adam Zerner (adamzerner) · 2015-02-12T04:06:07.136Z · LW(p) · GW(p)
To me, the question of whether wireheading is good is one where wireheading is defined as stimulating actual liking (not wanting). Because to me, the question is trying to get at whether or not happiness is reducible in theory, not in practice.
And so, I sense that this shouldn't be titled "Are wireheads happy?". It's more about the distinction between wanting and liking.
comment by Xaos · 2013-04-03T18:49:39.073Z · LW(p) · GW(p)
"One could come up with a story about how people are motivated to act selfishly but enjoy acting morally..."
Actually, I think a lot of stories are like that.
Because "CONFLICT IS DRAMA!!!!!1!!!one!!!!", a whole lot of stories I've been reading involve the characters having an arc that goes like this:
-Problem occurs, everyone has different ideas about how to solve it.
-Ignored dissenting character, perhaps with prodding by certain outside forces, blows up, acts like a jerk, and storms off.
-Dissenting character realizes that regardless of how much better their own plan was, they've let a lot of relatively small and meaningless things drive a wedge between themselves and their friends, right when their friends needed them to be there, and "Oh, I was a fool!" and blah blah blah, kiss and make up, Friendship is Magic.
Unless it's a zombie apocalypse story, in which case the character in question NEVER stops fighting until they die, or at least they stop and reveal "they were a good person all along who made bad decisions" about ten minutes or so before the zombies eat them.
comment by mwengler · 2012-06-26T20:41:27.922Z · LW(p) · GW(p)
For research on human happiness that really does a great job of presenting non-intuitive results in a compelling fashion, I recommend Daniel Gilbert's Stumbling on Happiness.
For a book that's great in a lot of ways, I recommend Robert Frank's "The Darwin Economy." I read this because in his interview on Russ Roberts's "EconTalk" podcast, Robert Frank made the claim that 100 years from now Darwin will be recognized as the greatest economist.
Some of their points relevant to wireheading: happiness seems mostly relative - relative to where you were recently. In engineering/physics terms, I think of it as there being no DC term: if you attach to something pleasure-producing (whether wirehead, or just plain head), it is delicious, and delicious for a while, but stay attached long enough and its deliciousness declines back towards the middle, until it is just OK.
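The "no DC term" picture above amounts to high-pass filtering the stimulus: felt pleasure is the input minus a slowly adapting baseline, so a constant input decays toward neutral. A minimal sketch, with an assumed (illustrative) adaptation time constant:

```python
# Hedonic adaptation as a high-pass filter: a constant stimulus feels
# intense at first, then the baseline catches up and the felt pleasure
# decays toward zero. The time constant tau is an illustrative assumption.

def adapted_pleasure(stimulus, steps, tau=0.9):
    """Felt pleasure at each step = stimulus minus an adapting baseline."""
    baseline = 0.0
    felt = []
    for _ in range(steps):
        felt.append(stimulus - baseline)
        baseline = tau * baseline + (1 - tau) * stimulus  # baseline adapts
    return felt

felt = adapted_pleasure(stimulus=1.0, steps=50)
print(round(felt[0], 2))   # 1.0 - the first moments are delicious
print(round(felt[-1], 2))  # near zero - the same input now feels neutral
```

The same filter also predicts the flip side: removing a long-held pleasure produces a transient negative spike before the baseline re-adapts, which matches the "relative to where you were recently" framing.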
comment by AGirlAlone · 2012-02-10T09:15:44.811Z · LW(p) · GW(p)
I wonder. I grew up with experience in multiple systems of meditation, and found a way that works for me. Without electrodes or drugs or Nobel Prizes, I can choose to feel happy and relaxed and whatever. When I think about it, meditation can feel more pleasing and satisfying than every other experience in my life. Yet (luckily?) I do not feel any compulsion to do that in place of many other things, or try to advocate it. This is not because of willpower. While it lasts I like and want it, as if there's fulfillment of purpose, and when it's over I cannot recall that feeling faithfully enough to desire it more than I desire chocolate. Also, I cannot very reliably reproduce the feeling - it occurs only some of the time I try, cannot be had too frequently (no idea why) and cannot be consciously prolonged. So I consider it a positive addition to my life, especially helpful in yanking me out of episodes of gloom.
This of course raises multiple questions. There's such a thing as ambient mood as opposed to current momentary pleasure, and if a person is pissed off too often to concentrate productively, would improving the mood be the right choice, especially if it has an upper bound and doesn't lead to the person madly pressing the button indefinitely? Hell, if there's any way to make people happier with no other change, without causing crippling obsession - maybe there's such a quirk in the brain (with want and pleasure detached from each other) to be exploited safely with meditation, maybe the button is in responsible hands - would it be acceptable? Though, the meditation sometimes makes me wonder if the mind can directly change the world (I changed my emotional reality, and it felt real). Is impaired rationality an acceptable price then?
comment by Uni · 2011-06-28T05:43:00.184Z · LW(p) · GW(p)
So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?
Governments should give people what people say they want, rather than giving people what the governments think will make people happier, whenever they can't do both. But this is not because it's intrinsically better for people to get what they want than to get what makes them happier (it isn't); it's because people will resent what they perceive as paternalism in governments, and because they won't pay taxes and obey laws in general if they resent their governments. Without taxes and law-abiding citizens, there will not be much happiness in the long run. So, simply for the sake of happiness maximizing, governments should (except, possibly, in some very, very extreme situations) just do what people want.
It's understandable that people want others to respect what they want, rather than wanting others to try to make them happier: even if we are not all experts ourselves on what will make us happier (not all people know about happiness research), we may need to make our own mistakes in order to really come to trust that what people say works works and that what people say doesn't work doesn't work. Also, some of government's alleged benevolent paternalism "for people's own good" (for example Orwellian surveillance in the name of the "war on terror") may even be part of a plan to enslave or otherwise exploit the people. We may know these things subconsciously, and that may explain why some of us are so reluctant to conclude that what we want has no intrinsic value and that pleasure is the only thing that has intrinsic value. The instrumental value of letting people have what they want (rather than paternalistically giving them what some government thinks they need) is so huge that saying it has "mere" instrumental value feels like neglecting how huge a value it has. However, it doesn't really have intrinsic value; it just feels that way, because we are not accustomed to thinking that something that has only instrumental value can have such a huge instrumental value.
For example, freedom of speech is of huge importance, but not primarily because people want it, but primarily because it provides happiness and prevents too much suffering from happening. If it were the case that freedom of speech didn't provide any happiness and didn't prevent any suffering, but people still eagerly wanted it, there would be no point in letting anybody have freedom of speech. However, this would imply either that being denied freedom of speech in no way caused any form of suffering in people, or that, if it caused suffering, then getting freedom of speech wouldn't relieve any of that suffering. That is a hypothetical scenario so hard to imagine that I think the fact that it is so hard to imagine is the reason why people have difficulties accepting the truth that freedom of speech has merely instrumental value.
comment by [deleted] · 2010-04-28T16:07:17.219Z · LW(p) · GW(p)
I certainly find that I like creative work but don't want to work, like music but don't want to listen to music, like exercise but (sometimes) don't want to exercise. I like volunteering, but don't want to volunteer. (Perhaps tautologically) I like being in a good mood but sometimes don't want to be in a good mood.
From that short list, it seems that one ought to give more credence to "like" than "want." What I like doing, in the moment, correlates fairly well with conventional judgments of good behavior. (To be fair, some of what I like most -- learning about what interests me, friendly discussions/conversations -- are not particularly good behavior.)
But having kids, for instance, is something that people want but don't like, and it certainly falls into the good behavior category. Working a dull job for pay. Any kind of goal that is about the destination rather than the journey -- curing a disease or securing a civil right or a similar aim -- may well be a process you dislike. Exposing yourself to danger. I don't think these are failures to properly estimate future happiness, I think these are examples of not caring about future happiness. (This is a friend of mine's argument against utilitarianism -- there are things he wants that he knows he will not like, and he doesn't want anyone forcing him to make his opioids happy against his will.)
I would guess that liking is more uniform than wanting. You can choose to value almost anything, and therefore want almost anything. What you like may be neurologically determined. Which may be why in my observation "liking" corresponds so well with being healthy, productive, well-rounded, and so on; I suspect you like, in a rough sense, what is good for you. Whereas you may want a variety of things that are not good for you, whether in a destructive sense or a noble sense.
comment by giles_english · 2010-01-12T09:35:55.783Z · LW(p) · GW(p)
Does this liking/enjoying dichotomy explain the various flavours of erotic masochism?
Replies from: ciphergoth, pdf23ds
↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T10:11:16.311Z · LW(p) · GW(p)
Speaking from my own experience, I worry a lot that I'm often drawn to play computer games for reasons other than enjoying them. I never worry this about SM.
↑ comment by pdf23ds · 2010-01-12T09:48:03.828Z · LW(p) · GW(p)
I think you mean the "enjoyment/motivation dichotomy" or the "wanting/liking dichotomy". Many sexual masochists report a sort of euphoria and dissociation that begins to arise after a certain amount of pain, which would seem to be a positive liking. Many also report a positive thrill from humiliation that might be considered to overwhelm the negative parts of the experience. (I think pretty much all report that if the same things were to happen in a non-role-playing situation, they would not like or enjoy them. But it's hard to test that.)
comment by Peter_Twieg · 2010-01-08T01:38:45.255Z · LW(p) · GW(p)
I realize that I'm late to the game on this post, but I have to say that as an economist, I found the take-home point about revealed preference to be quite interesting, and it makes me wonder about the extent to which further neuroscience research will find systematic disjunctions in everyday circumstances between what motivates us and what gives us pleasure. Undoubtedly this would be leveraged into new sorts of paternalistic arguments... I'm guessing we'll need another decade or two before we have the neuropaternalist's equivalent of Nudge, however.
comment by wedrifid · 2010-01-02T04:26:09.870Z · LW(p) · GW(p)
A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it).
One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.
Rats! The neuroscientists were studying rats. It is troubling how easy it is to come up with these signalling stories to explain whatever observations we encounter.
What explanation can be suggested for a different mechanism for enjoying a food than for the motivation to get food, one that doesn't rely on impressing our little rat friends with our culinary sophistication?
Replies from: Unknowns
comment by spamham · 2010-01-02T03:23:46.856Z · LW(p) · GW(p)
Seems like a pretty large leap from certain simple behaviours of rats to the natural-language meaning of "wanting" and "liking". Far-reaching claims such as this one should have strong evidence. Why not give humans drugs selective for either system and ask them? (Incidentally, at least with the dopamine system, this has been done millions of times ;) The opioids are a bit trickier because activating mu receptors (e.g. by means of opiates) will in turn cause a dopamine surge, too)
(Yes, I should just read the paper for their rationale, but can't be bothered right now...)
comment by Roko · 2010-01-01T19:25:40.887Z · LW(p) · GW(p)
One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.
Surely that should read "while still pursuing their genes' selfish gain.", because if you do something that makes you less happy, you have not gained...
comment by DanielLC · 2010-01-02T00:22:02.286Z · LW(p) · GW(p)
There is not a single utility qualia. There is a huge number of different kinds of qualia. Each kind of qualia is associated with a different amount of utility. For example, the qualia of red has zero utility. The qualia of fun is positive, as is the qualia of love. Even that's an oversimplification, as each of those can refer to different kinds of qualia.
The mere idea of having separate qualia for "wanting" and "liking" means nothing.
The question is: are we estimating the utility of each kind of qualia correctly?
The more formal methods suggest that "wanting" is, if anything, more important than "liking", whereas intuitive methods say the reverse.
I suggest figuring out what causes us to intuitively give one answer.
Does the fact that we believe "wanting" is unimportant mean it is? Perhaps we're just lying about it to ourselves. It makes us happy, but we don't know it does. I think this is the same as saying we only feel it on a subconscious level. If so, the question is equivalent to asking if the subconscious mind feels qualia.
Replies from: wedrifid
↑ comment by wedrifid · 2010-01-02T09:02:26.111Z · LW(p) · GW(p)
The mere idea of having separate qualia for "wanting" and "liking" means nothing.
Knowing that 'wanting' and 'liking' use distinct neurological mechanisms is useful. (Using the term 'qualia' - not so much.)
Replies from: DanielLC
↑ comment by DanielLC · 2010-01-06T18:53:47.524Z · LW(p) · GW(p)
There is more than one emotion that can be used for reward. Nobody has argued against that. There can't be more than one unless there are separate neurological mechanisms.
The important thing is that not every emotion usable for reward is good. There is no way they could possibly have figured that out from that study. That part was either from introspection or from someone misinterpreting the study.