Not for the Sake of Pleasure Alone

post by lukeprog · 2011-06-11T23:21:43.635Z · LW · GW · Legacy · 132 comments

Contents

  Wanting and liking
  Commingled signals
  Conclusion
  Notes
  References

Related: Not for the Sake of Happiness (Alone), Value is Fragile, Fake Fake Utility Functions, You cannot be mistaken about (not) wanting to wirehead, Utilons vs. Hedons, Are wireheads happy?

When someone tells me that all human action is motivated by the desire for pleasure, or that we can solve the Friendly AI problem by programming a machine superintelligence to maximize pleasure, I use a two-step argument to persuade them that things are more complicated than that.

First, I present them with a variation on Nozick's experience machine,1 something like this:

Suppose that an advanced team of neuroscientists and computer scientists could hook your brain up to a machine that gave you maximal, beyond-orgasmic pleasure for the rest of an abnormally long life. They would then blast you and the pleasure machine into deep space at near light-speed so that you could never be interfered with. Would you let them do this for you?

Most people say they wouldn't choose the pleasure machine. They begin to realize that even though they usually experience pleasure when they get what they desire, they want more than just pleasure. They also want to visit Costa Rica and have good sex and help their loved ones succeed.

But we can be mistaken when inferring our desires from such intuitions, so I follow this up with some neuroscience.


Wanting and liking

It turns out that the neural pathways for 'wanting' and 'liking' are separate, but overlap quite a bit. This explains why we usually experience pleasure when we get what we want, and thus are tempted to think that all we desire is pleasure. It also explains why we sometimes don't experience pleasure when we get what we want, and why we wouldn't plug in to the pleasure machine.

How do we know this? We now have objective measures of wanting and liking (desire and pleasure), and these processes do not always occur together.

One objective measure of liking is 'liking expressions.' Human infants, primates, and rats exhibit homologous facial reactions to pleasant and unpleasant tastes.2 For example, both rats and human infants display rhythmic lip-licking movements when presented with sugary water, and both rats and human infants display a gaping reaction and mouth-wipes when presented with bitter water.3

Moreover, these animal liking expressions change in ways analogous to changes in human subjective pleasure. Food is more pleasurable to us when we are hungry, and sweet tastes elicit more liking expressions in rats when they are hungry than when they are full.4 Similarly, both rats and humans respond to intense doses of salt (more concentrated than in seawater) with mouth gapes and other aversive reactions, and humans report subjective displeasure. But if humans or rats are depleted of salt, both humans and rats react instead with liking expressions (lip-licking), and humans report subjective pleasure.5

Luckily, these liking and disliking expressions share a common evolutionary history, and use the same brain structures in rats, primates, and humans. Thus, fMRI scans have uncovered to some degree the neural correlates of pleasure, giving us another objective measure of pleasure.6

As for wanting, research has revealed that dopamine is necessary for wanting but not for liking, and that dopamine largely causes wanting.7

Now we are ready to explain how we know that we do not desire pleasure alone.

First, one can experience pleasure even if dopamine-generating structures have been destroyed or depleted.8 Chocolate milk still tastes just as pleasurable despite the severe reduction of dopamine neurons in patients suffering from Parkinson's disease,9 and the pleasure of amphetamine and cocaine persists throughout the use of dopamine-blocking drugs or dietary-induced dopamine depletion — even while these same treatments do suppress the wanting of amphetamine and cocaine.10

Second, elevation of dopamine causes an increase in wanting, but does not cause an increase in liking (when the goal is obtained). For example, mice with raised dopamine levels work harder and resist distractions more (compared to mice with normal dopamine levels) to obtain sweet food rewards, but they don't exhibit stronger liking reactions when they obtain the rewards.11 In humans, drug-induced dopamine increases correlate well with subjective ratings of 'wanting' to take more of the drug, but not with ratings of 'liking' that drug.12 In these cases, it becomes clear that we want some things besides the pleasure that usually results when we get what we want.

Indeed, it appears that mammals can come to want something that they have never before experienced pleasure when getting. In one study,13 researchers observed the neural correlates of wanting while feeding rats intense doses of salt during their very first time in a state of salt-depletion. That is, the rats had never before experienced intense doses of salt as pleasurable (because they had never been salt-depleted before), and yet they wanted salt the very first time they encountered it in a salt-depleted state. 

 

Commingled signals

But why are liking and wanting so commingled that we might confuse the two, or think that the only thing we desire is pleasure? It may be because the two different signals are literally commingled on the same neurons. Researchers explain:

Multiplexed signals commingle in a manner akin to how wire and optical communication systems carry telephone or computer data signals from multiple telephone conversations, email communications, and internet web traffic over a single wire. Just as the different signals can be resolved at their destination by receivers that decode appropriately, we believe that multiple reward signals [liking, wanting, and learning] can be packed into the activity of single ventral pallidal neurons in much the same way, for potential unpacking downstream.

...we have observed a single neuron to encode all three signals... at various moments or in different ways (Smith et al., 2007; Tindell et al., 2005).14
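
The telecom analogy can be made concrete with a toy simulation. The sketch below is only an illustration of multiplexing in general, not a model of the actual neural code: three invented 'reward signals' are carried on one noisy channel at different carrier frequencies, and a receiver that knows the encoding recovers each one separately downstream. All names and numbers here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one second sampled at 1 kHz

# Three slowly varying toy "reward signals" (stand-ins for liking, wanting, learning).
signals = {
    "liking":   1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t),
    "wanting":  0.8 + 0.4 * np.cos(2 * np.pi * 0.5 * t),
    "learning": 0.3 + 0.3 * np.sin(2 * np.pi * 1.5 * t),
}

# Give each signal its own carrier frequency and sum everything onto a single
# channel -- one shared activity trace -- plus a little measurement noise.
carriers = {"liking": 40.0, "wanting": 120.0, "learning": 200.0}
channel = sum(signals[k] * np.cos(2 * np.pi * carriers[k] * t) for k in signals)
channel = channel + 0.05 * rng.standard_normal(t.size)

def demodulate(channel, carrier_hz, t, window=100):
    """Recover one signal: mix with its carrier, then low-pass with a moving average."""
    mixed = 2.0 * channel * np.cos(2 * np.pi * carrier_hz * t)
    kernel = np.ones(window) / window
    return np.convolve(mixed, kernel, mode="same")

# A downstream "receiver" that knows the carriers can unpack each signal.
for name, freq in carriers.items():
    recovered = demodulate(channel, freq, t)
    err = np.mean(np.abs(recovered[100:-100] - signals[name][100:-100]))
    print(f"{name:9s} mean reconstruction error: {err:.3f}")
```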

 

Conclusion

In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

 

 

Notes

1 Nozick (1974), pp. 44-45.

2 Steiner (1973); Steiner et al (2001).

3 Grill & Berridge (1985); Grill & Norgren (1978).

4 Berridge (2000).

5 Berridge et al. (1984); Schulkin (1991); Tindell et al. (2006).

6 Berridge (2009).

7 Berridge (2007); Robinson & Berridge (2003).

8 Berridge & Robinson (1998); Berridge et al. (1989); Pecina et al. (1997).

9 Sienkiewicz-Jarosz et al. (2005).

10 Brauer et al. (2001); Brauer & de Wit (1997); Leyton (2009); Leyton et al. (2005).

11 Cagniard et al. (2006); Pecina et al. (2003); Tindell et al. (2005); Wyvell & Berridge (2000).

12 Evans et al. (2006); Leyton et al. (2002).

13 Tindell et al. (2009).

14 Aldridge & Berridge (2009). See Smith et al. (2011) for more recent details on commingling.


References

Aldridge & Berridge (2009). Neural coding of pleasure: 'rose-tinted glasses' of the ventral pallidum. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 62-73). Oxford University Press.

Berridge (2000). Measuring hedonic impact in animals and infants: Microstructure of affective taste reactivity patterns. Neuroscience and Biobehavioral Reviews, 24: 173-198.

Berridge (2007). The debate over dopamine's role in reward: the case for incentive salience. Psychopharmacology, 191: 391-431.

Berridge (2009). ‘Liking’ and ‘wanting’ food rewards: Brain substrates and roles in eating disorders. Physiology & Behavior, 97: 537-550.

Berridge, Flynn, Schulkin, & Grill (1984). Sodium depletion enhances salt palatability in rats. Behavioral Neuroscience, 98: 652-660.

Berridge, Venier, & Robinson (1989). Taste reactivity analysis of 6-hydroxydopamine-induced aphagia: Implications for arousal and anhedonia hypotheses of dopamine function. Behavioral Neuroscience, 103: 36-45.

Berridge & Robinson (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Research Reviews, 28: 309-369.

Brauer, Cramblett, Paxton, & Rose (2001). Haloperidol reduces smoking of both nicotine-containing and denicotinized cigarettes. Psychopharmacology, 159: 31-37.

Brauer & de Wit (1997). High dose pimozide does not block amphetamine-induced euphoria in normal volunteers. Pharmacology Biochemistry & Behavior, 56: 265-272.

Cagniard, Beeler, Britt, McGehee, Marinelli, & Zhuang (2006). Dopamine scales performance in the absence of new learning. Neuron, 51: 541-547.

Evans, Pavese, Lawrence, Tai, Appel, Doder, Brooks, Lees, & Piccini (2006). Compulsive drug use linked to sensitized ventral striatal dopamine transmission. Annals of Neurology, 59: 852-858.

Grill & Berridge (1985). Taste reactivity as a measure of the neural control of palatability. In Epstein & Sprague (eds.), Progress in Psychobiology and Physiological Psychology, Vol 2 (pp. 1-6). Academic Press.

Grill & Norgren (1978). The taste reactivity test II: Mimetic responses to gustatory stimuli in chronic thalamic and chronic decerebrate rats. Brain Research, 143: 263-279.

Leyton, Boileau, Benkelfat, Diksic, Baker, & Dagher (2002). Amphetamine-induced increases in extracellular dopamine, drug wanting, and novelty seeking: a PET/[11C]raclopride study in healthy men. Neuropsychopharmacology, 27: 1027-1035.

Leyton, Casey, Delaney, Kolivakis, & Benkelfat (2005). Cocaine craving, euphoria, and self-administration: a preliminary study of the effect of catecholamine precursor depletion. Behavioral Neuroscience, 119: 1619-1627.

Leyton (2009). The neurobiology of desire: Dopamine and the regulation of mood and motivational states in humans. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 222-243). Oxford University Press.

Nozick (1974). Anarchy, State, and Utopia. Basic Books.

Pecina, Berridge, & Parker (1997). Pimozide does not shift palatability: Separation of anhedonia from sensorimotor suppression by taste reactivity. Pharmacology Biochemistry and Behavior, 58: 801-811.

Pecina, Cagniard, Berridge, Aldridge, & Zhuang (2003). Hyperdopaminergic mutant mice have higher 'wanting' but not 'liking' for sweet rewards. The Journal of Neuroscience, 23: 9395-9402.

Robinson & Berridge (2003). Addiction. Annual Review of Psychology, 54: 25-53.

Schulkin (1991). Sodium Hunger: the Search for a Salty Taste. Cambridge University Press.

Sienkiewicz-Jarosz, Scinska, Kuran, Ryglewicz, Rogowski, Wrobel, Korkosz, Kukwa, Kostowski, & Bienkowski (2005). Taste responses in patients with Parkinson's disease. Journal of Neurology, Neurosurgery, & Psychiatry, 76: 40-46.

Smith, Berridge, & Aldridge (2007). Ventral pallidal neurons distinguish 'liking' and 'wanting' elevations caused by opioids versus dopamine in nucleus accumbens. Program No. 310.5, 2007 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience.

Smith, Berridge, & Aldridge (2011). Disentangling pleasure from incentive salience and learning signals in brain reward circuitry. Proceedings of the National Academy of Sciences (PNAS Plus), 108: 1-10.

Steiner (1973). The gustofacial response: Observation on normal and anencephalic newborn infants. Symposium on Oral Sensation and Perception, 4: 254-278.

Steiner, Glaser, Hawilo, & Berridge (2001). Comparative expression of hedonic impact: affective reactions to taste by human infants and other primates. Neuroscience and Biobehavioral Reviews, 25: 53-74.

Tindell, Berridge, Zhang, Pecina, & Aldridge (2005). Ventral pallidal neurons code incentive motivation: Amplification by mesolimbic sensitization and amphetamine. European Journal of Neuroscience, 22: 2617-2634.

Tindell, Smith, Pecina, Berridge, & Aldridge (2006). Ventral pallidum firing codes hedonic reward: When a bad taste turns good. Journal of Neurophysiology, 96: 2399-2409.

Tindell, Smith, Berridge, & Aldridge (2009). Dynamic computation of incentive salience: 'wanting' what was never 'liked'. The Journal of Neuroscience, 29: 12220-12228.

Wyvell & Berridge (2000). Intra-accumbens amphetamine increases the conditioned incentive salience of sucrose reward: Enhancement of reward 'wanting' without enhanced 'liking' or response reinforcement. Journal of Neuroscience, 20: 8122-8130.

132 comments

Comments sorted by top scores.

comment by Morendil · 2011-06-12T10:11:32.942Z · LW(p) · GW(p)

Most people say they wouldn't choose the pleasure machine

Possibly because the word "machine" is sneaking in connotations that lead to the observed conclusion: we picture something like a morphine pump, or something perhaps only slightly less primitive.

What if we interpret "machine" to mean "a very large computer running a polis under a fun-theoretically optimal set of rules" and "hook up your brain" to mean "upload"?

Replies from: loup-vaillant
comment by loup-vaillant · 2011-06-15T11:36:05.253Z · LW(p) · GW(p)

Then you're talking Friendly AI with the prior restriction that you have to live alone. Many¹ people will still run the "I would be subjected to a machine" cached thought, will still disbelieve that a Machine™ could ever understand our so-complex-it's-holy psyche, that even if it does, it will automatically be horrible, and that the whole concept is absurd anyway.

In that case they wouldn't reject the possibility because they don't want to live alone and happy, but because they positively believe FAI is impossible. My solution in that case is just to propose that they live a guaranteed happy life, but alone. For people who still refuse to answer on the grounds of impossibility, invoking the supernatural may help.

1: I derive that "many" from one example alone, but I suspect it extends to most enlightened people who treat philosophy as closer to literature than science (wanting to read the sources, and treating questions like "was Nietzsche/Kant/Spinoza plain wrong on such a point" as ill-typed — there are no truths or fallacies, only schools of thought). Michel Onfray appears to say that's typically European.

Replies from: tyrsius
comment by tyrsius · 2011-06-15T22:14:16.175Z · LW(p) · GW(p)

This machine, if it were to give you maximal pleasure, should be able to make you feel as if you are not alone.

The only way I can see this machine actually making good on its promise is to be a Matrix-quality reality engine, but with you in the king seat.

I would take it.

Replies from: loup-vaillant
comment by loup-vaillant · 2011-06-16T08:54:03.131Z · LW(p) · GW(p)

Of course it would. My question is, to what extent would you mind being alone? Not feeling alone, not even believing you are alone, just being alone.

Of course, once I'm plugged in to my Personal Matrix, I would not mind any more, for I wouldn't feel nor believe I am alone. But right now I do mind. Whatever the real reasons behind it, being cut off from the rest of the world just feels wrong. Basically, I believe I want Multiplayer Fun bad enough to sacrifice some Personal Fun.

Now, I probably wouldn't want to sacrifice much personal fun, so given the choice between maximum Personal Fun and my present life, (no third alternative allowed), I would probably take the blue pill. Though it would really bother me if everyone else wouldn't be given the same choice.

Now to get back on topic, I suspect Luke did want to talk about a primitive system that would turn you into an Orgasmium. Something that would even sacrifice Boredom to maximize subjective pleasure and happiness. (By the way, I suspect that the "Eternal Bliss" promised by some belief systems is just as primitive.) Such a primitive system would exactly serve his point: do you only want happiness and pleasure? Would you sacrifice everything else to get it?

Replies from: tyrsius
comment by tyrsius · 2011-06-16T16:07:08.166Z · LW(p) · GW(p)

If this is indeed Luke's intended offer, then I believe it to be a lie. Without the ability to introduce varied pleasure, an Orgasmium would fail to deliver on its promise of "maximal pleasure."

For the offer to be true, it would need to be a Personal Matrix.

Replies from: jhuffman
comment by jhuffman · 2011-06-16T20:55:53.485Z · LW(p) · GW(p)

Some people think that extended periods of euphoria give up no marginal pleasure. I haven't found that to be the case - but perhaps if we take away any sense of time passing then it would work.

comment by utilitymonster · 2011-06-12T19:10:01.958Z · LW(p) · GW(p)

Both you and Eliezer seem to be replying to this argument:

  • People only intrinsically desire pleasure.

  • An FAI should maximize whatever people intrinsically desire.

  • Therefore, an FAI should maximize pleasure.

I am convinced that this argument fails for the reasons you cite. But who is making that argument? Is this supposed to be the best argument for hedonistic utilitarianism?

comment by Vladimir_Nesov · 2011-06-12T00:06:02.739Z · LW(p) · GW(p)

I wonder how much taking these facts into account helps. The error that gets people round up to simplistic goals such as "maximize pleasure" could just be replayed at a more sophisticated level, where they'd say "maximize neural correlates of wanting" or something like that, and move to the next simplest thing that their current understanding of neuroscience doesn't authoritatively forbid.

Replies from: lukeprog, poh1ceko, Miller
comment by lukeprog · 2011-06-12T00:41:29.586Z · LW(p) · GW(p)

Sure. And then I write a separate post to deal with that one. :)

There are also more general debunkings of all such 'simple algorithm for friendly ai' proposals, but I think it helps to give very concrete examples of how particular proposed solutions fail.

comment by poh1ceko · 2011-06-18T04:02:16.280Z · LW(p) · GW(p)

It helps insofar as the person's conscious mind lags behind in awareness of the object being maximized.

Replies from: None
comment by [deleted] · 2014-06-14T05:17:46.121Z · LW(p) · GW(p)

"Moore proposes an alternative theory in which an actual pleasure is already present in the desire for the object and that the desire is then for that object and only indirectly for any pleasure that results from attaining it. 'In the first place, plainly, we are not always conscious of expecting pleasure, when we desire a thing. We may only be conscious of the thing which we desire, and may be impelled to make for it at once, without any calculation as to whether it will bring us pleasure or pain. In the second place, even when we do expect pleasure, it can certainly be very rarely pleasure only which we desire.'"

comment by Miller · 2011-06-12T09:41:38.804Z · LW(p) · GW(p)

Sounds like a decent methodology to me.

comment by [deleted] · 2011-06-12T02:14:17.228Z · LW(p) · GW(p)

[Most people] begin to realize that even though they usually experience pleasure when they get what they desired, they want more than just pleasure. They also want to visit Costa Rica and have good sex and help their loved ones succeed.

Actually, they claim to also want those other things. Your post strongly implies that this claim is justified, when it seems much more plausible to me to just assume they instead also want a dopamine hit. In other words, to properly wirehead them, Nozick's machine would not just stimulate one part (liking), but two (+wanting) or three (+learning).

So I don't really see the point of this article. If I take it at face-value (FAI won't just optimize the liking response), then it's true, but obvious. However, the subtext seems to be that FAI will have to care about a whole bunch of values and desires (like travels, sex, companionship, status and so on), but that doesn't follow from your arguments.

It seems to me that you are forcing your conclusions in a certain direction that you so far haven't justified. Maybe that's just because this is an early post and you'll still get to the real meat (I assume that is the case and will wait), but I'm uncomfortable with the way it looks right now.

comment by Friendly-HI · 2011-06-12T15:23:28.442Z · LW(p) · GW(p)

Hey Lukeprog, thanks for your article.

I take it you have read the new book "Pleasures of the Brain" by Kringelbach & Berridge? I've got the book here but haven't yet had the time/urge to read it bookend to bookend. From what I've glimpsed while superficially thumbing through it however, it's basically your article in book-format. Although I believe I remember that they also give "learning" the same attention as they give to liking and wanting, as one of your last quotations hints at.

It's quite a fascinating thought that the "virtue" of curiosity which some people display may exist simply because they get a major kick out of learning new things - as I suspect most people here do.


Anyway, I've never quite bought the experience machine argument for two reasons:

1) As we probably all know: what people say they WOULD DO is next to worthless. Look at the hypnotic pull of World of Warcraft. It's easy to resist for me, but there may very well be possible virtual realities that strike my tastes in such an irresistible manner that my willpower will be powerless and I'd prefer the virtual reality over this one. I may feel quite guilty about it, but it may be so heroin-like and/or pleasurable that there is simply no escape for me. It's phenomenally easy to bullshit ourselves into thinking we would do what other people and what we ourselves would expect from us, but realistically speaking we're all suckers for temptation instead of kings of willpower. I'd say that if the experience machine existed and was easily obtainable, then our streets would be deserted.

2) For the sake of argument let's assume, purely on speculation, that the human ability of imagination has brought with it the rise of a kind of psychological fail-safe against losing oneself in pleasurable daydreams - since the ability to vividly imagine things and lose oneself in daydreams may obviously be an evolutionary counter-adaptive feature of the human brain. If we accept this speculative premise, it may be reasonable to expect that our brain always quickly runs a query about whether the origin of our pleasure has real/tangible or "imaginative" reasons, and makes us value the former more/differently.

(I won't be nit-picky about the words "real" and "imaginary" here; of course all feelings are caused by neurological and real phenomena. I trust you catch my drift and understand in which way I use these two words here.)

The human brain can tell the difference between fiction and non-fiction very well - it would be rather bothersome to mistake an image of a tiger or even our pure imagination for the real deal. And for similar adaptive reasons, it seems, we are very aware of whether our hedonic pleasures have tangible or imaginary origins. So when I ask "do you want to experience "imaginary" pleasures for the rest of your existence even though nothing real or tangible will ever be the source of your pleasure" then of course people will reject it.

We humans can obviously indulge in imaginary pleasures a lot, but we usually all crave at least some "reality-bound" pleasures as well. So asking someone to hook up to the experience machine is like depriving them of a "distinct form" of pleasure that may absolutely depend on being real.

So I think the experience machine thought experiment isn't really telling us whether or not pleasure is the only thing that people desire. Instead, it may simply be the case that it is indeed an important precondition for a wide range of pleasure to be based on reality - and being entirely deprived of this reality-correlating pleasure may in a way feel like being deprived of water - horrible. People may be right to reject the experience machine, but that tells us nothing about whether or not pleasure is the only thing they desire. So I think the experience machine is a rather poor argument to make in order to illustrate that there are other things besides pleasure that people care about - the research you cite, however, is much clearer and quite unmistakable in this respect.

On another note, I think we're missing the most interesting question here: Indeed, there seem to be other things besides pleasure that people care about... but SHOULD they? Should I ever want something that doesn't make me happy, just because my genes wired me up to value it? Why should I want something if it doesn't make me directly or indirectly happy or satisfied?

Wanting something without necessarily liking it may be an integral part of how my human psychology operates, and one that cannot simply be tinkered with... but in the light of advances in neuroscience and nanobots, maybe I should opt to rewire/redesign myself as a hedonist from the ground up?! Valuing things without liking them is no fun... so can someone here tell me why I should want to care about something that doesn't make me happy in some shape or form?

PS: Sorry for the long comment, I didn't have time for a short one.

comment by Hul-Gil · 2011-06-12T02:47:36.406Z · LW(p) · GW(p)

I don't understand what everything after the Nozick's experience machine scenario is for. That is, the rest of the post doesn't seem to support the idea that we intrinsically value things other than pleasure. It tells us why sometimes what we think will give us pleasure is wrong (wanting more of the drug), and why some things give us pleasure even though we didn't know they would (salt when salt-deprived)... but none of this means that pleasure isn't our objective.

Once we know about salt, we'll want it for the pleasure. We would also normally stop wanting something that turns out not to give us pleasure, as I stopped wanting to eat several chocolate bars at a time. It might not work like this in the drug example, though, that's true; but does this mean that there's something about the drug experience we desire besides the pleasure, or are our brains being "fooled" by the dopamine?

I think the latter. There is no aspect wanted, just the wanting itself.... possibly because dopamine is usually associated with - supposed to signal - pleasure?

(I think once anyone got in your Nozick's machine variation, they would change their minds and not want to get out. We think we'd experience loneliness etc. because we currently would: we can "feel" it when we imagine it now. But we wouldn't afterward; we'd never stop being content with the machine once inside. Unless the machine only gives us a certain type of pleasure and we need several. But this is a different subject, not really meant as a refutation... since even knowing this, I'm not sure I'd get in.)

Replies from: Friendly-HI
comment by Friendly-HI · 2011-06-12T18:28:10.358Z · LW(p) · GW(p)

"the rest of the post doesn't seem to support the idea that we desire things other than pleasure."

I think it does, depending on how you interpret the word "desire". Suppose I'm a smoker who is trying to quit - I don't like smoking at all anymore, I hate it - but I am still driven to do it because I simply can't resist... it's wanting without liking.

So in a sense this example clearly demonstrates that people are driven by other urges that can be completely divorced from hedonic concerns - which is the whole point of this topic. This seems to be entirely true, so there definitely is an insight to be had here for someone who may have thought otherwise until now.

I think the key to this "but does it really make a sound"-misunderstanding resides within the word "desire." Do I "desire" to smoke when I actually dislike doing it?

It depends entirely what you mean by "desire". Because "wanting" and "liking" usually occur simultaneously, some people will interpret the word desire more into the direction of "wanting", while in other people's brains it may be associated more with the concept of "liking".

So what are we even talking about here? If I understood your viewpoint correctly, you'd agree with me that doing something we only "want" but don't "like" is a waste of time. We may be hardwired to do it, but if there is no gain in pleasure either directly or indirectly from such behavior, it's a waste of time and not desirable.

What about the concept of learning? What about instances where learning isn't associated with gain in pleasure at all (directly or indirectly, absolutely no increased utility later on)? Is it a waste of time as well, or is learning an experience worth having or pursuing, even if it had absolutely no connection to pleasure at all?

Despite being a very curious person I'd say that's a waste of time as well. I'm thinking of learning something (presumably) entirely pointless like endless lists full of names and scores of sport stars. Complete waste of time, I'd say; I'd probably rather be dead than be forced to pursue such a pointless and pleasureless exercise for the rest of eternity.

In other words: wanting and learning don't strike me as intrinsically valuable in comparison with pleasure.

So I'd agree with you if I understood you correctly: Pleasure is still the holy grail and sole purpose of our existence - wanting and learning are only important insofar as they are conducive to pleasure. If they aren't - to hell with them. At least that's my current POV. Does anyone have another opinion on this, and if so, why?

Replies from: Hul-Gil
comment by Hul-Gil · 2011-06-12T20:08:31.473Z · LW(p) · GW(p)

I think it does, depending of how you interpret the word "desire". Suppose I'm a smoker who is trying to quit - I don't like smoking at all anymore, I hate it - but I am still driven to do it, because I simply can't resist... it's wanting without liking.

That's a very good point, and I'm not sure why I didn't think to rephrase that sentence. I even state, later in the post, that in the case of the drug example one would still want something that provides no pleasure. (In both that example and the smoking example, I might say our brains are being "fooled" by the chemicals involved, by interpreting them as a result of pleasurable activity; but I don't know if this is correct.)

I was thinking of "desire" in terms of "liking", I think: I meant my sentence to mean "...doesn't seem to support that we would like anything except that which gives us pleasure."

This is, however, a problem with my phrasing, and not one with the idea I was trying to convey. I hope the rest of my post makes my actual viewpoint clear - as it seems to, since you have indeed understood me correctly. The main thrust of the post was supposed to be that pleasure is still the "Holy Grail." I will rephrase that sentence to "the rest of the post doesn't seem to support the idea that we intrinsically value things other than pleasure."

(A bit off topic: as I said, though, I still wouldn't get in the experience machine, because how I obtain my pleasure is important to me... or so it seems. I sympathize with your cigarette problem, if it's not just an example; I used to have an opioid problem. I loved opioids for the pure pleasure they provided, and I still think about them all the time. However, I would never have been content to be given an endless supply of morphine and shot off into space: even while experiencing pleasure as pure as I've ever felt it, I wanted to talk to people and write and draw. It seems like the opioid euphoria hit a lot of my "pleasure centers", but not all of them.)

Replies from: Friendly-HI
comment by Friendly-HI · 2011-06-13T02:30:09.524Z · LW(p) · GW(p)

Thankfully the cigarette problem isn't mine, I have no past and hopefully no future of addiction. But I know how stupendously hard it can be to jump over one's shadow and give up short-term gratifications for the benefit of long-term goals or payoffs. I'm a rather impulsive person, but thankfully I never smoked regularly and I stopped drinking alcohol when I was 15 (yeah I can only guess how this would sound in a country where you're legally prohibited from alcohol consumption until age 21).

I felt that my future would go down the wrong path if I continued drinking with my "friends", so I used a temporary medical condition as an alibi for the others as well as for myself to never drink again. Seven years of not drinking at all followed; then I carefully started again in a civilized manner on fitting occasions. Alcohol is a social lubricant that's just way too useful to not be exploited.

So (un)fortunately I can't empathize with your opium problem from the experience of a full-blown addiction, but only from the experience of having little self-control in general.

comment by teageegeepea · 2011-06-12T00:45:54.584Z · LW(p) · GW(p)

I'd get in Nozick's machine for the wireheading. I figure it's likely enough that I'm in a simulation anyway, and his simulation can be better than my current one. I figure I'm atypical though.

Replies from: Ivan_Tishchenko
comment by Ivan_Tishchenko · 2011-06-12T04:24:57.923Z · LW(p) · GW(p)

Really? So you're ready to give up that easily?

For me, the best moments in life are not those when I experience 'intense pleasure'. Life for me is, you know, in some way, like playing a chess match. Or like creating some piece of art. The physical pleasure does not count as something memorable, because it's only a small dot in the picture. The process of drawing the picture, and the process of seeing how your decisions and plans are getting "implemented" in the physical world around me -- that's what counts, that's what makes me love life and want to live it.

And from this POV, wireheading is simply not an option.

Replies from: Friendly-HI, barrkel, teageegeepea
comment by Friendly-HI · 2011-06-12T18:45:42.575Z · LW(p) · GW(p)

I got an experience machine in the basement that supplies you with loads and loads of that marvelously distinct feeling of "the process of painting the picture" and "seeing how your decisions and plans are getting implemented in a semi-physical world around you". Your actions will have a perfectly accurate impact on your surroundings and you will have loads of that feeling of control and importance that you presumably believe is so important for your happiness.

Now what?

comment by barrkel · 2011-06-16T06:04:17.124Z · LW(p) · GW(p)

It's not about giving up. And it's also not about "intense pleasure". Video games can be very pleasurable to play, but that's because they challenge us and we overcome the challenges.

What if the machine was reframed as reliving your life, but better tuned, so that bad luck had significantly less effect, and the life you lived rewarded your efforts more directly? I'd probably take that, and enjoy it too. If it was done right, I'd probably be a lot healthier mentally as well.

I think the disgust at "wireheading" relies on some problematic assumptions: (1) that we're not already "wireheading", and (2) that "wireheading" would be a pathetic state somewhat like being strung out on heroin, or in an eternal masturbatory orgasm. But any real "wireheading" machine must directly challenge these things, otherwise it will not actually be a pleasurable experience (i.e. it would violate its own definition).

As Friendly-HI mentions elsewhere, I think "wireheading" is being confusingly conflated with the experience machine, which seems to be a distinct concept. Wireheading as a simple analogue of the push-button-heroin-dose is not desirable, I think everyone would agree. When I mention "wireheading" above, I mean the experience machine; but I was just quoting the word you yourself used.

comment by teageegeepea · 2011-06-13T14:05:46.588Z · LW(p) · GW(p)

I don't play chess or make art. I suppose there's creativity in programming, but I've just been doing that for work rather than recreationally. Also, I agree with Friendly-HI that an experience machine could replicate those things.

comment by Miller · 2011-06-12T09:16:47.358Z · LW(p) · GW(p)

This sounds to me like a word game. It depends on what the initial intention for 'pleasure' is. If you say the device gives 'maximal pleasure', meaning to point to a cloud of good-stuffs, and then you later use a more precise meaning for 'pleasure' that is an incomplete model of the good-stuffs, you are then talking of different things.

The meaningful thought experiment for me is whether I would use a box that maximized pleasure\wanting\desire\happiness\whatever-is-going-on-at-the-best-moments-of-life while completely separating me as an actor or participant from the rest of the universe as I currently know it. In that sense of the experiment, you aren't allowed to say 'no' because of how you might feel after the machine is turned on, because then the machine is by definition failing. The argument has to be made that the current pre-machine-you does not want to become the post-machine-you, even while the post-machine-you thinks the choice was obvious.

comment by nazgulnarsil · 2011-06-15T21:42:31.898Z · LW(p) · GW(p)

We have an experience machine at our disposal if we could figure out the API. Ever have a lucid dream?

comment by torekp · 2011-06-14T22:57:47.906Z · LW(p) · GW(p)

Indeed, it appears that mammals can come to want something that they have never before experienced pleasure when getting.

Duh - otherwise sexual reproduction in mammals would be a non-starter.

comment by JonatasMueller · 2011-06-21T07:29:35.142Z · LW(p) · GW(p)

While the article shows with neat scientific references that it is possible to want something that we don't end up liking, this is irrelevant to the problem of value in ethics, or in AI. You could just as well say, without any scientific studies, that a child may want to put their hand in the fire and end up not liking the experience. It is entirely possible to want something by mistake. But it is not possible to like something by mistake, as far as I know. Unlike wanting, "liking" is valuable in itself.

Wanting is a bad thing according to Epicurus, for example. Consider the Greek concepts of akrasia and ataraxia. Wanting does have instrumental value for motivating us, though it may feel bad.

Consider Yudkowsky, with his theory of Coherent Extrapolated Volition. He saw only the variance and not what is common to it (his stated goal doesn't have an abstract constancy such as feeling good, but instead it is "fulfilling every person's extrapolated volition", or what they would wish if they had unlimited intelligence). This is a smarter version of preference utilitarianism. However, since people's basic condition is essentially the same, there needn't be this variance. It doesn't matter if people like different flavors of ice cream; they all want it to taste good.

On the other hand, standard utilitarianism seems to see only the constancy and not take account of the variance, and for this reason it is criticized. It is like giving strawberry ice cream to everyone because Bentham thought that it was the best. Some people may hate strawberry ice cream and want it chocolate instead, and they criticize standard utilitarianism and may go to an ethical nihilism because of flavor disputes. What does this translate into in terms of feelings? One could prefer love, rough sex, solitude, company, insight, meaningfulness, flow, pleasure, etc. to different extents, and value different sensory inputs differently, especially if one is an alien or of another species.

Ethics is real and objective in abstraction and subjective in mental interpretation of content. In other words, it's like an equation or algorithm with a free variable, which is the subject's interpretation of feelings (which is just noise in the data), and an objective evaluation of it in an axis of good or bad, which corresponds to real moral value.

The free variable doesn't mean that ethics is not objective. It is actually noise in the data caused by a chain of events that is longer than should be considered. If we looked only at the hardware algorithm (or "molecular signature", as David Pearce calls it) of good and bad feelings, we might see it as completely objective, but in humans there is a complex labyrinth between a given sensory stimulus and the output to this hardware algorithm of good and bad, such that the same stimulus may produce a different result for different organisms, because it first needs to pass through a different labyrinth.

This is the reason for some variance in preference of feelings (affective preference? experiential preference?), or as also could be said, preference in tastes. Some people like strawberry and some prefer chocolate, but the end result in terms of good feelings is similarly valuable.

Since sentient experience seems to be all that matters, instead of, say, rocks, and in sentience the quality of the experience seems to be what matters, then to achieve value (quality of experience) there's still a variable which is the variation in people's tastes. This variation is not in the value itself (that is, feeling better) but on the particular tastes that are linked to it for each person. The value is still constant despite this variance (they may have different taste buds, but presented with the right stimuli they all lead to feeling good or feeling bad).

comment by timtyler · 2011-06-12T09:14:21.846Z · LW(p) · GW(p)

When people say "pleasure" in this context, they usually just mean to refer to whatever it is that the human brain likes internally. To then say that people don't just like pleasure - since they also like happiness/bliss - or whatever - seems to be rather missing the point.

As for the rather distinct claim that people want external success, not just internal hedonistic pleasure signals - that seems to depend on the person under consideration. Few want their pleasure to end with them being fired, and running out of drug money (though we do still have our addicts, even today) - but apart from that, some people really do seem to want the pleasure - or whatever you want to call it. For example, consider David Pearce.

comment by Randaly · 2011-06-12T01:52:29.582Z · LW(p) · GW(p)

Disclaimer: I don't think that maximizing pleasure is an FAI solution; however, I didn't find your arguments against it convincing.

With regards to the experience machine, further study has found that people's responses are generally due to status quo bias; a more recent study found that a slight majority of people would prefer to remain in the simulation.

With regards to the distinction between desire and pleasure: well, yes, but you seem to be just assuming that our desires are what ought to be satisfied/maximized instead of pleasure; I would assume that many of the people you're talking with take pleasure to be a terminal value (or, at least, I used to, and I'm generalizing from one example and all that).

Replies from: Ivan_Tishchenko
comment by Ivan_Tishchenko · 2011-06-12T04:30:57.297Z · LW(p) · GW(p)

more recent study found that a slight majority of people would prefer to remain in the simulation.

I believe lukeprog was talking about what people think before they get wireheaded. It's very probable that once one gets hooked to that machine, one changes ones mind -- based on new experience.

It's certainly true for rats which could not stop hitting the 'pleasure' button, and died of starvation.

This is also why people have that status quo bias -- no one wants to starve to death, even with a 'pleasure' button.

Replies from: teageegeepea, barrkel, Zetetic
comment by teageegeepea · 2011-06-13T14:10:14.408Z · LW(p) · GW(p)

Isn't there a rule of Bayesianism that you shouldn't be able to anticipate changing your mind in a predictable manner, but rather you should just update right now?

Perhaps rather than asking will you enter or leave the simulation it might be better to start with a person inside it, remove them from it, and then ask them if they want to go back.

Replies from: Vaniver
comment by Vaniver · 2011-06-13T14:34:18.408Z · LW(p) · GW(p)

Isn't there a rule of Bayesianism that you shouldn't be able to anticipate changing your mind in a predictable manner, but rather you should just update right now?

Changing your mind based on evidence and experiences are different. I am confident that if I eat a meal, my hunger will decrease. Does that mean I should update my hunger downward now without eating?

I can believe "If I wireheaded I would want to continue wireheading" and "I currently don't want to wirehead" without contradiction and without much pressure to want to wirehead.
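
For reference, the rule teageegeepea is invoking is conservation of expected evidence, which constrains beliefs conditioned on anticipated evidence, not preferences altered by an intervention; a minimal statement in standard Bayesian notation (nothing here is specific to this thread):

$$P(H) \;=\; \sum_{e} P(E=e)\,P(H \mid E=e) \;=\; \mathbb{E}_{E}\big[P(H \mid E)\big]$$

Your current credence already equals the expectation of your future credence, so you cannot predictably expect evidence to shift a belief in a known direction. Wireheading, like the surgery example below, changes what you want rather than supplying evidence about a hypothesis, so no analogous equality forces you to want it now.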

Replies from: AmagicalFishy
comment by AmagicalFishy · 2011-06-17T13:25:02.717Z · LW(p) · GW(p)

Changing your mind based on evidence and experiences are different. I am confident that if I eat a meal, my hunger will decrease. Does that mean I should update my hunger downward now without eating?

One's hunger isn't really an idea of the mind that one can change, yeah? I'd say that "changing your mind" (at least regarding particular ideas and beliefs) is different than "changing a body's immediate reaction to a physical state" (like lacking nourishment: hunger).

Replies from: Will_Sawin
comment by Will_Sawin · 2011-06-17T15:39:02.167Z · LW(p) · GW(p)

If you conducted brain surgery on me I might want different things. I should not want those things now - indeed, I could not, since there are multiple possible surgeries.

"Wireheading" explicitly refers to a type of brain surgery, involving sticking wires in one's head. Some versions of it may not be surgical, but the point stands.

comment by barrkel · 2011-06-16T06:21:00.581Z · LW(p) · GW(p)

I think we're talking about an experience machine, not a pleasure button.

comment by Zetetic · 2011-06-12T05:46:18.547Z · LW(p) · GW(p)

This is also why people have that status quo bias -- no one wants to starve to death, even with a 'pleasure' button.

It was my understanding that the hypothetical scenario ruled this out (hence the abnormally long lifespan).

In any event, an FAI would want to maximize its utility, so if its utility were contingent on the amount of pleasure going on, it seems probable that it would want to create as many humans as possible and make them live as long as possible in a wirehead simulation.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-06-16T15:54:17.094Z · LW(p) · GW(p)

Intuitively, this feels true. I rarely do things based on how much pleasure they bring me. Some of my decisions are indirectly linked to future pleasure, or other people's pleasure, i.e. choosing to work 6 am shifts instead of sleeping in because then I won't be poor, or doing things I don't really want to but said I would do because other people are relying on me and their plans will be messed up if I don't do it, and I wouldn't want them to do that to me... Actually, when I think about it, an awful lot of my actions have more to do with other people's pleasure than with my own, something which the pleasure machine doesn't fulfill. In fact, I would worry that a pleasure machine would distract me from helping others.

comment by tyrsius · 2011-06-15T22:05:52.971Z · LW(p) · GW(p)

I feel like I am missing something. You separated pleasure from wanting.

I don't see how this backs up your point though. Unless the machine offered is a desire-fulfilling machine and not a pleasure machine.

If it is a pleasure machine, giving pleasure regardless of the state of wanting, why would we turn it down? You said we usually want more than just pleasure, because getting what we want doesn't always give us pleasure. If wanting and pleasure are different, then of course this makes sense.

But saying we want more than pleasure? That doesn't make sense. You seem to be confusing the two terms your article sets out to separate. We want pleasure; we are just not always sure how to get it. We know we have desires, so we try to fulfill them, and that doesn't always work. But remember, pleasure and wanting are separate.

If a machine knew what would give us pleasure, and gave us pleasure instead of what we "wanted," then we would always be getting pleasure. Even when we don't get what we want.

Unless your machine doesn't work as advertised, of course.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-06-16T10:25:20.585Z · LW(p) · GW(p)

But saying we want more than pleasure? That doesn't make sense.

Where is the point of your confusion? Why do you assume people only want pleasure? If you give me a choice between living a perfectly pleasurable life for a hundred years, but the whole of humankind dies horribly afterwards, and living an average life but the rest of humankind keeps surviving and progressing indefinitely -- I WANT THE SURVIVAL OF MANKIND.

That's because I don't want just pleasure. I want more than pleasure.

We want pleasure, we are just not always sure how to get it.

No, even with perfect and certain knowledge, we would want more than pleasure. What's the hard thing to understand about that?

We are built to want more than a particular internal state of our own minds. Most of us aren't naturally built for solipsism.

If a machine knew what would give us pleasure, and gave us pleasure instead of what we "wanted," then we would always be getting pleasure.

Like e.g. a machine that kills a man's children, but gives him pleasure by falsely telling him they are living happily ever after and erasing any memories to the contrary.

In full knowledge of this, he doesn't want that. I wouldn't want that. Few people would want that. Most of us aren't built for solipsism.

Replies from: tyrsius
comment by tyrsius · 2011-06-16T16:01:52.057Z · LW(p) · GW(p)

You are using a quite twisted definition of pleasure to make your argument. For most of us, the end of mankind causes great displeasure. This should factor into your equation. It's also not part of Luke's original offer. If you gave me that option I would not take it, because it would be a lie that I would receive pleasure from the end of mankind.

Killing a man's children has the same problem. Why, to argue against me, do you have to bring murder or death into the picture? Luke's original question has no such downsides, and introducing them changes the equation. Stop moving the goalposts.

Luke's article clearly separates want from pleasure, but you seem attached to "wanting." You think you want more than pleasure, but what else is there?

I believe if you consider any answer you might give to that question, the reason will be because those things cause pleasure (including the thought "mankind will survive and progress"). I am interested in your answers nonetheless.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-06-16T16:57:32.705Z · LW(p) · GW(p)

"If you gave me that option I would not take it, because it would be a lie that I would receive pleasure from the end of mankind."

Consider the package deal to include getting your brain rewired so that you would receive pleasure from the end of mankind. Now do you choose the package deal?

I wouldn't. Can you explain to me why I wouldn't, if you believe the only thing I can want is pleasure?

Stop moving the goalposts.

Giving additional examples, based on the same principle, isn't "moving the goalposts".

Why, to argue against me, do you have to bring murder or death into the picture?

Because the survival of your children and the community is the foremost example of a common value that's usually placed higher than personal pleasure.

You think you want more than pleasure, but what else is there?

Knowledge, memory, and understanding. Personal and collective achievement. Honour. Other people's pleasure.

I believe if you consider any answer you might give to that question, the reason will be because those things cause pleasure.

As an automated process we receive pleasure when we get what we want; that doesn't mean that we want those things because of the pleasure. At the conscious level we self-evidently don't want them because of the pleasure, or we'd all be willing to sacrifice all of mankind if they promised to wirehead us first.

Replies from: CuSithBell, Hul-Gil, tyrsius
comment by CuSithBell · 2011-06-16T17:05:45.154Z · LW(p) · GW(p)

Consider the package deal to include getting your brain rewired so that you would receive pleasure from the end of mankind. Now do you choose the package deal?

I wouldn't. Can you explain to me why I wouldn't, if you believe the only thing I can want is pleasure?

Maybe you're hyperbolically discounting that future pleasure and it's outweighed by the temporary displeasure caused by agreeing to something abhorrent? ;)

Replies from: Ghatanathoah, Amanojack
comment by Ghatanathoah · 2013-05-28T20:34:41.950Z · LW(p) · GW(p)

Maybe you're hyperbolically discounting that future pleasure and it's outweighed by the temporary displeasure caused by agreeing to something abhorrent? ;)

I think that if an FAI scanned ArisKatsaris' brain, extrapolated values from that, and then was instructed to extrapolate what a non-hyperbolically-discounting ArisKatsaris would choose, it would answer that ArisKatsaris would not choose to get rewired to receive pleasure from the end of mankind.

Of course, there's no way to test such a hypothesis.

comment by Amanojack · 2011-08-10T03:05:22.028Z · LW(p) · GW(p)

Plus we have a hard time conceiving of what it would be like to always be in a state of maximal, beyond-orgasmic pleasure.

When I imagine it I cannot help but let a little bit of revulsion, fear, and emptiness creep into the feeling - which of course would not actually be there. This invalidates the whole thought experiment to me, because it's clear I'm unable to perform it correctly, and I doubt I'm uncommon in that regard.

comment by Hul-Gil · 2011-06-18T22:02:00.681Z · LW(p) · GW(p)

Consider the package deal to include getting your brain rewired so that you would receive pleasure from the end of mankind. Now do you choose the package deal?

No, but that's because I value other people's pleasure as well. It is important to me to maximize all pleasure, not just my own.

Replies from: Alicorn
comment by Alicorn · 2011-06-18T22:49:18.955Z · LW(p) · GW(p)

What if everybody got the rewiring?

Replies from: Hul-Gil
comment by Hul-Gil · 2011-06-19T00:00:58.393Z · LW(p) · GW(p)

How would that work? It can't be the end of mankind if everyone is alive and rewired!

Replies from: Alicorn
comment by Alicorn · 2011-06-19T00:03:36.874Z · LW(p) · GW(p)

They get five minutes to pleasedly contemplate their demise first, perhaps.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-06-19T00:22:26.442Z · LW(p) · GW(p)

I think there would be more overall pleasure if mankind continued on its merry way. It might be possible to wirehead the entire human population for the rest of the universe's lifespan, for instance; any scenario which ends the human race would necessarily have less pleasure than that.

But would I want the entire human race to be wireheaded against their will? No... I don't think so. It's not the worst fate I can think of, and I wouldn't say it's a bad result; but it seems sub-optimal. I value pleasure, but I also care about how we get it - even I would not want to be just a wirehead, but rather a wirehead who writes and explores and interacts.

Does this mean I value things other than pleasure, if I think it is the Holy Grail but it matters how it is attained? I'm not certain. I suppose I'd say my values can be reduced to pleasure first and freedom second, so that a scenario in which everyone can choose how to obtain their pleasure is better than a scenario in which everyone obtains a forced pleasure, but the latter is better than a scenario in which everyone is free but most are not pleasured.

I'm not certain if my freedom-valuing is necessary or just a relic, though. At least it (hopefully) protects against moral error by letting others choose their own paths.

Replies from: CG_Morton
comment by CG_Morton · 2011-06-19T20:56:03.006Z · LW(p) · GW(p)

The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to tell you what you would most enjoy more accurately than your own expectations could.

I agree with the intuition that most people value freedom, and so would prefer a free pleasure over a forced one if the amount of pleasure was the same. But I think that it's a situational intuition, that may not hold in the future. (And is a value really a value if it's situational?)

comment by tyrsius · 2011-06-17T02:42:02.400Z · LW(p) · GW(p)

All of your other examples are pleasure-causing. Don't you notice that?

Again, getting my brain rewired is not in the original question. I would decline getting my brain rewired; that seems like carte blanche for a lot of things that I cannot predict.

Survival of the community and children, knowledge, and understanding all bring me pleasure. I think if those things caused me pain, I would fight them. In fact, I think I have good evidence for this.

When cultures have a painful response to the survival of OTHER cultures, they go to war. When people see "enemies" in pain, they do not sympathize. Only when it is something you self-identify with, such as your own culture, does it cause pleasure.

Those things you cite are valued because they cause pleasure. I don't see any evidence that when those things cause pain, that they are still pursued.

@CuSithBell: I agree.

--Sorry, I don't know how to get the quote blocks, or I would respond more directly.

Replies from: ArisKatsaris, ArisKatsaris
comment by ArisKatsaris · 2011-06-17T09:04:44.287Z · LW(p) · GW(p)

Those things you cite are valued because they cause pleasure.

No, they cause pleasure because they're valued.

  • You are arguing that we seek things in accordance with, and in proportion to, the pleasure anticipated in achieving them. (Please correct me if I'm getting you wrong.)
  • I'm arguing that we can want stuff without anticipation of pleasure being necessary. And we can fail to want stuff where there is anticipation of pleasure.

How shall we distinguish between the two scenarios? What do we anticipate about the world if your hypothesis is true vs. if mine is true?

Here's a test. I think that if your scenario held, everyone would be willing to rewire their brains to get more pleasure for things they don't currently want, because then there'd be more anticipated pleasure. This doesn't seem to hold -- though we'll only know for sure when the technology actually becomes available.

Here's another test. I think that if my scenario holds, some atheists just before their anticipated deaths would still leave property to their offspring or to charities, instead of spending it all on prostitutes and recreational drugs in an attempt to cram in as much pleasure as possible before they die.

So I think the tests validate my position. Do you have some different tests in mind?

Replies from: tyrsius
comment by tyrsius · 2011-06-24T04:12:57.399Z · LW(p) · GW(p)

Your argument isn't making any sense. Whether they are valued because they cause pleasure, or cause pleasure because they are valued makes no difference.

Either way, they cause pleasure. Your argument is that we value them even though they don't cause pleasure. You are trying to say there is something other than pleasure, yet you concede that all of your examples cause pleasure.

For your argument to work, we need to seek something that does not cause pleasure. I asked you to name a few, and you named "Knowledge, memory, and understanding. Personal and collective achievement. Honour. Other people's pleasure."

Then in your next post, you say "they cause pleasure because they're valued."

That is exactly my point. There is nothing we seek that we don't expect to derive pleasure from.

I don't think your tests validate your position. The thought of leaving their belongings to others will cause pleasure. Many expect that pleasure to be deeper or more meaningful than that from prostitutes, and would therefore agree with your test while still holding to my position that people will seek the greatest expected pleasure.

I would place the standard at a Matrix-quality reality machine to accept lukeprog's offer. An orgasmium would not suffice, as I expect it to fail to live up to its promise. Wireheading would not work.

Double Edit to add a piece then fix the order it got put in.

Edit Again- Apologies, I confused this response with one below. Edited to remove confusion.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-06-24T09:06:23.861Z · LW(p) · GW(p)

You are trying to say there is something other than pleasure, yet you concede that all of your examples cause pleasure.

If I was debating the structure of the atom, I could say that "there's more to atoms than their protons", and yet I would 'concede' that all atoms do contain protons. Or I'd say "there's more to protons than just their mass" (they also have an electric charge), but all protons do have mass.

Why are you finding this hard to understand? Why would I need to discover an atom without protons or a proton without mass for me to believe that there's more to atoms than protons (there's also electrons and neutrons) or more to protons than their mass?

That is exactly my point. There is nothing we seek that we don't expect to derive pleasure from.

You had made much stronger statements than that -- you said "You think you want more than pleasure, but what else is there?" You also said "But saying we want more than pleasure? That doesn't make sense."

Every atom may contain protons, but atoms are more than protons. Every object of our desire may contain pleasure in its fulfillment, but the object of our desire is more than pleasure.

Does this analogy help you understand how your argument is faulty?

Replies from: tyrsius
comment by tyrsius · 2011-07-22T22:07:43.210Z · LW(p) · GW(p)

No, it doesn't. I understand your analogy (parts vs the whole), but I do not understand how it relates to my point. I am sorry.

Is pleasure the proton in the analogy? Is the atom what we want? I don't follow here.

You are also making the argument that we want things that don't cause pleasure. Shouldn't this be, in your analogy, an atom without a proton? In that case yes, you need to find an atom without a proton before I will believe there is an atom without a proton. (This same argument works if pleasure is any of the other atomic properties. Charge, mass, etc).

Or is pleasure the atom? If that is the case, then I can't see where your argument is going. If pleasure is the atom, then your analogy supports my argument.

I am not trying to make a straw man, I genuinely don't see the connections.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-04-01T02:58:20.014Z · LW(p) · GW(p)

ArisKatsaris' analogy is:

  1. The reasons we want things are atoms.
  2. Pleasure is protons.
  3. Atoms have more components than protons.
  4. Similarly, we want things for more reasons than just the pleasure they give us.
  5. Even if we feel pleasure every time one of our desires is satisfied, that doesn't mean pleasure is the only reason we have those desires. Similarly, even if an atom always has protons, that doesn't mean it doesn't also have other components.

You are also making the argument that we want things that don't cause pleasure. Shouldn't this be, in your analogy, an atom without a proton?

ArisKatsaris should have picked electrons instead of protons; it makes the analogy a little less confusing. Desires without pleasure are like atoms without electrons. These are called "positive ions" and are not uncommon.

It personally seems obvious to me that we want things other than pleasure. For instance, I occasionally read books that I hate and am miserable reading because they are part of a series, and I want to complete the series. That's what I want, and I don't care if there's less pleasure in the universe because of my actions.

comment by ArisKatsaris · 2011-06-17T09:06:01.740Z · LW(p) · GW(p)

--Sorry, I don't know how to get the quote blocks, or I would respond more directly.

After you click "Reply", you can click on "Help" at the bottom right of the textbox and see the available formatting options. To add quotes you just need to place a "> " at the beginning of a line.

comment by handoflixue · 2011-06-12T07:36:30.307Z · LW(p) · GW(p)

Oddly, those two arguments end up cancelling out for me.

You explained how pleasure from our natural environment "caps out" past a certain threshold - I can't eat infinite sugar and derive infinite pleasure. So, obviously, my instinctive evaluation is that if I get wire-headed, I'll eventually get sick of it and want something else!

Equally, your research shows that we're not always perfect at evaluating what we want. Therefore, I'd have an instinctive aversion to wire-heading because I might have guessed wrong, and it's obviously very difficult to escape once committed.

Therefore, I conclude, the reason I am averse to wire-heading is that it violates the structures of my ancestral environment. My initial rejection of it based on Nozick's experience machine was irrational, and I might actually really enjoy wire-heading.

Given I've always been confused why I reject wire-heading, this is actually rather a nice enlightening post, albeit doing the opposite of what you intended :)

Replies from: Friendly-HI
comment by Friendly-HI · 2011-06-12T17:01:03.070Z · LW(p) · GW(p)

"You explained how pleasure from our natural environment "caps out" past a certain threshold - I can't eat infinity sugar and derive infinity pleasure. So, obviously, my instinctive evaluation is that if I get wire-headed, I'll eventually get sick of it and want something else!"

I think you're lumping the concept of wireheading and the "experience machine" into one here. Wireheading basically consists of pressing a button because you want to, not because you like doing it. It would basically feel like being a heroin junkie, but with buttons instead of needles.

The experience machine on the other hand is basically a completely immersive virtual reality that satisfies all your desires in any way necessary to make you happy. It's not required that you'll be in an orgasmic state all the time... as you said yourself, you may get bored with that (I just think of that poor woman with a disorder that makes her orgasm every few minutes, and she apparently doesn't like it at all). In the experience machine scenario, you would never get bored - if you desire some form of variety in your "perfect experience" and would be unhappy without it, then the machine would do whatever is necessary to make you happy nonetheless.

The point of the machine is that it gives you whatever you desire in just the right amounts to max you out on pleasure and happiness, whatever means necessary and regardless of how convoluted and complex the means may have to be. So if you're hooked up to the machine, you feel happy no matter what. The point is that your pleasure doesn't build on achievements in the real world and that there may perhaps be other meaningful things you may desire apart from pleasure.

As we've seen from luke, there appear to be at least two other human desires besides pleasure - namely "wanting" and "learning". But if the machine is capable of conjuring up any means of making me happy, then it perhaps would have to throw a bit of wanting and learning into the mix to make me as happy as possible (because these three things seem to be intricately connected and you may need the other two to max out on pleasure). But at the end of the day, the experience machine is simply a poor thought experiment as I see it.

If you say I can be in a virtual machine that always makes me happy, and then say I'm somehow not happy because I'm still missing important ingredient X, then that is not a good argument; you've simply lied to me about your premise - namely, that the machine would make me happy no matter what.

However it doesn't really have anything to do with wireheading, as in the example with dying lab rats. That's just artificial addiction, not a happiness-machine.

Replies from: Friendly-HI, handoflixue
comment by Friendly-HI · 2011-06-12T17:31:35.911Z · LW(p) · GW(p)

By the way, I'm beginning to think that the experience machine would be a really sweet deal and I may take it if it was offered to me.

Sure, my happiness wouldn't be justified by my real-world achievements, but so what? What's so special about "real" achievements? Feeling momentarily happier because I gain money, social status and get laid... sure, there's some pride and appeal in knowing I've earned these things due to my whatever, but in what kind of transcendent way are these things really achievements or meaningful? My answer would be that they aren't meaningful in any important way; they are simply primitive behaviors based on my animalistic nature and the fact that my genes fell out of the treetops yesterday.

I struggle to see any worthwhile meaning in these real "achievements". They can make you feel good and they can make you feel miserable, but at the end of the day they are perfectly transparent apeish behaviors based on reproductive urges which I simply can't outgrow because of my hardwired nature. The only meaningful activity that would be worth leaving my experience machine for would be to tackle existential risks... just so that I can get back to my virtual world and enjoy it "indefinitely".

Personally though, I have the feeling that it would still be a lot cleverer to redesign my own brain from the ground up to make it impervious to any kind of emotional trauma or feelings of hurt, and to make it run entirely on a streamlined and perfectly rational "pleasure priority hierarchy". No pain, all fun, and still living in the real world - perhaps with occasional trips into virtual reality to spice things up.

But I find it really hard to imagine how I could still value human life, if I would measure everything on a scale of happiness and entirely lacked the dimension of pain. Can one still feel the equivalent of compassion without pain? It's hard to see myself having fun at the funeral of my parents.

Less fun than if they were still alive of course, but it would still be fun if I lacked the dimension of pain... hell, that would be weird.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-06-12T21:04:34.641Z · LW(p) · GW(p)

But I find it really hard to imagine how I could still value human life, if I would measure everything on a scale of happiness and entirely lacked the dimension of pain. Can one still feel the equivalent of compassion without pain? It's hard to see myself having fun at the funeral of my parents.

Well, I think you could still feel compassion, or something like it (without the sympathy, maybe; just concern) - even while happy, I wouldn't want someone else to be unhappy. But on the other hand, it does seem like there's a connection, just because of how our brains are wired. You need to be able to at least imagine unhappiness for empathy, I suppose.

I read an article about a man with brain damage, and it seems relevant to this situation. Apparently, an accident left him with damage to a certain part of his brain, and it resulted in the loss of unhappy emotions. He would constantly experience mild euphoria. It seems like a good deal, but his mother told a story about visiting him in the hospital; his sister had died in the meantime, and when she told him, he paused for a second, said something along the lines of "oh" or "shame"... then went back to cracking jokes. She was quoted as saying he "didn't seem like her son any more."

I've always felt the same way that you do, however. I would very much like to redesign myself to be pain-free and pleasure-maximized. One of the first objections I hear to this is "but pain is useful, because it lets you know when you're being damaged." Okay - then we'll simply have a "damage indicator", and leave the "pull back from hot object" reflex alone. Similarly, I think concerns about compassion could be solved (or at least mitigated) by equipping ourselves with an "off" switch for the happiness - at the funeral, we allow ourselves sadness... then when the grief becomes unbearable, it's back to euphoria.

Replies from: Friendly-HI
comment by Friendly-HI · 2011-06-13T02:07:39.928Z · LW(p) · GW(p)

Very good real-world example about the guy with brain damage! Interesting case; any chance of finding this story online? A quick and dirty Google search on my part didn't turn up anything.

Also, nice idea with the switch. I fully acknowledge that there are some situations when I somehow have the need to feel pain - funerals being one occasion. Your idea with the switch would be brilliantly simple. Unfortunately, my spider-senses tell me the redesigning part itself will be anything but.

Case studies of brain damage are pure gold when it comes to figuring out "what would happen to me if I remove/augment my brain in such and such a way".

Replies from: Hul-Gil
comment by Hul-Gil · 2011-06-14T02:58:55.258Z · LW(p) · GW(p)

I was about to come back (actually on my way to the computer) and regretfully inform you that I had no idea where I had seen it... but then a key phrase came back to me, and voila! (I had the story a little wrong: it was a stroke that caused the damage, and it was a leukemia relapse the sister had.)

The page has a lot of other interesting case studies involving the brain, as well. I need to give the whole site a re-browse... it's been quite a while since I've looked at it. I seem to remember it being like an atheism-oriented LessWrong.

Replies from: Friendly-HI
comment by Friendly-HI · 2011-06-14T10:39:10.224Z · LW(p) · GW(p)

Thank you very much for going through the trouble of finding all these case-studies! :)

(For anyone else interested, I should remark these aren't the actual studies, but quick summaries within an atheistic context that is concerned with disproving the notion of a soul - but there are references to all the books within which these symptoms are described.)

The Alien Hand Syndrome is always good for a serious head-scratching indeed.

comment by handoflixue · 2011-06-13T04:07:23.778Z · LW(p) · GW(p)

In the experience machine scenario, you would never get bored

Exactly! My intuition was wrong; it's trained on an ancestral environment where that isn't true, so it irrationally rejects the experience machine as "obviously" suffering from the same flaw. Now that I'm aware of that irrationality, I can route around it and say that the experience machine actually sounds like a really sweet deal :)

comment by utilitymonster · 2011-06-12T19:07:47.541Z · LW(p) · GW(p)

We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

IAWYC, but would like to hear more about why you think the last sentence is supported by the previous sentence. I don't see an easy argument from "X is a terminal value for many people" to "X should be promoted by the FAI." Are you supposing a sort of idealized desire fulfilment view about value? That's fine--it's a sensible enough view. I just wouldn't have thought it so obvious that it would be a good idea to go around invisibly assuming it.

comment by CAE_Jones · 2012-11-09T15:24:09.607Z · LW(p) · GW(p)

Is there meaningful data on thalamic stimulators with erotic side-effects? (See entry #1 here: http://www.cracked.com/blog/5-disturbing-ways-the-human-body-will-evolve-in-the-future/ ). Cracked gives the addictive potential of an accidental orgasm switch plenty of attention while citing just two examples (it's a comedy site after all), but have other cases been studied? I'm not convinced this couldn't be done intentionally with current models of the brain.

comment by MugaSofer · 2012-11-09T14:28:41.455Z · LW(p) · GW(p)

Most people say they wouldn't choose the pleasure machine.

Well that was easy. In my (limited) experience most people making such claims do not really anticipate being pleasure-maximized, and thus can claim to want this without problems. It's only "real" ways of maximizing pleasure that they care about, so you need to find a "real" counterexample.

That said, I have less experience with such people than you, I guess, so I may be atypical in this regard.

comment by Uni · 2011-06-22T21:20:03.180Z · LW(p) · GW(p)

Going for what you "want" is merely going for what you like the thought of. To like the thought of something is to like something (in this case the "something" that you like is the thought of something; a thought is also something). This means that wanting cannot happen unless there is liking that creates the wanting. So, of wanting and liking, liking is the only thing that can ever independently make us make any choice we make. Wanting which is not entirely contingent on liking never makes us make any decisions, because there is no such thing as wanting which is not entirely contingent on liking.

Suppose you can save mankind, but only by taking a drug that makes you forget that you have saved mankind, and also makes you suffer horribly for two minutes and then kills you. The fact that you can reasonably choose to take such a drug may seem to suggest that you can make a choice which you know will lead to a situation that you know you will not like being in. But there too, you actually just go for what you like: you like the thought of saving mankind, so you do whatever action seems associated with that thought. You may intellectually understand that you will suffer and feel no pleasure from the very moment after your decision is made, but this is hard for your subconscious to fully believe if, at the thought of that future, you actually feel pleasure (or at least less pain than you feel at the thought of the alternative), so your subconscious continues assuming that what you like thinking about is what will create situations that you will like. And the subconscious may be the one making the decision for you, even if it feels like you are making a conscious decision. So your decision may be a function exclusively of what you like, not of what you "want but don't like".

To merely like the thought of doing something can be motivating enough, and this is what makes so many people overeat, smoke, drink, take drugs, skip doing physical exercise, et cetera. After the point when you know you have already eaten enough, you couldn't want to eat more unless you in some sense liked the thought of eating more. Our wanting something always implies an expectation of a future which we at least like thinking of. Wanting may sometimes appear to point in a different direction than liking does, but wanting is always merely liking the thought of something (more than one likes the thought of the alternatives).

Going for what you "want" (that is, going for what you merely like the thought of having) may be a very dumb and extremely short-sighted way of going for what you like, but it's still a way of going for what you like.

Replies from: nshepperd
comment by nshepperd · 2011-06-23T04:08:35.651Z · LW(p) · GW(p)

Isn't this just a way of saying that people like the thought of getting what they want? Indeed, it would be rather odd if expecting to get what we want made us unhappy. See also here, I guess.

Replies from: Uni
comment by Uni · 2011-06-23T21:35:49.440Z · LW(p) · GW(p)

No, I didn't just try to say that "people like the thought of getting what they want". The title of the article says "not for the sake of pleasure alone". I tried to show that that is false. Everything we do, we do for pleasure alone, or to avoid or decrease suffering. We never make a decision based on a want that is not in turn based on a like/dislike. All "wants" are servile consequences of "likes"/"dislikes", so I think "wants" should be treated as mere transitional steps, not as initial causes of our decisions.

Replies from: nshepperd
comment by nshepperd · 2011-06-27T07:41:37.764Z · LW(p) · GW(p)

You've just shown that wanting and liking go together, and asserted that one of them is more fundamental. Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.

And nevertheless, people still don't just optimize for pleasure, since they would take the drug mentioned, despite the fact that doing so is far less pleasurable than the alternative, even if the "pleasure involved in deciding to do so" is taken into account.

Sure, you can say that only the "pleasure involved in deciding" or "liking the thought of" is relevant, upon which your account of decision making reduces to (something about X) --> (I like the thought of X) --> (I take action X), where (I like the thought of X) would seem to be an unnecessary step where the same result would be obtained by eliminating it, and of course you still haven't looked inside the black box (something about X).

Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn't maximise pleasure. But at that point you're trying to construct sensible preferences from a mind that appears to be wrong about almost everything including the blatantly obvious, and I have to wonder exactly what evidence in this mind points toward the "true" preferences being "maximal pleasure".

Replies from: Uni
comment by Uni · 2011-06-28T03:33:01.930Z · LW(p) · GW(p)

Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.

I'm not trying to show that. I agree that people try to get things they want, as long as with "things they want" we mean "things that they are tempted to go for because the thought of going for those things is so pleasurable".

(something about X) --> (I like the thought of X) --> (I take action X), where (I like the thought of X) would seem to be an unnecessary step where the same result would be obtained by eliminating it,

Why would you want to eliminate the pleasure involved in decision processes? Don't you feel pleasure has intrinsic value? If you eliminate pleasure from decision processes, why not eliminate it altogether from life, for the same reasons that made you consider pleasure "unnecessary" in decision processes?

This, I think, is one thing that makes many people so reluctant to accept the idea of human-level and super-human AI: they notice that many advocates of the AI revolution seem to want to ignore the subjective part of being human and seem interested merely in how to give machines the objective abilities of humans (i.e. abilities to manipulate the outer environment rather than "intangibles" like love and happiness). This seems as backward as spending your whole life earning millions of dollars, having no fun doing it, and never doing anything fun or good with the money. For most people, at first at least, the purpose of earning money is to increase pleasure. The same should be true of the purpose of building human-level or super-human AI. If you start to think that step two (the pleasure) is an unnecessary part of our decision processes and can be omitted, you are thinking like the money-hunter who has lost track of why money is important; by thinking that pleasure may as well be omitted in decision processes, you throw away the whole reason for having any decision processes at all.

It's the second step (of your three steps above) - the step which is always "I like the thought of...", i.e. our striving to maximize pleasure - that determines our values and choices about whatever there is in the first step ("X" or "something about X", the thing we happen to like the thought of). So, to the extent that the first step ("something about X") is incompatible with pleasure-maximizing (the decisive second step), what happens in step two seems to be a misinterpretation of what is there in step one. It seems reasonable to get rid of any misinterpretation. For example: fast food tastes good and produces short-term pleasure, but that pleasure is a misinterpretation in that it makes our organism take fast food for something more nutritious and long-term good for us than it actually is. We should go for pleasure, but not necessarily by eating fast food. We should let ourselves be motivated by the phenomenon in "step two" ("I like the thought of..."), but we should be careful about which "step one"'s ("X" or "something about X") we let "step two" lead us to decisions about. The pleasure derived from eating fast food is, in and of itself, intrinsically good (all other things equal), but the source of it (fast food) is not. Step two is always a good thing as long as step one is a good thing, but step one is sometimes not a good thing even when step two, in and of itself, is a good thing. Whether the goal is to get to step three or to just enjoy the happiness in step two, step one is dispensable and replaceable, whereas step two is always necessary. So, it seems reasonable to found all ethics exclusively on what happens in step two.

Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn't maximise pleasure.

Even if they are not too mistaken about that, they may still be shortsighted enough that, when trying to choose between decision A and decision B, they'll prefer the brief but immediate pleasure of making decision A (regardless of its expected later consequences) to the much larger amount of pleasure that they know would eventually follow after the less immediately pleasurable decision B. Many of us are this shortsighted. Our reward mechanism needs fixing.

Replies from: nshepperd
comment by nshepperd · 2011-06-28T06:31:34.180Z · LW(p) · GW(p)

You're missing the point, or perhaps I'm missing your point. A paperclip maximiser implemented by having the program experience subjective pleasure when considering an action that results in lots of paperclips, and which decides by taking the action with the highest associated subjective pleasure, is still a paperclip maximiser.

So, I think you're confusing levels. On the decision making level, you can hypothesise that decisions are made by attaching a "pleasure" feeling to each option and taking the one with highest pleasure. Sure, fine. But this doesn't mean it's wrong for an option which predictably results in less physical pleasure later to nonetheless feel more pleasurable during decision making. The decision system could have been implemented equally well by associating options with colors and picking the brightest or something, without meaning the agent is irrational to take an action that physically darkens the environment. This is just a way of implementing the algorithm, which is not about the brightness of the environment or the light levels observed by the agent.

This is what I mean by "(I like the thought of X) would seem to be an unnecessary step". The implementation is not particularly relevant to the values. Noticing that pleasure is there at a step in the decision process doesn't tell you what should feel pleasurable and what shouldn't; it just tells you a bit about the mechanisms.
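To make the level distinction concrete, here is a minimal sketch in Python (my own illustration, not anyone's actual proposal; the option format and the scoring function are assumed): an agent that "decides by pleasure" is still a pure paperclip maximizer if the "pleasure" tag is computed from expected paperclips.

    # Minimal sketch (hypothetical example): the internal score could be called
    # "pleasure", "brightness", or anything else; what the agent pursues is fixed
    # by how the score is computed - here, expected paperclips.

    def expected_paperclips(option):
        # Stand-in world model: each option predicts an outcome.
        return option["predicted_paperclips"]

    def choose(options):
        # Tag each option with a subjective "pleasure" feeling...
        pleasure = {opt["name"]: expected_paperclips(opt) for opt in options}
        # ...and take the option that feels best.
        return max(options, key=lambda opt: pleasure[opt["name"]])

    options = [
        {"name": "build_factory", "predicted_paperclips": 10_000},
        {"name": "wirehead_self", "predicted_paperclips": 0},
    ]
    print(choose(options)["name"])  # prints "build_factory"

Renaming the internal score changes nothing about what this agent pursues; only the function that computes the score does.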

Of course I believe that pleasure has intrinsic value. We value fun; pleasure can be fun. But I can't believe pleasure is the only thing with intrinsic value. We don't use Nozick's pleasure machine, we don't choose to be turned into orgasmium, we are willing to be hurt for higher benefits. I don't think any of those things are mistakes.

comment by xelxebar · 2011-06-18T05:20:43.619Z · LW(p) · GW(p)

I notice that I'm a bit confused, especially when reading, "programming a machine superintelligence to maximize pleasure." What would this mean?

It also seems like some arguments are going on in the comments about the definitions of "like", "pleasure", "desire", etc. I'm tempted to ask everyone to pull out the taboo game on these words here.

A helpful direction I see this article pointing toward, though, is how we personally evaluate an AI's behavior. Of course, by no means does an AI have to mimic human internal workings 100%, so taking the way we DO work, how can we use that knowledge to construct an AI that interacts with us in a good way?

I don't know what "good way" means here, though. That's an excellent question/point I got from the article.

comment by Zetetic · 2011-06-12T06:28:06.047Z · LW(p) · GW(p)

I've thought of a bit of intuition here, maybe someone will benefit by it or be kind enough to critique it;

Say you took two (sufficiently identical) copies of that person, C1 and C2, and exposed C1 to the wirehead situation (by plugging them in) and showed C2 what was happening to C1.

It seems likely that C1 would want to remain in the situation and C2 would want to remove C1 from the wirehead device. This seems to be the case even if the wirehead machine doesn't raise dopamine levels very much and thus the user does not become dependent on it.

However, even in the pleasure maximizing scenario you have a range of possible futures; any future where abolitionism is carried out is viable. Of course, a pure pleasure maximizer would probably force the most resource efficient way to induce pleasure for the largest number of people for the longest amount of time, so wireheading is a likely outcome.

That said, it seems like some careful balancing of the desires of C1 and C2 would allow for a mutually acceptable outcome that they would both move to, an abolitionist future with plenty of novelty (a fun place to live, in other words).

This is sort of what I currently envision the CEV would do: run a simulated partial-individual, C1, on a broad range of possible futures and detect whether distance is occurring by testing against a second copy of the initial state of C1, C2. Distance would be present if C2 is put off by or does not comprehend the desires of C1 even though C1 prefers its current state to most other possible states. To resolve the distance issue, CEV would then select the set of highest-utility futures that C1 is run on and choose the one that appears most preferable to C2 (which represents C1's initial state). Assuming you have a metric for distance, CEV could then reject or accept this contingent upon whether distance falls under a certain threshold.
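A toy sketch of the loop just described (purely illustrative; the distance metric, the utility and preference estimates, and the C1/C2 interface are all assumptions of mine, not part of any actual CEV specification):

    # Toy sketch of the check described above. The methods copy(), run_through()
    # and preference() on the simulated individual are hypothetical.

    def cev_check(c1_initial, candidate_futures, utility, distance, threshold):
        c2 = c1_initial.copy()  # frozen judge, representing C1's initial state

        # Rank futures by the utility the run copy of C1 assigns to them.
        ranked = sorted(candidate_futures,
                        key=lambda f: utility(c1_initial.run_through(f)),
                        reverse=True)
        top_futures = ranked[:10]  # some set of highest-utility futures

        # Of those, pick the one the un-run judge C2 finds most preferable.
        best = max(top_futures, key=lambda f: c2.preference(f))

        # Accept only if the drift between the run copy and the judge stays small.
        c1_final = c1_initial.run_through(best)
        return best if distance(c1_final, c2) < threshold else None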

Does this seem sensible?

Replies from: MaoShan
comment by MaoShan · 2011-06-14T01:21:00.165Z · LW(p) · GW(p)

Sensible, maybe, but pointless in my opinion. Once you have C1's approval, then any additional improvements (wouldn't C2 like to see what C3 would be like?) would be from C2's perspective, which naturally would be different from C1's perspective, and turtles all the way down. So it would be deceptive to C1 to present him with C2's results; if any incremental happiness were still possible, C2 would naturally harbor the same wish for improvement which caused C1 to accept it. All it would be doing would be shielding C1's virgin imagination from C5822.

Replies from: Zetetic
comment by Zetetic · 2011-06-14T01:38:20.895Z · LW(p) · GW(p)

I'm not sure you're talking about the same thing I am, or maybe I'm just not following you? There is only C1 and C2. C2 serves as a grounding that checks to see if what it would pick, given the experiences it went through, is acceptable to C1's initial state. C1 would not have the "virgin imagination"; it would be the one hooked up to the wirehead machine.

Really I was thinking about the "Last Judge" idea from the CEV, which (as I understand it, but it is super vague so maybe I don't) basically somehow has someone peek at the solution given by the CEV and decide whether the outcome is acceptable from the outside.

Replies from: MaoShan
comment by MaoShan · 2011-06-14T02:54:48.746Z · LW(p) · GW(p)

Aside from my accidental swapping of the terms (C1 as the judge, not C2), I still stand by my (unclear, possibly?) opinion. In the situation you are describing, the "judge" would never allow the agent to change beyond a very small distance that the judge is comfortable with, and additional checks would never be necessary, as it would only be logical that the judge's opinion would be the same every time that an improvement was considered. Whichever of the states that the judge finds acceptable the first time, should become the new base state for the judge. Similarly, in real life, you don't hold your preferences to the same standards that you had when you were five years old. The gradual improvements in cognition usually justify the risks of updating one's values, in my opinion.

comment by boni_bo · 2011-06-29T16:11:39.487Z · LW(p) · GW(p)

How about a machine that maximizes your own concept of pleasure and makes you believe that it is probably not a machine simulation (or makes you think that machine simulation is an irrelevant argument)?

comment by denisbider · 2011-06-23T08:08:51.772Z · LW(p) · GW(p)

Then they will blast you and the pleasure machine into deep space at near light-speed so that you could never be interfered with. Would you let them do this for you?

Most people say they wouldn't choose the pleasure machine.

Well, no wonder. The way the hypothetical scenario is presented evokes a giant array of ways it could go wrong.

What if the pleasure machine doesn't work? What if it fails after a month? What if it works for 100 years, followed by 1,000 years of loneliness and suffering?

Staying on Earth sounds a lot safer.

Suppose the person you are asking is religious. Will they be forfeiting an eternity in heaven by opting for passive pleasure in this life? They would say no to the machine, yet they are ultimately after eternal pleasure (heaven).

If you want to be fair to the person to whom you're talking, propose a pleasure machine they can activate at their convenience, and deactivate at any time. In addition, phrase it so that the person will remain healthy as long as they're in the machine, and they'll receive a minimum-wage income for spent time.

I suspect that, with these much safer sounding provisions, most people would opt to have access to the machine rather than not, and would eventually use it for 100% of the time.

Religious people may still not, if they fear losing access to heaven.

Personal example: The greatest feeling of bliss I have experienced is dozing off in a naturally doped-up state after highly satisfying sex. This state is utterly passive, but so thoroughly pleasant that I could see myself opting for an eternity in it. I would still, however, only opt for this if I knew it could not end in suffering, e.g. by becoming immune to pleasant states of mind in the end.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-14T03:16:02.532Z · LW(p) · GW(p)

Well, no wonder. The way the hypothetical scenario is presented evokes a giant array of ways it could go wrong.

Please don't fight the hypothetical.

I think it likely that the people Luke spoke with were intelligent people who knew that hypotheticals are supposed to test your values and priorities, and responded in the spirit of the question.

I suspect that, with these much safer sounding provisions, most people would opt to have access to the machine rather than not, and would eventually use it for 100% of the time.

Many people become addicted to drugs, and end up using them nearly 100% of the time. That doesn't mean that's what they really want, it just means they don't have enough willpower to resist.

How humans would behave if presented with a pleasure machine is not a reliable guide to how humans would want to behave if they were presented with it, in the same way that how humans would behave if presented with heroin is not a reliable guide to how humans would want to behave when presented with heroin. There are lots of regretful heroin users.

Personal example: The greatest feeling of bliss I have experienced is dozing off in a naturally doped up state after highly satisfying sex. This state is utterly passive, but so thoroughly pleasant that I could see myself opting for an eternity in it.

Wouldn't it be even better to constantly be feeling this bliss, but also still mentally able to pursue non-pleasure related goals? I might not mind engineering the human race to feel pleasure more easily, as long as we were still able to pursue other worthwhile goals.

Replies from: denisbider
comment by denisbider · 2012-09-21T02:17:10.694Z · LW(p) · GW(p)

Sorry for the late reply, I haven't checked this in a while.

Please don't fight the hypothetical.

Most components of our thought processes are subconscious. The hypothetical question you posed presses a LOT of subconscious buttons. It is largely impossible for most people, even intelligent ones, to take a hypothetical question at face value without being influenced by the subconscious effects of the way it's phrased.

You can't fix a bad hypothetical question by asking people to not fight the hypothetical.

For example, who wants to spend an eternity isolated in space? That must be one of the worst fears for many people. How do you disentangle that from the question? That's like asking a kid if he wants candy while you're dressed up as a monster from his nightmares.

There are lots of regretful heroin users.

Because not all components of the heroin experience are pleasant.

Wouldn't it be even better to constantly be feeling this bliss, but also still mentally able to pursue non-pleasure related goals?

I suppose, yes. Valuable X + valuable Y is strictly better than just valuable X.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-25T02:45:06.806Z · LW(p) · GW(p)

For example, who wants to spend an eternity isolated in space? That must be one of the worst fears for many people.

When I heard that hypothetical I took the whole "launching you into space" thing as another way of saying "Assume for the sake of the argument that no outside force or person will ever break into the pleasure machine and kill you." I took the specific methodology (launching into space) to just be a way to add a little color to the thought experiment and make it a little more grounded in reality. To me if a different method of preventing interference with the machine had been specified, such as burying the machine underground, or establishing a trust fund that hired security guards to protect it for the rest of your life, my answer wouldn't be any different.

I suppose you are right that someone other than me might give the "space launch" details much more salience. As you yourself pointed out in your original post, modifying the experiment's parameters might change the results. Although what I read in this thread makes me think that people might not gradually choose to use the machine all the time after all.

Because not all components of the heroin experience are pleasant.

Much regret probably comes from things like heroin preventing them from finding steady work, or risks of jailtime. But I think a lot of people also regret not accomplishing goals that heroin distracts them from. Many drug users, for instance, regret neglecting their friends and family.

I suppose, yes. Valuable X + valuable Y is strictly better than just valuable X.

I agree. I would think it terrific if people in the future are able to modify themselves to feel more intense and positive emotions and sensations, as long as doing so did not rob them of their will and desire to do things and pursue non-pleasure-related values. I don't see doing that as any different from taking an antidepressant, which is something I myself have done. There's no reason to think our default mood settings are optimal. I just think it would be bad if increasing our pleasure makes it harder to achieve our other values.

I think you also imply here, if I am reading you correctly, that a form of wireheading that did not exclude non-pleasure experiences would be vastly superior to one with just pleasure and nothing else.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-25T08:08:57.816Z · LW(p) · GW(p)

In order to be happy (using present-me's definition of “happy”) I need to interact with other people. So there's no way for a holodeck to make me happy unless it includes other people.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-25T19:09:57.908Z · LW(p) · GW(p)

I agree. Interacting with other people is one of the "non-pleasure-related values" that I was talking about (obviously interacting with other people brings me pleasure, but I'd still want to interact with others even if I had a drug that gave me the same amount of pleasure). So I wouldn't spend my life in a holodeck unless it was multiplayer. I think that during my discussion with denisbider at some point the conversation shifted from "holodeck" to "wireheading."

I think that the present-you's definition of "happy" is closer to the present-me's definition of "satisfaction." I generally think of happiness as an emotion one feels, and satisfaction as the state where a large amount of your preferences are satisfied.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-25T22:08:50.093Z · LW(p) · GW(p)

I think that the present-you's definition of "happy" is closer to the present-me's definition of "satisfaction." I generally think of happiness as an emotion one feels, and satisfaction as the state where a large amount of your preferences are satisfied.

Yes. (I think the standard way of distinguishing them is to call yours hedonic happiness and mine eudaimonic happiness, or something like that.)

comment by Uni · 2011-06-22T22:22:28.478Z · LW(p) · GW(p)

The pleasure machine argument is flawed for a number of reasons:

1) It assumes that, despite having never been inside the pleasure machine, while having lots of experience of the world outside of it, you could make an unbiased decision about whether to enter the pleasure machine or not. It's like asking someone if he would move all his money from a bank he knows a lot about to a bank he knows basically nothing about and that is merely claimed to make him richer than his current bank. I'm sure that if someone built a machine that, after I stepped into it, actually made me continually very, very much happier than I've ever been, it would have the same effect on me as very heavy paradise drugs have on people: I would absolutely want to stay inside the machine for as long as I could. For eternity, if possible. I'm not saying it would be a wise decision to step into the pleasure machine (see points 2, 3 and 4 below), but after having stepped into it, I would probably want to stay there for as long as I could. Just as this choice might be considered biased because my experience of the pleasure machine can be said to have made me "unhealthily addicted" to the machine, you are just as biased in the other direction if you have never been inside of it. It seems most people have only a very vague idea about how wonderful it would actually feel to be continually super happy, and this makes them draw unfair conclusions when faced with the pleasure machine argument.

2) We know that "pleasure machines" either don't yet exist at all, or, if they exist, have so far always seemed to come with too high a price in the long run (for example, we are told that drugs tend to create more pain than pleasure in the long run). This makes us spontaneously tend to feel skeptical about the whole idea that the pleasure machine suggested in the thought experiment would actually give its user a net pleasure increase in the long run. This skepticism may never reach our conscious mind; it may stay in our subconscious, but nevertheless it affects our attitude toward the concept of a "pleasure machine". The concept of a pleasure machine that actually increases pleasure in the long run is a concept that never gets a fair chance to convince us on its own merits before we subconsciously dismiss it, because we know that if someone claimed to have built such a machine in the real world, it would most likely be a false claim.

3) Extreme happiness tends to make us lose control of our actions. Giving up control of our actions usually decreases our chances to maximize our pleasure in the long run, so this further contributes to make the pleasure machine argument unfair.

4) If all human beings stepped into pleasure machines and never got out of them, there would be no more development (by humans). If instead some or all humans continue to further the tech development and further expand in the universe, it will be possible to build even better pleasure machines later on than the pleasure machine in the thought experiment. There will always be a trade-off between "cashing in" (by using some time and other resources to build and stay in "pleasure machines") and postponing pleasure for the sake of tech development and expansion in order to make possible even greater future pleasure. The most pleasure-maximizing such trade-off may very well be one that doesn't include any long stays in pleasure machines for the next 100 years or so. (At some point, we should "cash in" and enjoy huge amounts of pleasure at the expense of further tech development and expansion in the universe, but that point may be in a very distant future.)

comment by HoverHell · 2011-06-17T17:27:31.090Z · LW(p) · GW(p)

-

Replies from: Perplexed, Collins, Collins
comment by Perplexed · 2011-06-18T18:36:34.925Z · LW(p) · GW(p)

it's a bad idea to build a powerful A.I. with human-like values

Why? Or rather, given that a powerful A.I. is to be built, why is it a bad idea to endow it with human-like values?

The locally favored theory of friendly AI is (roughly speaking) that it must have human sympathies, that is, it must derive fulfillment from assisting humans in achieving their values. What flaws do you see in this approach to friendliness?

Replies from: HoverHell
comment by HoverHell · 2011-06-22T03:35:41.688Z · LW(p) · GW(p)

-

Replies from: nshepperd, Uni
comment by nshepperd · 2011-06-27T07:55:23.000Z · LW(p) · GW(p)

What are the values you judge those as "wrong" by, if not human? Yes, it's a terrible idea to build an AI that's just a really intelligent/fast human, because humans have all sorts of biases, and bugs that are activated by having lots of power, that would prevent them from optimizing for the values we actually care about. Finding out what values we actually care about though, to implement them (directly, or indirectly through CEV-like programs) is definitely a task that's going to involve looking at human brains.

comment by Uni · 2011-06-22T19:05:34.553Z · LW(p) · GW(p)

Wrong compared to what? Compared to no sympathies at all? If that's what you mean, doesn't that imply that humans must be expected to make the world worse rather than better, whatever they try to do? Isn't that a rather counterproductive belief (assuming that you'd prefer that the world became a better place rather than not)?

AI with human sympathies would at least be based on something that is tested and found to work throughout ages, namely the human being as a whole, with all its flaws and merits. If you try to build the same thing but without those traits that, now, seem to be "flaws", these "flaws" may later turn out to have been vital for the whole to work, in ways we may not now see. It may become possible, in the future, to fully successfully replace them with things that are not flaws, but that may require more knowledge about the human being than we currently have, and we may not now have enough knowledge to be justified to even try to do it.

Suppose I have a nervous disease that makes me kick uncontrollably with my right leg every once in a while, sometimes hurting people a bit. What's the best solution to that problem? To cut off my right leg? Not if my right leg is clearly more useful than harmful on average. But what if I'm also so dumb that I cannot see that my leg is actually more useful than harmful; what if I can mainly see the harm it does? That's what we are being like, if we think we should try to build a (superhuman) AI by equipping it with only the clearly "good" human traits and not those human traits that now appear to be (only) "flaws", prematurely thinking we know enough about how these "flaws" affect the overall survival chances of the being/species. If it is possible to safely get rid of the "flaws" of humans, future superhuman AI will know how to do that far more safely than we do, and so we should not be too eager to do it already. There is very much to lose and very little to gain by impatiently trying to get everything perfect at once (which is impossible anyway). It's enough, and therefore safer and better, to make the first superhuman AI "merely more of everything that it is to be human".

[Edited, removed some unnecessary text]

Replies from: HoverHell
comment by HoverHell · 2011-06-25T05:49:45.373Z · LW(p) · GW(p)

-

Replies from: Uni
comment by Uni · 2011-06-27T07:34:41.463Z · LW(p) · GW(p)

In trusting your own judgment that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a super-human AI that is "merely more of everything that it is to be human" would be a worse thing than a human.

Sure, humans with power often use their power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer if human brains were not such that they can very easily (even by pure mistake) be made to suffer. The main reason why humans suffer today is how the human brain is hardwired and the fact that there is not yet enough knowledge of how to hardwire it so that it becomes unable to suffer (and with no severe side-effects).

Suppose we build an AI that is "merely more of everything that it is to be human". Suppose this AI then takes total control over all humans, "simply because it can and because it has a human psyche and therefore is power-greedy". What would you do after that, if you were that AI? You would continue to develop, just like humans always have. Every step of your development from un-augmented human to super-human AI would be recorded and stored in your memory, so you could go through your own personal history and see what needs to be fixed in you to get rid of your serious flaws. And when you have achieved enough knowledge about yourself to do it, you would fix those flaws, since you'd still regard them as flaws (since you'd still be "merely more of everything that it is to be human" than you are now). You might never get rid of all of your flaws, for nobody can know everything about himself, but that's not necessary for a predominantly happy future for humanity.

Humans strive to get happier, rather than specifically to get happier by making others suffer. The fact that many humans are, so far, easily made to suffer as a consequence of (other) humans' striving for happiness is always primarily due to lack of knowledge. This is true even when it comes to purely evil, sadistic acts; those too are primarily due to lack of knowledge. Sadism and evilness are simply not the most efficient ways to be happy; they take up an unnecessary amount of computing power. Super-human AI will realize this - just like most humans today realize that eating way too many calories every day does not maximize your happiness in the long run, even if it seems to do it in the short run.

Most humans certainly don't strive to make others suffer for suffering's own sake. Behaviours that make others suffer are primarily intended to achieve something else: happiness (or something like that) for oneself. Humans strive to get happier, rather than less happy. This, coupled with the fact that humans also develop better and better technology and psychology that can better and better help them achieve more and more of their goal (to get happier), must inevitably make humans happier and happier in the long run (although temporary setbacks can be expected every once in a while). This is why it should be enough to just make AIs "more and more of everything that it is to be human".

comment by Collins · 2011-06-18T17:21:37.510Z · LW(p) · GW(p)

It seems like there's a lot of confusion from the semantic side of things.
There are a lot of not-unreasonable definitions of words like "wanting", "liking", "pleasure", and the like that carve the concepts up differently and have different implications for our relationship to pleasure. If they were more precisely defined at the beginning, one might say we were discovering something about them. But it seems more helpful to say that the brain chemistry suggests a good way of defining the terms (to correlate with pleasant experience, dopamine levels, etc), at which point questions of whether we just want pleasure become straightforward.

Replies from: HoverHell
comment by HoverHell · 2011-06-22T03:45:34.426Z · LW(p) · GW(p)

-

comment by Collins · 2011-06-18T17:11:46.674Z · LW(p) · GW(p)

Well, congratulations on realizing that “wanting” and “liking” are different.

comment by koning_robot · 2012-08-22T08:54:10.385Z · LW(p) · GW(p)

In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

Either this conclusion contradicts the whole point of the article, or I don't understand what is meant by the various terms "desire", "want", "pleasure", etc. If pleasure is "that which we like", then yes we can solve FAI by programming an AI to maximize pleasure.

The mistake you (lukeprog, but also eliezer) are apparently making worries me very much. It is irrelevant what we desire or want, as is what we act for. The only thing that is relevant is that which we like. Tell me, if the experience machine gave you that which you like ("pleasure" or "complex fun" or whatchamacallit), would you hook up to it? Surely you would have to be superstitious to refuse!

Replies from: nshepperd, Vladimir_Nesov
comment by nshepperd · 2012-08-22T15:33:42.877Z · LW(p) · GW(p)

"Desire" denotes your utility function (things you want). "Pleasure" denotes subjectively nice-feeling experiences. These are not necessarily the same thing.

Surely you would have to be superstitious to refuse!

There's nothing superstitious about caring about stuff other than your own mental state.

Replies from: koning_robot
comment by koning_robot · 2012-08-24T10:41:49.657Z · LW(p) · GW(p)

"Desire" denotes your utility function (things you want). "Pleasure" denotes subjectively nice-feeling experiences. These are not necessarily the same thing.

Indeed they are not necessarily the same thing, which is why my utility function should not value that which I "want" but that which I "like"! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to "just" program an AI to maximize pleasure. What we "want" may be complex, but what we "like" may be simple. In fact, that would be better than programming an AI to make the world into what we "want" but not necessarily "like".

There's nothing superstitious about caring about stuff other than your own mental state.

If you mean that others' mental states matter equally much, then I agree (but this distracts from the point of the experience machine hypothetical). Anything else couldn't possibly matter.

Replies from: nshepperd, DaFranker
comment by nshepperd · 2012-08-24T12:12:39.677Z · LW(p) · GW(p)

Anything else couldn't possibly matter.

Why's that?

Replies from: koning_robot
comment by koning_robot · 2012-08-24T13:43:47.815Z · LW(p) · GW(p)

A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters.

Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?

Replies from: DaFranker
comment by DaFranker · 2012-08-24T13:56:15.936Z · LW(p) · GW(p)

You haven't read or paid much attention to the metaethics sequence yet, have you? Or do you simply disagree with pretty much all the major points of the first half of it?

Also relevant: Joy in the merely real

Replies from: koning_robot
comment by koning_robot · 2012-08-24T15:32:31.903Z · LW(p) · GW(p)

I remember starting it, and putting it away because yes, I disagreed with so many things. Especially the present subject; I couldn't find any arguments for the insistence on placating wants rather than improving experience. I'll read it in full next week.

comment by DaFranker · 2012-08-24T12:54:40.514Z · LW(p) · GW(p)

If you mean that others' mental states matter equally much, then I agree (but this distracts from the point of the experience machine hypothetical). Anything else couldn't possibly matter.

An unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim's proponent.

I think you may be confusing labels and concepts. Maximizing hedonistic mental states means, to the best of my knowledge, programming a hedonistic imperative directly into DNA so that everyone is in a maximally pleasurable state constantly from birth, regardless of conditions or situation, and then stacking up as many humans as possible so that as many of them as possible feel as good as possible. If any of the humans move, they could endanger the efficient operation of this system, so letting them move becomes a net negative. It follows that, in the process of optimization, all human mobility should be removed, given that removing limbs and any other means of mobility from "human" DNA is probably trivial for a superintelligence.

But since they're all feeling the best they could possibly feel, then it's all good, right? It's what they like (having been programmed to like it), so that's the ideal world, right?

Edit: See Wireheading for a more detailed explanation and context of the possible result of a happiness-maximizer.

Replies from: koning_robot
comment by koning_robot · 2012-08-24T14:08:59.374Z · LW(p) · GW(p)

And unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim's proponent.

This comment has justification. I don't see how this would affect evolutionary psychology. I'm not sure if I'm parsing your last sentence here correctly; I didn't "look up" anything, and I don't know what the weak points are.

Assuming that the scenario you paint is plausible and the optimal way to get there, then yeah, that's where we should be headed. One of the explicit truths of your scenario is that "they're all feeling the best they could possibly feel". But your scenario is a bad intuition pump. You deliberately constructed this scenario so as to manipulate me into judging what the inhabitants experience as less than that, appealing to some superstitious notion of true/pure/honest/all-natural pleasure.

You may be onto something when you say I might be confusing labels and concepts, but I am not saying that the label "pleasure" refers to something simple. I am only saying that the quality of mental states is the only thing we should care about (note the word should, I'm not saying that is currently the way things are).

Replies from: DaFranker
comment by DaFranker · 2012-08-24T14:29:02.673Z · LW(p) · GW(p)

You deliberately constructed this scenario so as to manipulate me into judging what the inhabitants experience as less than that, appealing to some superstitious notion of true/pure/honest/all-natural pleasure.

No. I deliberately re-used a similar construct to Wireheading theories to expose more easily that many people disagree with this.

There's no superstition of "true/pure/honest/all-natural pleasure" in my model - right now, my current brain feels extreme anti-hedons towards the idea of living in Wirehead Land. Right now, and to my best reasonable extrapolation, I and any future version of "myself" will hate and disapprove of wireheading, and would keep doing so even once wireheaded, if not for the fact that the wireheading necessarily overrides this in order to achieve maximum happiness by re-wiring the user to value wireheading and nothing else.

The "weak points" I spoke of is that you consider some "weaknesses" of your position, namely others' mental states, but those are not the weakest of your position, nor are you using the strongest "enemy" arguments to judge your own position, and the other pieces of data also indicate that there's mind-killing going on.

The quality of mental states is presumably the only thing we should care about - my model also points towards "that" (same label, probably not same referent). The thing is, that phrase is so open to interpretation (What's "should"? What's "quality"? How meta do the mental states go about analyzing themselves and future/past mental states, and does the quality of a mental state take into account the bound-to-reality factor of future qualitative mental states? etc.) that it's almost an applause light.

Replies from: koning_robot
comment by koning_robot · 2012-08-24T19:42:29.589Z · LW(p) · GW(p)

No. I deliberately re-used a similar construct to Wireheading theories to expose more easily that many people disagree with this.

Yes, but they disagree because what they want is not the same as what they would like.

The "weak points" I spoke of is that you consider some "weaknesses" of your position, namely others' mental states, but those are not the weakest of your position, nor are you using the strongest "enemy" arguments to judge your own position, and the other pieces of data also indicate that there's mind-killing going on.

The value of others' mental states is not a weakness of my position; I just considered them irrelevant for the purposes of the experience machine thought experiment. The fact that hooking up to the machine would take away resources that could be used to help others weighs against hooking up. I am not necessarily in favor of wireheading.

I am not aware of weaknesses of my position, nor in what way I am mind-killing. Can you tell me?

[...] it's almost an applause light.

Yes! So why is nobody applauding? Because they disagree with some part of it. However, the part they disagree with is not what the referent of "pleasure" is, or what kind of elaborate outside-world engineering is needed to bring it about (which has instrumental value on my view), but the part where I say that the only terminal value is in mental states that you cannot help but value.

The burden of proof isn't actually on my side. A priori, nothing has value. I've argued that the quality of mental states has (terminal) value. Why should we also go to any length to placate desires?

Replies from: thomblake
comment by thomblake · 2012-08-24T20:03:19.295Z · LW(p) · GW(p)

The burden of proof isn't actually on my side.

To a rationalist, the "burden of proof" is always on one's own side.

Replies from: Manfred
comment by Manfred · 2012-08-24T20:52:21.771Z · LW(p) · GW(p)

Hm, a bit over-condensed. More like the burden of proof is on yourself, to yourself. Once you have satisfied that, argument should be an exercise in communication, not rhetoric.

Replies from: wedrifid
comment by wedrifid · 2012-08-25T17:12:16.245Z · LW(p) · GW(p)

Hm, a bit over-condensed. More like the burden of proof is on yourself, to yourself.

Agree completely.

Once you have satisfied that, argument should be an exercise in communication, not rhetoric.

This would seem to depend on the instrumental goal motivating the argument.

comment by Vladimir_Nesov · 2012-08-24T15:06:38.947Z · LW(p) · GW(p)

It is irrelevant what we desire or want, as is what we act for. The only thing that is relevant is that which we like.

Saying a word with emphasis doesn't clarify its meaning or motivate the relevance of what it's intended to refer to. There are many senses in which doing something may be motivated: there is wanting (System 1 urge to do something), planning (System 2 disposition to do something), liking (positive System 1 response to an event) and approving (System 2 evaluation of an event). It's not even clear what each of these means, and these distinctions don't automatically help with deciding what to actually do. To make matters even more complicated, there is also evolution with its own tendencies that don't quite match those of people it designed.

See Approving reinforces low-effort behaviors, The Blue-Minimizing Robot, Urges vs. Goals: The analogy to anticipation and belief.

Replies from: koning_robot, chaosmosis
comment by koning_robot · 2012-08-24T21:28:39.115Z · LW(p) · GW(p)

I accept this objection; I cannot describe in physical terms what "pleasure" refers to.

comment by chaosmosis · 2012-08-24T22:30:31.118Z · LW(p) · GW(p)

I think I understand what koning_robot was going for here, but I can't approach it except through a description. This description elicits a very real moral and emotional reaction within me, but I can't describe or pin down what exactly is wrong with it. Despite that, I still don't like it.

So, some of the dystopian Fun Worlds that I imagine are rooms where non-AI lifeforms have no intelligence of their own anymore, since it is not needed. These lifeforms are incredibly simple and are little more than dopamine receptors (I'm not up to date on the neuroscience of pleasure; I remember it's not really dopamine, but I'm not sure which chemical(s) correspond to happiness). The lifeforms are all identical and interchangeable. They do not sing or dance. Yet they are extremely happy, in a chemical sort of sense. Still, I would not like to be one.

Values are worth acting on, even if we don't understand them exactly, so long as we understand in a general sense what they tell us. That future would suck horribly and I don't want it to happen.

comment by voi6 · 2011-06-15T07:55:56.960Z · LW(p) · GW(p)

While humans may not be maximizing pleasure, they are certainly maximizing some utility function, which can be characterized. That characterization of human concerns can then be programmed into your FAI as the function it optimizes.

Replies from: xelxebar
comment by xelxebar · 2011-06-18T04:34:35.397Z · LW(p) · GW(p)

You might be interested in the Allais paradox, an example of humans demonstrating behavior that doesn't maximize any utility function. If you're aware of the von Neumann-Morgenstern characterization of utility functions, this becomes clearer than it would be from just knowing what a utility function is.
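
A minimal sketch of the point, in Python, assuming the conventional Allais gambles and the commonly observed preference pattern (1A over 1B together with 2B over 2A); the algebra in the comments shows why no assignment of utilities to the three outcomes is compatible with both choices under expected-utility maximization, and the random search merely illustrates it:

```python
# The standard Allais gambles (outcomes in millions of dollars):
#   1A: $1M for sure            1B: 89% $1M, 10% $5M, 1% $0
#   2A: 11% $1M, 89% $0         2B: 10% $5M, 90% $0
# Preferring 1A to 1B requires 0.11*u(1) > 0.10*u(5) + 0.01*u(0), while
# preferring 2B to 2A requires the exact opposite inequality, so no utility
# function over the outcomes can produce both choices via expected utility.
import random

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: outcome -> utility."""
    return sum(p * u[outcome] for p, outcome in lottery)

g1a = [(1.00, 1)]
g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

def typical_allais_choices(u):
    """True if an expected-utility maximizer with utilities u picks 1A and 2B."""
    return (expected_utility(g1a, u) > expected_utility(g1b, u) and
            expected_utility(g2b, u) > expected_utility(g2a, u))

random.seed(0)
for _ in range(10_000):
    lo, hi = sorted(random.uniform(0.0, 1.0) for _ in range(2))
    u = {0: 0.0, 1: lo, 5: hi}   # any monotone assignment u(0) <= u(1M) <= u(5M)
    assert not typical_allais_choices(u)
print("No sampled utility function reproduces the usual Allais preferences.")
```

The pattern violates the VNM independence axiom: both gambles in each pair share a common 89% component, and removing that common component should not flip the preference.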

Replies from: voi6, CuSithBell
comment by voi6 · 2013-03-25T03:41:53.238Z · LW(p) · GW(p)

Sorry to respond to this 2 years late. I'm aware of the paradox and the VNM theorem. Just because humans are inconsistent/irrational doesn't mean they aren't maximizing a utility function, however.

Firstly, you can have a utility function and just be bad at maximizing it (yes, this contradicts the rigorous mathematical definitions we all know and love, but English doesn't always bend to their will, and we both know what I mean without my having to be pedantic, because we are such gentlemen).

Secondly, if you consider each subsequent dollar you attain to be less valuable, this makes perfect sense. The same principle is applied in tournament poker, where taking a 50:50 chance of either going broke or doubling your stack is considered a terrible play: the former outcome guarantees you lose your entire entry fee, while the latter gives you an expected winnings value that is less than your entry fee. This can be seen with a simple calculation (sketched below), or by noting that if everyone plays aggressively like this, I can do nothing and still make it into the prize pool, because the other players will simply eliminate each other faster than the blinds eat away at my own stack.
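
A minimal sketch of such a calculation, under assumptions that are mine rather than the comment's: a toy four-player tournament, a 50/30/20 payout split, and the Independent Chip Model (ICM) for converting stacks into shares of the prize pool. With equal stacks your equity equals your entry-fee share, and a 50:50 double-or-bust flip lowers it:

```python
# Toy illustration: under the Independent Chip Model (ICM), a 50:50 chance of
# doubling your stack or busting is worth less than simply keeping your stack,
# because added chips are worth less per chip than the chips you already have.

def icm_equity(stacks, payouts):
    """Each player's share of the prize pool under ICM.

    P(player i finishes 1st) is proportional to their stack; the remaining
    places are computed recursively with that player removed. Exponential in
    the number of players, but fine for a toy example.
    """
    n = len(stacks)
    total = sum(stacks)
    equities = [0.0] * n
    if not payouts:
        return equities
    for i in range(n):
        p_first = stacks[i] / total
        equities[i] += p_first * payouts[0]
        rest = stacks[:i] + stacks[i + 1:]
        rest_eq = icm_equity(rest, payouts[1:])
        for j, k in enumerate(idx for idx in range(n) if idx != i):
            equities[k] += p_first * rest_eq[j]
    return equities

payouts = [0.5, 0.3, 0.2]                      # fractions of the prize pool

even = icm_equity([1000, 1000, 1000, 1000], payouts)
doubled = icm_equity([2000, 1000, 1000], payouts)
flip_ev = 0.5 * doubled[0] + 0.5 * 0.0         # bust half the time

print(f"Equity with equal stacks:     {even[0]:.3f}")     # 0.250 (= entry-fee share)
print(f"Equity after doubling up:     {doubled[0]:.3f}")  # ~0.383, not 0.500
print(f"EV of a 50:50 double-or-bust: {flip_ev:.3f}")     # ~0.192 < 0.250
```

The reason is the diminishing marginal value of chips: the chips you would gain add less equity than the chips you would lose give up.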

But I digress. Let's cut to the chase here. You can do what you want, but you can't choose your wants. Along the same lines, a straight man, no matter how intelligent he becomes, will still find women arousing. An AI can be designed to have the motives of a selfless, benevolent human (the so-called Artificial Gandhi Intelligence), and this will be enough. Ultimately, humans want to be satisfied, and if it's not in their nature to be permanently so, then they will concede to changing their nature with FAI-developed science.

comment by CuSithBell · 2011-06-18T05:31:27.254Z · LW(p) · GW(p)

an example of humans in fact demonstrating behavior which doesn't maximize any utility function.

That's not exactly true. The Allais paradox does help to demonstrate why explicit utility functions are a poor way to model human behavior, though.

comment by jeromeapura · 2011-06-25T12:28:05.017Z · LW(p) · GW(p)

this is my very first comment. So this will result of immorality of a person. I'm a Christian and I want to point out only sexual pleasure. It promotes to indulgence to worldly things - machines and lust. Since in RC, masturbation is prohibited then this things will occur together.

Replies from: CuSithBell
comment by CuSithBell · 2011-06-25T14:59:18.585Z · LW(p) · GW(p)

Your comment is going to be downvoted a lot.

If you want to make your comments useful, then you should try to make your thoughts more clear and coherent. You seem to be trying to say a lot of things without really explaining what any of them mean. "This"? "Machines"? What are you trying to say?

It also appears that English may not be your first language. People here probably won't hold that against you, but you should use simple grammar that you can understand. If you can write in correct English grammar, then please take a few seconds to do so when you write a post.

Replies from: jeromeapura
comment by jeromeapura · 2011-06-29T12:24:33.131Z · LW(p) · GW(p)

Thanks for the advice. You're right. You know, this is my fourth language, since I have three other native languages. I will be careful next time. Well, I'm not really motivated by votes, but I am motivated by the ideas that I get to read here. Thanks a lot.

Replies from: CuSithBell
comment by CuSithBell · 2011-06-29T21:13:04.082Z · LW(p) · GW(p)

I'm glad to hear you're open to advice :) I hope you will find our ideas helpful and interesting.

comment by EliezerYudcowskii · 2011-06-12T18:01:10.183Z · LW(p) · GW(p)

Is it just me or is odd that Eliezer calls himself a math prodigy?

EY has been in the AI world claiming to be an AI research for more then 10 years. He also claims that he needs to spend a year studying math. Now most people even non math prodigies can get a B.S., M.S., and most of a PhD in the same amount of time.

EY also states his knowledge of math is tiny compared to a mathematician. How can this be 10 years studying math and a "prodigy" so math should be easy since he is as he claims so good at explaining it and he has a good sense for right answers half the time.

Now let me get this straight very little math knowledge compared to a mathematician. A math prodigy so its easy. Great taste in right answers. Yet after 10 years in a math field (AI) he doesn't know that much math.

Most math people I know right math papers especially when they are good at it. Not EY in fact he doesn't nor has he ever published a high level math paper. Anyone else confused?

EY please explain this puzzling issue to me.

Replies from: CronoDAS, wedrifid
comment by CronoDAS · 2011-06-13T01:13:19.943Z · LW(p) · GW(p)

If you include the time spent in elementary school, high school, and college, most people with a Ph.D in math have spent many, many years studying math...

Also, generally "prodigy" means that, as a child, one was far beyond one's age group. If you're learning algebra at 8 and calculus at 11, you're a prodigy... even if you don't yet know any math beyond the high school level.

Replies from: komponisto
comment by komponisto · 2011-06-13T02:24:06.703Z · LW(p) · GW(p)

Also, generally "prodigy" means that, as a child, one was far beyond one's age group. If you're learning algebra at 8 and calculus at 11, you're a prodigy..

That doesn't feel sufficient to me. I usually interpret the word to imply achieving high levels of status while still a child (for example, winning national competitions, touring internationally as a performer, etc.). Merely learning stuff won't do that.

comment by wedrifid · 2011-06-12T23:00:11.239Z · LW(p) · GW(p)

XiXiDu?

Replies from: khafra
comment by khafra · 2011-06-13T18:21:29.481Z · LW(p) · GW(p)

XiXiDu's grammatical quirks are subtle. Not outright errors like these, just lapses from idiomatic English. He's also been comfortable questioning SIAI members publicly under his own name.

Replies from: komponisto
comment by komponisto · 2011-06-13T18:36:54.216Z · LW(p) · GW(p)

XiXiDu's grammatical quirks are subtle. Not outright errors like these, just lapses from idiomatic English.

In fact at least one of them -- "the SIAI" with the definite article -- has spread to the point of idiomaticity among significant portions of the LW community.