[SEQ RERUN] Not for the Sake of Happiness (Alone)

post by MinibearRex · 2011-11-03T04:29:45.039Z · LW · GW · Legacy · 30 comments

Today's post, Not for the Sake of Happiness (Alone), was originally published on 22 November 2007. A summary (taken from the LW wiki):

 

Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Truly Part of You, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

30 comments

Comments sorted by top scores.

comment by DanielLC · 2011-11-03T04:44:05.484Z · LW(p) · GW(p)

For what it's worth, I value happiness alone (though not my happiness in particular).

Replies from: amcknight, None, MinibearRex, Logos01, lessdazed
comment by amcknight · 2011-11-03T19:14:10.960Z · LW(p) · GW(p)

The funny thing is you probably don't even know what happiness is. Do you not value pleasure, contentment, joy, or satisfaction? Each of these might turn out not to be a single thing on closer inspection (the way jade turned out to be two different minerals).

Replies from: DanielLC
comment by DanielLC · 2011-11-03T21:03:16.817Z · LW(p) · GW(p)

I don't understand.

I don't know exactly what happiness is, but I'm pretty certain it's something like the partial derivative of desires with respect to beliefs, i.e. you're happy if you start wanting what's going on more. It might be the dot product of desires and beliefs, i.e. you believe your desires are fulfilled.
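One way to make those two readings concrete (a minimal sketch, assuming desires and beliefs can both be represented as vectors over possible world-states; the notation is illustrative, not something the commenter specified):

$$H_{\text{dot}} = \vec{D} \cdot \vec{B} = \sum_i d_i b_i \qquad H_{\text{deriv}} \approx \sum_i d_i \, \frac{\partial b_i}{\partial t}$$

Here $d_i$ is how strongly world-state $i$ is desired and $b_i$ is the credence assigned to it. The dot-product reading says you are happy to the extent you believe your desires are fulfilled; the derivative reading says you are happy to the extent your belief updates are moving probability toward desired states.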

comment by [deleted] · 2011-11-03T19:58:11.971Z · LW(p) · GW(p)

Are you sure about that? You could be, but let's say I told you that in 5 years you would become demented. This dementia would not make you unhappy; in fact it would make you slightly happier, and your condition would not make any other person unhappier. A very artificial situation, but still: would you consider it a good thing that you would become demented?

Replies from: DanielLC
comment by DanielLC · 2011-11-03T21:05:54.309Z · LW(p) · GW(p)

The idea of being demented makes me somewhat unhappy, which could certainly cause me to choose unhappiness over dementia, but that's a statement of my desires, not my moral beliefs. Morally, dementia would be better.

Replies from: None
comment by [deleted] · 2011-11-03T23:24:22.959Z · LW(p) · GW(p)

The idea of being demented makes me somewhat unhappy, which could certainly cause me to choose unhappiness over dementia

If we changed the condition to 10 seconds (instead of 5 years), would that make you choose dementia for sure?

but that's a statement of my desires, not my moral beliefs. Morally, dementia would be better.

By "morally" I assume that you mean something you should do? But how did you come to the conclusion that it is moral to choose dementia (happiness), and why do you deem it moral to care about others' happiness? (I sincerely hope my questions are not taken as utter nonsense.)

Replies from: DanielLC
comment by DanielLC · 2011-11-03T23:33:55.867Z · LW(p) · GW(p)

If we changed the condition to 10 seconds (instead of 5 years), would that make you choose dementia for sure?

I think so. I'm not certain why.

But how did you come to the conclusion that it is moral to choose dementia (happiness), and why do you deem it moral to care about others' happiness?

It's better because I'm happier. It might be somewhat bad for Present!me (who feels bad making that decision), but I assume Future!me's happiness will make up for that.

and why do you deem it moral to care about others' happiness?

There's nothing moral about caring about others' happiness. It's their happiness itself that is moral. Happiness is good.

comment by MinibearRex · 2011-11-03T16:39:36.694Z · LW(p) · GW(p)

So what's your response to the pill question?

Replies from: DanielLC
comment by DanielLC · 2011-11-03T20:54:48.972Z · LW(p) · GW(p)

I'd take a pill to make me happy. The exact kind of joy is irrelevant.

comment by Logos01 · 2011-11-03T06:51:28.001Z · LW(p) · GW(p)

I prefer to place value on liberty and excellence over happiness. That is, the breadth of available options for individuals to self-determine.

I would rather be twice as capable while half as happy than I would be twice as happy while half as capable.

Replies from: DanielLC
comment by DanielLC · 2011-11-03T16:39:05.553Z · LW(p) · GW(p)

Excellence?

I suspect a lot of people consider liberty important because they like it. I don't. I very much prefer my choices being made for me. If someone gave me more freedom, I wouldn't like that. Could they really be said to be doing me a favor? Or is it better that way, and the fact that I'm against it doesn't matter?

Replies from: Logos01
comment by Logos01 · 2011-11-03T17:17:00.777Z · LW(p) · GW(p)

Excellence?

Excellence as in, "prowess", "capability", "competence", "skillfulness", "strength".

I very much prefer my choices being made for me. [...] If someone gave me more freedom, I wouldn't like that. Could they really be said to be doing me a favor?

Would you agree that while you would prefer to have your choices made for you, you would strongly prefer to have some say in who makes those choices?

I ask this question as a way to attempt to reveal that we're focusing on two different things with the notion of 'freedom'. You associate "freedom" with "range of choices". I associate "freedom" with "range of outcomes". Normally, these are indistinguishable from one another. But there are practical cases where they aren't. For example: a voluntary slave need only make one choice: who is his master?

Replies from: None, DanielLC
comment by [deleted] · 2011-11-03T23:37:26.364Z · LW(p) · GW(p)

I ask this question as a way to attempt to reveal that we're focusing on two different things with the notion of 'freedom'. You associate "freedom" with "range of choices". I associate "freedom" with "range of outcomes".

Wow, I don't know if it was your intention, but you just made the most concise/elegant distinction between libertarian free will (outcome) and compatibilist free will (choice). Bravo!

But then I have to ask: by range of outcomes do you mean the expected range of outcomes or the genuine range of outcomes (real in the sense that not even Laplace's demon could know the outcome for sure)?

Replies from: Logos01
comment by Logos01 · 2011-11-04T00:09:55.720Z · LW(p) · GW(p)

Wow, I don't know if it was your intention, but you just made the most concise/elegant distinction between libertarian free will (outcome) and compatibilist free will (choice). Bravo!

That's rather interesting, since I myself am a compatibilist and a physicalist. My phrasing was not meant to be an argument for libertarianism over compatibilism / determinism, and in fact the definition of freedom as being associated with a greater range of available outcomes is entirely compatible with, well, compatibilism.

(real in the sense that not even Laplace's demon could know the outcome for sure)?

I do not subscribe to the notion that the universe is wholly deterministic anyhow, so Laplace's demon would simply be too confused... although maybe he'll know something we don't.

To answer you more directly, I don't know that there's a material difference between "expected range of outcomes" and "genuine range of outcomes", as I was speaking in the abstract anyhow.

Replies from: None
comment by [deleted] · 2011-11-09T19:06:19.126Z · LW(p) · GW(p)

But then what is the difference between "range of choices" and "expected range of outcomes"?

Replies from: Logos01
comment by DanielLC · 2011-11-03T21:00:14.877Z · LW(p) · GW(p)

I'd want it to be someone who makes good choices, since that will make me happier. Other than that, choosing who makes them is just another choice I'd wish to avoid.

I don't want a range of outcomes. I want a good outcome.

Are you trying to figure out what makes me happy, or whether or not I care about freedom on moral grounds? If freedom did make me happy, I'd just talk about a hypothetical person who preferred slavery. I already told you I only find happiness morally relevant.

Replies from: Logos01
comment by Logos01 · 2011-11-04T00:02:26.708Z · LW(p) · GW(p)

I don't want a range of outcomes. I want a good outcome.

These are synonymous when we must remain agnostic as to what each individual would select as a "good outcome" for him- or herself.

Are you trying to figure out what makes me happy, or whether or not I care about freedom on moral grounds?

No. My argument is one of practical utility, not of moral virtue. If we universally expand the range of available outcomes, then the number of "good outcomes" increases for each individual, because each individual is more likely to have access to the things he or she actually wants as an individual.

Replies from: DanielLC
comment by DanielLC · 2011-11-04T00:31:51.287Z · LW(p) · GW(p)

If we universally expand the range of available outcomes, then the number of "good outcomes" increases for each individual, because each individual is more likely to have access to the things he or she actually wants as an individual.

Are you saying that freedom is an instrumental value, and that we actually agree on terminal values?

Replies from: Logos01, Prismattic
comment by Logos01 · 2011-11-04T01:25:49.922Z · LW(p) · GW(p)

Are you saying that freedom is an instrumental value, and that we actually agree on terminal values?

I would be more inclined to say that if you prefer to be happy then you should have the freedom -- the option -- to be happy.

So I don't know that we agree on that -- as I would not prefer to be "happy" (in fact, I worry very much about becoming content and as a result sliding into complacency; I believe dissatisfaction with the now is an integral element of what makes me personally a "worthwhile" human being) -- but I do know that my belief in freedom as currently expressed means that just because I want to be one way does not mean that I am asserting that all people should wind up like me.

Diversity of individual outcomes in order to allow individuals to seek out and obtain their individual preferences (in a manner that does not directly impede the ability of others to do the same) is (or is close to) an intrinsic good.

Replies from: DanielLC
comment by DanielLC · 2011-11-04T01:37:35.459Z · LW(p) · GW(p)

So, freedom is an instrumental value, but happiness is not the terminal value?

It sounds like your terminal value is preference fulfillment or something to that extent.

Replies from: Logos01
comment by Logos01 · 2011-11-04T01:42:50.696Z · LW(p) · GW(p)

So, freedom is an instrumental value, but happiness is not the terminal value?

I'm not sure that the mere fact that something is a terminal value prevents it from also being an instrumental value. Perhaps I might agree with the notion that "maintaining high instrumental value is a terminal value" -- though I haven't really put deep thought into that one. I'll have to consider it.

It sounds like your terminal value is preference fulfillment or something to that extent.

Passively, yes.

comment by Prismattic · 2011-11-04T01:09:01.107Z · LW(p) · GW(p)

Possibly relevant

Replies from: DanielLC
comment by DanielLC · 2011-11-04T01:14:17.401Z · LW(p) · GW(p)

Is that a yes?

Edit: Whoops. I didn't notice that you weren't the person I was originally talking to.

The link is irrelevant. It's about instrumental values. I was talking about terminal values. I'm not sure what Logos01 was talking about, but if it is instrumental values, this isn't so much a debate as a mutual misunderstanding, and not much is relevant.

comment by lessdazed · 2011-11-03T13:20:25.523Z · LW(p) · GW(p)

How are happiness and unhappiness weighed against each other, to become a single value?

Is there a strict boundary between emotions, or a sliding scale among them all?

Replies from: DanielLC
comment by DanielLC · 2011-11-03T16:34:22.144Z · LW(p) · GW(p)

How are happiness and unhappiness weighed against each other, to become a single value?

I consider unhappiness negative happiness. If you want to do what you're currently doing more, you're happy. The more it makes you want to do it, the happier you are. If it makes you want to do it more by a negative amount, it's negative happiness.

comment by [deleted] · 2011-11-03T19:38:34.829Z · LW(p) · GW(p)

I'm not sure what I value; I'm not sure whether I could reduce all my values to pleasure and pain. Maybe I could, but:

My biggest beef with (psychological) hedonism is that it seems somewhat incoherent if you think that personal identity can be explained in terms of a narrative center of gravity: why should I care about future mes? I'm assuming that you have to act to maximize pleasure minus pain at any given time; you are not allowed to say "well, this gives me more pleasure minus pain in the long run", because if you do, you have sneaked in the value "future me is also important"/"maintaining personal identity is important" through the back door.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T19:53:26.005Z · LW(p) · GW(p)

While I basically agree with you about hedonism, I think you're being a bit too glib about your reason. Even someone who takes a pill that will give them a minute of ecstasy followed by death is caring about their "future self"; after all, the experience of putting a pill in their mouths is hardly worth pursuing for its own sake. This notion that future-me and present-me are sharply distinguishable requires clearer delineation to be useful.

Replies from: None
comment by [deleted] · 2011-11-03T20:08:58.559Z · LW(p) · GW(p)

But then I don't think going from future me to, for example, my children is so far. (Edit: crossed out "not" before "so".)

Though I have encountered people who hold to "strong psychological hedonism".

-- "But wouldn't complete headwireing, the way you describe it kill the person? No memories no coherent thoughts just bliss-stasis."

-- "Well you only want to maintain the person because in this very moment it gives you pleasure/less pain."