Not for the Sake of Happiness (Alone)

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-22T03:19:34.000Z · LW · GW · Legacy · 108 comments

When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery.  I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."

And Stock said, "But they'll be so much better that the real thing won't be able to compete.  It will just be way more fun for you to take the pills than to do all the actual scientific work."

And I said, "I agree that's possible, so I'll make sure never to take them."

Stock seemed genuinely surprised by my attitude, which genuinely surprised me.

One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy.  (In particular, Sam Harris does this in The End of Faith, which I just finished perusing - though Harris's reduction is more of a drive-by shooting than a major topic of discussion.)

This isn't the same as arguing whether all happinesses can be measured on a common utility scale - different happinesses might occupy different scales, or be otherwise non-convertible.  And it's not the same as arguing that it's theoretically impossible to value anything other than your own psychological states, because it's still permissible to care whether other people are happy.

The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.

We can easily list many cases of moralists going astray by caring about things besides happiness.  The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they'd said, "Hey, whatever turns you on."  But this doesn't show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.

It is an undeniable fact that we tend to do things that make us happy, but this doesn't mean we should regard the happiness as the only reason for so acting.  First, this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.

Second, just because something is a consequence of my action doesn't mean it was the sole justification.  If I'm writing a blog post, and I get a headache, I may take an ibuprofen.  One of the consequences of my action is that I experience less pain, but this doesn't mean it was the only consequence, or even the most important reason for my decision.  I do value the state of not having a headache.  But I can value something for its own sake and also value it as a means to an end.

For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions - it's not even enough to show that happiness is the most important consequent in all of our decisions - it must be the only consequent.  That's a tough standard to meet.  (I originally found this point in a Sober and Wilson paper, not sure which one.)

If I claim to value art for its own sake, then would I value art that no one ever saw?  A screensaver running in a closed room, producing beautiful pictures that no one ever saw?  I'd have to say no.  I can't think of any completely lifeless object that I would value as an end, not just a means.  That would be like valuing ice cream as an end in itself, apart from anyone eating it.  Everything I value, that I can think of, involves people and their experiences somewhere along the line.

The best way I can put it is that my moral intuition appears to require both the objective and the subjective component to grant full value.

The value of scientific discovery requires both a genuine scientific discovery, and a person to take joy in that discovery.  It may seem difficult to disentangle these values, but the pills make it clearer.

I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper.  I would be disturbed even if they weren't aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness.  Again, the pills make it clearer:  I'm not just concerned with my own awareness of the uncomfortable fact.  I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward.  That's simply not where I'm trying to steer the future.

I value freedom:  When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts.  The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome.  Even if people wouldn't know they were being manipulated, it would matter to my judgment of how well humanity had done with its future.  This is an important ethical issue, if you're dealing with agents powerful enough to helpfully tweak people's futures without their knowledge.

So my values are not strictly reducible to happiness:  There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center; properties that are not strictly reducible to subjective states even in principle.

Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else.  Art, science, love, lust, freedom, friendship...

And I'm okay with that.  I value a life complicated enough to be challenging and aesthetic - not just the feeling that life is complicated, but the actual complications - so turning into a pleasure center in a vat doesn't appeal to me.  It would be a waste of humanity's potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.

108 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Caledonian2 · 2007-11-22T03:41:30.000Z · LW(p) · GW(p)

Far too few people take the time to wonder what the purpose and function of happiness is.

Seeking happiness as an end in itself is usually extremely destructive. Like pain, pleasure is a method for getting us to seek out or avoid certain behaviors, and many of these behaviors had consequences whose properties could be easily understood in terms of the motivators. (Things are more complicated now that we're not living in the same world we evolved in.)

Instead of reasoning about goals, most people just produce complex systems of rationalizations to justify their desires. That's usually pretty destructive, too.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-22T04:00:55.000Z · LW(p) · GW(p)

Far too few people take the time to wonder what the purpose and function of happiness is.

You're talking as if this purpose were a property of happiness itself, rather than something that we assign to it. As a matter of historical fact, the evolutionary function of happiness is quite clear. The meaning that we assign to happiness is an entirely separate issue.

Seeking happiness as an end in itself is usually extremely destructive.

Because, er, it makes people unhappy?

comment by J. · 2007-11-22T04:26:39.000Z · LW(p) · GW(p)

One often sees ethicists arguing that all desires are in principle reducible to the desire for happiness? How often? If you're talking about philosopher ethicists, in general you see them arguing against this view.

comment by Caledonian2 · 2007-11-22T05:32:31.000Z · LW(p) · GW(p)
You're talking as if this purpose were a property of happiness itself, rather than something that we assign to it.

There are all sorts of realities that you cannot dictate at will. Purpose can be defined evolutionarily, and function is not a property that we assign. Do you 'assign' the function of your pancreas to it, or does it simply carry it out on its own?

Because, er, it makes people unhappy?

No, because it makes them dead. Or destroys them in a host of more subtle ways.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-22T06:06:10.000Z · LW(p) · GW(p)

Yes, J, I very often see this. By strict coincidence, for example, I was reading this by Shermer just now, and came across:

"I believe that humans are primarily driven to seek greater happiness, but the definition of such is completely personal and cannot be dictated and should not be controlled by any group. (Even so-called selfless acts of charity can be perceived as directed toward self-fulfillment--the act of making someone else feel good, makes us feel good. This is not a falsifiable statement, but it is observable in people's actions and feelings.) I believe that the free market--and the freer the better--is the best system yet devised for allowing all individuals to achieve greater levels of happiness."

Michael Shermer may or may not believe that all values reduce to happiness, but he is certainly "arguing as if" they do. Not every mistake has to be made primarily by professional analytic philosophers for it to be worth discussing.

comment by Unknown · 2007-11-22T06:11:47.000Z · LW(p) · GW(p)

Actually, seeking merely subjective happiness without any other greater purpose does often tend to make people unhappy. Or even if they manage to become somewhat happy, they will usually become even happier if they seek some other purpose as well.

One reason for this is that part of what makes people happy is their belief that they are seeking and attaining something good; so if they think they are seeking something better than happiness, they will tend to be happier than if they were seeking merely happiness.

Of course this probably wouldn't apply to a pleasure machine; presumably it is possible in principle to maximize subjective happiness without seeking any other goal. But like Eliezer, I wouldn't see this as particularly desirable.

comment by Harold · 2007-11-22T06:51:30.000Z · LW(p) · GW(p)

I don't think a 'joy of scientific achievement' pill is possible. One could be made that would bathe you in it for a while, but your mental tongue would probe for your accomplishment and find nothing. Maybe the pill could just turn your attention away and carry you along, but I doubt it. Vilayanur Ramachandran gave a TED talk about the cause of people's phantom limbs - your brain detects an inconsistency and gets confused and induces pain. Something similar might prevent an 'achievement' pill from having the intended effect.

comment by James_Bach · 2007-11-22T07:22:42.000Z · LW(p) · GW(p)

I'm nervous about the word happiness because I suspect it's a label for a basket of slippery ideas and sub-idea feelings. Still, something I don't understand about your argument is that when you demonstrate that, for you, happiness is not a terminal value, you seem to arbitrarily stop the chain of reasoning. Terminating your inquiry is not the same as having a terminal value.

If you say you value something and I know that not everyone values that thing, I naturally wonder why you value it. You say it's a terminal value, but when I ask myself why you value it if someone else doesn't, I say to myself "it must make him happy to value that". In that sense, happiness may be a word we use as a terminal value by definition, not by evidence - a convention like saying QED at the end of a proof. In the old days the terminal value was often "God wills it so", but with the invention of humanism in the Middle Ages, the pursuit of happiness was born.

In the case where someone seems to be working against what they say makes them happy, that just means there are different kinds or facets or levels of happiness. Happiness is complex, but if there are no reasons beyond the final reason for taking an action, then as a conceptual convention the final reason must be happiness.

Now I will argue a little against that. What I've said up to now is based on the assumption that humans are teleonomic creatures with free will. But I think we are actually NOT such creatures. We do not exist to fulfill a purpose. So the concept of happiness, defined as it is, is a story that is pasted onto us, by us, so that we can pretend to have an ethereal conscious existence. I propose that the truth is ultimately that we do what we do because of the molecules and energy state that we possess, within the framework of our environment and the laws of physics.

I could say that eating makes me happy and that's why I do it, or I could say that the deeper truth is that my brain is constructed to feel happy about eating. I eat because of that mechanism, not because of the "happiness", which doesn't actually exist. We make up the story of happiness not because it makes us happy to do so, but because we are compelled to do so by our physical nature.

In the words of Jessica Rabbit, I'm just drawn that way.

I normally wouldn't take the scientific happiness pill because I seem to be constructed to enjoy feeling that my state of mind is substantially a product of my ongoing thoughts, not chemicals. To inject chemicals to change my thoughts is literally a form of suicide, to me. It takes the unique thought pattern that is ME, kills it, and replaces it with a thought pattern identical in some ways to anyone else who takes the pill. People alive are unique; death is the ultimate conformity and conformity a kind of death.

But the happiness illusion is complex enough that I may under some circumstances say yes to that pill and have that little suicide.

Replies from: Polymeron
comment by Polymeron · 2011-03-24T12:38:06.145Z · LW(p) · GW(p)

I actually think that happiness is reducible to a clear and precise definition.

Happiness is a positive, gradual feedback mechanism that is context dependent. The context is the belief that your desires (most and strongest) are being fulfilled. Misery is the inverse (negative feedback for thwarted desires).

If you give an AI these mechanisms, then it experiences happiness and misery, regardless of what you call them or how they manifest.

Replies from: FAWS
comment by FAWS · 2011-03-24T13:16:47.450Z · LW(p) · GW(p)

I think you are grossly oversimplifying unless you actually define either happiness or desires in terms of the other, in which case one of those doesn't conform to the normal usage of the word. People don't necessarily know what will make them happy, desires don't necessarily conform to what people believe makes them happy or what actually will make them happy, and you can be happy without desires being fulfilled (whatever desires are unfulfilled will be the strongest at that moment; you desire food when you are hungry, not when you are sated). Fulfilling a desire that you previously believed would make you happy can easily make you less happy than you were before.

Replies from: Polymeron
comment by Polymeron · 2011-03-24T13:36:21.790Z · LW(p) · GW(p)

That people don't know what will make them happy does not invalidate what I said. They could well have desires they are not fully aware of, or are not aware of their current strength.

Nor does there need to be a coupling in time between happiness and desire. Happiness is not an immediate feedback mechanism like pleasure; it is gradual. You can be happy for fulfilling a desire you had, and that feeling persists - for a time.

If fulfilling a desire you had makes you less happy, it is because either:
a. You have lost the desire between the time you had it and the time it became fulfilled,
b. Other desires (more and stronger) have been thwarted, or
c. A combination of the two.

Can you bring an example of this mechanism working differently?

Replies from: FAWS, Marius
comment by FAWS · 2011-03-24T14:00:23.529Z · LW(p) · GW(p)

That people don't know what will make them happy does not invalidate what I said. They could well have desires they are not fully aware of, or are not aware of their current strength.

How do (partially) unaware desires figure into anything if what matters is "the belief that your desires are being fulfilled"? And if you infer anything about desires from lack of happiness I don't think you have successfully reduced anything. I'd agree there seems to be some connection between happiness and fulfilling desires but it certainly doesn't look like a simple, solved problem to me.

Can you bring an example of this mechanism working differently?

I think winning the lottery is the usual example. Or take early retirement.

comment by Marius · 2011-03-24T14:11:58.239Z · LW(p) · GW(p)

That people don't know what will make them happy does not invalidate what I said. They could well have desires they are not fully aware of, or are not aware of their current strength.

If you say that happiness comes from fulfilling desires, and that you can be unaware of your desires (or their strength), how can we measure those desires or their strength? Is it simply a matter of getting you drunk and asking? Making you take an implicit attitudes test? If we can only measure a desire by the happiness its fulfillment brings you, you have just set up a circular argument.

FAWS' lottery example is a good one. By any reasonable account of desires, most lottery winners strongly desire to win the lottery, and then start spending the money on satisfying their other desires. Yet by several recent accounts of happiness, winning the lottery appears to correlate poorly (or even negatively) with happiness.

Replies from: None, Polymeron
comment by [deleted] · 2011-03-24T14:32:18.006Z · LW(p) · GW(p)

winning the lottery appears to correlate poorly (or even negatively) with happiness.

I volunteer to test this claim.

comment by Polymeron · 2011-03-27T08:19:32.640Z · LW(p) · GW(p)

Desire measurement is an interesting problem in and of itself. Desires are drivers for behavior, so presumably to measure the strength of desires you'd need to observe which of them prevails in changing behavior, in light of belief. I suspect some form of neurological test could also be devised, but I don't currently know of one.

As for lottery - note that I have avoided using "long-term" as a quantifier on happiness as a feedback mechanism. It is gradual, but not particularly long-term. Saying that it isn't a desire-fulfillment feedback mechanism because a year after winning the lottery you're not happy, is like saying that pain isn't a damage-sense feedback mechanism because a year after burning your hand on a stove you're not still yanking it back.

Every feedback mechanism has its time window for impact; this one is no different. In the short term, winning the lottery tends to make people jump with glee and feel very happy. That we intuitively (and mistakenly) expect this happiness to last into the long term is a fact about us, not about happiness.

Replies from: Marius
comment by Marius · 2011-03-27T17:42:40.934Z · LW(p) · GW(p)

If you define desire in terms of behavior, satisfying desires would simply mean "succeeding at the tasks you elect to perform". Presumably this has something to do with happiness, but it misses a whole lot. In particular, many people express great sorrow/regret at the thought of things they didn't ever attempt, but which they wish they had. To say "you must not have wanted it" would be bizarre.

You are dismissing the lottery counterexample too easily. I don't want to win the lottery to hear my name on tv, I want to win because I expect to use the money to more easily satisfy large numbers of desires over the next several years. If happiness from winning the lottery is transitory (as it appears to be), despite the long-term nature of the desires it helps fulfill, then happiness must involve much more than merely satisfying one's desires.

Replies from: Polymeron
comment by Polymeron · 2011-03-27T20:48:56.328Z · LW(p) · GW(p)

I disagree with both your points.

You can fail or succeed at tasks you elect to perform regardless of the strength of your desire. And you can definitely have competing desires. If people didn't attempt something, it's not that they didn't want it; they simply had competing desires - to avoid risk, to avoid embarrassment, etc. etc. People are not made up of one single driver at any given point in time.

Regarding the lottery - it is true that people expect to have their desires fulfilled by the money. But what you're not taking into account is habituation - the desires people develop are very dependent on their condition; a starving person would be incredibly happy to find half a slice of bread to eat, but an ordinary person would usually not think too much of it. In fact, an ordinary person staying at a hotel and told they'd only get half a slice of bread for dinner would be upset. The different condition sets different expectations; desires are formed and lost all the time. So in your lottery example, the process is:

Person wins lots of money -> Becomes very happy -> Buys stuff they wanted -> Remains somewhat happy -> Becomes habituated to the now easily-acquired pleasures -> Establishes a new baseline -> No longer derives happiness from the continuation of the situation. Whereas any newly introduced stress the situation brings (e.g. lots of people asking you for money) reduces happiness, unless and until you become habituated to that as well.

You say that happiness involves "much more" than "merely" satisfying one's desires, but I don't see what that could include. Can you think of a situation where you become happy by an event even though you don't care whether or not it has come to pass, nor care about its consequences? I can't think of such.

Replies from: Marius
comment by Marius · 2011-03-27T21:13:03.047Z · LW(p) · GW(p)

You misunderstood the first point. I did not claim you succeed at tasks you are good at. I claimed that if you define desire by "what you do", and simultaneously believe that "satisfying your desires -> happiness", then succeeding at the tasks you attempt would cause happiness. Yet that is an incomplete descriptor of happiness.

Additionally, I obviously agree people have competing desires. But this makes it impossible to use "what I did" as a measurement of "what I want". For instance, if I want to run but don't, it may be due to laziness (which is hardly a "desire for slack"), fear (which is not merely a "desire to avoid risk or embarrassment"), etc.

Your lottery description is inconsistent with other accomplishments and pleasures. For instance, people who marry [the right person] do not simply become habituated to the new pleasures and establish a new baseline. People with good or bad jobs do not become entirely habituated to those jobs - they derive happiness and unhappiness from them every day. The lottery is a different story from these, and you'll need to come up with a better explanation as to why it is different. My explanation is that we derive happiness from earning success, but not from being given it arbitrarily, and that regardless of one's desires human nature tends to behave that way.

This is my first counterexample to your puzzle: regardless of whether one has a desire to have to earn success (and most people desire not to have to earn it), we are made happy by earning success. Other examples: we are made happy by hard work (even unsuccessful hard work), by being punished when we deserve it, by putting on a smile (even against one's will), and by many other things we don't desire and some that we try to avoid.

Replies from: Polymeron
comment by Polymeron · 2011-03-27T22:08:11.749Z · LW(p) · GW(p)

Thank you; you've made some very good points that deserve a proper reply. However, it's getting late here and I will need more energy to go over this properly. I'll definitely consider this.

As a quick opener, because I think there's an open point here: It seems to me that all emotions serve as behavioral feedback mechanisms. But even if I am mistaken on that, and/or happiness is not desire fulfillment feedback, what would you think its evolutionary role is? It's clearly not an arbitrary component. Not to make the fallacy that any explanation is better than no explanation, I would nevertheless be interested in playing off this hypothesis against something other than a null model - a competing explanation. Can you offer one?

Replies from: Marius
comment by Marius · 2011-03-28T03:23:39.326Z · LW(p) · GW(p)

I agree that emotions do serve as behavioral feedback mechanisms, but that's not all they do. They have complex social roles, among other things, including signaling, promotion of trust, promotion of empathy, etc. This social role is probably just as important in the case of happiness as the marker of "needs satisfied". In the case of grief, the social role is probably far more important than any feedback role. In addition to these roles, happiness contains an element of contentedness: "you are at a local maximum, and would be better off staying at this local maximum than risking matters to satisfy more needs". Thus, many slaves are content until they see the chance at freedom. There is a joy in great/beautiful/religious things that science currently lacks a good explanation for. There may be many other roles for happiness, as well.

Replies from: Polymeron
comment by Polymeron · 2011-03-28T16:02:31.377Z · LW(p) · GW(p)

I have to agree that happiness (and other emotions) have come to have a strong signaling component. I'm now even more interested than before in the mechanism by which it operates - just what triggers this emotion. I've also been thinking quite a bit about grief, which didn't fit as a pure feedback mechanism (otherwise you'd expect to have the same emotion for a person going away for life and for that person dying), and your comments on that finally drove the point home.

I will need to consider all this further and revise my hypothesis. Thanks again for the insight!

comment by Constant2 · 2007-11-22T08:14:17.000Z · LW(p) · GW(p)

Eliezer, the exchange with Greg Stock reminds me strongly of Nozick's experience machine argument, and your position agrees with Nozick's conclusion.

comment by Constant2 · 2007-11-22T08:30:32.000Z · LW(p) · GW(p)

One does, in real life, hear of drugs inducing a sense of major discovery, which disappears when the drug wears off. Sleep also has a reputation for producing false feelings of discovery. Some late-night pseudo-discovery is scribbled down, and in the morning it turns out to be nothing (if it's even legible).

I have sometimes wondered to what extent mysticism and "enlightenment" (satori) are centered around false feelings of discovery.

An ordinary, commonly experienced, non-drug-induced false feeling with seeming cognitive content is deja vu.

Replies from: Jonni
comment by Jonni · 2011-09-06T17:25:19.727Z · LW(p) · GW(p)

It looks like you're saying drug-induced discovery always turns out to be wrong when sobriety returns. I think this is an overgeneralisation.

Psychoactive drugs induce atypical thinking patterns. Sometimes this causes people to have true insights that they would not have achieved sober. Sometimes people come to false conclusions, whether they're on drugs or not.

comment by douglas · 2007-11-22T08:43:24.000Z · LW(p) · GW(p)

Eliezer, if we reduce every desire to "happiness" then haven't we just defined away the meaning of the word? I mean, love and the pursuit of knowledge and watching a scary movie are all rather different experiences. To say that they are all about happiness - well then, what wouldn't be? If everything is about happiness, then happiness doesn't signify anything of meaning, does it?

James, are you purposefully parodying the materialist philosophy based on the disproved Newtonian physics?

comment by douglas · 2007-11-22T08:55:15.000Z · LW(p) · GW(p)

Constant-- deja vu is not always necessarily contentless. See the work of Ian Stevenson. Mystical experiences are not necessarily centered around anything false-- see "The Spiritual Brain", by Beauregard (the neuroscientist who has studied these phenomena more than any other researcher.)

comment by Toby_Ord2 · 2007-11-22T11:41:48.000Z · LW(p) · GW(p)

Eliezer,

There is potentially some confusion on the term 'value' here. Happiness is not my ultimate (personal) end. I aim at other things which in turn bring me happiness and as many have said, this brings me more happiness than if I aimed at it. In this sense, it is not the sole object of (personal) value to me. However, I believe that the only thing that is good for a person (including me) is their happiness (broadly construed). In that sense, it is the only thing of (personal) value to me. These are two different senses of value.

Psychological hedonists are talking about the former sense of value: that we aim at personal happiness. You also mentioned that others ('psychological utilitarians', to coin a term) might claim that we only aim at the sum of happiness. I think both of these are false, and in fact probably no-one solely aims at these things. However, I think that the most plausible ethical theories are variants of utilitarianism (and fairly sophisticated ones at that), which imply that the only thing that makes an individual's life go well is that individual's happiness (broadly construed).

You could quite coherently think that you would fight to avoid the pill and also that, if it were slipped into your drink, your life would (personally) go better. Of course the major reason not to take it is that your real scientific breakthroughs benefit others too, but I gather that we are supposed to be bracketing this (obvious) possibility for the purposes of this discussion, and questioning whether you would/should take it in the absence of any external benefits. I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better). Such conflicts happen all the time.

My experience in philosophy is that it is fairly common for philosophers to espouse psychological hedonism, though I have never heard anyone argue for psychological utilitarianism. You appear to be arguing against both of these positions. There is a historical tradition of arguing for (ethical) utilitarianism. Even there, the trend is strongly against it these days and it is much more common to hear philosophers arguing that it is false. I'm not sure what you think of this position. From your comments above, it looks like you think it is false, but that may just be confusion about the word 'value'.

Replies from: None
comment by [deleted] · 2013-10-02T19:53:44.973Z · LW(p) · GW(p)

I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better).

What use is a system of "morality" which doesn't move you?

Such conflicts happen all the time.

Often, for me at least, when something I want to do conflicts with what I know is the right thing to do, I feel sad when I don't do the right thing. I would feel almost no remorse, if any, about not taking the pill.

comment by Doug_S. · 2007-11-22T18:13:03.000Z · LW(p) · GW(p)

If I admitted that I found the idea of being a "wirehead" very appealing, would you think less of me?

Replies from: None
comment by [deleted] · 2011-08-04T22:25:20.801Z · LW(p) · GW(p)

No.

comment by Cynical_Masters_Student · 2007-11-22T20:56:36.000Z · LW(p) · GW(p)

So how about anti depressants (think SSRI à la Prozac)? They might not be Huxley's soma or quite as convincing as the pill described in the post, but still, they do simulate something that may be considered happiness. And I'm told it also works for people who aren't depressed. Or for that matter, a whole lot of other drugs such as MDMA.

comment by Cynical_Masters_Student2 · 2007-11-22T20:58:13.000Z · LW(p) · GW(p)

Thinking about it, "simulate" is entirely the wrong word, really. If they really work, they do achieve something along the lines of happiness and do not just simulate it. Sorry about the doublepost.

comment by michael_vassar3 · 2007-11-22T23:46:01.000Z · LW(p) · GW(p)

Toby, I think you should probably have mentioned Derek Parfit as a reference when stating that "I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better). Such conflicts happen all the time.", as the claim needs substantial background to be obvious, but as I'm mentioning him here you don't need to any more.

comment by TGGP4 · 2007-11-22T23:57:29.000Z · LW(p) · GW(p)

Robin Hanson seems to take the simulation argument seriously. If it is the case that our reality is simulated, then aren't we already in a holodeck? So then what's so bad about going from this holodeck to another?

Replies from: Mister_Tulip
comment by Mister_Tulip · 2014-02-22T22:56:26.024Z · LW(p) · GW(p)

I agree with your basic point, but question why our reality being simulated is a necessary part of it. As long as it's functionally indistinguishable from a simulation, shouldn't the question of whether it actually is one be irrelevant?

comment by Wei_Dai2 · 2007-11-23T01:29:28.000Z · LW(p) · GW(p)

I agree with Eliezer here. Not all values can be reduced to desire for happiness. For some of us, the desire not to be wireheaded or drugged into happiness is at least as strong as the desire for happiness. This shouldn't be a surprise, since there were and still are psychoactive substances in our environment of evolutionary adaptation.

I think we also have a more general mechanism of aversion towards triviality, where any terminal value that becomes "too easy" loses its value (psychologically, not just over evolutionary time). I'm guessing this is probably because many of our terminal values (art, science, etc.) exist because they helped our ancestors attract mates by signaling genetic superiority. But you can't demonstrate genetic superiority by doing something easy.

Toby, I read your comment several times, but still can't figure out what distinction you are trying to draw between the two senses of value. Can you give an example or thought experiment, where valuing happiness in one sense would lead you to do one thing, and valuing it in the other sense would lead you to do something else?

Michael, do you have a more specific reference to something Parfit has written?

comment by Constant2 · 2007-11-23T02:08:58.000Z · LW(p) · GW(p)

So then what's so bad about going from this holodeck to another?

The point of the idea that this whole universe, including us, is simulated is that we ourselves are part of the simulation. Since we are conscious, and know that we are, we know that the simulated beings can be (and very likely are) conscious if they seem so. If they are, then they are "real" in an important sense, maybe the most important sense. They are not mere mindless wallpaper.

I think in order to make the simulation argument work, the simulation needs to be unreal, the inhabitants other than the person being fooled must have no inner reality of their own. Because if they have an inner reality, then in an important sense they are real and so the point of the thought experiment is lost.

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-06-13T16:08:15.645Z · LW(p) · GW(p)

I think most readers will have taken that interpretation for granted. The simulations are not indistinguishable from real people, but the person in the simulation is fooled sufficiently to not pry.

comment by TGGP4 · 2007-11-23T02:47:27.000Z · LW(p) · GW(p)

I fail to understand how the "mindless wallpaper" of the next level of simulation must be "unreal" while our simulated selves "are and we know we are conscious". They cannot be unreal merely because they are simulations because in the thought-experiment we ourselves are simulations but, according to you, still real.

comment by Constant2 · 2007-11-23T04:44:53.000Z · LW(p) · GW(p)

I fail to understand how the "mindless wallpaper" of the next level of simulation must be "unreal" while our simulated selves "are and we know we are conscious". They cannot be unreal merely because they are simulations because in the thought-experiment we ourselves are simulations but, according to you, still real.

No, you completely misunderstood what I said. I did not say that the "mindless wallpaper" (scare quotes) of the next level must be unreal. I said that in order for the philosophical thought experiment to make the point it's being used to make, the mindless wallpaper (no scare quotes - this is the actual term Eliezer used) needs to be assumed mindless. In real life, I fully expect a simulated person to have an internal self, to be real in the sense of having consciousness. But what I fully expect is totally irrelevant.

We're talking philosophical stories. Are you familiar with the story about another planet that has a substance XYZ that is just like water but has a different chemical composition from water? Well, in real life, I fully expect that there is no such substance. But in order for the thought experiment to make the philosophical point it's being used to make we need to grant that there is such a substance. Same thing with the mindless wallpaper. We must assume mindlessness, or else the thought experiment just doesn't work.

If you want to be totally stubborn on this point, then fine, we just need to switch to a different thought experiment to make the same point. The drug that induces the (mistaken) feeling that the drugged person has achieved a scientific discovery doesn't suffer from that problem. Of course, if you want to be totally stubborn about the possibility of such a drug, we'll just have to come up with another thought experiment.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-23T05:37:50.000Z · LW(p) · GW(p)

TGGP, the presumption is that the sex partners in this simulation have behaviors driven by a different algorithm, not software based on the human mind, software which is not conscious but is nonetheless capable of fooling a real person embedded in the simulation. Like a very advanced chatbot.

"Simulation" is a silly term. Whatever is, is real.

comment by Tom_McCabe2 · 2007-11-23T06:41:02.000Z · LW(p) · GW(p)

""Simulation" is a silly term. Whatever is, is real."

This is true, but "simulation" is still a useful word; it's used to refer to a subset of reality which attempts to resemble the whole thing (or a subset of it), but is not causally closed. "Reality", as we use the word, refers to the whole big mess which is causally closed.

comment by Toby_Ord2 · 2007-11-23T11:33:46.000Z · LW(p) · GW(p)

Wei, yes, my comment was less clear than I was hoping. I was talking about the distinction between 'psychological hedonism' and 'hedonism', and I also mentioned the many-person versions of these theories ('psychological utilitarianism' and 'utilitarianism'). Let's forget about the many-person versions for the moment and just look at the simple theories.

Hedonism is the theory that the only thing good for each individual is his or her happiness. If you have two worlds, A and B and the happiness for Mary is higher in world A, then world A is better for Mary. This is a theory of what makes someone's life go well, or to put it another way, about what is of objective value in a person's life. It is often used as a component of an ethical theory such as utilitarianism.

Psychological hedonism is the theory that people ultimately aim to increase their happiness. Thus, if they can do one of two acts, X and Y, and realise that X will increase their happiness more than Y, they will do X. This is not a theory of what makes someone's life go well, or a theory of ethics. It is merely a theory of psychological motivation. In other words, it is a scientific hypothesis which says that people are wired up so that they are ultimately pursuing their own happiness.

There is some connection between these theories, but it is quite possible to hold one and not the other. For example, I think that hedonism is true but psychological hedonism is false. I even think this can be a good thing since people get more happiness when not directly aiming at it. Helping your lover because you love them leads to more happiness than helping them in order to get more happiness. It is also quite possible to accept psychological hedonism and not hedonism. You might think that people are motivated to increase their happiness, but that they shouldn't be. For example, it might be best for them to live a profound life, not a happy one.

Each theory says that happiness is the ultimate thing of value in a certain sense, but these are different senses. The first is about what I would call actual value: it is about the type of value that is involved in a 'should' claim. It is normative. The second is about what people are actually motivated to do. It is involved in 'would' claims.

Eliezer has shown that he does care about some of the things that make him happy over and above the happiness they bring, however he asked:

'The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.'

Whether he would do something and whether he should are different things, and I'm not satisfied that he has answered the latter.

comment by g · 2007-11-23T11:57:30.000Z · LW(p) · GW(p)

Toby, what are your grounds for thinking that (ethical) hedonism is true, other than that happiness appears to be something that almost everyone wants? Is it something you just find so obvious you can't question it, or are there reasons that you can describe? (The obvious reason seems to me to be "We can produce something that's at least roughly right this way, and it's nice and simple". Something along those lines?)

comment by Toby_Ord2 · 2007-11-23T17:42:53.000Z · LW(p) · GW(p)

g, you have suggested a few of my reasons. I have thought quite a lot about this and could write many pages, but I will just give an outline here.

(1) Almost everything we want (for ourselves) increases our happiness. Many of these things evidently have no intrinsic value themselves (such as Eliezer's ice cream case). We often think we want them intrinsically, but on closer inspection, if we really ask whether we would want them if they didn't make us happy, we find the answer is 'no'. Some people think that certain things resist this argument by having some intrinsic value even without contributing to happiness. I am not convinced by any of these examples and have an alternative explanation of my opponents' views: they are having difficulty really imagining the case without any happiness accruing.

(2) I think that our lives cannot go better based on things that don't affect our mental states (such as based on what someone else does behind closed doors). If you accept this, that our lives are a function of our mental states, then happiness (broadly construed) seems the best explanation of what it is about our mental states that makes a possible life more valuable than another.

(3) I have some sympathy with preference accounts, but they are liable to count too many preferences, leading to double counting (my wife and I each prefer the other's life to go better even if we never find out, so do we count twice as much as single people?) and preferences based on false beliefs (wanting to drive a Ferrari because they are safer). Once we start ruling out the inappropriate preference types and saying that only the remaining ones count, it seems to me that this just leads back to hedonism.

Note that I'm saying that I think happiness is the only factor in determining whether a life goes well in a particular sense; this needn't be the same as the most interesting life or the most ethical life. Indeed, I think the most ethical life is the one that leads to the greatest sum of happiness across all lives (utilitarianism). I'm not completely convinced of any of this, but am far more convinced than I am by any rival theories.

comment by Wei_Dai2 · 2007-11-23T21:37:45.000Z · LW(p) · GW(p)

Toby, how do you get around the problem that the greatest sum of happiness across all lives probably involves turning everyone into wireheads and putting them in vats? Or, in an even more extreme scenario, turning the universe into computers that all do nothing but repeatedly run a program that simulates a person in an ultimate state of happiness. Assuming that we have access to limited resources, these methods seem to maximize happiness for a given amount of resources.

I'm sure you agree that this is not something we do want. Do you think that it is something we should want, or that the greatest sum of happiness across all lives can be achieved in some other way?

comment by Drake · 2007-11-24T22:14:41.000Z · LW(p) · GW(p)

In a slogan, one wants to be both happy and worthy of happiness. (One needn't incorporate Kant's own criteria of worthiness to find his formulation useful.)

comment by Richard_Hollerith2 · 2007-11-25T04:50:28.000Z · LW(p) · GW(p)

No slogans :)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-25T05:09:51.000Z · LW(p) · GW(p)

NO SLOGANS! NO SLOGANS! NO SLOGANS!

comment by Benquo · 2007-11-25T07:00:01.000Z · LW(p) · GW(p)

Drake, what do you mean by "worthy of happiness"? How does that formulation differ, for example, from my desire to both be happy and continue to exist as myself? (It seems to me like the latter desire also explains the pro-happiness anti-blissing-out attitude.)

comment by Richard_Hollerith2 · 2007-11-25T12:25:23.000Z · LW(p) · GW(p)

To the stars!

comment by Ben_Jones · 2007-11-27T11:53:42.000Z · LW(p) · GW(p)

"The pills make it clearer."

You said it big man.

comment by Robin_Brandt · 2008-01-29T16:14:07.000Z · LW(p) · GW(p)

I value many things intrinsically! This may make me happy or not, but I don't rely on the feelings of possible happiness when I make decisions. I see intrinsic value in happiness itself, but also as a means to other values, such as art, science, beauty, complexity, truth etc., which I often value even more than happiness. But sentient life may be the highest value. Why would we accept happiness as our highest terminal value when it is just a way to make living organisms do certain things? Of course it feels good and is important, but it is still rather arbitrary. I think these things are rather important if we don't want to end up wireheaded. Complexity/beauty may be my second highest value after sentience; happiness may only come as a third thing, then maybe truth and logic... Well, I will write more about this later...

comment by Tim_Tyler · 2008-03-06T17:20:54.000Z · LW(p) · GW(p)

According to the theory of evolution, organisms can be expected to have approximately one terminal value - which is - very roughly speaking - making copies of their genomes. There /is/ intragenomic conflict, of course, but that's a bit of a detail in this context.

Organisms that deviate very much from this tend to be irrational, malfunctioning or broken.

The idea that there are some values not reducible to happiness does not prove that there are "a lot of terminal values".

Happiness was never God's utility function in the first place. Happiness is just a carrot.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-06T17:55:28.000Z · LW(p) · GW(p)

A common misconception, Tim. See Evolutionary Psychology.

comment by Tim_Tyler · 2008-03-07T18:33:43.000Z · LW(p) · GW(p)

It seems like a vague reply - since the supposed misconception is not specified.

The "Evolutionary Psychology" post makes the point that values reside in brains, while evolutionary causes lie in ancestors. So, supposedly, if I attribute goals to a petunia, I am making a category error.

This argument is very literal-minded. When biologists talk about plants having the goal of spreading their seed about, it's intended as shorthand. Sure they /could/ say that the plant's ancestors exhibited differential reproductive success in seed distribution, and that explains the observed seed distribution adaptations, but it's easier to say that the plant wants to spread its seeds about. Everyone knows what you really mean - and the interpretation that the plant has a brain and exhibits intentional thought is ridiculous.

Richard Dawkins faced a similar criticism a lot, with his "selfish" genes. The number of times he had to explain that this was intended as a metaphor was enormous.

comment by Richard_Hollerith2 · 2008-03-07T23:29:30.000Z · LW(p) · GW(p)

Happiness is just a carrot.

And reproductive fitness is just a way to add intelligent agents to a dumb universe that begin with a big bang. Now that the intelligent agents are here, I suspect the universe no longer needs reproductive fitness.

comment by Z._M._Davis · 2008-03-07T23:48:17.000Z · LW(p) · GW(p)

Tim, if you understand that the "values" of evolution qua optimization process are not the values of the organisms it produces, what was the point of your 12:20 PM comment of March 6? "Terminal values" in the post refers to the terminal values of organisms. It is, as Eliezer points out, an empirical fact that people don't consciously seek to maximize fitness or any one simple value. Sure, that makes us "irrational, malfunctioning or broken" by the metaphorical standards of some metaphorical personification of evolution, but I should think that's rather beside the point.

comment by Tim_Tyler · 2008-03-08T19:10:39.000Z · LW(p) · GW(p)

Brains are built by genes. Those brains that reflect the optimisation target of the genes are the ones that will become ancestors. So it is reasonable - on grounds of basic evolutionary biology - to expect that human brains will generate behaviour resulting in the production of babies - thus reflecting the target of the optimisation process that constructed them.

In point of fact, human brains /do/ seem to be pretty good at making babies. The vast majority of their actions can be explained on these grounds.

That is not to say that people will necessarily consciously seek to maximize their expected fitness. People lie to themselves about their motives all the time - partly in order to convincingly mislead others. Consciousness is more like the PR department than the head office.

Of course, not all human brains are maximizing their expected fitness very well. I'm not claiming that nuns and priests are necessarily maximizing their expected fitness. The plasticity of brains is useful - but it means that they can get infected by memes, who may not have their owner's best interests at heart. Such infected minds sometimes serve the replication of their memes - rather than their owner's genes. Such individuals may well have a complex terminal value, composed of many parts. However, those are individuals who - from the point of view of their genes - have had their primary utility function hijacked - and thus are malfunctioning or broken.

comment by Richard_Hollerith2 · 2008-03-08T19:49:45.000Z · LW(p) · GW(p)

Is maximizing your expected reproductive fitness your primary goal in life, Tim?

When you see others maximizing their expected reproductive fitness, does that make you happy? Do you approve? Do you try to help them when you can?

comment by Tim_Tyler · 2008-03-08T20:00:00.000Z · LW(p) · GW(p)

More details of my views on the subject can be found here.

Biology doesn't necessarily predict that organisms should help each other, or that the success of others should be viewed positively - especially not if the organisms are rivals and compete for common resources.

comment by anonymous17 · 2008-03-08T20:45:15.000Z · LW(p) · GW(p)

More details of my views on the subject can be found here.

With the rise of "open source biology" in the coming decades, you'll probably be able to sequence your own non-coding DNA and create a pack of customized cockroaches. Here are your Nietzschean uebermensch: they'll share approx. 98% of your genome and do a fine job of maximizing your reproductive fitness.

comment by Richard_Hollerith2 · 2008-03-08T23:31:37.000Z · LW(p) · GW(p)

Customized cockroaches are far from optimal for Tim because Tim understands that the most powerful tool for maximizing reproductive fitness is a human-like consciousness. "Consciousness" is Tim's term; I would have used John Stewart's term, "skill at mental modelling." Thanks for the comprehensive answer to my question, Tim!

comment by Tim_Tyler · 2008-03-08T23:39:33.000Z · LW(p) · GW(p)

Re: genetic immortality via customized cockroaches:

Junk DNA isn't immortal. It is overwritten by mutations, LINEs and SINEs, etc. In a geological eyeblink, the useless chromosomes would be simply deleted - rendering the proposal ineffective.

comment by UnclGhost · 2011-01-02T05:15:44.114Z · LW(p) · GW(p)

Sam Harris expands on his view of morality in his recent book The Moral Landscape, but it hardly addresses this question at all. I attended a talk he gave on the book and when an audience member asked whether it would be moral to just give everyone cocaine or some sort of pure happiness drug, Harris basically said "maybe."

comment by HoverHell · 2011-05-04T11:27:59.970Z · LW(p) · GW(p)

-

comment by Grognor · 2011-09-29T03:42:22.464Z · LW(p) · GW(p)

In the agonizing process of reading all the Yudkowsky Less Wrong articles, this is the first one I have had any disagreement with whatsoever.

This is coming from a person who was actually convinced by the biased and obsolete 1997 singularity essay by Yudkowsky.

Only, it's not so much a disagreement as it is a value differential. I don't care about the processes by which one achieves happiness. The end results are what matter, and I'll be damned if I accept having one less hedon or one less utilon out there because of a perceived value in working toward them rather than automatically gaining them. It sounds to me like expecting victims of depression to work through it and experience the joy of overcoming depression, instead of, say, our hypothetical pill that just cures their depression. It is a sadness that nothing like that exists.

At the risk of (further) lowering my own status, I'll also say that I really really really do wish the "do anything" Star Trek Holodecks were here. Now, it might matter to me that simulated oral sex is not from a real person who made that decision on her evolution-based human terms, but that is another matter of utilons.

Edited to add: perhaps worth noting is that I would have accepted the deal given by the Superhappies in Three Worlds Collide, though I might have tried to argue that the "having humans eat babies as well" thing is not necessary, even knowing I probably would not succeed.

Replies from: DSimon, None
comment by DSimon · 2011-10-25T02:01:21.848Z · LW(p) · GW(p)

Since you're differentiating utilons from hedons, doesn't that kind of follow the thrust of the article? That is, the point that the OP is arguing against is that utilons are ultimately the same thing as hedons; that all people really want is to be happy and that everything else is an instrumental value towards that end.

Your example of the perfect anti-depressant is I think somewhat misleading; the worry when it comes to wire-heading is that you'll maximize hedons to the exclusion of all other types of utilon. Curing depression is awesome not only because it increases net hedons, but also because depression makes it hard to accomplish anything at all, even stuff that's about whole other types of utilons.

Replies from: Grognor, momothefiddler
comment by Grognor · 2011-10-25T04:46:05.943Z · LW(p) · GW(p)

The subject in detail is too complicated to bother with in this comment thread because it is discussed in much greater detail elsewhere, so I'll just bring up two things.

1) In the last month I've been thinking pretty darned carefully and am now really really unsure whether I'd accept the Superhappies' deal and am frankly glad I'll never have to make that choice.

2) Some of my own desires are bad, and if I were to take a pill that completely eliminated those desires, I would. The idea that what humanity wants right now is what it really wants is definitely not certain - about as uncertain as uncertainties get. So the real question is, why does our utility function act the way it does? There was no purpose for it, and if we can agree on a way to change it, we should change it, even if that means

other types of utilon

go extinct.

Replies from: DSimon
comment by DSimon · 2011-10-25T13:59:03.082Z · LW(p) · GW(p)

The idea that what humanity wants right now is what it really wants is definitely not certain

Strongly agreed! But that's why the gloss for CEV talks about stuff like what we would ideally want if we were smarter and knew more.

comment by momothefiddler · 2012-05-04T21:32:22.064Z · LW(p) · GW(p)

The basic point of the article seems to be "Not all utilons are (reducible to) hedons", which confuses me from the start. If happiness is not a generic term for "perception of a utilon-positive outcome", what is it? I don't think all utilons can be reduced to hedons, but that's only because I see no difference between the two. I honestly don't comprehend the difference between "State A makes me happier than state B" and "I value state A more than state B". If hedons aren't exactly equivalent to utilons, what are they?

An example might help: I was arguing with a classmate of mine recently. My claim was that every choice he made boiled down to the option which made him happiest. Looking back on it, I meant to say it was the option whose anticipation gave him the most happiness, since making choices based on the result of those choices breaks causality. Anyway, he argued that his choices were not based on happiness. He put forth the example that, while he didn't enjoy his job, he still went because he needed to support his son. My response was that while his reaction to his job as an isolated experience was negative, his happiness from {job + son eating} was more than his happiness from {no job + son starving}.

I thought at the time that we were disagreeing about basic motivations, but this article and its responses have caused me to wonder if, perhaps, I don't use the word 'happiness' in the standard sense.

To give a hyperbolic thought exercise: If I could choose between all existing minds (except mine, to make the point about relative values) experiencing intense agony for a year and my own death, I think I'd be likely to choose my death. This is not because I expect to experience happiness after death, but because considering the state of the universe in the second scenario brings me more happiness than considering the state of the universe in the first. As far as I can tell, this is exactly what it means to place a higher value on the relative pleasure and continuing functionality of all-but-one mind than on my own continued existence.

To anyone who argues that utilons aren't exactly equivalent to hedons (either that utilons aren't hedons, or that utilons are reducible to hedons), please explain to me what you think happiness is. (My sudden realisation that you exist has allowed me to realise you seem amazingly common.)

Replies from: DSimon
comment by DSimon · 2012-05-06T00:31:28.985Z · LW(p) · GW(p)

Consider the following two world states:

  1. A person important to you dies.
  2. They don't die, but you are given a brain modification that makes it seem to you as though they had.

The hedonic scores for 1 and 2 are identical, but 2 has more utilons if you value your friend's life.
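
To make the bookkeeping explicit, here is a minimal sketch of the distinction (the World class, the functions, and the score of 10 are invented assumptions for illustration; it presumes you place some value on your friend actually being alive, whether or not you know it):

```python
# Hypothetical illustration of the two world-states above; the class,
# scores, and value of 10 for "friend actually alive" are all invented.
from dataclasses import dataclass

@dataclass
class World:
    friend_alive: bool    # the territory: what is actually true
    believe_alive: bool   # your map: what you perceive to be true

def hedons(w: World) -> int:
    """Happiness depends only on what you perceive."""
    return 10 if w.believe_alive else 0

def utilons(w: World) -> int:
    """Value depends on the territory too, if you value your friend's life."""
    return (10 if w.friend_alive else 0) + hedons(w)

state_1 = World(friend_alive=False, believe_alive=False)  # they die
state_2 = World(friend_alive=True, believe_alive=False)   # brain mod: alive, but you think they died

assert hedons(state_1) == hedons(state_2)   # identical hedonic scores
assert utilons(state_2) > utilons(state_1)  # but state 2 has more utilons
```

Under these made-up numbers the two states feel identical from the inside, but only one of them preserves the thing you value.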

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T01:19:02.945Z · LW(p) · GW(p)

The hedonic scores are identical and, as far as I can tell, the outcomes are identical. The only difference is if I know about the difference - if, for instance, I'm given a choice between the two. At that point, my consideration of 2 has more hedons than my consideration of 1. Is that different from saying 2 has more utilons than 1?

Is the distinction perhaps that hedons are about now while utilons are overall?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T02:05:13.845Z · LW(p) · GW(p)

Talking about "utilons" and "hedons" implies that there exists some X such that, by my standards, the world is better with more X in it, whether I am aware of X or not.

Given that assumption, it follows that if you add X to the world in such a way that I don't interact with it at all, it makes the world better by my standards, but it doesn't make me happier. One way of expressing that is that X produces utilons but not hedons.

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T02:21:15.799Z · LW(p) · GW(p)

I would not have considered utilons to have meaning without my ability to compare them in my utility function.

You're saying utilons can be generated without your knowledge, but hedons cannot? Does that mean utilons are a measure of reality's conformance to your utility function, while hedons are your reaction to your perception of reality's conformance to your utility function?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T03:20:32.084Z · LW(p) · GW(p)

I'm saying that something can make the world better without affecting me, but nothing can make me happier without affecting me. That suggests to me that the set of things that can make the world better is different from the set of things that can make me happy, even if they overlap significantly.

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T03:26:29.858Z · LW(p) · GW(p)

That makes sense. I had only looked at the difference within "things that affect my choices", which is not a full representation of things. Could I reasonably say, then, that hedons are the intersection of "utilons" and "things of which I'm aware", or is there more to it?

Another way of phrasing what I think you're saying: "Utilons are where the utility function intersects with the territory; hedons are where the utility function intersects with the map."
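
One rough way to render that map/territory phrasing concretely, as a sketch rather than anything canonical (the features, the weights, and the idea of simply summing them are all invented for the example):

```python
# Toy rendering of "utilons score the territory, hedons score the map";
# the features and weights are invented for the example.
value_of = {                      # the utility function over features
    "friend_alive": 10,
    "beautiful_art_exists": 3,
    "no_headache": 2,
}

territory = {"friend_alive", "beautiful_art_exists", "no_headache"}
the_map = {"friend_alive", "no_headache"}   # the features I'm actually aware of

utilons = sum(value_of[f] for f in territory)            # 15: scores the whole territory
hedons = sum(value_of[f] for f in the_map & territory)   # 12: scores only the perceived part

print(utilons, hedons)  # art I never see generates utilons but no hedons
```

On this toy picture, hedons are just the utilons whose sources happen to appear on the map, which matches the "intersection with things of which I'm aware" framing above.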

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T03:30:34.799Z · LW(p) · GW(p)

I'm not sure how "hedons" interact with "utilons".
I'm not saying anything at all about how they interact.
I'm merely saying that they aren't the same thing.

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T03:45:15.731Z · LW(p) · GW(p)

Oh! I didn't catch that at all. I apologize.

You've made an excellent case for them not being the same. I agree.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T03:53:21.546Z · LW(p) · GW(p)

Cool. I thought it was confusing you earlier, but perhaps I misunderstood.

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T04:00:58.880Z · LW(p) · GW(p)

It was confusing me, yes. I considered hedons exactly equivalent to utilons.

Then you made your excellent case, and now it no longer confuses me. I revised my definition of happiness from "reality matching the utility function" to "my perception of reality matching the utility function" - which it should have been from the beginning, in retrospect.

I'd still like to know if people see happiness as something other than my new definition, but you have helped me from confusion to non-confusion, at least regarding the presence of a distinction, if not the exact nature thereof.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T05:55:31.589Z · LW(p) · GW(p)

(nods) Cool.

As for your proposed definition of happiness... hm.

I have to admit, I'm never exactly sure what people are talking about when they talk about their utility functions. Certainly, if I have a utility function, I don't know what it is. But I understand it to mean, roughly, that when comparing hypothetical states of the world Wa and Wb, I perform some computation F(W) on each state such that if F(Wa) > F(Wb), then I consider Wa more valuable than Wb.

Is that close enough to what you mean here?

And you are asserting, definitionally, that if that's true I should also expect that, if I'm fully aware of all the details of Wa and Wb, I will be happier in Wa.

Another way of saying this is that if O(W) is the reality that I would perceive in a world W, then my happiness in Wa is F(O(Wa)). It simply cannot be the case, on this view, that I consider a proposed state-change in the world to be an improvement, without also being such that I would be made happier by becoming aware of that state-change actually occurring.

Am I understanding you correctly so far?

Further, if I sincerely assert about some state change that I believe it makes the world better, but it makes me less happy, it follows that I'm simply mistaken about my own internal state... either I don't actually believe it makes the world better, or it doesn't actually make me less happy, or both.

Did I get that right? Or are you making the stronger claim that I cannot in point of fact ever sincerely assert something like that?

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T12:54:10.948Z · LW(p) · GW(p)

I understand it to mean, roughly, that when comparing hypothetical states of the world Wa and Wb, I perform some computation F(W) on each state such that if F(Wa) > F(Wb), then I consider Wa more valuable than Wb.

That's precisely what I mean.

Another way of saying this is that if O(W) is the reality that I would perceive in a world W, then my happiness in Wa is F(O(Wa)). It simply cannot be the case, on this view, that I consider a proposed state-change in the world to be an improvement, without also being such that I would be made happier by becoming aware of that state-change actually occurring.

Yes

Further, if I sincerely assert about some state change that I believe it makes the world better, but it makes me less happy, it follows that I'm simply mistaken about my own internal state... either I don't actually believe it makes the world better, or it doesn't actually make me less happy, or both. Did I get that right? Or are you making the stronger claim that I cannot in point of fact ever sincerely assert something like that?

Hm. I'm not sure what you mean by "sincerely", if those are different. I would say if you claimed "X would make the universe better" and also "Being aware of X would make me less happy", one of those statements must be wrong. I think it requires some inconsistency to claim F(Wa+X)>F(Wa) but F(O(Wa+X))<F(O(Wa)), assuming X doesn't change F. It is possible, though, for X to change the utility function itself, so that F1(Wa+X)>F1(Wa) but F2(O(Wa+X))<F2(O(Wa)), which is relatively common (Pascal's Wager comes to mind).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T14:33:36.466Z · LW(p) · GW(p)

What I mean by "sincerely" is just that I'm not lying when I assert it.
And, yes, this presumes that X isn't changing F.
I wasn't trying to be sneaky; my intention was simply to confirm that you believe F(Wa+X)>F(Wa) implies F(O(Wa+X))>F(O(Wa)), and that I hadn't misunderstood something.
And, further, to confirm that you believe that if F(W) gives the utility of a world-state for some evaluator, then F(O(W)) gives the degree to which that world-state makes that evaluator happy. Or, said more concisely: that H(O(W)) == F(O(W)) for a given observer.

Hm.

So, I agree broadly that F(Wa+X)>F(Wa) implies F(O(Wa+X))>F(O(Wa)). (Although a caveat: it's certainly possible to come up with combinations of F() and O() for which it isn't true, so this is more of an evidentiary implication than a logical one. But I think that's beside our purpose here.)

H(O(W)) = F(O(W)), though, seems entirely unjustified to me. I mean, it might be true, sure, just as it might be true that F(O(W)) is necessarily equal to various other things. But I see no reason to believe it; it feels to me like an assertion pulled out of thin air.

Of course, I can't really have any counterevidence, the way the claim is structured.

I mean, I've certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy -- that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same -- which suggests to me that F() and H() are different functions... but you would presumably just say that I'm mistaken about one or both of those things. Which is certainly possible; I am far from incorrigible about what makes me happy, and I don't entirely understand what I believe makes the world better.
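
That counterexample can be made concrete with a toy model (a sketch under invented assumptions, borrowing the thread's F / O / H notation; nothing here comes from the discussion beyond the shape of the argument):

```python
# Toy model of the counterexample, using the thread's F / O / H notation;
# the facts and scores are invented and mean nothing in themselves.

def O(world):
    """The perceived part of a world-state: the facts the evaluator is aware of."""
    return {fact: v for fact, v in world.items() if fact != "hidden"}

def H(perceived):
    """Felt happiness about what is perceived: a fixed gut reaction."""
    return 5 if perceived.get("X") else 0

def F_before(state):
    """Betterness judgment before changing my mind: X counts as an improvement."""
    return 5 if state.get("X") else 0

def F_after(state):
    """Betterness judgment after reflection: X no longer counts."""
    return 0

world_with_X = {"X": True, "hidden": "facts I never observe"}

print(F_before(O(world_with_X)))  # 5 -- F o O and H agree before the change of mind
print(F_after(O(world_with_X)))   # 0 -- the judgment shifted...
print(H(O(world_with_X)))         # 5 -- ...but the felt reaction did not
```

If F can shift while H stays put, then H and F-composed-with-O cannot be the same function, however often they happen to agree.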

I think I have to leave it there. You are asserting an identity that seems unjustified to me, and I have no compelling reason to believe that it's true, but also no definitive grounds for declaring it false.

Replies from: momothefiddler
comment by momothefiddler · 2012-05-06T14:54:30.345Z · LW(p) · GW(p)

I believe you to be sincere when you say

I've certainly had the experience of changing my mind about whether X makes the world better, even though observing X continues to make me equally happy -- that is, the experience of having F(Wa+X) - F(Wa) change while H(O(Wa+X)) - H(O(Wa)) stays the same

but I can't imagine experiencing that. If the utility of an outcome goes down, it seems my happiness from seeing that outcome must necessarily go down as well. This discrepancy causes me to believe there is a low-level difference between what you consider happiness and what I consider happiness, but I can't explain mine any further than I already have.

I don't know how else to say it, but I don't feel I'm actually making that assertion. I'm just saying: "By my understanding of hedony=H(x), awareness=O(x), and utility=F(x), I don't see any possible situation where H(W) =/= F(O(W)). If they're indistinguishable, wouldn't it make sense to say they're the same thing?"

Edit: formatting

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-06T15:51:20.812Z · LW(p) · GW(p)

I agree that if two things are indistinguishable in principle, it makes sense to use the same label for both.

It is not nearly as clear to me that "what makes me happy" and "what makes the world better" are indistinguishable sets as it seems to be to you, so I am not as comfortable using the same label for both sets as you seem to be.

You may be right that we don't use "happiness" to refer to the same things. I'm not really sure how to explore that further; what I use "happiness" to refer to is an experiential state I don't know how to convey more precisely without in effect simply listing synonyms. (And we're getting perilously close to "what if what I call 'red' is what you call 'green'?" territory, here.)

Replies from: momothefiddler
comment by momothefiddler · 2012-05-07T00:37:13.004Z · LW(p) · GW(p)

Without a much more precise way of describing patterns of neuron-fire, I don't think either of us can describe happiness more than we have so far. Having discussed the reactions in-depth, though, I think we can reasonably conclude that, whatever they are, they're not the same, which answers at least part of my initial question.

Thanks!

comment by [deleted] · 2013-10-02T19:59:04.228Z · LW(p) · GW(p)

I don't have any objection to you wireheading yourself. I do object to someone forcibly wireheading me.

comment by blacktrance · 2013-04-19T23:55:56.755Z · LW(p) · GW(p)

this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction

And why should we actually treat people as "ends in themselves"? What's bad about treating everything except one's own happiness as instrumental?

comment by sjmp · 2013-05-15T19:56:07.150Z · LW(p) · GW(p)

Taking it a bit further than a pill: if we could trust an AI to put the whole of humanity into a Matrix-like state, and keep humanity alive in that state longer than humanity itself could survive living in the real world, while running a simulation of life with maximum happiness in each brain until it ran out of energy, would you advocate it? I know I would, and I don't really see any reason not to.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-15T20:37:01.001Z · LW(p) · GW(p)

Can you say more about what you anticipate this maximally happy existence looking like?

Replies from: sjmp
comment by sjmp · 2013-05-15T20:51:45.854Z · LW(p) · GW(p)

Far be it from me to tell anyone what a maximally happy existence is. I'm sure an AI with a full understanding of human physiology can figure that out.

I would venture to guess that it would not just be a constant stream of events that the person undergoing the simulation would write on paper under the title "happy stuff"; some minor setbacks might be included for perspective, maybe even a big event like cancer, which the person under simulation would manage to overcome?

Or maybe it's the person under simulation sitting in empty white space while the AI maximally stimulates the pleasure centers of the brain until the heat death of the universe.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2013-05-15T21:04:23.079Z · LW(p) · GW(p)

OK, thanks.

comment by [deleted] · 2013-05-15T21:10:15.880Z · LW(p) · GW(p)

This suggestion might run into trouble if the 'maximally happy state' has necessary conditions which exclude being in a simulation. Suppose being maximally happy meant, I dunno, exploring and thinking about the universe and sharing their lives with other people. Even if you could simulate this perfectly, just the fact that it was simulated would undermine the happiness of the participants. It's at least not obviously true that you're happy if you think you are.

Replies from: sjmp
comment by sjmp · 2013-05-15T21:41:26.809Z · LW(p) · GW(p)

I don't really see how that could be the case. For the people undergoing the simulation, everything would be just as real as this current moment is to you and me. How can there be a condition for a maximally happy state that excludes being in a simulation, when this ultra-advanced AI is in fact giving you the exact same nerve signals that you would get if you experienced the things in the simulation in real life?

comment by [deleted] · 2019-12-20T02:21:52.064Z · LW(p) · GW(p)

If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.

I'm commenting to register disagreement. I was really surprised by this. I routinely visit art galleries when traveling, and some of the pieces I appreciate the most are not the famous ones. The walls of my home and the desktop background on my computer have artwork that I picked because I like it. It just makes me happy; there is no other reason than that.

That art appreciation is very personal is a normalized opinion in the art world too. I think you're the outlier on this Eliezer, at least according to my anecdata.

Edit: It occurs to me that maybe Eliezer is including himself (and, I guess, the artist) in his accounting of "people." My argument was that it is not necessary for other people besides the viewer (and, implicitly, the artist) to appreciate art for it to have value. A single person valuing the art is enough. If Eliezer agrees with this then I think we're on the same page. I don't know what it would mean for a random rock on the moon (or better yet, ʻOumuamua) which no one has ever seen or will ever see to be considered "art." Art does require a sentient mind to appreciate it for it to have value.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-12-20T07:04:12.339Z · LW(p) · GW(p)

How does any of what you’ve said disagree with what Eliezer said, though…? Everything you’re saying seems completely consistent with the bit you quoted, and the post in general.

EDIT:

That art appreciation is very personal is a normalized opinion in the art world too. I think you're the outlier on this Eliezer, at least according to my anecdata.

But Eliezer didn’t say anything to contradict the view that art appreciation is personal.

Replies from: None
comment by [deleted] · 2019-12-21T03:00:02.170Z · LW(p) · GW(p)

I went back and edited my comment before seeing your reply. I originally interpreted "that no one ever saw" in Eliezer's question "would I value art that no one ever saw?" as meaning no one else (besides Eliezer and, presumably, the artist who made the work).

I see now that maybe he meant "art that no human being has ever seen at all." Which resolves the conflict, but seems like an implicitly contradictory supposition.

comment by Ian Corral (ian-corral) · 2020-06-02T21:22:12.000Z · LW(p) · GW(p)

Sounds a lot like splitting hairs since each consequence you list still has the same outcome, pleasure/happiness. So why not skip over it all?

comment by Pato Lubricado (pato-lubricado) · 2021-05-20T03:07:55.830Z · LW(p) · GW(p)

Funny, the other day I was thinking of this, but from the other side: What if we've already taken the pill?

Imagine Morpheus comes to you and reveals that the world we live in is fake, and most of the new science is simulated to make it more fun. Real mechanics is just a polished version of Newton's (pretty much Lagrangian/Hamiltonian mechanics). There is no such thing as a speed limit in the universe. Instantaneous travel to every point of the universe is possible, and has already been done. No aliens either (not that it would be impossible, we just happen to be the first). Quantum mechanics isn't real either. The more accurate model of the atom is Thomson's, and there isn't much more beyond that. Chemistry is not very different from our own, somehow (sorry, I don't know chemistry). 
People live without much difficulty or suffering, there aren't any wars, and people are not so irrational. But everything (and everyone) is so heart-wrenchingly boring that we mostly decided to go live more exciting lives.

So... Blue pill, or red pill?

Replies from: EniScien
comment by EniScien · 2022-05-14T09:58:26.886Z · LW(p) · GW(p)

Red pill. When immersed in virtuality, I would not erase my memory of reality. Unless, of course, it is assumed that those of us "from the true, simple, and boring universe" cannot play games either. Well, don't you think that there is too much suffering in the world? Although the very idea of a simpler universe is interesting.

Replies from: pato-lubricado
comment by Pato Lubricado (pato-lubricado) · 2022-05-18T05:08:12.013Z · LW(p) · GW(p)

I'm not sure I understand what you're trying to say. Do you think Morpheus may be lying, and/or that this world is so bad that a boring one is better? In that case you're free to go see the real world, but you're free to come back to this world (or another simulated world that you like better) at any time. It's more exciting if you think that the blue/red pill decision is once-in-a-lifetime, but more realistically it'd be set up so that you can go in and out as you please (with the obvious caveats of "you don't remember the real world", like in dreams), with the same role as a movie or a game.

Okay, let me flesh it out a bit more: There isn't much to do or discover or fix or protect. Everything is pretty much done. All universal truths are in a 500-page book, which you first read in full at age 15, and understood completely at age 19. Any societal change/revolution would demonstrably bring forth more suffering than happiness. History is 200 years old, and not much has happened since year 157, when we finished mapping the human brain. Some of the first humans are still alive and can tell you what happened just like I can tell you what I had for lunch (or you can experience it yourself if you want). You can understand another human being in full after a 10-minute session of mind-sharing, and the differences between human minds are subjectively about 10% at most. You can have sex with whoever you want to (after mind-sharing, of course), but natural sex is about as pleasurable as a chocolate cookie. A Friendly AI was built, and it decided to shut itself down because helping us would only deplete the last reserves of fun in the universe, and it didn't want to make us dumber or turn us into orgasmium. In short, any goal that you can think of is either provably undesirable, or it can be accomplished in one day at most. More intelligence doesn't help. We were the Singularity. To us, the world is a medium-difficulty Minesweeper board that has already been solved. The game wasn't difficulty-scaled to the player, but this time it went in the "too easy" direction (like that one Eliezer short story, but for everything).

What do you even do all day in such a world? What everyone does is superstimulate their brain in various ways. And if you put all those superstimuli together, you get the most popular experience: Earth. Full of dangers, pleasures and mysteries. Kill or be killed. Amass wealth or starve to death. Overthrow cruel governments, or live a life of oppression. Mind-blowing sex awaits you, but you will need to seduce these impossibly sexy people... by using only your words! Take the craziness up to 11 with drugs! Discover profoundly weird science. Explore this vast, unknown universe that you can never visit in full. What's behind the horizon? Who knows! Let's find out! The clock is running! You have 80 years.

Replies from: EniScien
comment by EniScien · 2022-05-18T10:54:07.553Z · LW(p) · GW(p)

I don't seem to quite understand what you're trying to say either. Are you suggesting that my ideas about my values are not correct, and that in fact, in the outside world, what best satisfies the values of the outer me is immersion in Earth with complete oblivion? If so, then it is not clear what the choice between the red and blue pill amounts to, because since I am here, I have already chosen the blue one. P.S. I have a feeling that you're committing the same fallacy here as the theists (I forget its name) when you assume that our world is maximally optimized for human values, the best we could have, although it is not. And if it is optimized for the values of the people of the outside world, and not ours, then how can we draw conclusions using our values? (I'm not trying to use some kind of manipulative technique, I'm just expressing how I feel.)

Replies from: pato-lubricado
comment by Pato Lubricado (pato-lubricado) · 2022-07-06T04:51:12.360Z · LW(p) · GW(p)

(Sorry for taking so long between replies - my account logs out automatically and I never remember to log back in)

Yes, that's close to what I'm saying. When watching a movie, we have the ability to "almost-forget" the real world to become immersed in it. In the red-pill world, you can do this, but cranked up to 11: you literally forget everything, so that it all feels way more exciting, whether good or bad. You retain all memories afterwards. And yes, outer-you already chose, but I consider inner-you to be a different person, so the question is still meaningful for me (both will merge if you redpill, like when you wake up from a dream, so it's not suicide either).

But yeah, it's less of a serious question that needs an answer, and more of an existential horror story. It conveys the idea of the world not being balanced like a videogame, but in an unusual direction. We usually struggle with the endlessness of the obstacles that are in the way of our goals; but imagining that all the obstacles suddenly end and all the goals are trivially reachable, like activating god-mode in a game, is a different kind of terrifying.

comment by Misha Ulyanov · 2021-12-28T03:46:39.003Z · LW(p) · GW(p)

I think we all strive for benefit. Happiness is just one possible component of benefit. There are other components; knowledge of the truth, for example.

Replies from: molybdaenmornell
comment by Molybdaenmornell (molybdaenmornell) · 2023-03-08T22:43:07.718Z · LW(p) · GW(p)

Which raises the question: Is the latter an instrumental or a terminal value? Or does it vary?

comment by Molybdaenmornell (molybdaenmornell) · 2023-03-08T22:58:09.340Z · LW(p) · GW(p)

"I value freedom:  When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts."

I am somewhat the same but must recognise that it is possible that, were I to be forced into pure bliss, I would not want to go back. My value set may shift or reveal itself to not be what I thought it was. (I think it is possible, maybe even normal, to be somewhat mistaken about which values one lives by.) In fact, it seems exceedingly plausible to me that I value freedom the more because I know what it means to be defenceless and in danger. I would care far less about having a back door out of anything if I perceived no threats.

And that raises the question of which values I am to consider 'right': the ones I live by now, or the ones I think I would likely have in, for want of a better word, paradise?

comment by David Spohr (david-spohr) · 2023-10-26T21:18:50.517Z · LW(p) · GW(p)

I would suggest considering the more abstract concept of "well-being", which contains both happiness and freedom. That's the steel-manned form of the consequentialist's moral cornerstone.