My main problem with utilitarianism

post by taw · 2009-04-17T20:26:26.304Z · LW · GW · Legacy · 84 comments

It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics. The version that seems most popular goes something like this:

There are a few obvious problems here that I won't be bothering with today:

But my main problem is that there's very little evidence that getting utilons actually increases anybody's happiness significantly. The correlation may well be positive, but it's very weak. Giving people what they want is just not going to make them happy, and not giving them what they want is not going to make them unhappy. This makes perfect evolutionary sense - an organism that's content with what it has will fail in competition with one that always wants more, no matter how much it has. And an organism that's so depressed it just gives up will fail in competition with one that tries to function as best it can in its shabby circumstances. We all had extremely successful and extremely unsuccessful cases among our ancestors, and the only reason they're on our family tree is that they kept going for just a bit more, or made do with whatever little they could get, respectively.

The modern economy is just wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors - and it doesn't really leave people that much happier. It seems to me that the only realistic way to significantly increase global happiness is to directly hack the happiness function in the brain - to make people happy with what they have. If there's a limit in our brains, some number of utilons at which we stay happy, it's there only because reaching it almost never happened in our evolutionary history.

There might be some drugs, or activities, or memes that increase happiness without dealing with utilons. Shouldn't we be focusing on those instead?


Comments sorted by top scores.

comment by smoofra · 2009-04-17T21:04:32.092Z · LW(p) · GW(p)

We should give everybody as many utilons as we can

Not at all. We're all just trying to maximize our own utilons. My utility function has a term in it for other people's happiness. Maybe it has a term for other people's utilons (I'm not sure about that one though). But when I say I want to maximize utility, I'm just maximizing one utility function: mine. Consideration for others is already factored in.

In fact I think you're confusing two different topics: decision theory and ethics. Decision theory tells us how to get more of what we want (including the happiness of others). Decision theory takes the utility function as a given. Ethics is about figuring out what the actual content of our utility functions is, especially as it concerns our interactions with others, and our obligations towards them.

Replies from: Nick_Tarleton, Matt_Simpson
comment by Nick_Tarleton · 2009-04-17T21:42:26.946Z · LW(p) · GW(p)

Not at all. We're all just trying to maximize our own utilons. My utility function has a term in it for other people's happiness. Maybe it has a term for other people's utilons (I'm not sure about that one though). But when I say I want to maximize utility, I'm just maximizing one utility function: mine. Consideration for others is already factored in.

Seconded. It seems to me that what's universally accepted is that rationality is maximizing some utility function, which might not be the sum/average of happiness/preference-satisfaction of individuals. I don't know if there's a commonly-used term for this. "Consequentialism" is close and is probably preferable to "utilitarianism", but seems to actually be a superset of the view I'm referring to, including things like rule-consequentialism.

comment by Matt_Simpson · 2009-04-18T04:53:16.832Z · LW(p) · GW(p)

Not at all. We're all just trying to maximize our own utilons. My utility function has a term in it for other people's happiness. Maybe it has a term for other people's utilons (I'm not sure about that one though). But when I say I want to maximize utility, I'm just maximizing one utility function: mine. Consideration for others is already factored in.

Thirded. I would add that my utility function need not have a term for your utility function in its entirety. If you intrinsically like murdering small children, there's no positive term in my utility function for that. Not all of your values matter to me.

comment by steven0461 · 2009-04-17T20:43:52.980Z · LW(p) · GW(p)
  • You're mostly criticizing preference utilitarianism (with the preferences being uninformed preferences at that), which is far from the only possible utilitarianism and not (I think) held by all that many people here.
  • It's not a given that only happiness matters (on the face of it, this is false).
  • "Utilitarianism could be total or average" isn't an argument against the disjunction of total and average utilitarianism.
comment by CronoDAS · 2009-04-19T00:19:06.825Z · LW(p) · GW(p)

This post seems to reflect a conflation of "utilons" with "wealth", as well as a conflation of "utilons" with happiness.

We have orders of magnitude more wealth per person than our ancestors. We are not particularly good at turning wealth into happiness. This says very, very little about how good we are at achieving any goals that we have that are unrelated to happiness. For example, the world is far less dangerous than it used to be. Even taking into account two world wars, people living in the twentieth century were far less likely to die a violent death than people living hundreds of years before that. Infant mortality has decreased dramatically, and average life expectancy has increased. Even if we haven't managed to buy happier life with our wealth, we've definitely managed to buy more life.

comment by Scott Alexander (Yvain) · 2009-04-17T21:24:47.418Z · LW(p) · GW(p)
  1. It seems that it is possible to compare the happiness of two different people; ie I can say that giving the cake to Mary would give her twice as much happiness as it would give Fred. I think that's all you need to counter your first objection. You'd need something much more formal if you were actually trying to calculate it out rather than use it as a principle, but as far as I know no one does this.

  2. This is a big problem. I personally solve it by not using utilitarianism on situations that create or remove people. This is an inelegant hack, but it works.

  3. This is why I said I am a descriptive emotivist but a normative utilitarian. The fact that people don't act in accordance with a system doesn't mean the system isn't moral. I'd be pretty dubious of any moral system that said people were actually doing everything right.

  4. Yeah, tell me about it. Right now I'm thinking that a perfectly rational person has no essential discounts, but ends up with a very hefty discount because she can't make future plans with high effectiveness. For example, investing all my money now and donating the sum+interest to charity in a thousand years only works if I'm sure both the banking system and human suffering will last a millennium.
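To see the scale at stake in that last example, a toy calculation with invented numbers: at a steady 3% real return compounded for a millennium,

\[ \$1{,}000 \times 1.03^{1000} \approx \$6.9 \times 10^{15}, \]

which is why the effective discount rate (and one's confidence that the bank and the beneficiaries still exist) dominates everything else in the comparison.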

"Utilons don't make people happier" is a weird way of putting things. It sounds to me a lot like "meters don't make something longer." If you're adding meters to something, and it's not getting longer, you're using the word "meter" wrong.

I don't know much about academic consequentialism, but I'd be surprised if someone hadn't come up with the idea of the utilon x second, ie adding a time dimension and trying to maximize utilon x seconds. If giving someone a new car only makes them happier for the first few weeks, then that only provides so many utilon x seconds. If getting married makes you happier for the rest of your life, well, that provides more utilon x seconds. If you want to know whether you should invest your effort in getting people more cars or getting them into relationships, you'll want to take that into account.

Probably an intelligent theory of utilon x seconds would end up looking completely different from modern consumer culture. Probably anyone who applied it would also be much much happier than a modern consumer. If people can't calculate what does and doesn't provide them with utilon x seconds, they either need to learn to do so, ask someone who has learned to do so to help manage their life, or resign themselves to being less than maximally happy.
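A minimal sketch of that bookkeeping, with all numbers invented purely for illustration (a toy model of the idea, not anyone's actual theory):

```python
# Toy comparison of two interventions by "utilon x seconds":
# a (made-up) happiness boost per second, multiplied by how long it lasts.

SECONDS_PER_WEEK = 7 * 24 * 3600

def utilon_seconds(boost, duration_weeks):
    """Constant happiness boost (utilons) integrated over its duration (seconds)."""
    return boost * duration_weeks * SECONDS_PER_WEEK

# Hypothetical numbers: a new car gives a bigger initial boost,
# but hedonic adaptation wipes it out after a few weeks.
car = utilon_seconds(boost=5.0, duration_weeks=4)

# A good relationship gives a smaller per-second boost that persists for decades.
relationship = utilon_seconds(boost=1.0, duration_weeks=52 * 40)

print(car, relationship, relationship / car)  # the durable option dominates
```

On these made-up figures the durable, low-intensity source wins by roughly two orders of magnitude, which is the whole point of adding the time dimension.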

I have a feeling this is very different from the way economists think about utility, but that's not necessarily a bad thing.

Replies from: steven0461, Nick_Tarleton
comment by steven0461 · 2009-04-17T21:38:02.635Z · LW(p) · GW(p)

This is confusing the issue. Utility, which is an abstract thing measuring preference satisfaction, is not the same thing as happiness, which is a psychological state.

Replies from: mattnewport, Eliezer_Yudkowsky
comment by mattnewport · 2009-04-17T21:46:03.906Z · LW(p) · GW(p)

It's a pretty universal confusion. Many people when asked what they want out of life will say something like 'to be happy'. I suspect that they do not exactly mean 'to be permanently in the psychological state we call happiness' though, but something more like, 'to satisfy my preferences, which includes, but is not identical with, being in the psychological state of happiness more often than not'. I actually think a lot of ethics gets itself tied up in knots because we don't really understand what we mean when we say we want to be happy.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-17T22:01:05.440Z · LW(p) · GW(p)

True, but even so, thinking about utilon-seconds probably does steer your thoughts in a different direction from thinking about utility.

Replies from: steven0461
comment by steven0461 · 2009-04-17T22:04:36.032Z · LW(p) · GW(p)

So let's call them hedon-seconds instead.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-04-17T22:29:34.529Z · LW(p) · GW(p)

The terminology here is kind of catching me in between a rock and a hard place.

My entire point is that the "utility" of "utilitarianism" might need more complexity than the "utility" of economics, because if someone thinks they prefer a new toaster but they actually wouldn't be any happier with it, I don't place any importance on getting them a new toaster. IANAEBAFAIK economists' utility either would get them the new toaster or doesn't really consider this problem.

...but I also am afraid of straight out saying "Happiness!", because if you do that you're vulnerable to wireheading. Especially with a word like "hedon" which sounds like "hedonism", which is very different from the "happiness" I want to talk about.

CEV might help here, but I do need to think about it more.

Replies from: Matt_Simpson, Nick_Tarleton, steven0461
comment by Matt_Simpson · 2009-04-18T05:03:35.821Z · LW(p) · GW(p)

My entire point is that the "utility" of "utilitarianism" might need more complexity than the "utility" of economics, because if someone thinks they prefer a new toaster but they actually wouldn't be any happier with it, I don't place any importance on getting them a new toaster. IANAEBAFAIK economists' utility either would get them the new toaster or doesn't really consider this problem.

Agreed. For clarity, the economist's utility is just preference sets, but these aren't stable. Morality's utility is what those preference sets would look like if they reflected what we would actually value, given that we take everything into account. I.e., Eliezer's big computation. Utilitarianism's utility, in the sense that Eliezer is a utilitarian, is the terms of the implied utility function we have (i.e., the big computation) that refers to the utility functions of other agents.

Using "utility" to refer to all of these things is confusing. I choose to call economist's utility functions preference sets, for clarity. And, thus, economic actors maximize preferences, but not necessarily utility. Perhaps utilitarianism's utility - the terms in our utility function for the values of other people - can be called altruistic utility, again, for clarity.

ETA: and happiness I use to refer to a psychological state - a feeling. Happiness, then, is nice, but I don't want to be happy unless it's appropriate to be happy. Your mileage may vary with this terminology, but it helps me keep things straight.

comment by Nick_Tarleton · 2009-04-17T23:21:33.665Z · LW(p) · GW(p)

the "utility" of "utilitarianism" might need more complexity than the "utility" of economics

My rough impression is that "utilitarianism" is generally taken to mean either hedonistic or preference utilitarianism, but nothing else, and that we should be saying "consequentialism".

CEV might help here, but I do need to think about it more.

I think the "big computation" perspective in The Meaning of Right is sufficient.

Or if you're just looking for a term to use instead of "utility" or "happiness", how about "goodness" or "the good"? (Edit: "value", as steven suggests, is better.)

Replies from: steven0461
comment by steven0461 · 2009-04-17T23:25:42.763Z · LW(p) · GW(p)

My rough impression is that "utilitarianism" is generally taken to mean either hedonistic or preference utilitarianism, but nothing else, and that we should be saying "consequentialism".

My impression is that it doesn't need to be pleasure or preference satisfaction; it can be anything that could be seen as "quality of life" or having one's true "interests" satisfied.

Or if you're just looking for a term to replace "utility", how about "goodness" or "the good"?

Or "value".

comment by steven0461 · 2009-04-17T22:35:32.057Z · LW(p) · GW(p)

I agree we should care about more than people's economic utility and more than people's pleasure.

"eudaimon-seconds", maybe?

comment by Nick_Tarleton · 2009-04-17T23:10:05.134Z · LW(p) · GW(p)

I don't know much about academic consequentialism, but I'd be surprised if someone hadn't come up with the idea of the utilon x second, ie adding a time dimension and trying to maximize utilon x seconds. If giving someone a new car only makes them happier for the first few weeks, then that only provides so many utilon x seconds. If getting married makes you happier for the rest of your life, well, that provides more utilon x seconds. If you want to know whether you should invest your effort in getting people more cars or getting them into relationships, you'll want to take that into account.

This is one reason I say my notional utility function is defined over 4D histories of the entire universe, not any smaller structures like people.

comment by mattnewport · 2009-04-17T20:51:40.366Z · LW(p) · GW(p)

It seems that in the rationalist community there's almost universal acceptance of utilitarianism as the basis of ethics.

I'd be interested to know if that's true. I don't accept utilitarianism as a basis for ethics. Alicorn's recent post suggests she doesn't either. I think quite a few rationalists are also libertarian-leaning, and several critiques of utilitarianism come from libertarian philosophies.

Replies from: Alicorn
comment by Alicorn · 2009-04-17T20:58:37.322Z · LW(p) · GW(p)

Suggests? I state it outright (well, in a footnote). Not a consequentialist over here. My ethical views are deontic in structure, although they bear virtually no resemblance to the views of the quintessential deontologist (Kant).

Replies from: mattnewport
comment by mattnewport · 2009-04-17T21:01:28.406Z · LW(p) · GW(p)

I did think twice over using 'suggests' but I just threw in the link to let you speak for yourself. Thanks for clarifying :)

Replies from: conchis
comment by conchis · 2009-04-17T21:19:45.024Z · LW(p) · GW(p)

Additional data point: not a utilitarian either.

FWIW: fairly committed consequentialist. Most likely some form of prioritarian, possibly a capability prioritarian (if that even means anything); currently harboring significant uncertainty with regard to issues of population ethics.

Replies from: Alicorn
comment by Alicorn · 2009-04-17T21:27:03.525Z · LW(p) · GW(p)

Person-affecting consequentialisms are pretty nice about population ethics.

Replies from: conchis
comment by conchis · 2009-04-17T23:33:27.689Z · LW(p) · GW(p)

Yeah, that's the way I tend, but John Broome has me doubting whether I can get everything I want here.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2009-04-18T22:28:47.168Z · LW(p) · GW(p)

Conchis, take a look at Krister Bykvist's paper, "The Good, the Bad and the Ethically Neutral" for a convincing argument that Broome should embrace a form of consequentialism.

(As an aside, the paper contains this delightful line: "My advice to Broome is to be less sadistic.")

Replies from: conchis
comment by conchis · 2009-04-21T12:44:05.174Z · LW(p) · GW(p)

Thanks for the link.

As far as I can tell, Bykvist seems to be making an argument about where the critical level should be set within a critical-level utilitarian framework rather than providing an explicit argument for that framework. (Indeed, the framework is one that Broome appears to accept already.)

The thing is, if you accept critical-level utilitarianism you've already given up the intuition of neutrality, and I'm still wondering whether that's actually necessary. In particular, I remain somewhat attracted to a modified version of Dasgupta's "relative betterness" idea, which Broome discusses in Chapter 11 of Weighing Lives. He seems to accept that it performs well against our intuitions (indeed, arguably better than his own theory), but ultimately rejects it as being undermotivated. I still wonder whether such motivation can be provided.

(Of course, if it can't, then Bykvist's argument is interesting.)

comment by PhilGoetz · 2009-04-17T21:54:07.406Z · LW(p) · GW(p)

Do you want to be a wirehead?

Replies from: CronoDAS, taw
comment by CronoDAS · 2009-04-17T21:56:40.711Z · LW(p) · GW(p)

I do. Very much so, in fact.

Replies from: outlawpoet
comment by outlawpoet · 2009-04-17T22:22:12.092Z · LW(p) · GW(p)

It's fairly straightforward to max out your subjective happiness with drugs today; why wait?

Replies from: AllanCrossman
comment by AllanCrossman · 2009-04-17T22:23:41.517Z · LW(p) · GW(p)

It's fairly straightforward to max out your subjective happiness with drugs today; why wait?

Is it? What drugs?

Replies from: outlawpoet
comment by outlawpoet · 2009-04-17T23:15:38.025Z · LW(p) · GW(p)

Well, that's an interesting question. If you wanted to just feel maximum happiness in something like your own mind, you could take the strongest dopamine and norepinephrine reuptake inhibitors you could find.

If you didn't care about your current state, you could get creative: opioids to get everything else out of the way, psychostimulants, deliriants. I would need to think about it; I don't think anyone has ever really worked out all the interactions. It would be easy to achieve an extremely high bliss, but some work on the interactions would be required to figure out something like a theoretical maximum.

The primary thing in the way is the fact that even if you could find a way to prevent physical dependency, the subject would be hopelessly psychologically addicted, unable to function afterwards. You'd need to keep them there stably for the rest of their life expectancy; you couldn't expect them to take any actions or move in and out of it.

Depending on the implementation, I would expect wireheading to be much the same. Low levels of stimulation could potentially be controlled, but using it to get maximum pleasure would permanently destroy the person. Our architecture isn't built for it.

Replies from: Lawliet
comment by Lawliet · 2009-04-17T23:40:49.627Z · LW(p) · GW(p)

Current drugs will only give you a bit of pleasure before wrecking you in some way or another.

CronoDAS should be doing his best to stay alive, his current pain being a down payment on future real wireheading.

Replies from: ciphergoth, outlawpoet
comment by Paul Crowley (ciphergoth) · 2009-04-18T01:28:57.747Z · LW(p) · GW(p)

Current drugs will only give you a bit of pleasure before wrecking you in some way or another.

Some current drugs, like MDMA, are extremely rewarding at a very low risk.

Replies from: timtyler, Pablo_Stafforini
comment by timtyler · 2009-04-18T16:29:37.231Z · LW(p) · GW(p)

"Probably the gravest threat to the long-term emotional and physical health of the user is getting caught up in the criminal justice system."

comment by Pablo (Pablo_Stafforini) · 2009-04-18T22:40:12.215Z · LW(p) · GW(p)

MDMA is known to be neurotoxic. It's definitely not the way to attain maximum happiness in the long run, unless your present life expectancy is very short indeed.

Replies from: loqi
comment by loqi · 2009-04-18T23:00:17.782Z · LW(p) · GW(p)

MDMA is known to be neurotoxic.

I think that is incorrect. Please substantiate.

Replies from: CronoDAS, Pablo_Stafforini
comment by CronoDAS · 2009-04-18T23:37:46.138Z · LW(p) · GW(p)

From the same page cited by timtyler above:

Ever more alarming animal studies conducted over a decade by George Ricaurte, a neurotoxicologist at Johns Hopkins University School of Medicine, suggest that taking high and/or frequent doses of MDMA causes damage to the terminals of serotonin axons in the brain. Cerebrospinal fluid 5-hydroxyindoleacetic acid (5-HIAA), serotonin's major metabolite which serves as a marker of central serotonin (5-hydroxytryptamine, 5-HT) neural function, may be lower in human MDMA users than in putatively matched controls. The number of serotonin transporter sites, structural protein elements on the presynaptic outer axonal membrane that recycle the released neurotransmitter, may be reduced too. Long-term MDMA-induced changes in the availability of the serotonin transporter may be reversible; but it is unclear whether recovery is complete. Currently the balance of neurochemical and neuroanatomical evidence, and functional measures of serotonin neurons, suggests that it is imprudent to take MDMA or other ring-substituted methamphetamine derivatives without also taking neuroprotective precautions. Arguably, it is best to take MDMA infrequently and reverently or not at all - Dr Shulgin once suggested a maximum of four times a year.

Yes, the page goes on to describe reasons to be skeptical of the studies, but I think that I don't want to risk it - and I don't know how to get the drugs anyway, especially not in a reasonably pure form. I've also made a point of refusing alcoholic beverages even when under significant social pressure to consume them; my family medical history indicates that I may be at unusually high risk for alcoholism, and I would definitely describe myself as having an "addictive personality", assuming such a thing exists.

Replies from: loqi
comment by loqi · 2009-04-19T00:11:48.797Z · LW(p) · GW(p)

Ah, thanks for the relevant response. I was carelessly assuming a stronger definition of neurotoxicity along the lines of the old 80s propaganda ("one dose of MDMA = massive brain damage").

comment by Pablo (Pablo_Stafforini) · 2009-04-18T23:37:10.292Z · LW(p) · GW(p)

The most recent meta-analysis acknowledges that "the evidence cannot be considered definitive", but concludes:

To date, the most consistent findings associate subtle cognitive, particularly memory, impairments with heavy ecstasy use.

For practical purposes, this lingering doubt makes little difference. Hedonists are well-advised to abstain from taking Ecstasy on a regular basis even if they assign, say, a 25% chance to the hypothesis that MDMA is neurotoxic.

I myself believe that positive subjective experience ("happiness", in one of its senses) is the only thing that ultimately matters, and would be the first to advocate widespread use of ecstasy in the absence of concerns about its adverse effects on the brain.

--

Gouzoulis-Mayfrank, E. & Daumann, J., "Neurotoxicity of methylenedioxyamphetamines (MDMA; ecstasy) in humans: how strong is the evidence for persistent brain damage?", Addiction 101(3): 348-361, March 2006.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-19T12:03:25.610Z · LW(p) · GW(p)

Yes, I think the sort of "eight pills every weekend" behaviour that is sometimes reported is definitely inadvisable. However, there are escalating hazards and diminishing returns; it seems to me that the cost/benefit analysis looks quite the other way for infrequent use. The benefits extend beyond the immediate experience of happiness.

comment by outlawpoet · 2009-04-17T23:50:31.140Z · LW(p) · GW(p)

It depends on what you mean by wrecking. Morphine, for example, is pretty safe. You can take it in useful, increasing amounts for a long time. You just can't ever stop using it after a certain point, or your brain will collapse on itself.

This might be a consequence of the bluntness of our chemical instruments, but I don't think so. We now have much more complicated drugs that blunt and control physical withdrawal and dependence, like Subutex and so forth, but the recidivism and addiction numbers are still bad. Directly messing with your reward mechanisms just doesn't leave you a functioning brain afterward, and I doubt wireheading of any sophistication will either.

comment by taw · 2009-04-17T23:06:50.983Z · LW(p) · GW(p)

My preference function advises me against becoming a wirehead, but I would be much happier if I did it. Obviously. And it's not really a binary choice.

comment by mattnewport · 2009-04-17T20:48:36.160Z · LW(p) · GW(p)

The modern economy is just wonderful at mass-producing utilons - we have orders of magnitude more utilons per person than our ancestors - and it doesn't really leave people that much happier.

Current research suggests it does:

The facts about income and happiness turn out to be much simpler than first realized:

1) Rich people are happier than poor people.

2) Richer countries are happier than poorer countries.

3) As countries get richer, they tend to get happier.

Replies from: taw, Alicorn
comment by taw · 2009-04-17T23:05:16.085Z · LW(p) · GW(p)

It's true that my critique would be a lot weaker if the Easterlin paradox turned out to be false, but neither Easterlin nor I am anywhere close to being convinced of that. It would surprise me greatly (in the <1% chance sense) if it turned out to be so.

1 is obviously predicted by the hedonic treadmill, so it's not surprising. And as far as I know there's very little evidence for 2 and 3 - there might be some tiny effect, but if it were strong, then either everybody today would feel ecstatic all the time, or our ancestors 200 years ago would all have felt suicidal all the time, neither of which is the case.

Replies from: mattnewport
comment by mattnewport · 2009-04-17T23:47:19.761Z · LW(p) · GW(p)

The research I linked claims to be evidence for 2 and 3. I'd say it's not irrefutable evidence but it's more than 'very little'. Do you take issue with specific aspects of the research?

There seems to be a certain amount of politics tied up in happiness research. Some people prefer to believe that improved material wealth has no correlation with happiness because it fits better with their political views, others prefer to believe that improved material wealth correlates strongly with happiness. I find the evidence that there is a correlation persuasive, but I am aware that I may be biased to view the evidence favourably because it is more convenient if it is true in the context of my world view.

comment by Alicorn · 2009-04-17T21:01:19.585Z · LW(p) · GW(p)

This could be partly a comparison effect. It's possible that rich people are happier than poor people because they compare themselves to poor people, and the denizens of rich countries are happier than the denizens of the Third World because they can likewise make such a comparison. A country that's gaining wealth is gaining countries-that-it's-better-than and shrinking the gap between countries that are still wealthier. If wealth were fairly distributed, it's arguable whether we'd have much to show for a flat increase in everyone's wealth, handed out simultaneously and to everyone.

Replies from: mattnewport
comment by mattnewport · 2009-04-17T21:09:35.698Z · LW(p) · GW(p)

It's certainly possible but the research doesn't seem to suggest that:

If anything, Ms. Stevenson and Mr. Wolfers say, absolute income seems to matter more than relative income. In the United States, about 90 percent of people in households making at least $250,000 a year called themselves “very happy” in a recent Gallup Poll. In households with income below $30,000, only 42 percent of people gave that answer. But the international polling data suggests that the under-$30,000 crowd might not be happier if they lived in a poorer country.

Also:

Two days back I noted that happiness inequality today is at much lower levels than in earlier decades, despite rising income inequality. What lies behind these trends?

More research is needed but the current research doesn't really support the comparison explanation.

Replies from: conchis
comment by conchis · 2009-04-18T00:10:13.228Z · LW(p) · GW(p)

More research is needed but the current research doesn't really support the comparison explanation.

I think you're over-interpreting the results of a single (and as far as I'm aware as-yet-non-peer-reviewed) paper.

Cross-country studies are suggestive, but as far as I'm concerned the real action is in micro data (and especially in panel studies tracking the same individuals over extended periods of time). These have pretty consistently found evidence of comparison effects in developed countries. (The state of play is a little more complicated for transition and developing countries.)

A good overview is:

  • Clark, Frijters and Shields (2008) "Relative Income, Happiness, and Utility" Journal of Economic Literature 46(1): 95-144. (Earlier version on SSRN here)

For what it's worth, my read of the micro data is that it generally doesn't support the "money doesn't make people happy" hypothesis either. Money does matter, though in many cases rather less than some other life outcomes.

Replies from: mattnewport
comment by mattnewport · 2009-04-18T00:22:54.347Z · LW(p) · GW(p)

My claim would be that if the poorest country in the world could be brought up to the standard of living of the US and the rest of the world could have its standard of living increased so as to maintain the same relative inequality, then (to a first approximation) every individual in the world would find their happiness either increased or unchanged. I don't know if anyone would go so far as to claim otherwise but it sometimes seems that some people would dispute that claim.

Replies from: conchis
comment by conchis · 2009-04-18T00:34:03.370Z · LW(p) · GW(p)

Agreed.

comment by Psychohistorian · 2009-04-18T23:15:50.383Z · LW(p) · GW(p)

But my main problem is that there's very little evidence getting utilons is actually increasing anybody's happiness significantly.

If you give someone more utilons, and they do not get happier, you're doing it wrong by definition. Conversely, someone cannot get happier without acquiring more utilons by definition.

You've rejected a straw man. You're probably right to reject said straw man, but it doesn't relate to utilitarianism.

Replies from: CronoDAS
comment by CronoDAS · 2009-04-19T00:00:33.162Z · LW(p) · GW(p)

If you give someone more utilons, and they do not get happier, you're doing it wrong by definition. Conversely, someone cannot get happier without acquiring more utilons by definition.

Utilons are not equivalent to happiness. Utilons are basically defined as "whatever you care about," while happiness is a specific brain state.

For example, I don't want people to be tortured. If you save someone else from torture and don't tell me about it, you've given me more utilons without increasing my happiness one bit.

The converse is true as well - you can make someone happier without giving them utilons. From what I know of Eliezer, if you injected him with heroin, you'd make him (temporarily) happier, but I doubt you'd have given him any utilons.

Beware arguing by definition. Especially when your definition is wrong.

Replies from: mattnewport, Psychohistorian
comment by mattnewport · 2009-04-19T00:27:06.277Z · LW(p) · GW(p)

You caution against arguing by definition and yet claim definitions that are not universally agreed on as authoritative. There is genuine confusion over some of these definitions; it's useful to try to clarify what you mean by the words, but you should refrain from claiming that it is the meaning. For example, contrary definitions of happiness (it's not just a brain state):

Happiness

state of well-being characterized by emotions ranging from contentment to intense joy

Good luck; good fortune; prosperity.

An agreeable feeling or condition of the soul arising from good fortune or propitious happening of any kind; the possession of those circumstances or that state of being which is attended enjoyment; the state of being happy; contentment; joyful satisfaction; felicity; blessedness.

good fortune; pleasure; contentment; joy.

Wikipedia

Philosophers and religious thinkers have often defined happiness in terms of living a good life, or flourishing, rather than simply as an emotion. Happiness in this older sense was used to translate the Greek Eudaimonia, and is still used in virtue ethics.

I don't think it's uncontroversial to claim that utilons can be increased by actions you don't know about either.

The definitions really are at issue here and there are relevant differences between commonly used definitions of happiness.

comment by Psychohistorian · 2009-04-19T17:46:15.769Z · LW(p) · GW(p)

My understanding of utilitarian theory is that, at the highest meta level, every utilitarian theory is unified by the central goal of maximizing happiness, though the definitions, priorities, and rules may vary.

If this is true, "Utilitarianism fails to maximize happiness" is an illegitimate criticism of the meta-theory. It would be saying, "Maximizing happiness fails to maximize happiness," which is definitionally impossible.

Since the meta-theory is "Maximize happiness," you can't say that the meta-theory fails to maximize happiness, only that specific formulations do, which is absolutely legitimate. The original author appears to be criticizing a specific formulation while he claims to be criticizing the meta-theory. That was my original point, and I did not make it clearly enough.

I used "by definition" precisely because I had just read that article. I'm clearly wrong because apparently the definition of utilons is controversial. I simply think of them as a convenient measurement device for happiness. If you have more utilons, you're that much happier, and if you have fewer, you're that much less happy. If buying that new car doesn't increase your happiness, you derive zero utilons from it.

To my knowledge, that's a legitimate and often-used definition of utilon. I could be wrong, in which case my definition is wrong, but given the fact that another poster takes issue with your definition, and that the original poster implicitly uses yet another definition, I really don't think mine can be described as "wrong." Though, of course, my original assertion that the OP is wrong by definition is wrong.

comment by Alicorn · 2009-04-17T20:41:48.365Z · LW(p) · GW(p)

This reminds me of a talk by Peter Railton I attended several years ago. He described happiness as a kind of delta function: we are as happy as our difference from our set point, but we drift back to our set point if we don't keep getting new input. Increasing one's set point will make one "happier" in the way you seem to be using the word, and it's probably possible (we already treat depressed people, who have unhealthily low set points and are resistant to more customary forms of experiencing positive change in pleasure).
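A minimal way to write down that set-point picture (a formalization of the description above, not Railton's own): let h(t) be current happiness, h_0 the set point, and u(t) the stream of new positive inputs; then something like

\[ \frac{dh}{dt} = -k\,\big(h(t) - h_0\big) + u(t), \qquad k > 0, \]

so with no new input h decays back to h_0, and a one-off windfall produces only a transient bump - the hedonic treadmill in one line.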

Replies from: Bongo, steven0461
comment by Bongo · 2009-04-19T09:45:29.909Z · LW(p) · GW(p)

So happiness is the difference between your set point of happiness and your current happiness? Looks circular.

comment by steven0461 · 2009-04-17T20:51:39.682Z · LW(p) · GW(p)

What do you / did he mean by delta function? Dirac delta and Kronecker delta don't seem to fit.

Replies from: Alicorn, PhilGoetz
comment by Alicorn · 2009-04-17T20:55:56.505Z · LW(p) · GW(p)

Delta means, in this case, change. We are only happy if we are constantly getting happier; we don't get to recycle utilons.

Replies from: gjm, steven0461
comment by gjm · 2009-04-17T23:15:46.501Z · LW(p) · GW(p)

Making explicit something implicit in steven0461's comment: the term "delta function" has a technical meaning, and it doesn't have anything to do with what you're describing. You might therefore prefer to avoid using that term in this context.

(The "delta function" is a mathematical object that isn't really even a function; handwavily it has f(x)=0 when x isn't 0, f(x) is infinite when x is 0, and the total area under the graph of f is 1. This turns out to be a very useful gadget in some areas of mathematics, and one can turn the handwaving into actual mathematics at some cost in complexity. When handwaving rather than mathematics is the point, one sometimes hears "delta function" used informally to denote anything that starts very small, rapidly becomes very large, and then rapidly becomes very small again. Traffic at a web site when it gets a mention in some major media outlet, say. That's the "Dirac delta" Steven mentioned; the "Kronecker delta" is a function of two variables that's 1 when they're equal and 0 when they aren't, although most of the time when it's used it's actually denoting something hairier than that. This isn't the place for more details.)

comment by steven0461 · 2009-04-17T21:00:26.042Z · LW(p) · GW(p)

This doesn't make logical sense if both uses of the word "happy" mean the same thing, so we should use two different words for them.

Replies from: Alicorn
comment by Alicorn · 2009-04-17T21:03:29.837Z · LW(p) · GW(p)

We only occupy a level of happiness/contentment above our individual, natural, set points as long as we are regularly satisfying previously unsatisfied preferences. When that stream of satisfactions stops, we gradually revert to that set point.

Replies from: steven0461
comment by steven0461 · 2009-04-17T21:05:58.581Z · LW(p) · GW(p)

OK, so the point is happiness depends on the time derivative of preference satisfaction rather than on preference satisfaction itself?

Replies from: Alicorn
comment by Alicorn · 2009-04-17T21:07:59.200Z · LW(p) · GW(p)

If I knew what "time derivative" meant, I might agree with you.

Replies from: steven0461
comment by steven0461 · 2009-04-17T21:09:45.322Z · LW(p) · GW(p)

Amount of change per unit of time, basically.

Replies from: Alicorn
comment by Alicorn · 2009-04-17T21:27:52.376Z · LW(p) · GW(p)

Then yes, that's exactly it.

comment by PhilGoetz · 2009-04-17T23:08:51.541Z · LW(p) · GW(p)

You can think of happiness as the derivative of utility. (Caution: That is making more than just a mathematical claim.)

comment by PhilGoetz · 2009-04-17T23:16:50.205Z · LW(p) · GW(p)

I think it's pretty clear we should have a term in our social utility function that gives value to complexity (of the universe, of society, of the environment, of our minds). That makes me more than just a preference utilitarian. It's an absolute objective value. It may even, with interpretation, be sufficient by itself.

Replies from: steven0461, Yvain, Nick_Tarleton, timtyler
comment by steven0461 · 2009-04-17T23:18:40.819Z · LW(p) · GW(p)

There are specific things that I value that are complex, and in some cases I value them more the more complex they are, but I don't think I value complexity as such. Complexity that doesn't hit some target is just randomness, no?

comment by Scott Alexander (Yvain) · 2009-04-17T23:21:58.290Z · LW(p) · GW(p)

Can you explain that a little better?

It seems to me like if you define complexity in any formal way, you'll end up tiling the universe with either random noise, fractals, or some other extremely uninteresting system with lots and lots of variables.

I always thought that our love of complexity is a side-effect of the godshatter, ie there's no one thing that will interest us. Solve everything else, and the desire for complexity disappears. You might convince me otherwise by defining "complexity" more rigorously.
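One way to make that worry concrete, taking Kolmogorov complexity K as the stand-in formal measure (no specific measure was proposed above): almost all random strings are incompressible, i.e. for a uniformly random n-bit string x,

\[ \Pr\big[\,K(x) \ge n - c\,\big] \ge 1 - 2^{-c}, \]

so an agent maximizing K would do at least as well filling the universe with coin flips as with anything we would recognize as interesting structure.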

Replies from: Bongo, timtyler
comment by Bongo · 2009-04-19T09:51:58.095Z · LW(p) · GW(p)

It seems to me like if you define complexity in any formal way, you'll end up tiling the universe with either random noise, fractals, or some other extremely uninteresting system with lots and lots of variables.

Could complexity-advocates reply to this point specifically? Either to say why they don't actually want this or to admit that they do. I'm confused.

comment by timtyler · 2009-04-18T16:16:40.598Z · LW(p) · GW(p)

Living systems do produce complex, high-entropy states, as a matter of fact. Yes, that leads to universal heat death faster if they keep on, but - so what?

Replies from: thomblake
comment by thomblake · 2009-04-18T19:55:30.067Z · LW(p) · GW(p)

Someone whose name escapes me has argued that this is why living systems exist - the universe tends towards maximum entropy, and we're the most efficient way of getting there. Let's see how much energy we can waste today!

Replies from: timtyler
comment by timtyler · 2009-04-18T21:18:26.385Z · LW(p) · GW(p)

There are a few of us. My pages on the topic:

http://originoflife.net/gods_utility_function/
http://originoflife.net/bright_light/

See also:

http://en.citizendium.org/wiki/Life/Signed_Articles/John_Whitfield

The main recent breakthrough in our understanding of this area is down to Dewar - and the basic idea goes back at least to Lotka, from 1922.

comment by Nick_Tarleton · 2009-04-17T23:26:59.275Z · LW(p) · GW(p)

It's an absolute objective value.

What are you trying to say? Preference-satisfaction is exactly as absolute and objective a value as complexity; it's just one that happens to explicitly depend on the contents of people's minds.

Replies from: timtyler
comment by timtyler · 2009-04-18T16:20:56.829Z · LW(p) · GW(p)

Now I don't know what you are trying to say. Saying that preferences are values is tautological - "preferences" and "values" are synonyms in this kind of discussion.

Replies from: conchis, Nick_Tarleton
comment by conchis · 2009-04-18T16:43:55.419Z · LW(p) · GW(p)

One of the things that currently frustrates me most about this site is the confusion that seems to surround the use of words like value, preference, happiness, and utility. Unfortunately, these words do not have settled, consistent meanings, even within literatures that utilize them extensively (economics is a great example of this; philosophy tends to be better, though still not perfect). Nor does it seem likely that we will be able to collectively settle on consistent usages across the community. (Indeed, some flexibility may even be useful.)

Given that, can we please stop insisting that others' statements are wrong/nonsensical/tautological etc. simply on the basis that they aren't using our own preferred definitions. If something seems not to make sense to you, consider extending some interpretative charity by (a) considering whether it might make sense given alternative definitions; and/or (b) asking for clarification, before engaging in potentially misguided criticisms.

EDIT: By way of example here, many people would claim that things can be valuable, independently of whether anyone has a preference for them. You may not think such a view is defensible, but it's not obviously gibberish, and if you want to argue against it, you'll need more than a definitional fiat.

Replies from: timtyler
comment by timtyler · 2009-04-18T18:59:21.133Z · LW(p) · GW(p)

Whoa! Hold your horses! I started out by saying: "I don't know what you are trying to say." Clarify definitions away - if that is the problem - which seems rather unlikely.

Replies from: conchis
comment by conchis · 2009-04-18T20:04:08.887Z · LW(p) · GW(p)

Saying that preferences are values is tautological...

seemed more like an assertion than an attempt to seek clarification, but I apologize if I misinterpreted your intention.

The EDIT was supposed to be an attempt to clarify. Does the claim I made there make sense to you?

Replies from: timtyler
comment by timtyler · 2009-04-18T21:23:13.925Z · LW(p) · GW(p)

FWIW, to my way of thinking, we can talk about hypothetical preferences just about as easily as hypothetical values.

Replies from: conchis
comment by conchis · 2009-04-18T21:38:45.098Z · LW(p) · GW(p)

I'm afraid that I don't understand the relevance of this to the discussion. Could you expand?

comment by Nick_Tarleton · 2009-04-18T16:52:11.176Z · LW(p) · GW(p)

Read what I said: "preference-satisfaction is... a value", not "preferences are values". The point is that the extent to which people's preferences are satisfied is just as objective a property of a situation as the amount of complexity present.

Replies from: PhilGoetz, timtyler
comment by PhilGoetz · 2009-04-20T14:57:12.372Z · LW(p) · GW(p)

The point is that the extent to which people's preferences are satisfied is just as objective a property of a situation as the amount of complexity present.

The preferences can be anything. If I claim that complexity should be one of the preferences, for me and for everyone, that's an objective claim - "objective" in the sense "claiming an objective value valid for all observers, rather than a subjective value that they can choose arbitrarily". It's practically religious. It's radically different from saying "people satisfy their preferences".

"The extent to which people's preferences are satisfied" is an objective property of a situation. But that has nothing to do with what I said; it's using a different meaning of the word "objective".

comment by timtyler · 2009-04-18T18:56:51.012Z · LW(p) · GW(p)

Trying for a sympathetic interpretation - I /think/ you must be talking about the preferences of a particular individual, or an average human - or something like that.

In general, preference-satisfaction is not specific - in the way that maximising complexity is (for some defined metric of complexity) - because the preferences could be any agent's preferences - and different agents can have wildly different preferences.

Replies from: conchis, Nick_Tarleton
comment by conchis · 2009-04-18T20:06:24.071Z · LW(p) · GW(p)

the preferences could be any agent's preferences

Preference-satisfaction in this context is usually considered as an aggregate (usually a sum or an average) of the degree to which all individuals' preferences are satisfied (for some defined metric of satisfaction).
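In symbols, with s_i denoting some cardinal measure of how well agent i's preferences are satisfied, the two usual aggregates are

\[ W_{\text{total}} = \sum_{i=1}^{n} s_i \qquad \text{or} \qquad W_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} s_i, \]

which is what pins "preference-satisfaction" down to a specific quantity once the individual measure is chosen.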

comment by Nick_Tarleton · 2009-04-18T21:28:03.206Z · LW(p) · GW(p)

The preferences of the people in the situation being evaluated.

comment by timtyler · 2009-04-18T16:14:57.052Z · LW(p) · GW(p)

That's the theory, as I understand it:

http://originoflife.net/gods_utility_function/