Nonlinear perception of happiness

post by Jan_Kulveit · 2018-01-08T09:04:15.314Z · LW · GW · 14 comments

Contents

  Conjecture: Human perception of happiness is nonlinear; a linear increase in raw happiness produces a nonlinear increase in perceived happiness.
  Implications
  Conjecture: the raw quantity is often the more useful one when aggregating.
  Implications for ethics
  Example
  Experimental tests
  Conclusion

Epistemic status: Speculative.

tl;dr: Perception of happiness is related to some "raw" happiness by an equivalent of a psychophysical law. The "raw" quantity should be used when aggregating. Far-reaching implications for utility calculations would follow.

A body of research seeks to understand happiness and measure it quantitatively. The measurements often use tools such as the Oxford Happiness Inventory, the Subjective Happiness Scale, the PANAS scale, etc. What these instruments have in common is that they measure a perception, and the scales used are linear.

A proposal: let's make a distinction between the perception of happiness, which is measured in this way, and a hypothetical raw happiness. While we cannot measure such a quantity in practice, we can at least imagine how it would be measured in a thought experiment – e.g., by an outside observer who has complete access to the mental states of beings and some algorithmic way to determine the happiness of those mental states.

Given this distinction, we may then ask how human perception of happiness would be related to such a raw quantity.

Conjecture: Human perception of happiness is nonlinear; a linear increase in raw happiness produces a nonlinear increase in perceived happiness.

One candidate for the relation is the widely known psychophysical Weber–Fechner law, which states that subjective sensation is proportional to the logarithm of stimulus intensity. It models, for example, perceived light intensity and perceived differences in weight. It has also been proposed that logarithmic perception applies to more indirect senses, e.g. the sense of time intervals. It seems plausible that it would hold for the perception of quantities like wealth: if we measured the perception of wealth by asking people to rate their wealth on a scale from 0 to 10, we would get something like log(monetary value).
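As a toy illustration of such a mapping (a minimal sketch; the proportionality constant and the choice of log base are assumptions for illustration, not measurements):

```python
import numpy as np

# Weber-Fechner-style mapping: perceived intensity is proportional
# to the logarithm of the raw stimulus. k and the base are free parameters.
def perceived(raw, k=1.0):
    return k * np.log(raw)

wealth = np.array([1e3, 1e4, 1e5, 1e6])  # raw monetary wealth
print(perceived(wealth))  # equal raw *ratios* map to equal perceived *steps*
```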

Now – what if this holds also for the sense of happiness, as used in philosophy and in utilitarian calculus? Specifically, we may propose that the perceived happiness $H_p$ is related to the raw happiness $H$ as

$$H_p = k \log H$$

where $k$ is an unknown proportionality constant.
(While I chose happiness, the argument would be the same for related or similar quantities, such as well-being.)

Implications

It may seem that such logarithmic rescaling is just an irrelevant change of scale. However, when we aggregate a quantity over many people, there are significant differences between using the raw quantity and using the perception.

Conjecture: the raw quantity is often the more useful one when aggregating.

This can be seen easily in the case of physical quantities, like weight. If we want to calculate the total weight carried by a group of people, or the total illumination created by a group of celestial objects, we cannot simply add the perceived weights or perceived intensities; we must first recover the raw quantity of the stimuli and only later sum or integrate. The same holds for averaging.

Per analogiam, we should neither sum nor average the happiness of people as measured by the various linear scales; we should try to recover the "raw" quantity and only then do the averaging (and possibly apply the log transformation afterwards).
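A minimal numerical sketch of the difference, assuming for illustration that the reported scores equal $\ln(\text{raw})$:

```python
import numpy as np

perceived = np.array([2.0, 2.0, 10.0])  # survey scores, assumed = ln(raw)

naive_mean = perceived.mean()                  # averaging perceptions directly
raw_mean   = np.log(np.exp(perceived).mean())  # recover raw, average, re-log

print(naive_mean)  # ~4.67
print(raw_mean)    # ~8.90 - the one very happy person dominates the raw sum
```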

Implications for ethics

As some utilitarian normative ethical theories suggest we should attempt to maximize quantities like happiness or well-being, the difference between aggregating the raw happiness ("hedons") and aggregating the perceptions leads to different results. While in the non-log-corrected happiness calculus we would integrate the percepts of happiness directly over beings and time, in the exponentially corrected version the integral has the form

$$U = \sum_{\text{beings}} \int e^{H_p(b,t)/k} \, \mathrm{d}t$$

where $H_p(b,t)$ is the perceived happiness of being $b$ at time $t$, summed over all beings and integrated over time, and $k$ is the unknown constant.
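A discrete sketch of this aggregation, treating the integral as a sum over time steps (the scores and $k$ below are illustrative assumptions):

```python
import numpy as np

def total_raw_utility(perceived_scores, k=1.0):
    # Recover raw happiness as exp(H_p / k) and sum over beings and time steps.
    return np.exp(np.asarray(perceived_scores) / k).sum()

# Rows are beings, columns are time steps (made-up numbers).
scores = np.array([[3.0, 4.0],
                   [9.0, 8.0]])
print(total_raw_utility(scores))
```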

Example

We can see how this changes conclusions on the example of a famous philosophical problem: the "repugnant conclusion" (Parfit, D., Reasons and Persons).

In its classical formulation, the repugnant conclusion is: "For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living."

If we use some measure of the "raw" happiness or "quality of life", the exponential step makes the "much larger" population size hardly feasible. Then, while the conclusion is still technically true in a sense, the paradox is resolved for all practical purposes by taking into account the resources demanded by such populations.

As an illustrative comparison with some numbers: imagine an open-ended subjective quality-of-life scale where 1 means a life of no quality, a life with happiness 1.1 is just worth living, a moderately happy life can be rated 5, and a life of very high quality is rated 10. Then, if we take the base of the logarithmic scale to be e, the "much larger population" in the original formulation would have to number more than 240 billion people to be better than the original population. Most likely the resource cost of the existence of such an immense population would be many times greater than that of the original population, even if lives barely worth living are cheaper than high-quality lives.
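As a worked version of the arithmetic, under one simple reading of the model above (raw happiness $e^{H_p}$ with no baseline correction – an assumption, since the original calculation is not shown), the threshold comes out even larger:

$$N \cdot e^{1.1} \;\ge\; 10^{10} \cdot e^{10} \quad\Longrightarrow\quad N \;\ge\; 10^{10} \cdot e^{8.9} \;\approx\; 7.3 \times 10^{13}$$

i.e. on the order of tens of trillions of lives barely worth living, which only strengthens the resource-cost argument below.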

Stated another way: in the real world we are always solving a constrained optimization problem, where the resources to create more souls are not exactly unlimited. In such situations, which are the ones relevant for reality, the question is "what is the best population given the limited resources?". Optimizing the "raw happiness" resolves the paradox for most practical purposes.

Similarly, the use of raw happiness would affect many other questions in moral philosophy.

Experimental tests

While at present it does not seem feasible to test whether the raw happiness is more fundamental than the perception, it at least seems possible to check whether people's preferences are broadly consistent with the view. In a possible experimental setup, in one part participants would reveal their preferences by choosing between options like "five nice dinners, or one day of skiing in the mountains", and in the other part they would rate the experiences on a linear scale. From the former part we should be able to convert the joy value of all the experiences into a single unit ("hedons"), and then compare the value of the experiences in hedons with the values assigned to them on the linear scale. Our prediction is that the dependence would be approximately logarithmic.
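A sketch of the intended analysis (the data and the hedon conversion are entirely hypothetical; the test is just whether ratings look linear in log-hedons):

```python
import numpy as np

# Hypothetical data: value of each experience in "hedons" (recovered from
# revealed preferences) and the same experiences rated on a linear 0-10 scale.
hedons  = np.array([1.0, 5.0, 20.0, 100.0, 400.0])
ratings = np.array([0.2, 1.8, 3.1, 4.9, 6.2])

# Fit rating = a * log(hedons) + b; a good linear fit in log-space
# would support the logarithmic-perception conjecture.
a, b = np.polyfit(np.log(hedons), ratings, deg=1)
residuals = ratings - (a * np.log(hedons) + b)
print(a, b, np.abs(residuals).max())
```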

Conclusion

It seems plausible that the often-measured perception of happiness is related to a hypothetical quantity, raw happiness, by some non-linear relation, e.g. logarithmically. Using the raw quantity when calculating aggregates and averages of happiness over many people could be a better way of aggregating. This would have broad implications in utilitarian ethics, medical ethics, population ethics, and many other fields where aggregates of happiness or similar quantities are used. (Similarly for aggregation over time.)

14 comments

Comments sorted by top scores.

comment by Dagon · 2018-01-08T19:50:30.529Z · LW(p) · GW(p)

Counter-theory: there is no "raw" happiness. There are multiple distinct "raw" inputs (current and historic dopamine levels, multiple kinds of pain, etc.) which contribute with various correlations to perceived happiness, and these components have a fairly complex relationship among each other which results in perceived happiness. Further, experienced perceived happiness has a different equation than remembered perceived happiness or projected future perceived happiness.

I'm a big fan of adding logarithms as the first shot at improvement when something in nature seems nonlinear. But you really need to have candidate measures of the inputs BEFORE you try to guess at the shape.

Separately, I'm suspicious of attempts to aggregate this sort of thing - there's a value system embedded in the aggregation, and no a priori reason to prefer to aggregate any (or all) of the inputs over aggregating the output. What is it you're actually trying to optimize (in operational, measurable terms)?

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2018-01-08T20:31:33.749Z · LW(p) · GW(p)

Re: counter-theory - would you also argue there is no "raw" wealth? To me it seems the argument is broadly the same - there are many distinct inputs, some with hard-to-determine value, some with complex relationships. (A fresh Harvard graduate being "poorer" than a farmer in Nepal due to loans, etc.) Still, the aggregate concept is useful and in practice is often quantified.

I would argue you can guess at the shape by observing how some sort of "addition" operation on the inputs changes the output. (You can have a fairly complex function F with many inputs, which is then perceived through some non-linear lens, like P(F(...)); you can still guess at P() from the partial derivatives of P(F(...)), if you assume e.g. that F() is nothing worse than some sort of polynomial.)
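A quick numerical sketch of what I mean (F and P here are made up; the observer only sees input-output pairs):

```python
import numpy as np

def F(x):          # hidden "raw" aggregator, unknown to the observer
    return 2 * x[0] + x[1] ** 2

def observed(x):   # what we can actually measure: P(F(x)), here with P = log
    return np.log(F(x))

# Apply the same small nudge to one input at larger and larger base points:
# if the observed response keeps shrinking, P is concave (log-like).
for base in [1.0, 10.0, 100.0]:
    x = np.array([base, 1.0])
    dx = np.array([0.01, 0.0])
    print((observed(x + dx) - observed(x)) / 0.01)
```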

Re: suspicion. This sort of aggregation goes on in many places when making decisions. Apparently the sort of problem this proposal hints at is usually not reflected at all, and the aggregation goes on by simply averaging the perceptions.

comment by Qiaochu_Yuan · 2018-01-08T20:15:24.614Z · LW(p) · GW(p)

I have several objections to this which I imagine will be standard among anyone who's read the Sequences, but since they haven't been stated yet I might as well state them for the record.

Identifying any function of happiness with utility seems clearly wrong to me. Humans clearly value lots of things other than happiness. Whatever utility is, it shouldn't be so easy to calculate.

Given that, the version of utilitarianism you've described is called total utilitarianism. This also seems clearly wrong to me; I think it doesn't make sense even as an approximation. I don't think there's any reason to think that "true utility" is given by a sum over humans any more than it's given by a sum over human cells. That is, to a first approximation, I think that "true utility" includes lots of complicated interaction terms among humans that aren't captured by any sum over individual humans alone, in the same way that I think that the "true utility" of an individual human includes lots of complicated interaction terms among their cells (like the interactions among their brain cells making up their mind) that aren't captured by any sum over individual cells alone.

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2018-01-08T21:06:58.601Z · LW(p) · GW(p)

The point of this is the relation between perceived happiness and some conjectured "raw" happiness. The implications for various ethical systems are just that, implications, and their inclusion was not meant as an endorsement. I don't want to argue for utilitarianism here, but I hope we agree some forms of utilitarianism are obviously relevant, and used in practice.

I'm a bit confused by "identifying any function of happiness with utility seems clearly wrong to me": do you propose that the actual utility function, as you understand it, has no relation to happiness at all?

Replies from: dxu
comment by dxu · 2018-01-08T23:48:52.516Z · LW(p) · GW(p)
I'm a bit confused by "identifying any function of happiness with utility seems clearly wrong to me": do you propose that the actual utility function, as you understand it, has no relation to happiness at all?

I believe what Qiaochu is saying is not that happiness isn't a component of your utility function, but rather that it doesn't comprise the entirety of your utility function. (Normatively speaking, of course. In practical terms, humans don't even behave as though they have a consistent utility function.)

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2018-01-09T00:44:43.098Z · LW(p) · GW(p)

Thanks. I guess I should not have included the simple utilitarian calculation, as it seems to work as a red herring :( Mea culpa.

Qiaochu: Would the article make better sense if framed like this: assuming, as per standard LessWrong reasoning, that the actual utility function is very complicated, but also assuming it has a large happiness component, whatever happiness means, we may ask: what would be the relation of such a component to the usual approaches of measuring happiness by asking people? And how should we aggregate among people?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-01-09T02:09:00.342Z · LW(p) · GW(p)

I don't even buy that there is a large happiness component. I would not be surprised to find that in a hundred years we look back on the modern western preoccupation with happiness as mostly a strange cultural phenomenon. The analogous thing looking back on the past might be 11th century monks thinking of something like serving Christ as a large component of "true utility," or whatever.

(But yes, I would be happier with this framing.)

comment by Charlie Steiner · 2018-01-08T20:27:51.241Z · LW(p) · GW(p)

This seems to fall into the trap of taking something descriptive and trying to make it prescriptive. Simplicity is a bad guide to correct morality, because morality is expected to be as complicated as a fair chunk of the human brain. If your simple guess produces unintuitive results like 9->10 mattering exponentially more than 1->2, your simple guess is wrong.

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2018-01-08T20:49:10.844Z · LW(p) · GW(p)

Obviously there is a lot of complexity. In _this model_:

$$H_p = k \log(\mathrm{Happiness}(\text{mind-state}))$$

the complexity lies mainly in some unknown function Happiness(), which maps from mind-states (or a fair chunk of the human brain) to real numbers. Apparently, humans have the ability to evaluate some sort of estimate of it. The proposal here is that they apply some strongly non-linear mapping, e.g. log(), when asked to map this to a scale of 1 to 10.

Prescriptive morality starts elsewhere: when you take such a number, aggregate it somehow over some number of people, and claim it is worth optimizing.

What I'm saying is that anyone making such prescriptive claims should consider the possibility that they are aggregating in a bad way. (Anyone optimizing e.g. QALYs, or Gross Domestic Happiness, or some conceptions of utilitarian value is making such claims.)

By revealed preference, people often put exponentially more resources into going from 9 to 10 than from 1 to 2, so I don't think the suggestion that going from 1 to 2 is as valuable as going from 9 to 10 is intuitive at all.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2018-01-09T01:54:59.704Z · LW(p) · GW(p)

The only trouble with that last sentence is that if happiness is correlated with amount of resources, then this is going to confound any argument from different people spending different amounts of money.

To answer the question, we could look at cases where someone gives money to someone else (to look at altruistic preferences), and try to guess what sort of impact people want their resources to have, as a function of the quality of life of the recipient. So, e.g., if people want to give a lot of money to people who are already happy, this would indicate that people are intuitively aggregating in a way that weights higher subjective happiness more.

We could also look at what kind of actions people take when planning for the future (measuring selfish preferences) - if they have a 50% probability of good outcomes and a 50% probability of bad outcomes, and they can buy insurance that pays out double in one of the outcomes, do they want the payout in the bad outcome or in the good outcome?
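As a toy model of that last question (the numbers and the log-shaped utility are assumptions; the point is only which payout a concave utility prefers):

```python
import numpy as np

# Wealth in two equally likely outcomes, plus a payout of 2 units placed
# either in the bad state or in the good state (ignoring the premium).
bad, good, payout = 2.0, 10.0, 2.0

eu_bad_state  = 0.5 * np.log(bad + payout) + 0.5 * np.log(good)
eu_good_state = 0.5 * np.log(bad) + 0.5 * np.log(good + payout)
print(eu_bad_state > eu_good_state)  # True: log utility prefers the bad-state payout
```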

comment by Roland Pihlakas (roland-pihlakas) · 2018-08-05T02:43:32.671Z · LW(p) · GW(p)

You might be interested in Prospect Theory:

https://en.wikipedia.org/wiki/Prospect_theory

comment by spiralingintocontrol · 2018-01-08T16:02:18.849Z · LW(p) · GW(p)

Have you looked at possible empirical bases of "raw happiness" such as Kahneman's Day Reconstruction Method?

(see also: Happiness is Not a Coherent Concept)

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2018-01-08T18:50:01.227Z · LW(p) · GW(p)

Ad Kahneman: Yes. This is related, but my impression is the nonlinearity is somewhat more general - in DRM you are still asking people for a rating on a 1-6 affective scale (the nonlinearity would appear between the "raw affect" and the rating on the scale), and doing aggregates.

Re: Happiness is Not a Coherent Concept - thanks. It seems to me to be somewhat orthogonal: the article argues happiness breaks into several different variables, which are correlated but not identical. OK, then you can choose one of them. Or you can try to understand the variables better, and possibly construct something which encompasses all of them.

comment by Elizabeth (pktechgirl) · 2018-01-08T15:59:20.221Z · LW(p) · GW(p)

Moved to front page.