Dumb dichotomies in ethics, part 2: instrumental vs. intrinsic values

post by Aaron Bergman (aaronb50) · 2021-05-07T02:31:50.591Z · LW · GW · 4 comments

This is a link post for https://aaronbergman.substack.com/p/dichotomies-in-ethics-part-2-instrumental


Warning

Reader discretion advised. The text below contains dry philosophy that may induce boredom in readers not interested in formal ethics. Continue at your own risk.

Added: Since you're reading this on LessWrong, it's probably safe for you to continue.

Intro

With my only credential being a six-course philosophy minor, I was entirely unqualified to write Dumb Dichotomies in Ethics, part 1. So, of course, I’m going to do the same thing again. This time, I will recognize explicitly up front that actual, real-life philosophers have made similar, albeit better-developed, arguments than my own (see, for example, “Two Distinctions in Goodness”).

That said, I would like to do my part in partially dissolving the dichotomy between intrinsic (or ‘final’) values and instrumental values. At a basic level, this distinction is often useful and appropriate. For instance, it is helpful to recognize that money doesn’t intrinsically matter to most people, although it can be very instrumentally useful for promoting more fundamental values like pleasure, reduced suffering, preference satisfaction, or dignity (depending on your preferred ethical system).

Some things don’t fit

The issue is that certain values don’t fit well into the intrinsic/instrumental dichotomy. I think an example will illustrate this best. As I’ve mentioned before, I think that some version of utilitarianism is probably correct. Like other utilitarians, though, I have a few pesky, semi-conflicting intuitions that won’t seem to go away.

Among these is the belief that, all else equal, a more equitable “distribution” of utility (or net happiness) is better. If you grant, as I do, that there exists a single net amount of utility shared among all conscious beings in the universe, it seems intrinsically better that this be distributed evenly across conscious beings. This isn’t an original point—it’s the reason many find the utility monster critique of utilitarianism compelling, which goes something like this (source [LW · GW]):

A hypothetical being…the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster…If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.
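To make the arithmetic behind the critique concrete, here is a minimal sketch (the per-cookie numbers come from the quoted passage; the ten-cookie budget is an assumption made up for the example):

```python
# Illustrative arithmetic for the utility monster critique (hypothetical numbers).
# An ordinary person gets 1 unit of pleasure per cookie; the monster gets 100.
# Maximizing total utility hands every cookie to the monster.

def total_utility(cookies_to_monster, cookies_total=10):
    ordinary_people = (cookies_total - cookies_to_monster) * 1   # 1 unit per cookie
    monster = cookies_to_monster * 100                           # 100 units per cookie
    return ordinary_people + monster

best_allocation = max(range(11), key=total_utility)
print(best_allocation, total_utility(best_allocation))  # 10 1000 -- the monster gets everything
```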

Unlike some, though, I don’t conclude from this that utilitarianism is wrong, i.e., that pleasure/happiness and lack of suffering (call this “utility” for brevity) is not the sole thing that intrinsically matters. I do think utility is the only “thing” that matters, but I also think that a more equitable distribution of this intrinsic value is itself intrinsically good.

Does that mean that I intrinsically value egalitarianism/equity/equality? Not exactly. I only value these things with respect to some more fundamental intrinsic value, namely utility. I think that equitable distribution of wealth and political equality, for instance, are only good insofar as they contribute to equitable distribution of utility. In the real world, I think it is indeed the case that more wealth equality and political equality would be better, but if you managed to convince me that this would have no effect on the distribution of happiness, I would bite the bullet and say that these things don’t matter.

So, I don’t think it’s fair to call “equitable distribution” an intrinsic value (footnote: though, on second thought, maybe we can get around all this by calling “equitable distribution of utility” an intrinsic value alongside utility itself?), since it depends entirely on the intrinsic value of the thing being distributed. But neither would it be fair to call it an instrumental value or goal in the same way as material wealth or political freedom. It just doesn’t fit neatly into the simple bifurcated framework.
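One hypothetical way to cash this out (my own sketch, not a claim about the correct formalization): a welfare function that applies a concave transform to each being’s utility before summing will rank an even distribution above a skewed one with the same total, while still taking utility as its only input.

```python
import math

def concave_welfare(utilities, transform=math.sqrt):
    """Apply a concave transform to each being's utility, then sum.
    A hypothetical way to rank distributions with the same total utility:
    an even spread scores higher than a skewed one."""
    return sum(transform(u) for u in utilities)

even   = [5, 5]   # total utility 10, spread evenly
skewed = [9, 1]   # total utility 10, concentrated in one being

print(sum(even), sum(skewed))            # 10 10 -- identical totals
print(round(concave_welfare(even), 2))   # 4.47
print(round(concave_welfare(skewed), 2)) # 4.0  -- the even world ranks higher
```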

Contingent values

I think the solution here is to recognize that intrinsic values break down into “contingent” and “absolute” subtypes, where the former depend on the latter. Here is something of a working definition:

Contingent values are things that intrinsically matter only with respect to some ‘underlying’ intrinsic value: they promote the underlying value in a way that cannot, even in principle, be achieved any other way.

To flesh this out, here are some more examples.

Examples

Preferences

Jane intrinsically values satisfying people’s preferences, even if doing so does not increase net utility. Admittedly, it’s hard to put myself in the shoes of an ethical system I don’t believe in, but I can imagine Jane holding the contingent value of “people wanting things that I also want.”

Such a value wouldn’t be “absolutely intrinsic,” because she doesn’t care about what others want except insofar as it pertains to how good it is to satisfy others’ preferences. But it’s not instrumental either, since getting other people to want good things isn’t merely a means to more fully satisfying those wants (although it could be that as well, in cases where making everyone’s preferences the same allows more preferences to be satisfied in total).

Justice

Kyle intrinsically cares about justice. For example, he thinks that retribution can be good even if it does not elevate anyone’s wellbeing. He might hold the contingent value of “no one doing things worthy of punishment.” In other words, Kyle would prefer a perfectly-just world in which no one is punished to another perfectly-just world in which some people are punished to precisely the right degree, entirely because the former is a better “form” of justice. (Footnote: Perhaps a world in which people do immoral things worthy of punishment cannot be perfectly just. Even if so, I don’t think it changes that Kyle’s value is contingent.)

“People not doing punishable things” might also be one of Kyle’s intrinsic values, but it doesn’t have to be. It would be coherent (if a little odd) for him to not care about this if he became convinced that justice didn’t matter.

Truth

Julia intrinsically values believing true things. She would rather believe the truth about something even if doing so made her upset and did not enable her to do anything about the situation. Julia might have the contingent value of “equitable distribution of truth-holding,” much as I value equitable distribution of utility. That is, she’d rather everyone believe 70% true things and 30% false things than have all the false beliefs concentrated among a small subset of the population.

Conclusion

Is this pedantic or trivial? Maybe. I don’t think it’s terribly important. That said, trying to construct (or determine, if you’re a moral realist) an ethical framework using only the intrinsic-instrumental dichotomy is a bit like building a fire with only twigs and logs (footnote: I was staring at my fireplace while trying to think of a good metaphor): perhaps it can be done, but an important tool seems to be missing.

4 comments


comment by johnswentworth · 2021-05-07T14:17:13.165Z · LW(p) · GW(p)

If you grant as I do that there exists a single net amount of utility shared among all conscious beings in the universe...

You might want to double check that one. Mathematically speaking, adding together the utility of two different agents is best thought of as a type error, for multiple reasons.

The first problem is invariance: utility is only defined up to positive affine transformation, i.e. multiplying it by a positive constant and adding another constant leaves the underlying preferences unchanged. But if we add the utilities, then the preferences expressed by the added-together utility function do depend on which constants we multiply the two original utility functions by. It's analogous to adding together "12 meters" and "3 kilograms".
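A minimal sketch of the invariance problem (the two options and all numbers are made up for illustration): rescaling one agent's utility function leaves that agent's preferences untouched, but it flips the ranking produced by the naive sum.

```python
# Two options and two agents' utilities over them (all numbers illustrative).
u_alice = {"A": 1.0, "B": 0.0}   # Alice prefers A
u_bob   = {"A": 0.0, "B": 0.6}   # Bob prefers B

def rescale(u, a, b):
    """Positive affine transform: represents exactly the same preferences."""
    return {k: a * v + b for k, v in u.items()}

def naive_sum(u1, u2):
    return {k: u1[k] + u2[k] for k in u1}

u_alice_rescaled = rescale(u_alice, a=0.1, b=0.0)  # Alice's preferences are unchanged

print(naive_sum(u_alice, u_bob))           # {'A': 1.0, 'B': 0.6} -> A "wins"
print(naive_sum(u_alice_rescaled, u_bob))  # {'A': 0.1, 'B': 0.6} -> B "wins"
# The summed ranking depends on an arbitrary choice of scale for each agent.
```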

The second problem is that a utility function is only defined within the context of a world model [LW · GW] - i.e. we typically say that an agent prefers to maximize E[u(X)], with the expectation taken over some model. Problem is, which variables X even "exist" to be used as inputs depends on the model. Even two different agents in the same environment can have models with entirely different variables in them (i.e. different ontologies). Their utility functions are defined only in the context of their individual models; there's not necessarily any way to say that their utility is over world-states, and the different models do not necessarily have any correspondence.

If adding utilities across agents has any meaning at all, there's nothing in the math to suggest what that meaning would be, and indeed the math suggests pretty strongly that it is not meaningful.

comment by Aaron Bergman (aaronb50) · 2021-05-07T20:55:46.730Z · LW(p) · GW(p)

Are you using 'utility' in the economic context, for which a utility function is purely ordinal? Perhaps I should have used a different word, but I'm referring to 'net positive conscious mental states,' which intuitively doesn't seem to suffer from the same issues. 

comment by johnswentworth · 2021-05-08T16:44:11.235Z · LW(p) · GW(p)

Yes, I was using it in the economic sense. If we say something like "net positive conscious mental states", it's still unclear what it would mean to add up such things. What would "positive conscious mental state" mean, in a sense which can be added across humans, without running into the same problems which come up for utility?

comment by Aaron Bergman (aaronb50) · 2021-05-09T02:56:44.113Z · LW(p) · GW(p)

I don't think it is operationalizable, but I fail to see why 'net positive mental states' isn't a meaningful, real value. Maybe the units would be apple*minutes or something, where one unit is equivalent to the pleasure you get by eating an apple for one minute. It seems that this could in principle be calculated with full information about everyone's conscious experience.