Positive utility in an infinite universe

post by casebash · 2016-01-29T23:40:45.491Z · 13 comments

Content Note: Highly abstract scenarios involving existing infinities

This post will attempt to resolve the problem of infinities in utilitarianism. The arguments are very similar to an argument for total utilitarianism over other forms, which I'll most likely write up at some point (my previous post worked better as an argument against average utilitarianism than as an argument in favour of total utilitarianism).

In the Less Wrong Facebook group, Gabe Bf posted a challenge to save utilitarianism from the problem of infinities. The original problem comes from a paper by Nick Bostrom.

I believe that I have quite a good solution to this problem that allows us to systematise comparing infinite sets of utility, but this post focuses on justifying why we should take it as axiomatic that adding another person with positive utility is good, and on why the results that seem to contradict this lack credibility. Let's call this the Addition Axiom, or A. We can also consider the Finite Addition Axiom (which applies only when we add utility to a universe with a finite number of people); call this A0.

Let's consider what alternative axioms we might want to adopt instead. One is the Infinite Indifference Axiom, or I: we should be indifferent between two options whenever both provide infinite total utility (of the same order of infinity). Another option is the Remapping Axiom (or R), which asserts that if we can surjectively map a group of people G onto another group H so that each g from G is mapped onto a person h from H with u(g) >= u(h), then v(H) <= v(G), where v represents the value of a particular universe (it doesn't necessarily map onto the real numbers or represent a complete ordering). Applying the Remapping Axiom twice implies that we should be indifferent between an infinite series of ones and the same series with a 0 at one spot, as the sketch below illustrates. This means that the Remapping Axiom is incompatible with the Addition Axiom. We can also consider the Finite Remapping Axiom (R0), where we limit the Remapping Axiom to remapping a finite number of elements.
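
To make the claimed double application concrete, here is a minimal sketch in Python, checking the pointwise condition u(g) >= u(h) on a finite prefix as a stand-in for the infinite populations. It assumes one particular reading of the axiom, namely that the covering map may be defined on a subgroup of the source population so long as it covers the whole target; the definition above doesn't pin this down.

```python
# G has utilities (1, 1, 1, ...); H is the same series with a 0 in front.
# A finite prefix of length N stands in for the infinite populations.
N = 1000
G = {n: 1 for n in range(N)}
H = {n: 0 if n == 0 else 1 for n in range(N)}

# Application 1: map G onto H by the identity g_n -> h_n.  Every pair
# satisfies u(g_n) >= u(h_n), so the axiom gives v(H) <= v(G).
assert all(G[n] >= H[n] for n in range(N))

# Application 2: map the subgroup {h_1, h_2, ...} onto G by the shift
# h_(n+1) -> g_n.  Every pair satisfies u(h_(n+1)) = 1 >= 1 = u(g_n),
# and in the infinite case the shift covers all of G, so v(G) <= v(H).
assert all(H[n + 1] >= G[n] for n in range(N - 1))

# Together: v(G) <= v(H) and v(H) <= v(G), i.e. indifference between
# (1, 1, 1, ...) and (0, 1, 1, 1, ...), contradicting the Addition Axiom.
print("both pointwise conditions hold on the finite prefix")
```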

First, we need to determine what makes a statement a good candidate for an axiom. Roughly, the criteria I'll use are: whether it deals with well-understood objects, whether it concerns the territory rather than just the map, and whether it extends naturally from the finite case. This is my first time trying to establish an axiom this formally, so I will admit that this list is not going to be perfect.

Let's look first at the Infinite Indifference Axiom. Firstly, it deals purely with infinite objects, which are known to behave irregularly and which give rise to many problems on which there is no consensus. Secondly, it lives in the map rather than the territory. In the territory there are just objects; infinity is our attempt to transpose certain object configurations into a number system. Thirdly, it doesn't extend from the finite numbers very well. If one situation provides 5 total utility and another provides 5 total utility, then it seems logical to treat them as the same, since 5 is equal to 5. However, infinity isn't equal to itself in the same way: infinity plus 1 is still infinity, and we can remove infinite dots from infinite dots and end up with 1 or 2 or 3... or infinity. Further, this axiom leads to the result that we are indifferent between someone with large positive utility being created and someone with large negative utility being created. This is massively unintuitive, though I will admit that intuitions are subjective. I think this would make a very poor axiom, but that doesn't mean it is false (Pythagoras' Theorem would make a poor axiom too).
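
As a quick illustration of how badly subtraction behaves here, a throwaway sketch (a large finite prefix of the dots stands in for the infinite collection, so "remove all but the first k" stands in for removing infinitely many):

```python
N = 10_000
dots = set(range(N))

# Remove "infinitely many" dots and leave exactly k behind:
for k in (1, 2, 3):
    removed = set(range(k, N))      # every dot from position k onwards
    print(len(dots - removed))      # k remain: 1, then 2, then 3

# Remove "infinitely many" dots and leave "infinitely many" behind:
evens = set(range(0, N, 2))         # every second dot
print(len(dots - evens))            # N/2 remain, i.e. still infinite in the limit
```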

On the other hand, deciding between the Remapping Axiom and the Addition Axiom is much closer. On the first criterion, I think the Addition Axiom comes out ahead. It involves making only a single change to the situation, a primitive change if you will. In contrast, the Remapping Axiom involves remapping an infinite number of objects. This is still a relatively simple change, but it is definitely more complicated, and permutations of infinite series are well known to behave weirdly.

On the second criterion, the Addition Axiom (by itself) doesn't lead to any really weird results. We'll get some weird results in subsequent posts, but that's because we are going to make some very weird changes to the situation, not because of the Addition Axiom itself. The failure of the Remapping Axiom could very well be because mappings lack the resolution to distinguish between different situations. We know that an infinite series can map onto itself, half of itself or itself twice (sketched below), which lends a huge amount of support to the lack-of-resolution theory.
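
The three mappings alluded to are easy to write down explicitly. A small sketch, checking each claim on a finite prefix of the naturals (the choice of prefix and the particular maps are just one illustration):

```python
N = 1_000

onto_itself = lambda n: n                # identity: the series onto itself
onto_half   = lambda n: 2 * n            # bijection onto the evens, "half of itself"
onto_double = lambda n: (n % 2, n // 2)  # bijection onto two copies of itself

# onto_half hits exactly the even numbers below 2N:
assert {onto_half(n) for n in range(N)} == set(range(0, 2 * N, 2))

# onto_double hits every (copy, position) pair exactly once:
assert {onto_double(n) for n in range(2 * N)} == {(b, m) for b in (0, 1) for m in range(N)}
```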

On the other hand, the Addition Axiom being false (because we've assumed the Remapping Axiom) is truly bizarre, since the Addition Axiom basically states that good things are good. Nonetheless, while this may seem very convincing to me, people's intuitions vary, so the more relevant material for people with a different intuition is the material above suggesting that the Remapping Axiom lacks resolution.

On the third criterion, a new object appearing is something that can occur in the territory. Infinite remappings initially seem to be more in the map than the territory, but it is very easy to imagine a group of objects moving one space to the right, so that assertion seems unjustified. That is, infinity is in the map, as discussed before, but an infinite group of objects and their movements can still be in the territory. However, on reflection, we see that we have reduced the infinite group of objects to a set of objects positioned, for example, at x = 0, 1, 2... This is a massive hint about the content of my following posts.

Lastly, the Addition Axiom in the infinite case is a natural extension of the Finite Addition Axiom. In A0 the principle is that whatever else happens in the universe is irrelevant, and there is no reason for this to change in the infinite case. The Remapping Axiom also seems like a very natural extension of its finite case, so I'll call this criterion a draw.

In summary, if you don't already find the Addition Axiom more intuitive than the Remapping Axiom, the main reasons to favour the Addition Axiom are: 1) it deals with better-understood objects, 2) it is closer to the territory than the map, and 3) there are good reasons to suspect that remapping lacks resolution. Of these, I believe the third is by far the most persuasive; I consider the other two more hints than anything else.

I have only dealt with the Infinite Indifference Axiom and the Remapping Axioms, but I'm sure other people will suggest their own alternative axioms, which will need to be compared.

Increasing an existing person's utility, instead of creating a new person with positive utility, works in exactly the same way. Also, this post is just the start: I will provide a systematic analysis of infinite universes over the coming days, plus an FAQ, conditional on sufficiently many high-quality questions.

13 comments

comment by Dagon · 2016-01-31T17:51:17.230Z

Paper in question: Infinite Ethics. Also LW Wiki Page and a not-particularly-great Reddit thread.

Nobody seems willing to bite the bullet that in fact, if everything possible actually happens, and all parts of the universe are given equal weight, then it is the case that no choice matters. It is the intuition that there is a moral truth which is wrong, not any specific part of it.

To some extent, it boils down to "how do you justify any discount rate if the future is infinite and you weight all parts of it equally"? I think the answer is "you don't. Infinitesimal value approaches 0 for any individual choice".

Note that even in a finite universe, I think this is one of the two huge problems with utilitarianism. How do you know what the proper discount rate and timeframe are for your choices? (Not today's topic, but to avoid being a tease, the other is: how do you actually assign these values you're aggregating?)

Replies from: gjm, casebash, casebash
comment by gjm · 2016-02-02T10:56:49.090Z

no choice matters

It may well be correct that there are no objective moral truths, but note that none of this stuff is specifically about objective moral truths. You can say the same sort of things about any value system, even if it's just "what I personally happen to value". We do, after all, have decisions to make, even if the universe turns out to be infinite.

Replies from: Dagon
comment by Dagon · 2016-02-02T15:20:57.593Z

If it's only "what I happen to value", then the questions of discount rate go away - distance from the agent is a fair basis for caring less about some things than others. And with enough uncaring about the far future, it doesn't matter whether it's infinite or just very large.

This bypasses the problem, but what you're left with isn't Utilitarianism. You're no longer trying to maximize anything over the entire universe, only your local perceptions.

Replies from: gjm
comment by gjm · 2016-02-02T16:35:27.212Z

The questions go away if you personally are happy having a sufficiently rapidly increasing discount for things far away in space or time. But someone may not be happy with that; they may claim that they care equally about everyone (perhaps only in some "far mode" sense of caring) and want to know what they should then do.

The answer might turn out to be: "No, actually, it turns out you literally can't care equally about everyone and still have any way of making decisions that actually works". That would be interesting. (I gravely doubt that that is the answer, but it might be.)

Replies from: entirelyuseless
comment by entirelyuseless · 2016-02-02T19:17:06.601Z

I argued for this answer in the discussions about Pascal's Mugging, and people kept responding, "Maybe we don't actually have an unbounded utility function, but we want to modify ourselves to have one."

I don't want to modify myself in that way, and I don't think that anyone else does in a coherent way (i.e. I do not believe that they would accept the consequences of their view if they knew them). So if someone can prove that it is not logically consistent in the first place, that would actually be an advantage, from my point of view, since it would prevent people from aiming for it.

Replies from: gjm
comment by gjm · 2016-02-02T19:55:17.809Z

It feels to me as if the following things are likely to be true:

  • If you want your utilities to be real-valued then you can't value everyone equally in a universe with a countable infinity of people (for reasons analogous to the way you can't pick one person at random from a universe with a countable infinity of people).
  • If you allow a more general notion of utilities, you can value everyone equally, but there may be a price to pay (e.g., some pairs of outcomes not being comparable, or not having enough structure for notions like "expected utility" to be defined).

For instance, consider the following construction. We have a countable infinity of possible people (not all necessarily exist). We assume we've got a way of assigning utilities to individuals. Now say that a "global utility" means an assignment of a utility to each person (0 for nonexistent people), and put an equivalence relation on global utilities where u~v if you can get from one to the other by changing a finite number of the utilities, by amounts that add up to zero. (Or, maybe better: by changing any number, where {all the changes} is absolutely convergent -- i.e., sum of the absolute values is finite -- and the sum is zero.)

In this case, you can compute expected utilities "pointwise", which is nice; swapping two people's "labels" (or, more generally, permuting finitely many labels) makes no difference to a "global utility", which is nice; in any world with only finitely many people it's equivalent to total utilitarianism, which is probably nice; if you increase some utilities and don't decrease any, you get something strictly better, which is nice; but utilities aren't always comparable, so in some cases this value system doesn't know what to do. E.g., if you have disjoint infinite sets A and B of people, {everyone in A gets +1, everyone in B gets -1} and {everyone in A gets -1, everyone in B gets +1} are incomparable, which isn't so nice.
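
For concreteness, here is one way the finitely-supported part of this comparison could look in code. A minimal sketch only, assuming global utilities are stored as dicts mapping person-ids to their finitely many nonzero utilities (unlisted and nonexistent people get 0); the representation and the `compare` helper are illustrative assumptions, not part of the construction itself.

```python
def compare(u: dict, v: dict) -> str:
    """Compare two finitely supported global utilities.

    Returns '>', '<', or '~' (equivalent).  Profiles whose difference is
    not absolutely summable (e.g. the +1/-1 swap on two infinite sets
    above) simply fall outside this function's domain; that is exactly
    the "not always comparable" caveat.
    """
    support = set(u) | set(v)
    diff = sum(u.get(p, 0) - v.get(p, 0) for p in support)
    if diff > 0:
        return '>'
    if diff < 0:
        return '<'
    return '~'  # the changes add up to zero, so u ~ v

# In a finite world this reduces to total utilitarianism:
print(compare({'a': 2, 'b': 3}, {'a': 1, 'b': 3}))  # '>'
print(compare({'a': 2, 'b': 3}, {'a': 3, 'b': 2}))  # '~'
```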

comment by casebash · 2016-02-02T01:44:51.964Z

Infinite timeframe, no intrinsic discount rate - discount only due to uncertainty

comment by casebash · 2016-01-31T23:59:13.786Z

Your comment seems to have been cut off

Replies from: Dagon
comment by Dagon · 2016-02-01T15:09:36.727Z

Thanks. Edited to remove the start of an incomplete thought.

comment by MrMind · 2016-02-01T16:29:39.610Z

Is the map in the remapping axiom required to be a bijection, or just an injection?
In the first case I don't see how you can make it work to add a 0 at any point; in the second case the axiom would just be silly ({11,20,-100} would be better than {10,20}).

Replies from: casebash
comment by casebash · 2016-02-02T01:48:17.874Z

There's no requirement to be injective, but you have to be surjective.

Replies from: MrMind
comment by MrMind · 2016-02-02T13:32:00.485Z

Then I still fail to see how you can apply it twice to get the result stated. Care to give a little demonstration?
