Big Advance in Infinite Ethics

post by bwest · 2017-11-28T15:10:47.396Z · LW · GW · 13 comments

Contents

  Limit-discounted Utilitarianism (LDU)
    Example
  Markov Decision Processes (MDP)
  Why This Matters
  Outstanding Issues
  Conclusion

Summary

It is possible that our universe is infinite in both time and space. We might therefore reasonably consider the following question: given some sequences $X = (x_1, x_2, \ldots)$ and $Y = (y_1, y_2, \ldots)$ (where each $x_t$ represents the welfare of persons living at time $t$), how can we tell if $X$ is morally preferable to $Y$?

It has been demonstrated that there is no “reasonable” ethical algorithm which can compare any two such sequences. Therefore, we want to look for subsets of sequences which can be compared, and (perhaps retro-justified) arguments for why these subsets are the only ones which practically matter.

Adam Jonsson has published a preprint of what seems to me to be the first legitimate such ethical system. He considers the following: suppose at any time we are choosing between a finite set of options. We have an infinite number of times in which we make a choice (giving us an infinite sequence), but at each time step we have only finitely many choices. (Formally, he considers Markov Decision Processes.) He has shown that an ethical algorithm he calls “limit-discounted utilitarianism” (LDU) can compare any two such sequences, and moreover the outcome of LDU agrees with our ethical intuitions.

This is the first time (to my knowledge) that we have some justification for thinking that a certain algorithm is all we will "practically" need when comparing infinite utility streams.

Limit-discounted Utilitarianism (LDU)

Given $X = (x_1, x_2, \ldots)$ and $Y = (y_1, y_2, \ldots)$ it seems reasonable to say $X \succeq Y$ if

$$\sum_{n=1}^{\infty} (x_n - y_n) \geq 0$$

Of course, the problem is that this series may not converge, and then it’s unclear which sequence is preferable. A classic example is the choice between $(1, 0, 1, 0, \ldots)$ and $(0, 1, 0, 1, \ldots)$. (See the example below.)

LDU handles this by using Abel summation. Here is a rough explanation of how that works. 

Intuitively, we might consider adding a discount factor $\delta \in (0, 1)$ like this:

$$\sum_{n=1}^{\infty} \delta^{n-1} (x_n - y_n)$$

This modified series may converge even though the original one doesn’t. Of course, this convergence is at the cost of us caring more about people who are born earlier, which might not endear us to our children.

Therefore, we can take the limit case:

$$\lim_{\delta \to 1^-} \sum_{n=1}^{\infty} \delta^{n-1} (x_n - y_n)$$

This limit of discounted sums is what LDU uses: $X \succeq Y$ whenever the limit is at least zero.
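To make the procedure concrete, here is a minimal numerical sketch of the discounted comparison. This is my own illustration, not code from Jonsson's paper; the function names, the use of finite prefixes, and the sampled values of $\delta$ are all assumptions of the sketch, and evaluating a few discount factors close to 1 only gives numerical evidence about the limit, not a proof.

```python
def discounted_diff(x, y, delta):
    """Approximate sum_{n>=1} delta^(n-1) * (x_n - y_n) using the finite
    prefixes of the two welfare streams that we have available."""
    return sum(delta ** n * (xn - yn) for n, (xn, yn) in enumerate(zip(x, y)))

def ldu_evidence(x, y, deltas=(0.9, 0.99, 0.999)):
    """Inspect the discounted sums as delta approaches 1 from below.

    LDU itself is defined via the limit delta -> 1-; sampling a few
    deltas near 1 can only suggest what that limit is.
    """
    return {delta: discounted_diff(x, y, delta) for delta in deltas}
```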

LDU has a number of desirable properties, which are summarized on page 7 of this paper by Jonsson and Voorneveld. I won’t go into them much here other than to say that LDU generally extends our intuitions about what should happen in the finite case to the infinite one.

Example

Suppose we want to compare $X = (1, 0, 1, 0, \ldots)$ and $Y = (0, 1, 0, 1, \ldots)$. Let's take the standard series:

$$\sum_{n=1}^{\infty} (x_n - y_n) = 1 - 1 + 1 - 1 + \cdots$$

This is Grandi’s series, which famously does not converge under the usual definitions of convergence.

LDU, though, inserts a discount term to get:

$$\sum_{n=1}^{\infty} \delta^{n-1} (x_n - y_n) = 1 - \delta + \delta^2 - \delta^3 + \cdots$$

It is clear that this is simply a geometric series (first term $1$, common ratio $-\delta$), and we can find its value using the standard formula for geometric series:

$$\sum_{n=1}^{\infty} \delta^{n-1} (x_n - y_n) = \frac{1}{1 + \delta}$$

Taking the limit:

$$\lim_{\delta \to 1^-} \frac{1}{1 + \delta} = \frac{1}{2}$$

Therefore, the Abel sum of this series is one half, and, since $\frac{1}{2} > 0$, we have determined that $X$ is better than (morally preferable to) $Y$.

This seems kind of intuitive: as you add more and more terms, the value of the series oscillates between zero and one, so in some sense the limit of the series is one half.
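As a sanity check, the sketch from the previous section reproduces this value numerically (again illustrative only; a long finite prefix stands in for the infinite streams):

```python
N = 100_000  # length of the finite prefix standing in for the infinite streams
X = [1, 0] * (N // 2)  # (1, 0, 1, 0, ...)
Y = [0, 1] * (N // 2)  # (0, 1, 0, 1, ...)

for delta, value in ldu_evidence(X, Y).items():
    print(delta, value)
# 0.9   0.5263...  (= 1 / 1.9)
# 0.99  0.5025...  (= 1 / 1.99)
# 0.999 0.5002...  (= 1 / 1.999)
# The values approach the Abel sum 1/2 as delta -> 1.
```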

Markov Decision Processes (MDP)

Here is how Wikipedia describes Markov Decision Processes:

> At each time step, the process is in some state $s$, and the decision maker may choose any action $a$ that is available in state $s$. The process responds at the next time step by randomly moving into a new state $s'$, and giving the decision maker a corresponding reward $R_a(s, s')$.
>
> The probability that the process moves into its new state $s'$ is influenced by the chosen action. Specifically, it is given by the state transition function $P_a(s, s')$. Thus, the next state $s'$ depends on the current state $s$ and the decision maker's action $a$.

At each time step the decision-maker chooses between a finite number of options, which causes the universe to (probabilistically) move into one of a finite number of states, giving the decision-maker a (finite) payoff. By repeating this process an infinite number of times, we can construct a sequence $(x_1, x_2, \ldots)$ where $x_t$ is the payoff at time $t$.
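Here is a toy sketch of how such a payoff stream arises. This is entirely my own illustration: the two states, the transition table, and the rewards are made-up numbers.

```python
import random

# A made-up two-state MDP. transitions[state][action] lists
# (probability, next_state, reward) triples.
transitions = {
    0: {"a": [(0.8, 0, 1.0), (0.2, 1, 0.0)], "b": [(1.0, 1, 0.5)]},
    1: {"a": [(1.0, 0, 0.0)], "b": [(0.5, 1, 1.0), (0.5, 0, 0.0)]},
}

def stationary_policy(state):
    """A stationary policy: the chosen action depends only on the current state."""
    return "a" if state == 0 else "b"

def payoff_stream(start_state, steps, seed=0):
    """Generate the finite prefix (x_1, ..., x_steps) of the payoff sequence."""
    rng = random.Random(seed)
    state, stream = start_state, []
    for _ in range(steps):
        r, cumulative = rng.random(), 0.0
        for p, next_state, reward in transitions[state][stationary_policy(state)]:
            cumulative += p
            if r <= cumulative:
                state = next_state
                stream.append(reward)
                break
    return stream
```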

The set of all sequences generated by a decision-maker who follows a single, time-independent (i.e. stationary) policy is what is considered by Jonsson. Crucially, he shows that LDU is able to compare any two streams generated by a stationary Markov decision process. [1]

Why This Matters

My immediate objection upon reading this paper was “of course if you limit us to only finitely many choices then the problem is soluble – the entire problem only occurs because we want to examine infinite things!”

After having thought about it more though, I think this is an important step forward, and MDPs represent an importantly large class of decision processes.

Even though the universe may be infinite in time and space, in any time interval there are plausibly only finitely many states I could be in, e.g. perhaps because there are only finitely many neurons in my brain.

(Someone who knows more about physics than I might be able to comment on a stronger argument: if locality holds, then perhaps it is a law of nature that only finitely many things can affect us within a finite time window?)

Sequences generated by MDPs are therefore plausibly the only set of sequences a decision-maker may need to practically consider.

Outstanding Issues

My biggest outstanding concern with modeling our decisions as an MDP is that the payoffs have to remain constant. It seems likely that, as we learn more, we will discover that certain states are more or less valuable than we had previously thought. E.g. we may learn that insects are more conscious than previously expected, and that insect suffering therefore weighs more heavily in our payoffs than we had originally thought. It seems like maybe one could have a “meta-MDP” which somehow models this, but I’m not familiar enough with the area to say for sure.

A more theoretical question is: what sequences can be generated via MDPs? My hope is that one day someone will show LDU (or a similarly intuitive algorithm) can compare any two computable sequences, but I don’t think that this is that proof.

Lastly, we have the standard problems of infinitarian fanaticism and paralysis. E.g. even if our current best model of the universe predicted that the MDP model was exactly correct, there would still be some positive probability that it was wrong, and then our “meta-decision procedure” is unclear.

Conclusion

Overall, I don't think that this completely solves the questions with comparing infinite utility streams, but it's a large step forward. Previous algorithms like the overtaking criterion had fairly "obvious" incomparable streams, with no real justification for why those streams would not be encountered by a decision-maker. LDU is not complete, but we at least have some reason to think that it may be all we "practically" need.

I would like to thank Adam Jonsson for discussing this with me. I have done my best to represent LDU, but any errors in the above are mine. Notably, the justification for why MDPs are all we need to consider is entirely mine, and I'm not sure what Adam thinks about it.

1. This is not explicitly stated in Jonsson's paper, but it follows from the proof of theorem 1. Jonsson confirmed this in email discussions with me.

13 comments

Comments sorted by top scores.

comment by Stuart_Armstrong · 2017-11-28T15:49:31.785Z · LW(p) · GW(p)

A problem with this approach is that the ordering of the things in the sequence matters ((1,0,1,0,1...) reorders to (1,0,0,1,0,0,1...)). This method works here, where the ordering is by moments of time, but not for, say, summing the utility of infinitely many agents, where there is no clear ordering.
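As a quick numerical illustration of this point (a sketch only; the normalization by $(1 - \delta)$ is chosen just to keep the values finite), the discounted value of $(1,0,1,0,\ldots)$ and of its reordering $(1,0,0,1,0,0,\ldots)$ approach different limits:

```python
def discounted_average(x, delta):
    """(1 - delta) * sum_{n>=0} delta^n * x[n]: a normalized discounted value."""
    return (1 - delta) * sum(delta ** n * xn for n, xn in enumerate(x))

N = 300_000  # finite prefix standing in for the infinite sequences
x = [1, 0] * (N // 2)     # (1, 0, 1, 0, ...)
z = [1, 0, 0] * (N // 3)  # (1, 0, 0, 1, 0, 0, ...), a reordering of x

for delta in (0.99, 0.999, 0.9999):
    print(delta, discounted_average(x, delta), discounted_average(z, delta))
# The first value tends to 1/2 and the second to 1/3 as delta -> 1, even
# though the two sequences contain exactly the same welfare values.
```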

I have a method of comparison that doesn't depend on the ordering: https://agentfoundations.org/item?id=1455

Replies from: bwest
comment by bwest · 2017-11-28T16:40:22.461Z · LW(p) · GW(p)

Thanks! Your idea is interesting – I put a comment on that post.

Something you are probably aware of is that accepting "anonymity" (allowing the sequence to be reordered arbitrarily) requires us to reject seemingly intuitive principles like Pareto (if you can make someone better off and no one worse off, then you should).

Personally, I would rather keep Pareto than anonymity, but I think it's cool to explore what anonymous orderings can do.

comment by Ben Pace (Benito) · 2017-11-28T15:40:53.321Z · LW(p) · GW(p)

I have not looked through the math in detail, but I appreciate the non-technical discussion at the end, and I like summaries of contributions to an important problem, so I've moved it to the frontpage.

comment by Chris_Leong · 2017-11-28T23:14:38.928Z · LW(p) · GW(p)

I believe that the solution to this problem involves surreal numbers. Here's an extract from an email that I sent to Amanda Askell. I'm planning on writing up a full post on this soonish, but I'm also looking for jobs at the moment, so there is a bit of a conflict there. I know this needs to be formalised more though.

"Thanks for feedback on using surreal numbers. 

Eddy Chen and Daniel Rubio seem to be using an approach quite similar to mine. In particular, they made two key insights:

  • If non-standard infinities are going to be used, then surreal numbers are a much more natural non-standard number class to use than the hyperreals
  • Sequences can also have a surreal number attached as a length 

However, that presentation is not quite a complete theory. One of the biggest issues is that they argued it is invalid to re-arrange sequences, when the spatial order should not make a difference. In particular, they wanted to say that it was invalid to rearrange 1,-1/2,1/3,-1/4… into 1,-1/2,1/3,1/5,-1/4,1/7,1/9,1/11,1/13,-1/6…, as the original sequence had the same number of positive and negative terms, but the latter sequence has more positive terms up to any particular point.

An informal description of my approach to resolve this works as follows:

  • Instead of simply attaching a single length to a sequence, we need lengths attached to all sub-sequences. If we do this, then we can take a countably infinite sequence 1,1,1… with length X (where X is surreal) and another sequence 2,2,2… with length Y and splice them together into a sequence 1,2,1,2… with total utility X+2Y. We could splice them to be 1,1,2,1,1,2... or 1,2,2,1,2,2... instead, but this won't change the total utility as long as we keep in mind that there are X ones and Y twos.
  • Similarly for the 1,-1/2,1/3,-1/4… sequence: if there are X positive terms and Y negative terms, then this will remain the same after it is rearranged
  • We define homogeneous sequences as sequences where the odd-numbered places have the same “length” as the even-numbered places, the three subsequences of every third element have the same length, and so on for every fourth element, etc. (It's actually a bit more complex than this)
  • After we have defined homogeneous sequences, we can say 1,2,1,2… (length X, homogeneous) is different from 1,1,2,1,1,2… (length X, homogeneous), as Eddy Chen and Daniel Rubio wanted to do, but using a more formal account.

As per Eddy Chen and Daniel Rubio's model, this will behave as expected with regards to standard changes – adding elements, deleting elements, increasing single values, decreasing single values, increasing all values, decreasing all values, multiplying all values, etc. At the same time, rearrangements preserve utility."

Replies from: bwest
comment by bwest · 2017-11-29T00:20:57.996Z · LW(p) · GW(p)

Thanks! Someone (maybe it was you?) pointed me to Chen and Rubio's stuff before, and it sounds interesting.

I don't fully understand the informal write up you have above, but I'm looking forward to seeing the final thing!

comment by SoerenMind · 2017-12-11T03:30:00.376Z · LW(p) · GW(p)

Warning: I haven't read the paper so take this with a grain of salt

Here's how it would go wrong if I understand it right: For exponentially discounted MDPs there's something called an effective horizon. That means everything after that time is essentially ignored.

You pick a tiny $\epsilon > 0$. Say (without loss of generality) that all utilities lie in $[0, 1]$. Then there is a time $t^*$ with $\sum_{t \geq t^*} \gamma^t = \gamma^{t^*} / (1 - \gamma) < \epsilon$. So the discounted cumulative utility from anything after $t^*$ is bounded by $\epsilon$ (which follows from the limit of the geometric series). That's an arbitrarily small constant.

We can now easily construct pairs of sequences for which LDU gives counterintuitive conclusions. E.g. a sequence which is maximally better than another for every $t > t^*$ until the end of time, but ever so slightly worse (by more than $\epsilon$ in total discounted utility) for $t \leq t^*$.

So anything that happens after $t^*$ is essentially ignored - we've essentially made the problem finite.
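To put rough numbers on this (an illustrative calculation; the particular $\gamma$ and $\epsilon$ are arbitrary):

```python
import math

gamma, eps = 0.99, 1e-3  # arbitrary example values; utilities assumed in [0, 1]

# Tail bound: sum_{t >= t*} gamma^t = gamma^(t*) / (1 - gamma) < eps.
# Solving for t* (log(gamma) < 0 flips the inequality):
t_star = math.ceil(math.log(eps * (1 - gamma)) / math.log(gamma))
print(t_star)  # ~1146: everything after this step contributes less than eps
```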

Exponential discounting in MDPs is standard practice. I'm surprised that this is presented as a big advance in infinite ethics as people have certainly thought about this in economics, machine learning and ethics before.

Btw, your meta-MDP probably falls into the category of Bayes-Adaptive MDP (BAMDP) or Bayes-Adaptive partially observable MDP (BAPOMDP) with learned rewards.

Replies from: bwest
comment by bwest · 2017-12-29T19:46:03.515Z · LW(p) · GW(p)

Thanks for the response. EDIT: Adam pointed out to me that LDU does not suffer from dictatorship of the present as I originally stated below and as you argued above. What you are saying is true for a fixed discount factor, but in this case we take the limit as $\delta \to 1^-$.

The property you describe is known as "dictatorship of the present", and you can read more about it here. In order to get rid of this "dictatorship" you end up having to do things like reject stationarity, which are plausibly just as counterintuitive.

> I'm surprised that this is presented as a big advance in infinite ethics as people have certainly thought about this in economics, machine learning and ethics before.

Could you elaborate? The reason that I thought this was important was:

> Previous algorithms like the overtaking criterion had fairly "obvious" incomparable streams, with no real justification for why those streams would not be encountered by a decision-maker. LDU is not complete, but we at least have some reason to think that it may be all we "practically" need.

Are there other algorithms which you think are all we will "practically" need?

comment by LawrenceC (LawChan) · 2017-11-29T22:00:03.231Z · LW(p) · GW(p)

>My hope is that one day someone will show LDU (or a similarly intuitive algorithm) can compare any two computable sequences, but I don’t think that this is that proof.

I'm pretty sure you can't use a computable algorithm to do this for general computable sequences while maintaining weak Pareto efficiency, due to a diagonalization argument. Let $A$ be the algorithm you use to choose between two computable sequences, which returns 0 if the first sequence is better and 1 otherwise. Let $X$ be the infinite sequence whose value is always 0.5. Consider the sequence $Y$ where $y_i$ has value $1 - A(X, Y)$. That is, if $A$ chooses $X$, then $Y$ is an infinite sequence of 1s, and if $A$ chooses $Y$, then $Y$ is an infinite sequence of 0s. Either way, $A$ violates weak Pareto efficiency, since it tells you to choose a sequence whose value at every timestep is less than the other sequence's.

Replies from: bwest
comment by bwest · 2017-11-29T23:43:35.593Z · LW(p) · GW(p)

Thanks! But is that correct? I notice that your argument seems to work for finite sequences as well (or even single rational numbers), but clearly we can order the rational numbers.

Replies from: Vaniver
comment by Vaniver · 2017-12-08T22:05:42.522Z · LW(p) · GW(p)

I think the issue here is whether you're comparing functions (which allow self-reference in a way that can break ordering) or numbers (which don't); for ordering arbitrary computable sequences, you need to have a way to avoid the sort of diagonalization that makes your choices affect the sequences you have to sort.

Replies from: bwest
comment by bwest · 2017-12-29T19:37:57.410Z · LW(p) · GW(p)

FYI, I was still confused about this so I posted on math.se. Someone responded that the above proof is incorrect, but they gave their own proof that there is no computable ordering over the computable sequences which respects Pareto.

comment by davidkwca · 2017-11-28T15:30:09.928Z · LW(p) · GW(p)

Admittedly, I do not have much of an idea about Infinite Ethics, but it appeared to me that the problem was to a large extent about how to deal with an infinite number of agents on which you can define no measure/order so that you can discount utilities.

Right now, I don't see how this approach helps with that?

Replies from: bwest
comment by bwest · 2017-11-28T15:39:18.656Z · LW(p) · GW(p)

The canonical problem in infinite ethics is to create a preference relation over infinite utility streams which is in some sense "reasonable". That's what this approach does.

For more background you can see this introductory article or my overview of the field.