Naturalism versus unbounded (or unmaximisable) utility options
post by Stuart_Armstrong · 2013-02-01T17:45:28.395Z
There are many paradoxes with unbounded utility functions. For instance, consider whether it's rational to spend eternity in Hell:
Suppose that you die, and God offers you a deal. You can spend 1 day in Hell, and he will give you 2 days in Heaven, and then you will spend the rest of eternity in Purgatory (which is positioned exactly midway in utility between heaven and hell). You decide that it's a good deal, and accept. At the end of your first day in Hell, God offers you the same deal: 1 extra day in Hell, and you will get 2 more days in Heaven. Again you accept. The same deal is offered at the end of the second day.
And the result is... that you spend eternity in Hell. There is never a rational moment to leave for Heaven - that decision is always dominated by the decision to stay in Hell.
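To make the dominance concrete (my normalisation, not part of the original problem: a day in Purgatory is worth 0, a day in Heaven +1, a day in Hell -1), stopping after N days in Hell gives

$$U(N) = -N + 2N = N,$$

which is strictly increasing in N - every stopping point is dominated by the next one - while the limit policy of never stopping gives $-1 - 1 - 1 - \dots = -\infty$, the worst outcome of all.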
Or consider a simpler paradox:
You're immortal. Tell Omega any natural number, and he will give you that much utility. On top of that, he will give you any utility you may have lost in the decision process (such as the time wasted choosing and specifying your number). Then he departs. What number will you choose?
Again, there's no good answer to this problem - any number you name, you could have got more by naming a higher one. And since Omega compensates you for extra effort, there's never any reason to not name a higher number.
It seems that these are problems caused by unbounded utility. But that's not the case, in fact! Consider:
You're immortal. Tell Omega any real number r > 0, and he'll give you 1-r utility. On top of that, he will give you any utility you may have lost in the decision process (such as the time wasted choosing and specifying your number). Then he departs. What number will you choose?
Again, there is no best answer - for any r, r/2 would have been better. So these problems arise not because of unbounded utility, but because of unbounded options. You have infinitely many options to choose from (sequentially in the Heaven and Hell problem, all at once in the other two) and the set of possible utilities from your choices does not possess a maximum - so there is no best choice.
What should you do? In the Heaven and Hell problem, you end up worse off if you make the locally dominant decision at each decision node - if you always choose to add an extra day in Hell, you'll never get out of it. At some point (maybe at the very beginning), you're going to have to give up an advantageous deal. In fact, since giving up once means you'll never be offered the deal again, you're going to have to give up arbitrarily much utility. Is there a way out of this conundrum?
Assume first that you're a deterministic agent, and imagine that you're sitting down for an hour to think about this (don't worry, Satan can wait, he's just warming up the pokers). Since you're deterministic, and you know it, your entire future will be determined by what you decide right now (in fact your life history is already determined, you just don't know it yet - still, by the Markov property, your current decision also determines the future). Now, you don't have to reach any grand decision now - you're just deciding what you'll do for the next hour or so. Some possible options are:
- Ignore everything, sing songs to yourself.
- Think about this some more, thinking of yourself as an algorithm.
- Think about this some more, thinking of yourself as a collection of arguing agents.
- Pick a number N, and accept all of God's deals until day N.
- Promise yourself you'll reject all of God's deals.
- Accept God's deal for today, hope something turns up.
- Defer any decision until another hour has passed.
- ...
There are many other options - in fact, there are precisely as many options as you've considered during that hour. And, crucially, you can put an estimated expected utility to each one. For instance, you might know yourself, and suspect that you'll always do the same thing (you have no self-discipline where cake and Heaven are concerned), so any decision apart from immediately rejecting all of God's deals will give you -∞ utility. Or maybe you know yourself, and have great self-discipline and perfect precommitments - so if you pick a number N in the coming hour, you'll stick to it. Thinking some more may have a certain expected utility - which may differ depending on where you direct your thoughts. And if you know that you can't direct your thoughts - well, then they'll all have the same expected utility.
But notice what's happening here: you've reduced the expected utility calculation over infinitely many options, to one over finitely many options - namely, all the interim decisions that you can consider in the course of an hour. Since you are deterministic, the infinitely many options don't have an impact: whatever interim decision you follow will uniquely determine how much utility you actually get out of this. And given finitely many options, each with expected utility, choosing one doesn't give any paradoxes.
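A toy illustration of that reduction (a minimal Python sketch; the option names and the utility estimates are invented for the example, not taken from the post):

```python
# A minimal sketch: reduce "infinitely many options" to the finitely many
# interim decisions actually considered in the hour. The names and the
# utility estimates below are purely illustrative.
interim_options = {
    "reject all future deals now": 0.0,
    "accept deals until day N, trusting your precommitment": 1_000_000.0,
    "accept today's deal and hope something turns up": float("-inf"),  # you suspect you'd never stop
    "think about it for another hour": 500_000.0,
}

# Choose the interim decision with the highest estimated expected utility.
best = max(interim_options, key=interim_options.get)
print(best)  # -> "accept deals until day N, trusting your precommitment"
```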
And note that you don't need determinism - adding stochastic components to yourself doesn't change anything, as you're already using expected utility anyway. So all you need is an assumption of naturalism - that you're subject to the laws of nature, that your decision will be the result of deterministic or stochastic processes. In other words, you don't have 'spooky' free will that contradicts the laws of physics.
Of course, you might be wrong about your estimates - maybe you have more/less willpower than you initially thought. That doesn't invalidate the model - at every hour, at every interim decision, you need to choose the option that will, in your estimation, ultimately result in the most utility (not just for the next few moments or days).
If we want to be more formal, we can say that you're deciding on a decision policy - choosing among the different agents that you could be, the one most likely to reach high expected utility. Here are some policies you could choose from (the challenge is to find a policy that gets you the most days in Hell/Heaven without getting stuck and going on forever; a code sketch of one of them follows the list):
- Decide to count the days, and reject God's deal as soon as you lose count.
- Fix a probability distribution over future days, and reject God's deal with a certain probability.
- Model yourself as a finite state machine. Figure out the Busy Beaver number of that finite state machine. Reject the deal when the number of days climbs close to that.
- Realise that you probably can't compute the Busy Beaver number for yourself, and use some very fast-growing function like the Ackermann function instead.
- Use the Ackermann function to count down the days during which you formulate a policy; after that, implement it.
- Estimate that there is a non-zero probability of falling into a loop (which would give you -∞ utility), so reject God's deal as soon as possible.
- Estimate that there is a non-zero probability of accidentally telling God the wrong thing, so commit to accepting all of God's deals (and count on accidents to rescue you from -∞ utility).
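Here is a minimal sketch of the Ackermann-countdown policy from the list above (the particular arguments and the accept/reject interface are my own illustrative choices, not anything specified in the post):

```python
import sys
sys.setrecursionlimit(10_000)  # the naive recursion below goes moderately deep

def ackermann(m, n):
    """Two-argument Ackermann function - grows faster than any primitive
    recursive function, so even tiny inputs give enormous day counts."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def make_policy(m=3, n=3):
    """A stopping policy: accept God's deal for ackermann(m, n) days, then reject.
    ackermann(3, 3) = 61; ackermann(4, 2) already has 19,729 digits and could not
    be computed this naively."""
    limit = ackermann(m, n)
    return lambda day: "accept" if day < limit else "reject"

policy = make_policy()
print(policy(1), policy(61))  # -> accept reject
```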
But why spend a whole hour thinking about it? Surely the same applies for half an hour, a minute, a second, a microsecond? That's entirely a convenience choice - if you think about things in one second increments, then the interim decision "think some more" is nearly always going to be the dominant one.
The mention of the Busy Beaver number hints at a truth - given the limitations of your mind and decision abilities, there is one policy, among all possible policies that you could implement, that gives you the most utility. More complicated policies you can't implement (attempting them generally means you'd hit a loop and get -∞ utility), and simpler policies would give you less utility. Of course, you likely won't find that policy, or anything close to it. It all really depends on how good your policy-finding policy is (and your policy-finding-policy-finding policy...).
That's maybe the most important aspect of these problems: some agents are just better than others. Unlike finite cases where any agent can simply list all the options, take their time, and choose the best one, here an agent with a better decision algorithm will outperform another. Even if they start with the same resources (memory capacity, cognitive shortcuts, etc...) one may be a lot better than another. If the agents don't acquire more resources during their time in Hell, then their maximal possible utility is related to their Busy Beaver number - basically the maximal length that a finite-state agent can survive without falling into an infinite loop. Busy Beaver numbers are extremely uncomputable, so some agents, by pure chance, may be capable of acquiring much greater utility than others. And agents that start with more resources have a much larger theoretical maximum - not fair, but deal with it. Hence it's not really an infinite option scenario, but an infinite agent scenario, with each agent having a different maximal expected utility that they can extract from the setup.
It should be noted that God, or any being capable of hypercomputation, has real problems in these situations: they actually have infinitely many options (not the finite set of options involved in choosing a future policy), and so don't have any solution available.
This is also related to AIXI, the theoretically maximally optimal agent: for any computable agent that approximates AIXI, there will be other agents that approximate it better (and hence get higher expected utility). Again, it's not fair, but not unexpected either: smarter agents are smarter.
What to do?
This analysis doesn't solve the vexing question of what to do - what is the right answer to this kind of problem? That depends on what type of agent you are, but what you need to do is estimate the maximal integer you are capable of computing (and storing), and endure for that many days. Certain probabilistic strategies may improve your performance further, but you have to put the effort into finding them.
Comments sorted by top scores.
comment by Andreas_Giger · 2013-02-01T22:13:02.700Z · LW(p) · GW(p)
This is a very good post. The real question that has not explicitly been asked is the following:
How can utility be maximised when there is no maximum utility?
The answer of course is that it can't.
Some of the ideas offered as solutions or approximations of solutions are quite clever, but because you can trivially construct, for any agent, another agent that performs better, and because there is no metric other than utility itself for determining how much better one agent is than another, solutions aren't even interesting here. Trying to find limits such as storage capacity or computing power only avoids the real problem.
These are simply problems that have no solutions, like the problem of finding the largest integer has no solution. You can get arbitrarily close, but that's it.
And since I'm at it, let me quote another limitation of utility I very recently wrote about in a comment to Pinpointing Utility:
Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number.
↑ comment by Stuart_Armstrong · 2013-02-01T22:55:34.678Z · LW(p) · GW(p)
Assuming you assign utility to lifetime as a function of life quality in such a way that for any constant quality longer life has strictly higher (or lower) utility than shorter life, then either you can't assign any utility to actually infinite immortality, or you can't differentiate between higher-quality and lower-quality immortality, or you can't represent utility as a real number.
This seems like it can be treated with non-standard reals or similar.
Replies from: CronoDAS↑ comment by CronoDAS · 2013-02-01T23:46:22.021Z · LW(p) · GW(p)
Yeah, it can. You still run into the problem that a one in a zillion chance of actual immortality is more valuable than any amount of finite lifespan, though, so as long as the probability of actual immortality isn't zero, chasing after it will be the only thing that guides your decision.
Replies from: Andreas_Giger, Slider↑ comment by Andreas_Giger · 2013-02-02T00:25:36.551Z · LW(p) · GW(p)
Actually, it seems you can solve the immortality problem in ℝ after all, you just need to do it counterintuitively: 1 day is 1, 2 days is 1.5, 3 days is 1.75, etc, immortality is 2, and then you can add quality. Not very surprising in fact, considering immortality is effectively infinity and |ℕ| < |ℝ|.
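One explicit way to write that assignment (my formula, matching the commenter's values):

$$u(n \text{ days}) = 2 - 2^{\,1-n}, \qquad u(\text{immortality}) = 2,$$

so $u(1) = 1$, $u(2) = 1.5$, $u(3) = 1.75$, and the values approach but never reach 2 (quality adjustments can then be layered on top).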
Replies from: twanvl↑ comment by twanvl · 2013-02-02T12:53:57.307Z · LW(p) · GW(p)
But that would mean that the utility of a 50% chance of 1 day and a 50% chance of 3 days is 0.5*1 + 0.5*1.75 = 1.375, which is different from the utility of two days (1.5) that you would expect.
↑ comment by Andreas_Giger · 2013-02-02T15:28:12.128Z · LW(p) · GW(p)
You can't calculate utilities anyway; there's no reason to assume that u(n days) should be 0.5 * (u(n+m days) + u(n-m days)) for any n or m. If you want to include immortality, you can't assign utilities linearly, although you can get arbitrarily close by picking a higher factor than 0.5, as long as it's < 1.
↑ comment by Slider · 2013-02-02T10:41:00.816Z · LW(p) · GW(p)
At least in surreal numbers you could have an infinitesimal chance of getting a (first-order) infinite life span and have it able to win or lose against a finite chance of finite life. In the transition to hyperreal analysis, I expect that the improved accuracy of vanishingly small chances - from arbitrarily small reals to actually infinitesimal values - would happen at the same time as the rewards go from arbitrarily large values to actually infinite amounts.
Half of any first-order infinitesimal chance could have some first-order infinite reward that would make it beat some finite chance of a finite reward. However, if we have a second-order infinitesimal chance of only a first-order infinite reward, then it loses to any finite expected utility. Not only do you have to attend to whether the chance is infinitesimal, but to how infinitesimal it is.
There is a difference between an infinite amount and "grows without bound". If I mark the first-order infinite with w, there is no trouble saying that a result of w+2 wins over w. Thus if the function does have a peak, then it doesn't matter how high it is - whether it is w times w or w to the power of w. In order to break things you would either have to have a scenario where God offers an unspecifiedly infinitesimal chance of an equally infinite amount of heaven time, or have God offer the deal unspecifiedly many times. "A lot" isn't a number between 0 and 1 and thus not a probability. Similarly, an "unbounded amount" isn't a specified amount and thus not a number.
The absurdity of the situation is its being ill-defined, or containing contradictions other than infinities. For if God promises me (some possibly infinite amount of) days in heaven and I never receive them, then God didn't make good on his promise. So despite God's abilities, I am in a position to make him break his promise, or I know beforehand that he can't deliver the goods. If you measure by "earned days in heaven", then only the one that continually accepts wins. If you measure days spent in heaven, then only actually spending them counts, and having them earned doesn't yet generate direct points. Whether or not an earned day indirectly means days spent is dependent on the ability to cash in, and that is dependent on my choice. The situation doesn't have probabilities specified in the absence of the strategy used. Therefore any agent that tries to calculate the "right odds" from the description of the problem either has to use the strategy they will formulate as a basis (which would totally negate any usefulness of coming up with the strategy), or their analysis assumes they use a different strategy than they actually end up using. So either they have to hear God propose the deal wrong in order to execute on it right, or they get it right out of luck, by assuming the right thing from the start. So contemplating this issue, you either come to know that your score is lower than it could be for another agent, realise that you don't model yourself correctly, get max score because you guessed right, or can't know what your score is. Knowing that you solved the problem right is impossible.
↑ comment by casebash · 2016-01-05T10:51:43.769Z · LW(p) · GW(p)
"These are simply problems that have no solutions, like the problem of finding the largest integer has no solution. You can get arbitrarily close, but that's it." - Actually, you can't get arbitrarily close. No matter how high you go, you are still infinitely far away.
"How can utility be maximised when there is no maximum utility? The answer of course is that it can't."
I strongly agree with this. I wrote a post today where I came to the same conclusion, but arguably took it a step further by claiming that the immediate logical consequence is that perfect rationality does not exist, only an infinite series of better rationalities.
comment by Qiaochu_Yuan · 2013-02-01T17:57:44.737Z · LW(p) · GW(p)
Suppose that you die, and God offers you a deal. You can spend 1 day in Hell, and he will give you 2 days in Heaven, and then you will spend the rest of eternity in Purgatory (which is positioned exactly midway in utility between heaven and hell). You decide that it's a good deal, and accept. At the end of your first day in Hell, God offers you the same deal: 1 extra day in Hell, and you will get 2 more days in Heaven. Again you accept. The same deal is offered at the end of the second day.
This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions. Because of the possible failure of the ability to exchange limits and integrals, the expected utility of a sequence of infinitely many decisions can't in general be computed by summing up the expected utility of each decision separately.
Replies from: Stuart_Armstrong, Andreas_Giger↑ comment by Stuart_Armstrong · 2013-02-01T18:38:18.633Z · LW(p) · GW(p)
This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions.
Yes, that's my point.
↑ comment by Andreas_Giger · 2013-02-02T00:05:19.732Z · LW(p) · GW(p)
This isn't a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions.
I believe it's actually a problem about how to do utility-maximising when there's no maximum utility, like the other problems. It's easy to find examples for problems in which there are infinitely many decisions as well as a maximum utility, and none of those I came up with are in any way paradoxical or even difficult.
comment by Alex_Altair · 2013-02-01T18:49:49.005Z · LW(p) · GW(p)
This is like the supremum-chasing Alex Mennen mentioned. It's possible that normative rationality simply requires that your utility function satisfy the condition he mentioned, just as it requires the VNM axioms.
I'm honestly not sure. It's a pretty disturbing situation in general.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-01T19:40:15.029Z · LW(p) · GW(p)
I don't think you need that - you can still profit from God's offers, even without Alex Mennen's condition.
Replies from: Alex_Altair↑ comment by Alex_Altair · 2013-02-01T21:07:49.590Z · LW(p) · GW(p)
You can profit, but that's not the goal of normative rationality. We want to maximize utility.
comment by Nisan · 2013-02-01T20:51:59.157Z · LW(p) · GW(p)
I like this point of view.
ETA: A couple commenters are saying it is bad or discouraging that you can't optimize over non-compact sets, or that this exposes a flaw in ordinary decision theory. My response is that life is like an infinitely tall drinking-glass, and you can put as much water as you like in it. You could look at the glass and say, "it will always be mostly empty", or you could look at it and say "the glass can hold an awful lot of water".
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-02-02T11:42:29.762Z · LW(p) · GW(p)
Yep. If I'm told “Tell Omega any real number r > 0, and he'll give you 1-r utility”, I say “1/BusyBeaver(Graham's number)”, cash in my utilon, and move on with my life.
comment by Douglas_Knight · 2013-02-02T07:15:24.959Z · LW(p) · GW(p)
You're immortal. Tell Omega any real number r > 0, and he'll give you 1-r utility. On top of that, he will give you any utility you may have lost in the decision process (such as the time wasted choosing and specifying your number). Then he departs. What number will you choose?
This is rather tangential to the point, but I think that by refunding utility you are pretty close to smuggling in unbounded utility. I think it is better to assume away the cost.
comment by Paul Crowley (ciphergoth) · 2013-02-02T15:47:00.882Z · LW(p) · GW(p)
An agent who only recognises finitely many utility levels doesn't have this problem. However, there's an equivalent problem for such an agent where you ask them to name a number n, and then you send them to Hell with probability 1/n and Heaven otherwise.
Replies from: DanielVarga↑ comment by DanielVarga · 2013-02-02T17:59:12.810Z · LW(p) · GW(p)
If it really has only finitely many utility levels, then for a sufficiently small epsilon and some even smaller delta, it will not care whether it ends up in Hell with probability epsilon or probability delta.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-02-03T08:13:48.284Z · LW(p) · GW(p)
That's if they only recognise finitely many expected utility levels. However, such an agent is not VNM-rational.
comment by Wei Dai (Wei_Dai) · 2013-02-06T11:48:02.558Z · LW(p) · GW(p)
You're immortal. Tell Omega any natural number, and he will give you that much utility.
You could generate a random number using a distribution that has infinite expected value, then tell Omega that number. Your expected utility of following this procedure is infinite.
But if there is a non-zero chance of an Omega existing that can grant you an arbitrary amount of utility, then there must also a non-zero chance of some Omega deciding on its own at some future time to grant you a random amount of utility using the above distribution, so you've already got infinite expected utility, no matter what you do.
It doesn't seem to me that the third problem ("You're immortal. Tell Omega any real number r > 0, and he'll give you 1-r utility.") corresponds to any real-world problems, so generalizing from the first two, the problem is just the well-known problem of unbounded utility functions leading to infinite or divergent expected utility. I don't understand why a lot of people seem to think very highly of this post. (What's the relevance of using ideas related to Busy Beaver to generate large numbers, if with a simple randomized strategy, or even by doing nothing, you can get infinite expected utility?)
Replies from: Stuart_Armstrong, Stuart_Armstrong, Stuart_Armstrong, Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-06T12:59:56.182Z · LW(p) · GW(p)
You could generate a random number using a distribution that has infinite expected value
Can a bounded agent actually do this? I'm not entirely sure.
Even so, given any distribution f, you can generate a better (dominant) distribution by taking f and adding 1 to the result. So now, as a bounded agent, you need to choose among possible distributions - it's the same problem again. What's the best distribution you can specify and implement, without falling into a loop or otherwise saying yes forever?
But if there is a non-zero chance of an Omega existing that can grant you an arbitrary amount of utility, then there must also a non-zero chance of some Omega deciding on its own at some future time to grant you a random amount of utility using the above distribution, so you've already got infinite expected utility, no matter what you do.
??? Your conclusion does not follow, and is irrelevant - we care about the impact of our actions, not about hypothetical gifts that may or may not happen, and are disconnected from anything we do.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2013-02-06T13:43:53.107Z · LW(p) · GW(p)
Can a bounded agent actually do this? I'm not entirely sure.
First write 1 on a piece of paper. Then start flipping coins. For every head, write a 0 after the 1. If you run out of space on the paper, ask Omega for more. When you get a tail, stop and hand the pieces of paper to Omega. This has expected value of 1/2 * 1 + 1/4 * 10 + 1/8 * 100 + ... which is infinite.
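A minimal sketch of this procedure (my code, assuming a fair coin; as the replies below note, it halts only with probability 1, not always):

```python
import random

def petersburg_number():
    """Write a 1, then append a 0 for every consecutive head; stop at the first tail.
    Returns 10**k with probability 1/2**(k+1), so the expected value
    sum_k 10**k / 2**(k+1) diverges."""
    digits = "1"
    while random.random() < 0.5:  # heads
        digits += "0"
    return int(digits)

print(petersburg_number())
```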
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-06T13:47:23.553Z · LW(p) · GW(p)
How does that relate to the claim in http://en.wikipedia.org/wiki/Turing_machine#Concurrency that "there is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing machine starting on a blank tape"?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2013-02-06T14:17:37.769Z · LW(p) · GW(p)
I think my procedure does not satisfy the definition of "always-halting" used in that theorem (since it doesn't halt if you keep getting heads) even though it does halt with probability 1.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-06T16:37:47.468Z · LW(p) · GW(p)
That's probably the answer, as your solution seems solid to me.
That still doesn't change my main point: if we posit that certain infinite expectations are better than others (St Petersburg + $1 being better than St Petersburg), you still benefit from choosing your distribution as best you can.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2013-02-06T23:01:29.182Z · LW(p) · GW(p)
Can you give a mathematical definition of how to compare two infinite/divergent expectations and conclude which one is better? If you can't, then it might be that such a notion is incoherent, and it wouldn't make sense to posit it as an assumption. (My understanding is that people have previously assumed that it's impossible to compare such expectations. See http://singularity.org/files/Convergence-EU.pdf for example.)
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-07T11:10:35.434Z · LW(p) · GW(p)
Not all infinite expectations can be compared (I believe), but there are lots of reasonable ways in which one can say that one is better than another. I've been working on this at the FHI, but let it slide as other things became more important.
One easy comparison device: if X and Y are random variables, you can often calculate the mean of X-Y using the Cauchy principal value (http://en.wikipedia.org/wiki/Cauchy_principal_value). If this is positive, then X is better than Y.
This gives a partial ordering on the space of distributions, so one can always climb higher within this partial ordering.
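A hedged formalisation of that comparison device (my notation, not necessarily Stuart's exact definition), writing $F_{X-Y}$ for the distribution function of $X-Y$:

$$X \succeq Y \quad \text{whenever} \quad \operatorname{PV}\!\int t \, \mathrm{d}F_{X-Y}(t) \;=\; \lim_{a \to \infty} \int_{-a}^{a} t \, \mathrm{d}F_{X-Y}(t) \;\geq\; 0.$$

The ordering is only partial because this limit need not exist.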
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2013-02-08T01:28:57.006Z · LW(p) · GW(p)
Assuming you want to eventually incorporate the idea of comparing infinite/divergent expectations into decision theory, how do you propose to choose between choices that can't be compared with each other?
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-08T12:47:50.280Z · LW(p) · GW(p)
Random variables form a vector space, since X+Y and rX are both defined. Let V be this whole vector space, and let's define a subspace W of comparable random variables. ie if X and Y are in W, then either X is better than Y, worse, or they're equivalent. This can include many random variables with infinite or undefined means (got a bunch of ways of comparing them).
Then we simply need to select a complementary subspace W^perp in V, and claim that all random variables in it are equally worthwhile. This can be either arbitrary, or we can use other principles (there are ways of showing that even if we can't say that Z is better than X, we can still find a Y that is worse than X but incomparable to Z).
Replies from: Kindly↑ comment by Kindly · 2013-02-08T14:53:47.103Z · LW(p) · GW(p)
Let V be this whole vector space, and let's define a subspace W of comparable random variables.
What exactly are you doing in this step? Are you claiming that there is a unique maximal set of random variables which are all comparable, and it forms a subspace? Or are you taking an arbitrary set of mutually comparable random variables, and then picking a subspace containing it?
Replies from: Stuart_Armstrong, None↑ comment by Stuart_Armstrong · 2013-02-11T13:51:13.523Z · LW(p) · GW(p)
EDIT: the concept has become somewhat complicated to define, and needs a rethink before formalisation, so I'm reworking this post.
The key assumption I'll use: if X and Y are both equivalent with 0 utility, then they are equivalent with each other and with rX for all real r.
Redefine W as the space of all utility-valued random variables that are equivalent to zero utility, according to our various rules. If W is not a vector space, I extend it to be one by taking all linear combinations. Let C be the line of constant-valued random variables.
Then a total order requires:
A space W', complementary to W and C, such that all elements of W' are defined to be equivalent with zero utility. W' is defined up to W, and again we can extend it by linear combinations. Let U = W + W' + C. Thus V/U corresponds to random variables with infinite utility (positive or negative). Because of what we've done, no two elements of V/U can have the same value (if so, their difference would be in W + W'), and no two elements can differ by a real number. So a total order on V/U unambiguously gives one on V. And the total order on V/U is a bit peculiar, and non-archimedean: if X > Y > 0, then X > rY for all real r. Such an order can be given (non-uniquely) by an ordered basis (or a complete flag).
Again, the key assumption is that if two things are equivalent to zero, they are equivalent to each other - this tends to generate subspaces.
Replies from: Kindly↑ comment by Kindly · 2013-02-11T15:42:15.770Z · LW(p) · GW(p)
It's mainly the subspace part of your statement that I'm concerned about. I see no reason why the space of totally ordered random variables should be closed under taking linear combinations.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-11T16:51:36.939Z · LW(p) · GW(p)
Because that's a requirement of the approach - once it no longer holds true, we no longer increase W.
Maybe this is a better way of phrasing it: W is the space of all utility-valued random variables that have the same value as some constant (by whatever means we establish that).
Then I get linear closure by fiat or assumption: if X=c and Y=d, then X+rY=c+rd, for c, d and r constants (and overloading the = sign to mean "<= and >=").
But my previous post was slightly incorrect - it didn't consider infinite expectations. I will rework that a bit.
↑ comment by Vladimir_Nesov · 2013-02-06T13:04:24.671Z · LW(p) · GW(p)
The point might be that if all infinite expected utility outcomes are considered equally valuable, it doesn't matter which strategy you follow, so long as you reach infinite expected utility, and if that includes the strategy of doing nothing in particular, all games become irrelevant.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-06T13:10:50.372Z · LW(p) · GW(p)
If you don't like comparing infinite expected outcomes (ie if you don't think that (utility) St Petersburg + $1 is better than simply St Petersburg), then just focus on the third problem, which Wei has oddly rejected.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2013-02-06T14:16:46.228Z · LW(p) · GW(p)
then just focus on the third problem, which Wei has oddly rejected
I've often stated my worry that Omega can be used to express problems that have no real-world counterpart, thus distracting our attention away from problems that actually need to be solved. As I stated at the top of this thread, it seems to me that your third problem is such a problem.
↑ comment by Stuart_Armstrong · 2013-02-13T16:13:20.662Z · LW(p) · GW(p)
Got a different situation where you need to choose sensibly between options with infinite expectation: http://lesswrong.com/r/discussion/lw/gng/higher_than_the_most_high/
Is this a more natural setup?
↑ comment by Stuart_Armstrong · 2013-02-07T11:12:48.472Z · LW(p) · GW(p)
Actually, the third problem is probably the most relevant of them all - it's akin to a bounded paperclipper uncertain as to whether they've succeeded. Kind of like: "You get utility 1 for creating 1 paperclip and then turning yourself off (and 0 in all other situations)."
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2013-02-07T23:09:07.641Z · LW(p) · GW(p)
I still don't see how it's relevant, since I don't see a reason why we would want to create an AI with a utility function like that. The problem goes away if we remove the "and then turning yourself off" part, right? Why would we give the AI a utility function that assigns 0 utility to an outcome where we get everything we want but it never turns itself off?
Replies from: Nebu, Stuart_Armstrong↑ comment by Nebu · 2016-01-05T08:50:07.842Z · LW(p) · GW(p)
Why would we give the AI a utility function that assigns 0 utility to an outcome where we get everything we want but it never turns itself off?
The designer of that AI might have (naively?) thought this was a clever way of solving the friendliness problem. Do the thing I want, and then make sure to never do anything again. Surely that won't lead to the whole universe being tiled with paperclips, etc.
↑ comment by Stuart_Armstrong · 2013-02-08T12:50:00.471Z · LW(p) · GW(p)
This can arise indirectly, or through design, or for a host of reasons. That was the first thought that popped into my mind; I'm sure other relevant examples can be had. We might not assign such a utility - then again, we (or someone) might, which makes it relevant.
↑ comment by Stuart_Armstrong · 2013-02-06T13:28:14.136Z · LW(p) · GW(p)
You could generate a random number using a distribution that has infinite expected value,
Does this not mean that such a task is impossible? http://en.wikipedia.org/wiki/Non-deterministic_Turing_machine#Equivalence_with_DTMs
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-02T14:36:15.802Z · LW(p) · GW(p)
I remember the days when I used to consider Ackermann to be a fast-growing function.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2013-02-02T15:43:29.186Z · LW(p) · GW(p)
What's your favourite computable fast-growing function these days?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-02T15:59:23.426Z · LW(p) · GW(p)
I believe I understand ordinals up to the large Veblen ordinal, so the fast-growing hierarchy for that, plus 2, of 9, or thereabouts, would be the largest computable integer I could program without consulting a reference or having to think too hard. There are much larger computable numbers I can program if I'm allowed to use the Internet to look up certain things.
comment by fubarobfusco · 2013-02-02T05:03:49.716Z · LW(p) · GW(p)
I don't expect extreme examples to lead to good guidance for non-extreme ones.
Two functions may both approach infinity, and yet have a finite ratio between them.
Hard cases make bad law.
comment by Kaj_Sotala · 2013-02-03T20:09:21.072Z · LW(p) · GW(p)
This suggests a new explanation for the Problem of Evil: God could have created a world that had no evil and no suffering which would have been strictly better than our world, but then He could also have created a world that was strictly better than that one and so on, so He just arbitrarily picked a stopping point somewhere and we ended up with the world as we know it.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-02-03T20:32:25.806Z · LW(p) · GW(p)
This was brought up in the recent William Craig - Rosenberg debate (don't waste your time), the Sorites "paradox" answer to the Problem of Evil. Rosenberg called it the type of argument that gives philosophy a bad name, and acted too embarrassed by its stupidity to even state it. (Edit: changed the link)
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-02-03T21:34:15.610Z · LW(p) · GW(p)
Man, Rosenberg looked lost in that debate...
Replies from: Kawoomba↑ comment by Kawoomba · 2013-02-03T21:40:59.050Z · LW(p) · GW(p)
(This one versus Peter Atkins is much better; just watch the Atkins parts, Craig recites the same spiel as always.
Atkins doesn't sugar-coat his arguments, but then again, that's to be expected ... ...)
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-02-03T21:49:56.061Z · LW(p) · GW(p)
I stopped watching Craig's debates after Kagan smoked him so thoroughly that even the steelmanned versions of his arguments sounded embarrassing. The Bradley and Parsons debates are also definitely worth listening to, if only because it's enjoyable (and I must admit, it was quite comforting at the time) to hear Craig get demolished.
comment by Luke_A_Somers · 2013-02-01T18:38:58.043Z · LW(p) · GW(p)
Depends on how well I can store information in hell. I imagine that hell is a little distracting.
Alternately, how reliably I can generate random numbers when being offered the deal (I'm talking to God here, not Satan, so I can trust the numbers). Then I don't need to store much information. Whenever I lose count, I ask for a large number of dice of N sides where N is the largest number I can specify in time (there we go with bounding the options again - I'm not saying you were wrong). If they all come up 1, I take the deal. Otherwise I reset my count.
The only objections I can think of to this are based on hell not providing a constant level of marginal disutility, but that's an implicit requirement of the problem. Once I imagine hell getting more tolerable over time so that the disutility only increases linearly, it seems a lot better.
comment by AlexMennen · 2013-02-02T21:31:54.050Z · LW(p) · GW(p)
Infinite utilities violate VNM-rationality. Unbounded utility functions do too, because they allow you to construct gambles that have infinite utility. For instance, if the utility function is unbounded, then there exists a sequence of outcomes such that for each n, the utility of the nth outcome is at least 2^n. Then the gamble that, for each positive integer n, gives you a 1/2^n chance of getting the nth outcome has infinite utility.
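Spelled out, this is just the arithmetic already implicit in the comment:

$$\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} \frac{u(\text{outcome}_n)}{2^{n}} \;\geq\; \sum_{n=1}^{\infty} \frac{2^{n}}{2^{n}} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty.$$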
In the case of utility functions that are bounded but do not have a maximum, the problem is not particularly worrying. If you pick a tiny amount of utility epsilon, you can ensure that you will never sacrifice more than epsilon utility. An agent that does this, while not optimal, will be pretty good provided that it actually does always choose tiny values of epsilon.
comment by Sniffnoy · 2013-02-03T05:13:08.331Z · LW(p) · GW(p)
This may be one of those times where it is worth pointing out once again that if you are a utility-maximizer because you follow Savage's axioms then you are not only a utility-maximizer[0], but a utility-maximizer with a bounded utility function.
[0]Well, except that your notion of probability need only be finitely additive.
comment by Kawoomba · 2013-02-01T20:34:07.082Z · LW(p) · GW(p)
Excellent post.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2013-02-01T20:42:02.661Z · LW(p) · GW(p)
Cheers!
comment by wwa · 2013-02-02T00:17:09.850Z · LW(p) · GW(p)
What should you do?
Figure out that I'm not a perfectly rational agent and go on with the deal for as long as I feel like it.
Bail out when I subjectively can't stand any more of Hell or when I'm fed up with writing lots of numbers on an impossibly long roll of paper.
Of course, these aren't answers that help in developing a decision theory for an AI ...
comment by Shmi (shminux) · 2013-02-01T18:07:30.703Z · LW(p) · GW(p)
First, the original question seems incomplete. Presumably the alternative to accepting the deal is something better than the guaranteed hell forever, say, 50/50 odds of ending up in either hell or heaven.
Second, the initial evaluation of utilities is based on a one-shot setup, so you effectively precommit to not accepting any new deals which screw up the original calculation, like spending an extra day in hell.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-02-01T18:27:23.418Z · LW(p) · GW(p)
The problem starts after you took the first deal. If you cut that part of the story, then the other choice is purgatory forever.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-02-01T18:30:51.451Z · LW(p) · GW(p)
The problem starts after you took the first deal.
I must be missing something. Your original calculation assumes no further (identical) deals, otherwise you would not accept the first one.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-02-01T18:49:07.363Z · LW(p) · GW(p)
The deal is one day at a time: 1 day hell now + 2 days heaven later, then purgatory; or take your banked days in heaven and then purgatory.
At the beginning you have 0 days in heaven in the bank.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-02-01T18:58:01.171Z · LW(p) · GW(p)
I see. Then clearly your initial evaluation of the proposed "optimal" solution (keep banking forever) is wrong, as it picks the lowest utility. As in the other examples, there is no best solution due to unboundedness, but any other choice is better than infinite banking.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-02-01T19:09:11.991Z · LW(p) · GW(p)
I was attempting to complete the problem statement that you thought was incomplete - not to say that it was a good idea to take that path.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-02-01T19:16:22.325Z · LW(p) · GW(p)
I thought it was incomplete? Are you saying that it can be considered complete without specifying the alternatives?
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-02-01T19:44:15.492Z · LW(p) · GW(p)
I think that sorting this muddled conversation out would not be worth the effort required.
comment by Will_Newsome · 2013-02-02T02:43:40.845Z · LW(p) · GW(p)
Busy Beaver numbers are extremely uncomputable, so some agents, by pure chance, may be capable of acquiring much greater utility than others.
Pure chance is one path, divine favor is another. Though I suppose, to the extent divine favor depends on one's policy, bits of omega begotten of divine favor would show up as a computably-anticipatable consequence, even if omega isn't itself computable. Still, a heuristic you didn't mention: ask God what policy He would adopt in your place.
comment by gothgirl420666 · 2013-02-01T20:31:45.638Z · LW(p) · GW(p)
I've heard hell is pretty bad. I feel like after some amount of time in hell I would break down like people who are being tortured often do and tell God "I don't even care, take me straight to purgatory if you have to, anything is better than this!" TBH, I feel like that might even happen at the end of the first day. (But I'd regret it forever if I never even got to check heaven out at least once.) So it seems extremely unlikely that I would ever end up "accidentally" spending an eternity in hell. d:
In all seriousness, I enjoyed the post.
Replies from: Stuart_Armstrong, Andreas_Giger↑ comment by Stuart_Armstrong · 2013-02-01T20:33:11.254Z · LW(p) · GW(p)
Alas, the stereotypical images of Heaven and Hell aren't perfectly setup for our thought experiments! I shall complain to the pope.
↑ comment by Andreas_Giger · 2013-02-01T20:47:53.150Z · LW(p) · GW(p)
You're taking this too literally. The point is that you're immortal, u(day in heaven) > u(day in neither heaven nor hell) > u(day in hell), and u(2 days in heaven and 1 day in hell) > u(3 days in neither heaven nor hell).
You don't even need hell for this sort of problem; suppose God offers to let you either cash in your banked days in heaven (0 at the beginning) right now, or wait a day, after which he will add 1 day to your bank and offer you the same deal again. How long will you wait? What if God halved the additional time with each deal, so you could never reach even 2 days in heaven, but could get arbitrarily close to it?
comment by Shmi (shminux) · 2013-02-01T18:28:42.264Z · LW(p) · GW(p)
You're immortal. Tell Omega any real number r > 0, and he'll give you 1-r utility.
This problem is obviously isomorphic to the previous one under the transformation r = 1/s and rescaling the utility: pick a number s > 0 and rescale the utility by s/(1-r); both are valid operations on utilities.