The Doubling Box
post by Mestroyer · 2012-08-06T05:50:19.798Z · LW · GW · Legacy · 84 comments
Let's say you have a box that has a token in it that can be redeemed for 1 utilon. Every day, its contents double. There is no limit on how many utilons you can buy with these tokens. You are immortal. It is sealed, and if you open it, it becomes an ordinary box. You get the tokens it has created, but the box does not double its contents anymore. There are no other ways to get utilons.
How long do you wait before opening it? If you never open it, you get nothing (you lose! Good day, sir or madam!) and whenever you take it, taking it one day later would have been twice as good.
I hope this doesn't sound like a reductio ad absurdum against unbounded utility functions or against not discounting the future, because if it does, you are in danger of amputating the wrong limb to save yourself from paradox-gangrene.
What if instead of growing exponentially without bound, it decays exponentially to the bound of your utility function? If your utility function is bounded at 10, what if the box is worth 5 the first day, 7.5 the second, 8.75 the third, and so on? Assume all the little details, like remembering about the box and trading in the tokens, are free.
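(For concreteness, here is a minimal sketch of one such schedule; the formula 10 * (1 - 2^-(day+1)) is my own choice, picked only to reproduce the 5, 7.5, 8.75 sequence.)

```python
# A minimal sketch of the bounded variant: value approaches the bound of 10
# instead of doubling forever. The formula is one illustrative choice, not canon.
def bounded_box_value(day, bound=10.0):
    """Utility available if the box is opened on `day` (day 0 -> 5, then 7.5, 8.75, ...)."""
    return bound * (1 - 2.0 ** -(day + 1))

for day in range(6):
    print(day, bounded_box_value(day))  # 5.0, 7.5, 8.75, 9.375, ... never reaching 10
```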
If you discount the future using any function that doesn't ever hit 0, then the growth rate of the tokens can be chosen to more than make up for your discounting.
If it does hit 0 at time T, what if instead of doubling, the box grows at each step by however many utilons your discounting will adjust to 1 at that point, while the intervals between growth steps shrink to nothing? You get an adjusted 1 utilon at time T - 1s, another adjusted 1 utilon at T - 0.5s, another at T - 0.25s, and so on. Suppose you can think as fast as you want, and open the box at arbitrary speed. Also suppose that whatever solution your present self precommits to will be followed by your future self (their decision won't be changed by any change in what times they care about).
EDIT: People in the comments have suggested using a utility function that is both bounded and discounting. If your utility function isn't so strongly discounting that it drops to 0 right after the present, then you can find some time interval very close to the present where the discounting is all nonzero. And if it's nonzero, you can have a box that disappears at the end of that interval, taking all possible utility with it, and that, leading up to that point, grows the utility in steps that shrink to nothing as you approach the end of the interval, increasing the utility-worth of the tokens in the box to compensate for whatever your discounting function is, exactly enough to asymptotically approach your bound.
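(Here is a rough sketch of that construction; the linear discount function and the particular delivery schedule are assumptions of mine, chosen only to make the numbers concrete.)

```python
# Rough sketch of the EDIT's box: the agent's utility is bounded at B and discounted
# by some positive d(t) on [0, eps), with the box vanishing at time eps.
def discount(t, eps=1.0):
    # assumed example: linear discount that hits 0 at eps (any positive d(t) on [0, eps) works)
    return max(0.0, 1.0 - t / eps)

def scheduled_delivery(k, bound=10.0, eps=1.0):
    """Raw utilons placed in the box at time t_k = eps * (1 - 2**-k)."""
    t_k = eps * (1 - 2.0 ** -k)
    target = bound * (1 - 2.0 ** -k)          # discounted value the agent should see
    return t_k, target / discount(t_k, eps)   # raw value grows to offset the discount

for k in range(1, 6):
    t_k, raw = scheduled_delivery(k)
    print(f"t={t_k:.4f}  raw={raw:.1f}  discounted={raw * discount(t_k):.4f}")
# discounted value climbs 5.0, 7.5, 8.75, ... toward the bound, but only before time eps
```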
Here is my solution. You can't assume that your future self will make the optimal decision, or even a good decision. You have to treat your future self as a physical object that your choices affect, and take into account the probability distribution over the decisions your future self will make and how much utility they will net you.
Think of yourself as a Turing machine. If you do not halt and open the box, you lose and get nothing. No matter how complicated your brain, you have a finite number of states. You want to be a busy beaver and take the most possible time to halt, but still halt.
If, at the end, you say to yourself "I just counted to the highest number I could, counting once per day, and then made a small mark on my skin, and repeated, and when my skin was full of marks that I was constantly refreshing to make sure they didn't go away...
...but I could let it double one more time, for more utility!"
If you return to a state you have already been at, you know you are going to be waiting forever and lose and get nothing. So it is in your best interest to open the box.
So there is not a universal optimal solution to this problem, but there is an optimal solution for a finite mind.
I remember reading a while ago about a paradox where you start with $1, and can trade that for a 50% chance of $2.01, which you can trade for a 25% chance of $4.03, which you can trade for a 12.5% chance of $8.07, etc (can't remember where I read it).
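(A quick sketch of the arithmetic in that gamble chain, assuming the pattern is "halve the probability, double the prize plus a cent," which is what the quoted numbers suggest.)

```python
# Each trade halves the win probability and slightly more than doubles the prize,
# so every individual trade has positive expected value, yet trading forever nets nothing.
prob, prize = 1.0, 1.00
for step in range(5):
    print(f"step {step}: {prob:.4%} chance of ${prize:.2f}, EV = ${prob * prize:.5f}")
    prob, prize = prob / 2, prize * 2 + 0.01  # 1.00 -> 2.01 -> 4.03 -> 8.07 -> ...
```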
This is the same paradox with one of the traps for wannabe Captain Kirks (using dollars instead of utilons) removed and one of the unnecessary variables (uncertainty) cut out.
My solution also works on that. Every trade is analogous to a day waited to open the box.
84 comments
Comments sorted by top scores.
comment by jimrandomh · 2012-08-06T07:26:34.706Z · LW(p) · GW(p)
This problem makes more sense if you strip out time and the doubling, and look at this one:
Choose an integer N. Receive N utilons.
This problem has no optimal solution (because there is no largest integer). You can compare any two strategies to each other, but you cannot find a supremum; the closest thing available is an infinite series of successively better strategies, which eventually passes any single strategy.
In the original problem, the options are "don't open the box" or "wait N days, then open the box". The former can be crossed off; the latter has the same infinite series of successively better strategies. (The apparent time-symmetry is a false one, because there are only two time-invariant strategies, and they both lose.)
The way to solve this in decision theory is to either introduce finiteness somewhere that caps the number of possible strategies, or to output an ordering over choices instead of a single choice. The latter seems right; if you define and prove an infinite sequence of successively better options, you still have to pick one; and lattices seem like a good way to represent the results of partial reasoning.
↑ comment by Oscar_Cunningham · 2012-08-07T13:27:39.170Z · LW(p) · GW(p)
This is pretty much the only comment in the entire thread that doesn't fight the hypothetical. Well done, I guess?
↑ comment by Giles · 2012-08-07T16:58:34.669Z · LW(p) · GW(p)
This seems like a helpful simplification of the problem. Note that it also works if you receive 1-1/N utilons, so as with the original post this isn't an unbounded utility issue as such.
Just one point though - in the original problem specification it's obvious what "choose an integer N" means: opening a physical box on day n corresponds to choosing 2^n. But how does your problem get embedded in reality? Do you need to write your chosen number down? Assuming there's no time limit to writing it down then this becomes very similar to the original problem except you're multiplying by 10 instead of 2 and the time interval is the time taken to write an extra digit instead of a day.
↑ comment by prase · 2012-08-07T20:16:26.856Z · LW(p) · GW(p)
... to write an extra digit ...
Writing decimal digits isn't the optimal way to write big numbers. (Of course this doesn't invalidate your point.)
↑ comment by Oscar_Cunningham · 2012-08-09T14:44:50.952Z · LW(p) · GW(p)
It kind of is if you have to be able to write down any number.
comment by TrE · 2012-08-06T06:33:07.295Z · LW(p) · GW(p)
I've never met an infinite decision tree in my life so far, and I doubt I ever will. It is a property of problems with an infinite solution space that they can't be solved optimally, and it doesn't reveal any decision theoretic inconsistencies that could come up in real life.
Consider this game with a tree structure: You pick an arbitrary natural number, and then, your opponent does as well. The player who chose the highest number wins. Clearly, you cannot win this game, as no matter which number you pick, the opponent can simply add one to that number. This also works with picking a positive rational number that's closest to 1 - your opponent here adds one to the denominator and the numerator, and wins.
The idea to use a busy beaver function is good, and if you can utilize the entire universe to encode the states of the busy beaver with the largest number of states possible (and a long enough tape), then that constitutes the optimal solution, but that only takes us further out into the realm of fiction.
↑ comment by fubarobfusco · 2012-08-06T16:52:17.921Z · LW(p) · GW(p)
I've never met an infinite decision tree in my life so far, and I doubt I ever will. It is a property of problems with an infinite solution space that they can't be solved optimally, and it doesn't reveal any decision theoretic inconsistencies that could come up in real life.
"You are finite. Zathras is finite. This utility function has infinities in it. No, not good. Never use that."
— Not Babylon 5
↑ comment by Mestroyer · 2012-08-06T23:32:47.439Z · LW(p) · GW(p)
But I do not choose my utility function as a means to get something. My utility function describes what I want to choose means to get. And I'm pretty sure it's unbounded.
↑ comment by fubarobfusco · 2012-08-07T00:24:37.784Z · LW(p) · GW(p)
You've only expended a finite amount of computation on the question, though; and you're running on corrupted hardware. How confident can you be that you have already correctly distinguished an unbounded utility function from one with a very large finite bound?
(A genocidal, fanatical asshole once said: "I beseech you, in the bowels of Christ, think it possible that you may be mistaken.")
↑ comment by OrphanWilde · 2012-08-06T14:34:27.914Z · LW(p) · GW(p)
The tax man's dilemma, an infinite decision tree grounded in reality:
Assume you're the anthropomorphization of government. And you have a decision to make: You need to decide the ideal tax rate for businesses.
In your society, corporations reliably make 5% returns on investments, accounting for inflation. That money is reliably reinvested, although not necessarily in the same corporation.
How should you tax those returns in order to maximize total utility? You may change taxes at any point. Also, you're the anthropomorphic representation of government - you are, for all intents and purposes, immortal.
Assume a future utility discount rate of less than the investment return rate, and assume you don't know the money-utility relationship - you can say you weigh the possibility of future disasters which require immense funds against the possibility that money has declining utility over time to produce a constant relationship for simplicity, if you wish. Assume that your returns will be less than corporate returns, and corporate utility will be less than your utility. (Simplified, you produce no investment returns, corporations produce no utility.)
↑ comment by Grognor · 2012-08-06T15:01:00.074Z · LW(p) · GW(p)
I like this. I was going to say something like,
"Suppose , what does that say about your solutions designed for real life?" and screw you I hate when people do this and think it is clever. Utility monster is another example of this sort of nonsense.
but you said the same thing, and less rudely, so upvoted.
↑ comment by Mestroyer · 2012-08-06T06:56:08.802Z · LW(p) · GW(p)
If it's impossible to win, because your opponent always picks second, then every choice is optimal.
If you pick simultaneously, picking the highest number you can describe is optimal, so that's another situation where there is no optimal solution for an infinite mind, but for a finite mind, there is an optimal solution.
comment by [deleted] · 2012-08-06T14:05:24.833Z · LW(p) · GW(p)
After considering this problem, what I found is that, surprisingly fast, the specifics of the box's physical abilities and implementation become relevant. I mean, let's say Clippy is given this box, has already decided to wait a mere 1 year from day 1 (which is 365.25 days of doubling), and 1 paperclip is 1 utilon. At some point during this time, before the end of it, there are more paperclips than there are atoms in the visible universe, since he's predicted to gain 2^365.25 paperclips (which is apparently close to 8.9*10^109) and the observable universe is only estimated to contain 10^80 atoms. So to make up for that, let's say the box converts every visible subatomic particle into paperclips instead.
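(A quick check of that arithmetic, just restating the comment's numbers.)

```python
# Sanity check on the figures above: 2^365.25 versus the ~10^80 atoms usually
# estimated for the observable universe.
from math import log10

doublings = 365.25
print(f"2^{doublings} ~= 10^{doublings * log10(2):.1f}")  # about 10^110, i.e. ~8.9e109
print("estimated atoms in the observable universe: ~10^80")
```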
That's just 1 year, and the box has already announced it will convert approximately every visible subatomic particle into pure paperclip bliss!
And then another single doubling... (1 year and 1 day) does what? Even if Clippy's utility function is unbounded, it should presumably still link back to some kind of physical state, and at this point the box starts having to implement increasingly physically impossible ideas to double paperclip utility, like:
Breaking the speed of light.
Expanding the paperclip conversion into the past.
Expanding the paperclip conversion into additional branches of many worlds.
Magically protecting the paperclips from the ravages of time, physics, or condensing into black holes, despite the fact that it is supposed to lose all power after being opened.
And that's just 1 year! We aren't even close to a timeless eternity of waiting yet, and the box already has to smash the currently known laws of physics (more so than it did by converting every visible subatomic particle into paperclips) to do more doublings, and will then lose power afterwards.
Do the laws of physics resume being normal after the box loses power? If so, massive chunks of utility will fade away almost instantly (which would seem to indicate the Box was not very effective), but if not I'm not sure how the loop below would get resolved:
The Box is essentially going to rewrite the rules of the universe permanently,
Which would affect your utility calculations, which are based on physics,
Which would affect how the Box rewrote the rules of the universe,
Which would affect your utility calculations, which are based on physics,
Except instead of stopping and letting you and the box resolve this loop, it must keep doubling, so it keeps changing physics more.
By year 2, it seems like you might be left with either:
A solution, in which case whatever the box will rewrite the laws of physics to, you understand and agree with it and can work on the problem based on whatever that solution is. (But I have no idea how you could figure out what this solution would be in advance, since it depends on the specific box?)
Or, an incredibly intractable metaphysics problem which is growing more complicated faster than you can ever calculate, in which case you don't even understand what the box is doing anymore.
The reason I said that this was incredibly fast is that my original guess was that it would take at least 100 years of daily doubling for the proposed world to become that complicated, but when I tried doing a bit of math it didn't take anywhere near that long.
Edit: Fixed a few typos and cleared up grammar.
↑ comment by Mitchell_Porter · 2012-08-06T14:52:36.867Z · LW(p) · GW(p)
This is a thought experiment which is not meant to be possible in our world. But such thought experiments are a way of testing the generality of your decision procedures - do they work in all possible worlds? If you must imagine a physics that makes the eternal doubling possible, try picturing a network of replicating baby universes linked by wormholes.
↑ comment by Vaniver · 2012-08-06T16:05:27.038Z · LW(p) · GW(p)
But such thought experiments are a way of testing the generality of your decision procedures - do they work in all possible worlds?
As in the old saw, part of your strength as a real decision-maker is that your decision procedures choose less well in impossible worlds than in possible worlds.
↑ comment by Nisan · 2012-08-06T18:34:50.461Z · LW(p) · GW(p)
Why does that have to be true?
↑ comment by Vaniver · 2012-08-06T20:06:30.108Z · LW(p) · GW(p)
It doesn't have to be true. It's desirable because decision procedures that rely on other knowledge about reality are faster/better/cheaper than ones that don't import knowledge about reality. Specialization for the situation you find yourself in is often useful, though it does limit flexibility.
↑ comment by Giles · 2012-08-07T16:32:41.910Z · LW(p) · GW(p)
Utility doesn't have to be proportional to the amount of some particular kind of physical stuff in the universe. If the universe contained 1 paperclip, that could be worth 2 utilons, if it contained 2 paperclips then it could be worth 4 utilons, if it contained 20 paperclips then it could be worth 2^20 utilons. The box would then double your utility each day just by adding one physical paperclip.
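(A tiny sketch of that point, with the assumed mapping utility = 2^paperclips.)

```python
# If utility is 2**paperclips, each additional paperclip doubles utility,
# so the box can "double your utility" by adding one physical paperclip per day.
def utility(paperclips):
    return 2 ** paperclips

for clips in range(1, 5):
    print(clips, utility(clips), utility(clips) / utility(clips - 1))  # ratio is always 2.0
```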
I still think these kinds of considerations are worth thinking about though. Your utility function might grow faster than a busy beaver function, but then the doubling box is going to have trouble waiting the right length of time to deliver the
comment by DuncanS · 2012-08-06T23:33:19.114Z · LW(p) · GW(p)
Your other option is to sell the box to the highest bidder. That will probably be someone who's prepared to wait longer than you, and will therefore be able to give you a higher price than the utilons you'd have got out of the box yourself. You get the utilons today.
↑ comment by A1987dM (army1987) · 2012-08-07T15:12:56.550Z · LW(p) · GW(p)
Why does my fight-the-hypothetical module never think about that? (It does often think about options which wouldn't be available in the Least Convenient Possible World -- but not this one, until someone else points it out.)
comment by Vladimir_Nesov · 2012-08-06T09:17:19.280Z · LW(p) · GW(p)
If you can use mixed strategies (i.e. are not required to be deterministically predictable), you can use the following strategy for the doubling-utility case: every day, toss a coin; if it comes up heads, open the box, otherwise wait another day. The expected utility of each day is a constant 1/2, since the probability that the box is first opened on a particular day halves with each subsequent day while the utility doubles, so the series diverges and you get infinite total expected utility.
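(A small sketch of the series described here, assuming the box holds 1 utilon on day 0 and doubles daily.)

```python
# Partial sums of the expected utility of the coin-toss strategy:
# open the box on the first heads; each day contributes (1/2)**(n+1) * 2**n = 1/2.
def partial_expected_utility(days, p=0.5):
    return sum(p * (1 - p) ** n * 2 ** n for n in range(days))

for days in (1, 10, 100):
    print(days, partial_expected_utility(days))  # 0.5, 5.0, 50.0 -- grows without bound
```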
↑ comment by Kindly · 2012-08-06T15:02:17.045Z · LW(p) · GW(p)
Even better, however, would be to toss two coins every day, and only open the box if both come up heads :)
↑ comment by Pentashagon · 2012-08-06T18:24:07.170Z · LW(p) · GW(p)
This suggests a strategy; tile the universe with coins and flip each of them every day. If they all come up heads, open the box (presumably it's full of even more coins).
↑ comment by Giles · 2012-08-07T16:39:54.996Z · LW(p) · GW(p)
That way your expected utility becomes INFINITY TIMES TWO! :)
↑ comment by Kindly · 2012-08-08T12:41:48.199Z · LW(p) · GW(p)
There are meaningful ways to compare two outcomes which both have infinite expected utility. For example, suppose X is your favorite infinite-expected-utility outcome. Then a 20% chance of X (and 80% chance of nothing) is better than a 10% chance of X. Something similar happens with tossing two coins instead of one, although it's more subtle.
↑ comment by aaronde · 2012-08-08T03:34:37.157Z · LW(p) · GW(p)
Actually what you get is another divergent infinite series that grows faster. They both grow arbitrarily large, but the one with p=0.25 grows arbitrarily larger than the series with p=0.5, as you compute more terms. So there is no sense in which the second series is twice as big, although there is a sense in which it is infinitely larger. (I know your point is that they're both technically the same size, but I think this is worth noting.)
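(A sketch comparing the two partial-sum sequences; the point is that the p=0.25 sums outgrow the p=0.5 sums by an unbounded factor.)

```python
# Compare partial sums of expected utility for per-day opening probabilities 0.5 and 0.25.
def partial_eu(days, p):
    return sum(p * (1 - p) ** n * 2 ** n for n in range(days))

for days in (10, 20, 40):
    s_half, s_quarter = partial_eu(days, 0.5), partial_eu(days, 0.25)
    print(days, s_half, round(s_quarter, 1), round(s_quarter / s_half, 1))  # ratio keeps growing
```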
↑ comment by aaronde · 2012-08-08T03:35:17.356Z · LW(p) · GW(p)
This is what I was going to say; it's consistent with the apparent time symmetry, and is the only solution that makes sense if we accept the problem as stated. But it seems like the wrong answer intuitively, because it means that every strategy is equal, as long as the probability of opening the box on a given day is in the half-open interval (0,0.5]. I'd certainly be happier with, say, p=0.01 than p=0.5, (and so would everyone else, apparently) which suggests that I don't actually have a real-valued utility function. This might be a good argument against real-valued utility functions in general (bounded or not). Especially since a lot of the proposed solutions here "fight the hypothetical" by pointing out that real agents can only choose from a finite set of strategies.
comment by Kindly · 2012-08-06T15:15:26.500Z · LW(p) · GW(p)
So I don't really know how utilons work, but here is an example of a utility function which is doubling box-proof. It is bounded; furthermore, it discounts the future by changing the bound for things that only affect the future. So you can get up to 1000 utilons from something that happens today, up to 500 utilons from something that happens tomorrow, up to 250 utilons from something that happens two days from now, and so on.
Then the solution is obvious: if you open the box in 4 days, you get 16 utilons; if you open the box in 5 days, you'd get 32 but your utility function is bounded so you only get 31.25; if you open the box in 6 days or more, the reward just keeps shrinking. So 5 days is the best choice, assuming you can precommit to it.
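(A sketch of that calculation, assuming the box holds 1 utilon on day 0 and the bound halves from 1000 each day.)

```python
# Box contents double daily; the utility bound halves daily from 1000.
def utility_if_opened(day):
    return min(2 ** day, 1000 / 2 ** day)

best = max(range(15), key=utility_if_opened)
print(best, utility_if_opened(best))  # day 5, 31.25 utilons -- waiting longer only loses
```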
A suitable transformation of this function probably does capture the way we think about money: the utility of it is both bounded and discounted.
↑ comment by wedrifid · 2012-08-07T06:25:57.593Z · LW(p) · GW(p)
So I don't really know how utilons work, but here is an example of a utility function which is doubling box-proof. It is bounded; furthermore, it discounts the future by changing the bound for things that only affect the future. So you can get up to 1000 utilons from something that happens today, up to 500 utilons from something that happens tomorrow, up to 250 utilons from something that happens two days from now, and so on.
You are right that such a utility function cannot be supplied with a (utility) doubling box problem---and for much the same reason that most utility functions that approximate human preferences could not be exposed to a doubling box. Nevertheless this amounts to refusing to engage with the game theory example rather than responding to it.
↑ comment by Mestroyer · 2012-08-07T00:10:41.572Z · LW(p) · GW(p)
This thwarts the original box, but I just edited the OP to describe another box that would get this utility function in trouble.
↑ comment by Kindly · 2012-08-07T05:20:12.501Z · LW(p) · GW(p)
So you're suggesting, in my example, a box that approaches 500 utilons over the course of a day, then disappears?
This isn't even a problem. I just need to have a really good reaction time to open it as close to 24 hours as possible. Although at some point I may decide that the risk of missing the time outweighs the increase in utilons. Anyway this isn't even a controversial thought experiment in that case.
↑ comment by Mestroyer · 2012-08-07T22:53:51.073Z · LW(p) · GW(p)
I thought you would realize I was assuming what I did for the case with the utility function that discounts completely after a certain time: "Suppose you can think as fast as you want, and open the box at arbitrary speed."
But if your utility function discounts based on the amount of thinking you've done, not on time, I can't think of an analogous trap for that.
↑ comment by Kindly · 2012-08-08T12:37:35.499Z · LW(p) · GW(p)
So, ideally, these utility functions wouldn't be arbitrary, but would somehow reflect things people might actually think. So, for example, if the box is only allowed to contain varying amounts of money, I would want to discount based on time (for reasons of investment if nothing else) and also put an upper bound on the utility I get (because at some point you just have so much money you can afford pretty much anything).
When arbitrary utilons get mixed in, it becomes complicated, because I discount different ways to get utility at a different rate. For instance, a cure for cancer would be worthless 50 years from now if people figured out how to cure cancer in the meantime already, at which point you'd total up all the casualties from now until then and discount based on those. This is different from money because even getting a dollar 100 years from now is not entirely pointless.
On the other hand, I don't think my utility function discounts based on the amount of thinking I've done, at least not for money. I want to figure out what my true response to the problem is, in that case (which is basically equivalent to the "You get $X. What do you want X to be?" problem). I think it's that after I've spent a lot of time thinking about it and decided X should be, say, 100 quadrillion, which gets me 499 utilons out of a maximum of 500, then making the decision and not thinking about it more might be worth more than 1 utilon to me.
comment by OrphanWilde · 2012-08-07T14:33:35.627Z · LW(p) · GW(p)
Am I correct in assessing that your solution is to stop when you can no longer comprehend the value in the box? That is, when an additional doubling has no subjective meaning to you? (Until that point, you're not in a state loop, as the value with each doubling provides an input you haven't encountered before.)
I was about to suggest stopping when you have more utilons than your brain has states (provided you could measure such), but then it occurred to me the solutions might be analogous, even if they arrive at different numbers.
↑ comment by Mestroyer · 2012-08-07T22:44:17.739Z · LW(p) · GW(p)
I wouldn't want to stop when I couldn't comprehend what was in the box. I always want more utility, whether I can understand that I have it or not. My solution is to wait as long as you can before waiting any longer puts you in an infinite loop and guarantees you will never get any.
↑ comment by OrphanWilde · 2012-08-08T13:41:30.681Z · LW(p) · GW(p)
As long as you comprehend the number in the box, you're not in an infinite loop. The input is different. Once the number is no longer meaningful, you're potentially in an infinite loop; the input is the same.
↑ comment by Mestroyer · 2012-08-10T01:29:26.585Z · LW(p) · GW(p)
I'm pretty sure I could stay out of an infinite loop much longer than I could comprehend what was in the box. The contents of the box are growing exponentially with the number of days. If I just count the number of days, I can stay in the realm of small numbers much longer.
comment by Decius · 2012-08-06T20:51:41.040Z · LW(p) · GW(p)
I wait until there are so many utilons in the box that I can use them to get two identical boxes and have some utilons left over. Every time a box has more than enough utilons to make two identical boxes, I repeat that step. Any utilons not used to make new boxes are the dividend of the investment.
↑ comment by [deleted] · 2012-08-07T13:27:31.492Z · LW(p) · GW(p)
Now that you mention that, that's true, and it gives me several other weird ideas. The box gives you tokens that you exchange for utilons, which seem like they are supposed to be defined as "Whatever you want/define them to be, based on your values."
Ergo, imagine a Happy Michaelos that gets about twice as much positive utilons from everything compared to Sad Michaelos. Sad Michaelos gets twice as much NEGATIVE utilons from everything compared to Happy Michaelos.
Let's say a cookie grants Happy Michaelos 1 utilons. It would take two cookies to grant Sad Michaelos 1 utilons. Let's say a stubbed toe grants Sad Michaelos -1 utilons. It would take two stubbed toes to grant Happy Michaelos -1 utilons.
So if Happy Michaelos or Sad Michaelos gets to open the box and they are friends who substantially share utility and cookies... It should be Sad Michaelos who does so (both will get more cookies that way.)
As far as I can tell, this is a reasonable interpretation of the box.
So, I should probably figure out how the people below would work, since they are increasingly unreasonable interpretations of the box:
Extremely Sad Michaelos:
Is essentially 1 million times worse off than Sad Michaelos. Ergo, if the logic above holds, Extremely Sad Michaelos gets 2 million cookies from turning in a single token.
Hyper Pout Michaelos:
Is essentially 1 billion times worse off than Sad Michaelos. He also has a note in his utility function that he will receive -infinity(aleph 0) utilons if he does not change his utility function back to Sad Michaelos's utility function within 1 second after the box is powerless and he has converted all of his tokens. If the logic above holds, Hyper Pout Michaelos gets 1 billion times more cookies than Sad Michaelos, and then gets to enjoy substantially more utilons from them!
Omnidespairing Michaelos:
Is almost impossible to grant utilons to. The certainty of omnipotence grants him 1 utilon. Everything else that might be positive (say, a 99% chance of omnipotence) grants him 0 utilons.
This is a coherent utility function. You can even live and have a normal life with it if you also want to avoid negative utilons (eating might only grant -infinite (aleph 0) utilons and not eating might grant -infinite (aleph 1) utilons).
Box Cynical Despairmax Michaelos:
Gets some aleph of negative infinite utilons from every decision whatsoever. Again, he can make decisions and go throughout the day, but any number of the tokens that the box grants don't seem to map to anything relevant on his utility function. For instance, waiting a day might cost him - infinite (aleph 2) utilons. Adding a finite number of utilons is irrelevant. He immediately opens the box so he can discard the useless tokens and get back to avoiding the incomprehensible horrors of life, and this is (as far as I can tell) a correct answer for him.
It seems like at least some of the utility functions above cheat the box, but I'm not sure which ones go too far, if the sample is reasonable. They all give entirely different answers as well:
1: Go through life as sad as possible.
2: Go through life pretending to be sad to get more and then actually be happy later.
3: Only omnipotence will make you truly happy. Anything else is an endless horror.
4: Life is pain, and the box is trying to sell you something useless, ignore it and move on.
↑ comment by Decius · 2012-08-07T17:43:19.643Z · LW(p) · GW(p)
If changing my utility function has expected positive results, based both on my current utility function and on the proposed change, then...
Here the problem is that the utilon is not a unit that can be converted into any other unit, including physical phenomena.
comment by RolfAndreassen · 2012-08-06T16:45:29.662Z · LW(p) · GW(p)
What if instead of growing exponentially without bound, it decays exponentially to the bound of your utility function?
I think you mean 'asymptotically'.
comment by falenas108 · 2012-08-06T13:37:19.916Z · LW(p) · GW(p)
A way to think about this problem that puts you in near mode is to imagine what the utility might look like. Ex:
Day 1: Finding a quarter on the ground
Day 2: A child in Africa getting $5
.....
Day X: Curing cancer
Day X+1: Curing cancer, Alzheimers, and AIDS.
On one hand, by waiting a day, more people would die of cancer. On the other, by not waiting, you'd doom all those future people to die of AIDS and Alzheimers.
↑ comment by Giles · 2012-08-07T16:37:31.603Z · LW(p) · GW(p)
Suppose instead of multiplying the utility by 2 each day, the box multiplied the utility by 1. Would it look like this?
Day 1: Curing cancer
Day 2: Curing cancer
Day 3: Curing cancer ...
Probably not - each of those "curing cancer" outcomes is not identical (cancer gets cured on a different day) so you'd assign them different utilities. In order to conform to the specification, the box would have to add an extra sweetener each day in order to make up for a day's worth of cancer deaths.
↑ comment by tim · 2012-08-08T02:46:37.720Z · LW(p) · GW(p)
You are adding a condition that was not present in the original problem. Namely, that every day you do not open the box, you lose some number of utilions.
↑ comment by falenas108 · 2012-08-08T13:58:30.790Z · LW(p) · GW(p)
Whoops, you're right.
comment by Pentashagon · 2012-08-06T18:37:26.342Z · LW(p) · GW(p)
How exactly do the constant utilons in the box compensate me for how I feel the day after I open the box (I could have doubled my current utility!)? The second day after (I could have quadrupled my current utility!!)? The Nth day after (FFFFFFFFFFFFUUUUU!!!)? I'm afraid the box will rewrite me with a simple routine that says "I have 2^(day-I-opened-the-box - 1) utility! Yay!"
comment by pragmatist · 2012-08-06T07:06:57.755Z · LW(p) · GW(p)
If you return to a state you have already been at, you know you are going to be waiting forever and lose and get nothing.
You seem to be assuming here that returning to a state you have already been at is equivalent to looping your behavior, so that once a Turing machine re-enters a previously instantiated state it cannot exhibit any novel behavior. But this isn't true. A Turing machine can behave differently in the same state provided the input it reads off its tape is different. The behavior must loop only if the combination of Turing machine state and tape configuration recurs. But this need never happen as long as the tape is infinite. If there were an infinite amount of stuff in the world, even a finite mind might be able to leverage it to, say, count to an arbitrarily high number.
Now you might object that it is not only minds that are finite, but also the world. There just isn't an infinite amount of stuff out there. But that same constraint also rules out the possibility of the utility box you describe. I don't see how one could squeeze arbitrarily large amounts of utility into some finite quantity of matter.
comment by pragmatist · 2012-08-06T06:23:52.925Z · LW(p) · GW(p)
You have given reasons why requiring bounded utility functions and discounting the future are not adequate responses to the problem if considered individually. But your objection to the bounded utility function response assumes that future utility isn't discounted, and your objection to the discounting response assumes that the utility function is unbounded. So what if we require both that the utility function must be bounded and that future utility must be discounted exponentially? Doesn't that get around the paradox?
I remember reading a while ago about a paradox where you start with $1, and can trade that for a 50% chance of $2.01, which you can trade for a 25% chance of $4.03, which you can trade for a 12.5% chance of $8.07, etc (can't remember where I read it).
The problem statement isn't precisely the same as what you specify here, but were you thinking of the venerable St. Petersburg paradox?
↑ comment by Mestroyer · 2012-08-06T06:52:57.457Z · LW(p) · GW(p)
If your utility function is bounded and you discount the future, then pick an amount of time after now, epsilon, such that the discounting by then is negligible. Then imagine that the box disappears if you don't open it by then. at t = now + epsilon * 2^-1, the utilons double. At 2^-2, they double again. etc.
But if your discounting is so great that you do not care about the future at all, I guess you've got me.
This isn't the St. Petersburg paradox (though I almost mentioned it) because in that, you make your decision once at the beginning.
↑ comment by pragmatist · 2012-08-06T08:42:39.263Z · LW(p) · GW(p)
If your utility function is bounded and you discount the future, then pick an amount of time after now, epsilon, such that the discounting by then is negligible. Then imagine that the box disappears if you don't open it by then. at t = now + epsilon * 2^-1, the utilons double. At 2^-2, they double again. etc.
Perhaps I am misinterpreting you, but I don't see how this scheme is compatible with a bounded utility function. For any bound n, there will be a time prior to epsilon where the utilons in the box will be greater than n.
When you say "At 2^-2...", I read that as "At now + epsilon 2^-1 + epsilon 2^-2...". Is that what you meant?
comment by IlyaShpitser · 2012-08-06T06:13:49.240Z · LW(p) · GW(p)
If you know the probability distribution P(t) of you dying on day t, then you can solve exactly for optimal expected lifetime utilons out of the box. If you don't know P(t), you can do some sort of adaptive estimation as you go.
↑ comment by Mestroyer · 2012-08-06T06:56:33.280Z · LW(p) · GW(p)
P(t) = 0.
↑ comment by IlyaShpitser · 2012-08-06T08:06:14.883Z · LW(p) · GW(p)
Why is this an interesting problem?
comment by MileyCyrus · 2012-08-06T19:35:15.677Z · LW(p) · GW(p)
I asked Ask Philosophers about this a few years ago.
comment by Manfred · 2012-08-06T06:22:54.472Z · LW(p) · GW(p)
You could build a machine that opens the box far in the future, at the moment when the machine's reliability starts degrading faster than the utilons increase. This maximizes your expected utility.
Or if you're not allowed to build a machine, you simply do the same with yourself (depending on our model, possibly multiplying by your expected remaining lifespan).
comment by aaronde · 2012-08-08T04:38:24.295Z · LW(p) · GW(p)
Bringing together what others have said, I propose a solution in three steps:
1. Adopt a mixed strategy where, for each day, you open the box on that day with probability p. The expected utility of this strategy is the sum of p * (1-p)^n * 2^n for n = 0 to infinity, which diverges for any p in the half-open interval (0,0.5]. In other words, you get infinite EU as long as p is in (0,0.5]. This is paradoxical, because it means a strategy with a 0.5 risk of ending up with only 1 utilon is as good as any other.
2. Extend the range of our utility function to a number system with different infinities, where a faster-growing series has greater value than a slower-growing series, even if they both grow without bound. Now the EU of the mixed strategy continues to grow as p approaches 0, bringing us back to the original problem: The smaller p is, the better, but there is no smallest positive real number.
3. Realize that physical agents can only choose between a finite number of strategies (because we only have a finite number of possible mind states). So, in practice, there is always a smallest p: the smallest p we can implement in reality.
So that's it. Build a random number generator with as many bits of precision as possible. Run it every day until it outputs 0. Then open the box. This strategy improves on the OP because it yields infinite expected payout, and is intuitively appealing because it also has a very high median payout, with a very small probability of a low payout. Also, it doesn't require precommitment, which seems more mathematically elegant because it's a time-symmetric strategy for a time-symmetric game.
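(A minimal sketch of that strategy, with a k-bit generator standing in for "as many bits of precision as possible.")

```python
# Open the box on the first day a k-bit random draw comes up 0,
# i.e. the smallest implementable per-day probability p = 2**-k.
import random

def days_waited(k_bits):
    day = 0
    while random.getrandbits(k_bits) != 0:  # stops each day with probability 2**-k_bits
        day += 1
    return day

random.seed(0)
print([days_waited(8) for _ in range(5)])  # typically waits on the order of 2**8 days
```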
comment by wedrifid · 2012-08-06T17:05:27.949Z · LW(p) · GW(p)
How long do you wait before opening it? If you never open it, you get nothing (you lose! Good day, sir or madam!) and whenever you take it, taking it one day later would have been twice as good.
When do I "lose" precisely? When I never take it? By happy coincidence 'never' happens to be the very next day after I planned to open the box!
comment by billswift · 2012-08-06T12:49:53.619Z · LW(p) · GW(p)
There are no other ways to get utilons.
Is a weakness in your argument. Either you can survive without utilons, a contradiction to utility theory, or you wait until your "pre-existing" utilons are used up and you need more to survive.
↑ comment by wedrifid · 2012-08-06T17:18:21.714Z · LW(p) · GW(p)
Is a weakness in your argument. Either you can survive without utilons, a contradiction to utility theory, or you wait until your "pre-existing" utilons are used up and you need more to survive.
Utilons don't need to be associated with survival. Survival can be a mere instrumental good used to increase the amount of actual utilons generated (by making, say, paperclips). I get the impression that you mean something different by the word than what the post (and the site) mean.
↑ comment by Mestroyer · 2012-08-06T14:13:45.548Z · LW(p) · GW(p)
What's wrong with not having any more reason to live after you get the utilons?
comment by asparisi · 2012-08-06T16:00:25.352Z · LW(p) · GW(p)
If I am actually immortal, and there is no other way to get utilons, then each day the value of me opening the box is something like:
Value=Utilons/Future Days
Since my Future Days are supposedly infinite, we are talking about at best an infinitesimal difference between me opening the box on Day 1 and me opening the box on Day 3^^^^3. There is no actual wrong day to open the box. If that seems implausible, it is because the hypothetical itself is implausible.
↑ comment by wedrifid · 2012-08-06T17:28:51.257Z · LW(p) · GW(p)
If I am actually immortal, and there is no other way to get utilons, then each day the value of me opening the box is something like:
Value=Utilons/Future Days
The expected value of opening the box is:
Value=Utilons
That is all. That number already represents how much value is assigned to the state of the universe given that decision. Dividing by only future days is an error. Assigning a different value to the specified reward based on whether days are in the past or the future changes the problem.
↑ comment by asparisi · 2012-08-06T17:54:09.763Z · LW(p) · GW(p)
Presumably, if utilons are useful at all, then you use them. Usually, this means that some are lost each day in the process of using them.
Further, unless the utilons represent some resource that is non-entropic, I will lose some number of utilons each day even if they aren't lost by me using them. This works out to the same answer in the long run.
Let's assume we have an agent Boxxy, an immortal AI whose utility function is that opening the box tomorrow is twice as good as opening it today. Once he opens the box, his utility function assigns that much value to the universe. Let's assume this is all he values. (This gets us around a number of problems for the scenario.)
Even in this scenario, unless Boxxy is immune to entropy, some amount of information (and thus, some perception of utility) will be lost over time. Over a long enough time, Boxxy will eventually lose the memory of opening the Box. Even if Boxxy is capable of self-repair in the face of entropy, unless Boxxy is capable of actually not undergoing entropy, some of the Box-information will be lost. (Maybe Boxxy hopes that it can replace it with an identical memory for its utility function, although I would suspect at that point Boxxy might just decide to remember having opened the Box at a nearer future date.) Eventually, Boxxy's memory, and thus Boxxy's utilons, will either be completely artificial with at best something like a causal relationship to previous memory states of opening the box, or Boxxy will lose all of its utilons.
Of course, Boxxy might never open the box. (I am not a superintelligence obsessed with box opening. I am a human intelligence obsessed with things that Boxxy would find irrelevant. So I can only guess as to what a box-based AGI would do.) In that case the utilons won't degrade, but Boxxy can still expect a value of 0.
Frankly, the problem is hard to think about at that level, because real immortality (as the problem requires) would require some way to ensure that entropy doesn't occur while some sort of process still occurs, which seems a contradiction in terms. I guess this could be occurring in a universe without entropy (but which somehow has other processes), although both my intuitions and my knowledge are so firmly rooted in a universe that has entropy that I don't have a good grounding on how to evaluate problems in such a universe.
comment by Mitchell_Porter · 2012-08-06T14:33:47.792Z · LW(p) · GW(p)
I admire the way this post introduces an ingenious problem and an ingenious answer.
↑ comment by Grognor · 2012-08-06T15:14:23.349Z · LW(p) · GW(p)
Nonsense. The problem posed has always been around, and the solution is just to avoid repeating the same state twice, because that results in a draw.