Pascal's Mugging and One-shot Problems
post by Mathilde · 2019-04-22T22:12:21.230Z · LW · GW · 12 comments
I've had some thoughts on Pascal's Mugging which might be worth sharing. I'm assuming some familiarity with Pascal's Mugging in this post.
Before we get really started, let's transform Pascal's Mugging into a problem that is easier to reason about but still gets at the core idea.
First, forget "utility", forget money, we're maximising paperclips. I know it's kind of silly, but using money or utility really tends to muddy up thinking. I was surprised how much easier it was to reason about the problem when I switched to paperclips[0].
Now, here's my version of the problem. Suppose there's a casino in which there is a googolplex-sided die (10^(10^100) sides). You can pay 10 paperclips, which are destroyed, to roll the die. If you roll a 1, the casino manufactures 3^^^^3 paperclips. Otherwise, nothing else happens and you have simply lost 10 paperclips.
3^^^^3 is much, much, much larger than a googolplex, so expected utility maximisation overwhelmingly says you should play at this casino if you're trying to maximise the number of paperclips.
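Spelling the arithmetic out (with G = 10^(10^100) standing for the number of sides, a symbol introduced just for this comparison):

$$\mathrm{EU}(\text{play}) = -10 + \frac{3\uparrow\uparrow\uparrow\uparrow 3}{G}, \qquad \mathrm{EU}(\text{don't play}) = 0,$$

and since 3↑↑↑↑3 / G is still an unimaginably large number, EU says to play no matter how steep the entry fee.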
(From this point on, I will refer to "expected utility" as "EU")
In some cases, EU is correct. Here are 3 cases.
Case 1: If you expect to live a googolplex years, then paying to roll the die a few times per year is a good idea, because over that time frame the probability of winning at least once is very high.
Case 2: You have a normal lifespan, but there are a googolplex other paperclip maximisers on Earth. In this case, everyone plays, and the probability that at least one person wins is very high.
Case 3: You are the only paperclip maximiser in this universe, but there are a googolplex alternate universes that contain alternate "you"s who are in the same situation as you, and for whom the die rolls are all independent. In this case, the probability that at least one "you" wins is high, so you should play.
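All three cases rest on the same calculation: with a win probability of p = 1/googolplex per roll and n independent rolls in total,

$$P(\text{at least one win}) = 1 - (1 - p)^n \approx 1 - e^{-np},$$

which gets close to 1 once n is several googolplexes.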
Here is a different case.
Case 0: You are alone in existence. There is no one else on Earth, there are no alternate realities, you are literally alone in the entirety of all existence. This is the only decision you will ever get the chance to make. Should you pay 10 paperclips to roll the die?
I think it's clear in Case 0 that you should not pay. If you pay, you lose 10 paperclips[1] and, for all practical purposes, are certain to lose. If you don't pay, at least you get to keep your 10 paperclips. Since we're trying to maximise the number of paperclips, the latter wins.
The key difference between case 0 and the other 3 cases is that in case 0, you only get one chance to maximise paperclips. I'm calling this sort of scenario a one-shot problem.
A one-shot problem can basically be described as trying to maximise paperclips by choosing from a finite set of mutually exclusive choices, each choice having a finite set of outcomes whose probabilities sum to 1, and each outcome containing a finite number of paperclips.
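To make that definition concrete, here is a minimal sketch of a one-shot problem written as plain data (purely illustrative; the casino's numbers are stubbed out with placeholders, since neither 3^^^^3 nor a googolplex fits in a float):

```python
from dataclasses import dataclass

@dataclass
class Choice:
    """One mutually exclusive choice: a finite list of (probability, paperclips) outcomes."""
    name: str
    outcomes: list[tuple[float, int]]  # probabilities should sum to 1

# Placeholder magnitudes standing in for 1/googolplex and 3^^^^3.
TINY_P = 1e-300
JACKPOT = 10**100

one_shot_casino = [
    Choice("don't play", [(1.0, 0)]),
    Choice("play", [(TINY_P, JACKPOT - 10), (1 - TINY_P, -10)]),
]
```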
I think a key insight, which I don't know enough to prove but which seems correct, is that any finite sequence of decisions can be transformed into a one-shot problem, simply by viewing each possible sequence of decisions as a single choice.
As a simple example, if choosing between either flipping a coin or rolling a die is one decision, then flipping a coin and then rolling a die is 2 decisions. But they can be combined together as a single choice where you both flip a coin and then subsequently roll a die. This combination can be done even when the sequence of decisions is more complicated, for example when choosing to flip a coin, and, if it's heads, rolling a die, otherwise flipping a coin again.
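A sketch of that transformation for the simple, unconditional example (the conditional version works the same way once each complete policy is listed as its own choice):

```python
import itertools

def combine(stage1, stage2):
    """Collapse "do stage1, then stage2" into a single choice: each combined
    outcome multiplies the probabilities and adds the paperclip payouts."""
    return [(p1 * p2, c1 + c2)
            for (p1, c1), (p2, c2) in itertools.product(stage1, stage2)]

coin = [(0.5, 1), (0.5, 0)]               # heads pays 1 paperclip
die = [(1 / 6, k) for k in range(1, 7)]   # the die pays its face value
combined = combine(coin, die)             # 12 outcomes, probabilities still sum to 1
```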
I expect you can do something similar to group the decisions of alternate "you"s into one, or even the decisions of other people on Earth, so long as their decision-making procedure is similar enough to yours.
If I'm right that this transformation is possible, that means one-shot problems are isomorphic to finite multi-shot problems, and hence insights into one are applicable to the other. This means that a solution to one-shot problems should give a solution to Pascal's Mugging in general.
Solving one-shot problems means finding a decision-making procedure that maximises paperclips when you only have one decision. One might expect EU, which is all about maximising paperclips, to at least provide some insight into this.
Surprisingly, EU doesn't seem to help. The key property of an EU maximiser is that as it makes more and more decisions, the probability that it will end up ahead on paperclips approaches 1.
For example, in my casino version of Pascal's Mugging, at 1 repetition there is a very low probability of an EU maximiser winning. But at a googolplex repetitions, there is a ~63% chance of it winning at least once. At 2 googolplex, that probability becomes ~86%. In the long run, the probability that EU will come out on top approaches 1.
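Those two figures are easy to check, since n·p is exactly 1 at one googolplex repetitions and 2 at two googolplexes, and 1 - (1 - p)^n ≈ 1 - e^(-np):

```python
import math

# P(at least one win in n rolls) = 1 - (1 - p)^n ≈ 1 - exp(-n*p)
for k in (1, 2):
    print(f"{k} googolplex rolls: {1 - math.exp(-k):.3f}")
# 1 googolplex rolls: 0.632
# 2 googolplex rolls: 0.865
```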
This means that EU completely sidesteps the problem of how to make decisions under uncertainty, by choosing the sequence of decisions that has a virtually 100% probability of winning out in the long run!
In summary, Pascal's Mugging occurs because expected utility depends on there being a long time frame, and solving what to do when there isn't a long enough time frame is equivalent to solving it for the simpler case where you only get to make a single decision.
Thank you for reading this, and I hope to learn a lot from your replies!
[0] Switching to paperclip maximising also helps show why I think bounded utility functions are an incomplete solution to Pascal's Mugging. Which choice is optimal for maximising the number of paperclips in the world? This is a seemingly factual question, and our best answer is expected utility maximisation, which is vulnerable to Pascal's Mugging. This question is independent of our utility function, and can't be resolved by saying that we should use a bounded utility function.
Using paperclip maximisation also helps remove anthropic problems in Pascal's Mugging. You can argue that producing 3^^^^3 utility requires creating 3^^^^3 people, which means the probability that you are one of those 3^^^^3 people counterbalances the reward from being Pascal Mugged. But this reasoning does not work if your "utility" is paperclips.
[1] Note that the price being 10 paperclips is only a courtesy from the casino. They could charge a billion paperclips per roll and EU would still say that you should pay up.
Update: On further reflection, my criticism of bounded utility functions in the zeroth footnote is wrong. I've updated in this direction due to Dagon's second point in the comments (thank you!). Maximising the number of paperclips can be done using a bounded utility function as well; for example, the function 1 - 1/2^p, where p is a nonnegative integer giving the number of paperclips, is bounded between 0 and 1.
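(To spell out why it qualifies: the function increases by 1/2^(p+1) with every extra paperclip, so more paperclips is always strictly better, yet the value never reaches 1.)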
That this can be done is surprising to me right now, and suggests that I need to think some more about all this.
12 comments
comment by Donald Hobson (donald-hobson) · 2019-04-23T22:21:09.812Z · LW(p) · GW(p)
If you literally maximize expected number of paperclips, using standard decision theory, you will always pay the casino. To refuse the one-shot game, you need to have a nonlinear utility function, or be doing something weird like median outcome maximization.
Choose action A to maximize m such that P(paperclip count > m | A) = 1/2.
A well-defined rule that will behave like maximization in a sufficiently vast multiverse.
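A rough sketch of one way to read that rule in code (my interpretation, taking the median of the paperclip distribution under each action; note it uses "≥ m" where the comment writes "> m"):

```python
def median_paperclips(outcomes):
    """Largest m with P(paperclip count >= m) >= 1/2, for (probability, paperclips) pairs."""
    total = 0.0
    for prob, clips in sorted(outcomes, key=lambda o: o[1], reverse=True):
        total += prob
        if total >= 0.5:
            return clips

def median_maximizer(choices):
    """Pick the action whose median paperclip count is largest."""
    return max(choices, key=lambda c: median_paperclips(c[1]))

# In the one-shot casino this rule refuses to play, unlike an expected-paperclip maximizer.
casino = [("don't play", [(1.0, 0)]),
          ("play", [(1e-100, 10**100), (1 - 1e-100, -10)])]
print(median_maximizer(casino)[0])  # don't play
```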
↑ comment by Gurkenglas · 2019-04-27T21:46:54.334Z · LW(p) · GW(p)
What do you mean by a sufficiently large multiverse? If your first choice loses many paperclips in 40% of cases and wins a few in the rest, you would take it and a maximizer wouldn't.
↑ comment by Donald Hobson (donald-hobson) · 2019-04-28T13:05:09.481Z · LW(p) · GW(p)
If you were truly alone in the multiverse, this algorithm would take a bet that had a 51% chance of winning 1 paperclip, and a 49% chance of losing 1,000,000 of them.
If independent versions of this bet are taking place in 3^^^3 parallel universes, it will refuse.
For any finite bet and all sufficiently large n: if the agent is using TDT and is faced with the choice of whether to make this bet in n parallel universes, it will behave like an expected utility maximizer.
comment by Dagon · 2019-04-23T23:25:09.067Z · LW(p) · GW(p)
1) You're correct that "known finite iterations" can be treated as "single-shot" by defining a complete strategy and not caring about intermediate states. "Unknown ending conditions" may or may not be reducible in this way.
2) You can't get away from utility. You have to define how much better a universe with X - 10n + 3^^^^3 paperclips is than a universe with X or a universe with X - 10n (where X is the starting number of paperclips and n is the number of wagers you'll make before giving up or hitting the jackpot).
3) Using ludicrous numbers breaks most people's intuitions (cf. scope insensitivity), and you should explain why you don't use a 100-sided die and a payout of a trillion paperclips.
↑ comment by Mathilde · 2019-04-24T03:24:25.035Z · LW(p) · GW(p)
1) Thank you, this is good to know. I'll have to think about this some more.
2) Hm, I was working under the assumption that the "utility" with paperclips was just the number of paperclips. A universe with X - 10n + 3^^^^3 paperclips is better than a universe with just X paperclips by 3^^^^3 - 10n. Is this not a proper utility function?
3) The casino version evolved from repeated alterations to Pascal's Mugging, so it retained the 3^^^^3 from there. I had written a paragraph where I mentioned that for one-shot problems, even a more realistic probability could qualify as a Pascal's Mugging, though I had used a 1/million chance of a trillion paperclips instead of 1/100. I ended up editing that paragraph out, though.
Working with a 1/100 probability, it's less obviously a bad idea to pay up, of course. I don't know where to draw the line between "this is a Pascal's Mugging" and "this is good odds", so I'm less confident that you shouldn't pay up for a 1/100 probability. I think it becomes a more obviously bad idea if we up the price of the casino, for example to 1 million paperclips. This still gives positive EU to paying, but has a fairly steep price compared to doing nothing unless you get pretty lucky.
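(Assuming Dagon's trillion-paperclip payout, the arithmetic there is EU(pay) = -10^6 + 10^12/100 ≈ 10^10 paperclips, so EU does indeed still say to pay.)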
Looking back, I think that one of the factors in my decision to retain such ludicrous numbers was that it seemed more persuasive. I apologise for this.
All that being said, thank you very much for your reply!
comment by Slider · 2019-04-23T20:33:03.989Z · LW(p) · GW(p)
I think your analysis of "maximise" just compares x > y without regard to how much bigger x is, which is kind of a natural consequence of subtracting expected utility out. However, it does highlight that if our goal is "maximise paperclips", it doesn't really say whether "winning harder" is relevant or not. That is, 2 > 1, but so is 1000 > 1. So for cases where an outcome is not a constant number of paperclips, we need more rules than just what the object of attention is. So a paperclip maximiser is actually underspecified.
↑ comment by Mathilde · 2019-04-23T21:47:52.482Z · LW(p) · GW(p)
Very interesting, thank you!
I think "maximising" still makes sense in one-shot problems. 2>1 and 1000>1, but it's also the case that 1000>2, even without expected utility. The way I see it, EU is a method of comparing choices based on their average utility, but the "average" turns out to be a less useful metric when you only have one chance.
So for cases where an outcome is not a constant number of paperclips, we need more rules than just what the object of attention is. So a paperclip maximiser is actually underspecified.
If this is true, it would imply that in a one-shot problem, a utility function is not enough on its own to determine the "optimal" choice when you want to "maximise" (get the highest value you can) on that utility function. This would be a pretty big result, I think.
I think that if there is a part that is underspecified, though, it's not the paperclip maximiser, but the word "optimal". What does it mean for a choice to be "optimal" relative to other choices, when it might turn out better or worse depending on luck? I haven't been able to answer that question.
↑ comment by Slider · 2019-04-25T13:47:52.455Z · LW(p) · GW(p)
Many times, opinions about how to handle uncertainty get baked into the utility functions. That is, a standard naive construction is to say "be risk neutral" and value paperclips linearly by their amount. But I could imagine a policy for which more paperclips is always better, but which, from a default position of 100% 2 paperclips, wouldn't choose an option of 0.1% 1 paperclip, 49.9% 2 paperclips and 50% 3 paperclips. One can construct a "risk averse" function which can then simply be optimised. But does it really mean the new function is not a paperclip maximisation function?
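For concreteness, one made-up function of that kind: it is strictly increasing in paperclips, yet it prefers the sure 2 paperclips to that lottery because it is nearly flat above 2.

```python
def u(p):
    """Strictly increasing, but nearly flat above 2 paperclips: u(1)=0, u(2)=1, u(3)=1.001."""
    return p - 1 if p <= 2 else 1 + (p - 2) / 1000

keep_two = u(2)                                       # 1.0
lottery = 0.001 * u(1) + 0.499 * u(2) + 0.5 * u(3)    # 0.9995
print(lottery < keep_two)  # True: this "risk averse" maximiser keeps its 2 paperclips
```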
comment by Gurkenglas · 2019-04-23T06:47:01.301Z · LW(p) · GW(p)
Are you rejecting Pascal's mugging because of the prospect of relying on uncertain models that you do not expect to confirm?
Is all your intuition captured by maximizing utility over all but the extreme billionth of the distribution?
Here's a one-shot problem for your intuition to answer: You get to design the probability distribution to draw the number of paperclips from, except that its expectation must be at most its negative Kolmogorov complexity. What distribution makes for a good choice?
↑ comment by Mathilde · 2019-04-23T13:38:46.602Z · LW(p) · GW(p)
Thank you for your response!
Are you rejecting Pascal’s mugging because of the prospect of relying on uncertain models that you do not expect to confirm?
My intuition is that in a one-shot problem, gambling everything on an extremely low probability event is a bad idea, even when the reward from that low probability event is very high, because you are effectively certain to lose. This is the basis for me not paying up in Pascal's Mugging and in the casino problem in the post.
I'm trying to keep my reasoning simple, so in my examples I always assume that there are no infinities, no unknown unknowns, every outcome of every choice is statistically independent, and all the assigned probabilities are statistically correct (if there is a 1/6 chance of an outcome and you get to repeat the problem, you will get that outcome on average 1/6 of the time).
Is all your intuition captured by maximizing utility over all but the extreme billionth of the distribution?
Honestly, I have no idea how to solve the problem. My intuition is hopelessly muddled on this, and every idea I've been able to come up with seems flawed, including the one you've just asked about.
Here's a one-shot problem for your intuition to answer: You get to design the probability distribution to draw the number of paperclips from, except that its expectation must be at most its negative Kolmogorov complexity. What distribution makes for a good choice?
My first thought is a 1/googolplex chance of losing 3^^^^3 paperclips, and the rest of the probability giving as many paperclips as the Kolmogorov complexity constraint allows. I could do better by increasing the probability of the loss; for example, 1/googol would be a better probability. However, I have no idea where to draw the line, at what point it stops being a good idea to increase the probability.
↑ comment by Gurkenglas · 2019-04-27T21:51:55.443Z · LW(p) · GW(p)
In a market of bettors that draw the line of how much risk to take at different points, the early game will be dominated by the most risk-taking folks and as the game grows older, the line that was chosen by the current winners moves. Perhaps your intuition is merely the product of evolution playing this game for as long as it took for the line to reach its current point?