# Omega's Idiot Brother, Epsilon

post by OrphanWilde · 2015-11-25T19:57:41.351Z · LW · GW · Legacy · 7 comments

Epsilon walks up to you with two boxes, A and B, labeled in rather childish-looking crayon handwriting.

"In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars." He pauses for a moment, then nods. "Yes. I may or may not have placed a million dollars in this box. If I expect you to open Box B, the million dollars won't be there. Box B will contain, regardless of what you do, one thousand dollars. You may choose to take one box, or both; I will leave with any boxes you do not take."

You've been anticipating this. He's appeared to around twelve thousand people so far. Out of eight thousand people who accepted both boxes, eighty found the million dollars missing, and walked away with $1,000; the other seven thousand nine hundred and twenty walked away with $1,001,000. Out of the four thousand people who opened only box A, only four found it empty.

The consensus is unanimous: Epsilon is really quite bad at this. So, do you one-box, or two-box?

There are some important differences here with the original problem. First, Epsilon won't let you open either box until you've decided whether to open one or both, and will leave with the other box. Second, while Epsilon's false positive rate in identifying two-boxers is quite impressive - he wrongly flags one-boxers only 0.1% of the time - his false negative rate is quite unimpressive: he catches only 1% of actual two-boxers. Whatever heuristic he's using, he clearly prefers letting two-boxers slide to accidentally punishing one-boxers.
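(Concretely, as a quick Python sanity check of those rates against the counts above - the variable names are mine:)

```python
# Sanity check of Epsilon's error rates, from the counts given above.
two_boxers, caught = 8000, 80     # two-boxers who found box A empty
one_boxers, punished = 4000, 4    # one-boxers who found box A empty

catch_rate = caught / two_boxers             # 1% of two-boxers get caught
false_negative_rate = 1 - catch_rate         # 99% slip through with both boxes
false_positive_rate = punished / one_boxers  # 0.1% of one-boxers punished

print(f"catch rate:     {catch_rate:.1%}")            # 1.0%
print(f"false negative: {false_negative_rate:.1%}")   # 99.0%
print(f"false positive: {false_positive_rate:.1%}")   # 0.1%
```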

I'm curious to know whether anybody would two-box in this scenario and why, and particularly curious in the reasoning of anybody whose answer is different between the original Newcomb problem and this one.

## 7 comments

Comments sorted by top scores.

## comment by AABoyles · 2015-11-25T21:12:07.459Z · LW(p) · GW(p)

To take the obvious approach, let's calculate Expected Values for both strategies. To start, let's try two-boxing:

(80/8000 * $1,000) + (7920/8000 * $1,001,000) = $991,000

Not bad. OK, how about one-boxing?

(3996/4000 * $1,000,000) + (4/4000 * $0) = $999,000

So one-boxing is the rational strategy (assuming you're seeking to *maximize* the amount of money you get).
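(For what it's worth, both expected values check out exactly - a minimal Python sketch using exact rationals:)

```python
from fractions import Fraction as F

# Expected values implied by Epsilon's track record, computed exactly.
ev_two_box = F(80, 8000) * 1_000 + F(7920, 8000) * 1_001_000
ev_one_box = F(3996, 4000) * 1_000_000 + F(4, 4000) * 0

print(ev_two_box)  # 991000
print(ev_one_box)  # 999000
```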

However, this game has two interesting properties which, together, would make me consider two-boxing based on exogenous circumstances. The first is that the difference between the two strategies is very small: only $8,000. If I have $990-odd thousand, I'm not going to be hung up on the last $8,000. In other words, money has a diminishing marginal utility. As a corollary, two-boxing guarantees that the player receives at least $1,000, where one-boxing could leave the player with nothing. Again, because money has a diminishing marginal utility, securing the first $1,000 may be worth the risk of not winning the million. If, for example, I needed a sum of money less than $1,000 to keep myself alive (with certainty), I would two-box in a heartbeat.
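(That survival case can be made concrete. Here's a sketch with a made-up utility function - the threshold and weights are purely illustrative - that puts an enormous premium on staying alive:)

```python
# Hypothetical utility: ending with less than $1,000 means you don't
# survive (utility 0); surviving is worth as much as an extra $100M.
def u(dollars, survival_cost=1_000, survival_weight=100.0):
    if dollars < survival_cost:
        return 0.0
    return survival_weight + dollars / 1_000_000

# Expected utilities under the empirical frequencies from the post.
eu_two_box = (80/8000) * u(1_000) + (7920/8000) * u(1_001_000)
eu_one_box = (3996/4000) * u(1_000_000) + (4/4000) * u(0)
print(eu_two_box > eu_one_box)  # True: the guaranteed $1,000 wins
```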

All that said, I would (almost always, certainly) one-box.

## comment by OrphanWilde · 2015-11-25T21:28:16.946Z · LW(p) · GW(p)

The interesting properties actually all exist in the original Newcomb's Problem, which, if you're not familiar with it, has two important differences: first, Omega leaves the boxes behind, so both are there for the taking. Second, Omega always - or, in some variations, nearly always - predicts what you'll do. (So the expected values are $1,000 versus $1,000,000.)

The addition of these two properties results in some number of people insisting they'd two-box - in at least one philosopher's case, if for no other reason than to take a principled stand for human autonomy and free will. (Which, if this weren't all talk, would be rather an expensive principle that one has no choice but to stand up for...)

## comment by mwengler · 2015-11-29T16:05:00.598Z · LW(p) · GW(p)

I would one-box. Clearly, giving up a guaranteed $1,000 for an expected $8,000 gain is, generally speaking, a Good Thing (tm). If, over the course of my life, I *always* take the higher-expectation choice when offered, then by the central limit theorem I will almost certainly end up better off than if I generally take the lower-return Sure Thing. So except for extremely odd corner cases where the sure thing is a life-saver, the rational policy is not to be seduced by lower-return sure things. And $1,000 is not a life-saver for me, and at no point in my life has it ever been.
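(The long-run claim is easy to illustrate with a rough simulation - error rates taken from the post, each encounter assumed independent:)

```python
import random
random.seed(0)

# Simulate facing Epsilon repeatedly. He wrongly empties box A for 0.1%
# of one-boxers, and fails to empty it for 99% of two-boxers.
def lifetime_winnings(always_one_box, encounters=10_000):
    total = 0
    for _ in range(encounters):
        if always_one_box:
            total += 1_000_000 if random.random() >= 0.001 else 0
        else:
            total += 1_000 + (1_000_000 if random.random() >= 0.01 else 0)
    return total

one_box_total = lifetime_winnings(True)
two_box_total = lifetime_winnings(False)
# Over 10,000 encounters the $8,000-per-round EV edge dominates the noise.
print(one_box_total > two_box_total)
```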

As a rationalist, I am afraid that if I make irrational choices I will be punished by having something bad happen to me. (It's a joke)

## comment by Douglas_Knight · 2015-11-25T21:44:46.689Z · LW(p) · GW(p)

> First, Epsilon won't let you open either box until you've decided whether to open one or both, and will leave with the other box.

How is that different? Are you thinking of the transparent variant?

## comment by lmm · 2015-11-26T12:48:10.053Z · LW(p) · GW(p)

I would two-box on this problem because of diminishing returns, and one-box on the original problem.

## comment by gjm · 2015-11-26T15:15:12.528Z · LW(p) · GW(p)

Your returns must be *very rapidly* diminishing. If u is your kilobucks-to-utilons function then you need [7920u(1001)+80u(1)]/8000 > [3996u(1000)+4u(0)]/4000, or more simply 990u(1001)+10u(1) > 999u(1000)+u(0). If, e.g., u(x) = log(1+x) (a plausible rate of decrease, assuming your initial net worth is close to zero) then what you need is 6847.6 > 6901.8, which doesn't hold. Even if u(x) = log(1+log(1+x)) the condition doesn't hold.
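(Checking those numbers - a quick Python sketch:)

```python
import math

u = lambda x: math.log(1 + x)  # kilobucks-to-utilons, as above
lhs = 990 * u(1001) + 10 * u(1)  # two-boxing side of the inequality
rhs = 999 * u(1000) + u(0)       # one-boxing side
print(round(lhs, 1), round(rhs, 1))  # 6847.6 6901.8 -- lhs < rhs
```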

If we fix our origin by saying that u(0)=0 (i.e., we're looking at utility *change* as a result of the transaction) and suppose that u(1001) <= (1001/1000) * u(1000), which is certainly true if returns are always diminishing, then "two-boxing is better because of diminishing returns" implies 10u(1) > 8.01u(1000). In other words, gaining $1M has to be no more than about 25% better than gaining $1k.
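(The algebra behind that 25% figure, done exactly:)

```python
from fractions import Fraction as F

# Substitute u(1001) <= (1001/1000) * u(1000) and u(0) = 0 into
# 990*u(1001) + 10*u(1) > 999*u(1000):
coeff = 999 - 990 * F(1001, 1000)  # coefficient remaining on u(1000)
bound = F(10) / coeff              # two-boxing needs u(1000) < bound * u(1)
print(coeff)                   # 801/100, i.e. 8.01
print(round(float(bound), 4))  # 1.2484 -- $1M at most ~25% better than $1k
```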

Are you *sure* you two-box because of diminishing returns?

## comment by lmm · 2015-12-03T23:31:42.825Z · LW(p) · GW(p)

> In other words, gaining $1M has to be no more than about 25% better than gaining $1k.

Interesting. My thought process was that it's worth losing $8,000 in EV to avoid a 0.1% chance of walking away with nothing. I think my original statement was true, but perhaps poorly calibrated; these days I *shouldn't* be that risk-averse.