Newcomb's Problem: A Solution

post by Liam Goddard · 2019-05-26T16:32:55.987Z · score: -1 (7 votes) · LW · GW · 17 comments

In the classic form of Newcomb's Problem, you are presented with two boxes: one (Box A) transparent, containing $1,000, and one (Box B) opaque, containing either $0 or $1,000,000. You can take either both boxes or only Box B. Omega, a superintelligence who has been right 99% of the time (some phrase it as 100%), puts $0 in Box B if they predict you will take both boxes, and $1,000,000 in Box B if they predict you will take only Box B.

People have argued constantly over which option is better. "Take both," some say, "because the boxes are already filled, and either way you will get $1,000 instead of $0, or $1,001,000 instead of $1,000,000." "Take only B," others say, "for you must be the type of person who will take only B; if you are the type of person who wants to take both, and that is your decision, then Box B will be empty."

And people say that these are rational arguments.

Back in the days before Galileo, people argued constantly over how gravity works. "Heavier objects fall faster!" said one person. "No, size is what matters; larger objects fall faster!" said another. "No, denser objects fall faster!" said a third. "No, things blessed by mysterious rituals fall faster!" said a fourth.

And then Galileo came, and he was the one who figured it out, because he stopped arguing over philosophy, did an experiment, and actually looked at the way the world was.

Galileo's theory that the weight of an object does not affect how fast it falls was correct not because of some philosophical truth. It was correct because he looked at the world, and that was what he saw!

So let's go back to Newcomb's problem, and solve it through scientific experiment. 200 people do the problem: 100 one-boxers and 100 two-boxers. Omega is right 99% of the time, so 99 one-boxers get $1,000,000, one one-boxer gets $0, 99 two-boxers get $1,000, and one two-boxer gets $1,001,000.

This means that on average, a one-boxer gets $990,000 and a two-boxer gets $11,000. Our goal as rationalists is to win. We want as much money as possible, and through scientific experiment we have found that the option that gives us the most money (90 times as much, in fact) is to take Box B and only Box B.
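Here is a quick sketch of that experiment as a simulation, a minimal model assuming Omega's prediction independently matches each player's actual choice with probability 0.99 (the function names and setup are illustrative, not part of the problem statement):

```python
import random

def average_payoff(strategy, trials=100_000, accuracy=0.99):
    """Average winnings for a fixed strategy ('one' or 'both'), with
    Omega's prediction matching the actual choice with the given accuracy."""
    total = 0
    for _ in range(trials):
        # Omega predicts correctly with probability `accuracy`.
        correct = random.random() < accuracy
        predicted = strategy if correct else ('both' if strategy == 'one' else 'one')
        # Omega fills Box B only if it predicted one-boxing.
        box_b = 1_000_000 if predicted == 'one' else 0
        total += box_b + (1_000 if strategy == 'both' else 0)
    return total / trials

print(average_payoff('one'))   # ~990,000 on average
print(average_payoff('both'))  # ~11,000 on average
```

Run it and the averages come out around $990,000 for one-boxing and $11,000 for two-boxing, matching the arithmetic above.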

So people will continue their debates about Omega's strength, our ability to change the future, and whether we should take both boxes. But those of us who follow the scientific method, the basis of all rationality, know our decision.

And the two-boxers will be proud of how they managed to grab $1,000 instead of $0, while we receive $1,000,000.

17 comments

Comments sorted by top scores.

comment by Slider · 2019-05-26T19:50:40.540Z · score: 7 (6 votes) · LW · GW

Your thought experiment has failed to actually look at the world; you still do not have any empirical evidence. If the pre-Galileo arguers had made a thought experiment and concluded "thus, things blessed by mysterious rituals fall faster," the result would still be firmly "within philosophy".

comment by habryka (habryka4) · 2019-05-27T01:38:36.254Z · score: 12 (4 votes) · LW · GW

Hey, you've been making a lot of comments lately, and to be honest, I've been failing to parse a large fraction of them, and another significant fraction that I have been able to parse hasn't been very good. I think it would be better for you to make slightly fewer comments and invest more time into each individual one.

(This isn't really a moderator warning yet, but I do think it's plausible that we would give you a temporary ban if you continue commenting at your current volume and quality level)

comment by Slider · 2019-05-27T10:39:08.102Z · score: 1 (1 votes) · LW · GW

Okay, feedback heeded. I did form an expression that, since none of my comments went into the negatives, I was not harming anyone, at most being ineffective. I would really appreciate it if people would hint at where my quality is low or where I am wrong ("you suck" is too general to be used to improve). I can kind of appreciate the fact that in order to give a valid downvote someone needs to parse the comment, and that can end up being very unrewarding work. "You should understand without explanation why you are too stupid to contribute" could be a very unhealthy moderation line that could result.

comment by habryka (habryka4) · 2019-05-27T18:24:19.142Z · score: 2 (1 votes) · LW · GW

I think for comments that are hard to parse, it's a bit more difficult since there is a lot of technical discussion on the site, and I at least try to only downvote something if I understood what it was trying to say, and then decided I didn't like it. Only if a pattern emerges where I repeatedly have trouble parsing someone's comments do I feel justified in downvoting or pointing that out.

I think a large fraction of the problem is just English proficiency, now that I am rereading them. That is something I am very sympathetic to, having learned English as a second language myself. Some other fraction is just stuff that I expect could be fixed by activating a simple English spell check, like a lot of this comment.

comment by Gurkenglas · 2019-05-27T13:19:54.761Z · score: 0 (3 votes) · LW · GW

I agree that if hard-to-parse posts aren't wanted they should be downvoted.

"I did form an expression"

"I did form an opinion", or better yet "I thought"

comment by Liam Goddard · 2019-05-26T20:25:59.918Z · score: 1 (1 votes) · LW · GW

Since Newcomb's Problem, the boxes, and Omega don't actually exist, we can't physically conduct the experiment. However, based on the rules of the problem, we can calculate the average profits. In this fictional world, we are already told that Omega guesses correctly 99% of the time, and since we learned that from Newcomb himself, it counts as a fact about this fictional world. This means that 99% of the time the one-boxer gets $1,000,000, and 99% of the time the two-boxer gets $1,000. That's like saying that we can't be sure whether purebloods are stronger in HPMOR. Even though we haven't seen any evidence in our world, since there are no purebloods in the real world, Yudkowsky tells us the facts in HPMOR, and since Yudkowsky's word is fact about HPMOR, this confirms the hypothesis "purebloods are no stronger than other wizards." And even though we haven't seen any evidence of Omega in our world, Newcomb tells us the facts in his problem, and since Newcomb's word is fact about Newcomb's problem, this confirms the hypothesis "one-boxers almost always do better than two-boxers."

If a pre-Galileo person wrote a fictional story about a different land in which heavier objects fell faster, then in that world, heavier objects would fall faster. By simple mathematics, we can prove that under the conditions stated by Newcomb, we should take only Box B.
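Concretely, the "simple mathematics" is just the two expected values:

$$E[\text{one-box}] = 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000$$

$$E[\text{two-box}] = 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000$$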

comment by Dagon · 2019-05-27T08:07:31.291Z · score: 5 (2 votes) · LW · GW

We can't do the experiment because the problem isn't real. So appealing to Galileo and experiments is at best misleading. There's no reality we're testing here.

A _LOT_ hinges on how Omega is performing this impossible feat. I assert that two-boxers believe that past averages don't apply to this instance: they don't actually expect to get $1,000, they expect $1,001,000. But we can't be sure what they're thinking, and we can't be sure what Omega's mechanism is, because we can't do the experiment.

The thought experiment is far enough removed from reality that it doesn't tell us much about... anything. When I first heard it a few decades ago, it seemed to be about free will, and even then it didn't teach anything, as it assumes the answer is "no". Now it's morphed into... something something decision theory. And it still doesn't map to any reality, so it still doesn't have much truth-value.

comment by Slider · 2019-05-27T00:31:03.760Z · score: 3 (2 votes) · LW · GW

The characters and the argument sides are not lacking information about how the world works. The important bit of your strategy is to argue how your static keeping statistics is relevant to the question, and to the right question. The issue is going to be that the traditional problematic ways would suggest an incorrect experimental setup. It feels weird that I can't figure out what those would be, but one obviously false one would be "if you could be given the contents of one or two boxes, should you take one or both?", where the answer would be "both", because no box ever holds a negative amount of money. One of the relevant catches would be that naming "both boxes" is not an effective way to cause what is in the boxes to be in your possession; answering "both" to the wrong question doesn't imply that you should choose the option "both".

But instead of being able to skip theory, you will end up recreating the "must be the type of person" argument to explain why the experiment reflects the right question. In case you can't, you will be unable to set up an experiment in other thought experiments testing different decision theory failures.

comment by Gurkenglas · 2019-05-27T13:31:50.378Z · score: 2 (2 votes) · LW · GW

"static keeping"

"calculation"

Rephrasing your whole comment: Liam claims to have dissolved the high-level arguments for different solutions by applying the low-level way to brute-force the correct solution to any problem. He needs to show that his way is correct.

comment by Slider · 2019-05-27T14:06:01.454Z · score: 1 (1 votes) · LW · GW

You are doing good work in salvaging my point. However, I still think that there are multiple low-level methods, and that the approach isn't evidently applicable to all problems.

comment by Gurkenglas · 2019-05-28T02:17:50.997Z · score: 2 (2 votes) · LW · GW

My rephrasing says Liam claims that his low-level method is The One and always applies. You say "however", then fail to disagree with me.

comment by Slider · 2019-05-28T10:16:34.653Z · score: 1 (1 votes) · LW · GW

I read the summation as "Liam applies a generically known brute-force method" when you seemed to mean "Liam uses a brute-force method he claims is the only one possible". If I say "The president of the United States is arrogant", am I making a claim that there is only one such president? This seems to be about how the definite article "the" is used in the English language, and I am genuinely unsure whether there is a reliable way to be unambiguous about it.

comment by Richard_Kennaway · 2019-05-28T07:37:17.006Z · score: 4 (2 votes) · LW · GW

This is the standard argument for one-boxing. The two-boxer will reply, "But the boxes are already filled!" The one-boxer replies "One boxing wins!" The two-boxer replies "THE BOXES ARE ALREADY FILLLLED!!!" The one-boxer replies "BUT. ONE. BOX. ING. WINNNNS!!!"

A paradox is not resolved by clinging to one side of it and claiming it refutes the other.

This video may be illuminating: Ilya Shpitser's talk on Newcomb's problem at FHI.

Here is another variation on the problem. Suppose you discover how Omega makes its predictions. It turns out that there is a gene whose different alleles predispose you to one-boxing or two-boxing on Newcomb's problem. (Hey, this is no sillier an idea than in a lot of thought experiments.) If you have variant 1, then 99% of the time you will one-box, and similarly for variant 2. Omega is, in effect, telling you with 100% reliability which variant you have, and has filled the boxes accordingly.

No-one has previously faced Omega with that knowledge. What do you choose?

comment by TAG · 2019-05-27T12:48:12.110Z · score: 2 (4 votes) · LW · GW

You're not bringing in anything new. Everyone argues on the basis of theoretical probability, and people don't agree because they use different assumptions about determinism, free will, and omniscience.

comment by Alexei · 2019-05-27T04:39:52.084Z · score: 2 (1 votes) · LW · GW

Seems fine as a practical solution. But it’s still nice to do the math to figure out the formula, just like we have a formula for gravity.

comment by Donald Hobson (donald-hobson) · 2019-05-27T08:19:53.627Z · score: 1 (1 votes) · LW · GW

This is pretty much the standard argument for one-boxing.

comment by Liam Goddard · 2019-05-28T01:59:44.740Z · score: 3 (2 votes) · LW · GW

From what I've seen, most people seem to argue two-box, and the one-boxers usually just say that Omega needs to think you'll be a one-boxer, so precommit even if it later seems irrational... I haven't seen this exact argument yet, but I might have just not read enough.