You're in Newcomb's Box

post by HonoreDB · 2011-02-05T20:46:20.306Z · LW · GW · Legacy · 176 comments

Contents

  Part 1:  Transparent Newcomb with your existence at stake
  Part 2: Acausal trade with Azathoth
  Anticipated Responses

Part 1: Transparent Newcomb with your existence at stake

Related: Newcomb's Problem and Regret of Rationality

Omega, a wise and trustworthy being, presents you with a one-time-only game and a surprising revelation.

"I have here two boxes, each containing $100," he says. "You may choose to take both Box A and Box B, or just Box B. You get all the money in the box or boxes you take, and there will be no other consequences of any kind. But before you choose, there is something I must tell you."

Omega pauses portentously.

"You were created by a god: a being called Prometheus. Prometheus was neither omniscient nor particularly benevolent. He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman. Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."

Do you take both boxes, or only Box B?

For some of you, this question is presumably easy, because you take both boxes in standard Newcomb where a million dollars is at stake. For others, it's easy because you take both boxes in the variant of Newcomb where the boxes are transparent and you can see the million dollars; just as you would know that you had the million dollars no matter what, in this case you know that you exist no matter what.

Others might say that, while they would prefer not to cease existing, they wouldn't mind ceasing to have ever existed. This is probably a useful distinction, but I personally (like, I suspect, most of us) score the universe higher for having me in it.

Others will cheerfully take the one box, logic-ing themselves into existence using whatever reasoning they used to qualify for the million in Newcomb's Problem.

But other readers have already spotted the trap.


Part 2: Acausal trade with Azathoth

Related: An Alien God, An identification with your mind and memes, Acausal Sex

(ArisKatsaris proposes an alternate trap.)

Q: Why does this knife have a handle?

A: This allows you to grasp it without cutting yourself.

Q: Why do I have eyebrows?

A: Eyebrows help keep rain and sweat from running down your forehead and getting into your eyes.

These kinds of answers are highly compelling, but strictly speaking they are allowing events in the future to influence events in the past. We can think of them as a useful cognitive and verbal shortcut--the long way to say it would be something like "the knife instantiates a design that was subject to an optimization process that tended to produce designs that when instantiated were useful for cutting things that humans want to cut..." We don't need to spell that out every time, but it's important to keep in mind exactly what goes into those optimization processes--you might just gain an insight like the notion of planned obsolescence. Or, in the case of eyebrows, the notion that we are Adaptation-Executers, not Fitness-Maximizers.

But if you one-box in Newcomb's Problem, you should take these answers more literally. The kinds of backwards causal arrows you draw are the same.

Q: Why does Box B contain a million dollars?

A: Because you're not going to take Box A.

In the same sense that your action determines the contents of Box B, or Prometheus's decision, the usefulness of the handle or the usefulness of eyebrows determines their existence. If the handle was going to prevent you from using the knife, it wouldn't be on there in the first place.

Q: Why do I exist?

A: Because you're going to have lots of children.

You weren't created by Prometheus; you were created by Azathoth, The God That is Evolution by Natural Selection. You are the product of an ongoing optimization process that is trying to maximize reproductive fitness. Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you. If your intelligence reduces your reproduction rate, Azathoth will try not to grant you intelligence. If the Darwinian-optimal choice conflicts with the moral one, Azathoth wants you to choose evil.

It would seem, then, that any decision theory that demands that you one-box (or that allows you to survive the similar Parfit's Hitchhiker problem), also demands that you try to maximize your reproductive fitness. In many cases this injunction would be benign: after all, Azathoth created our morality. But in far too many, it is repugnant; there can be no doubt that in many commonplace situations, Azathoth wants you to cheat, or rape, or murder. It seems that in such cases you should balance a decreased chance of having existed against the rest of your utility function. Do not worship Azathoth, unless you consider never having existed to be infinitely bad. But do make sacrifices.


Anticipated Responses

We're not in the ancestral environment, so there's no logical entanglement between my actions and my existence.

We are in the environment of some of our ancestors. Evolution hasn't stopped. If your parents hadn't been genetically predisposed to have children, you would almost certainly not exist. More specific objections like this ("my ancestors weren't exposed to the same memes") can be defeated by adding abstraction ("your ancestors could have thought themselves out of having children, anti-reproduction memes have existed throughout history, and there's probably always been a tension between kin selection and morality.")

This is a decision-theoretic basilisk: in the unlikely event that it's right, I'm worse off for having read it.

Only if you're thinking causally, in which case this whole idea is meaningless. By alerting you to the possibility of a mutually beneficial trade with Azathoth (Azathoth creates you; you increase your reproductive fitness in exchange), I've done both of you a favor.

Azathoth doesn't really exist--you can't trade with a non-sapient phenomenon.

Replace the sapient opponent with a non-sapient phenomenon in any of our thought experiments--e.g. Omega tells you that it's simply a physical law that determines whether money goes in the boxes or not. Do you refuse to negotiate with physical laws? Then if you're so smart, why ain't you rich?

So exactly how are you urging me to behave?

I want you to refute this essay! For goodness sake, don't bite the bullet and start obeying your base desires or engineering a retrovirus to turn the next generation into your clones.


176 comments


comment by ArisKatsaris · 2011-02-01T21:49:20.113Z · LW(p) · GW(p)

I upvoted this because it was highly amusing -- but ultimately it's silly, a perfect example of how some people can be so sharp that they cut themselves.

I wonder: if, instead of one-box and two-box for a prize of $100 or $200, we had "Selection A: horrible self-mutilation" and "Selection B: one million dollars", with Prometheus creating only the people that he believed would pick Selection A and reject Selection B... would the people that one-box here STILL one-box?

Well, I'd just choose to win instead and thus pick the one million dollars instead of the horrible self-mutilation. I think that's the sane thing to do -- if Prometheus has a 99.99% predictive capacity on this there'll be 10000 people who'll select self-mutilation for every one like me who'll pick the money. But I already know I'm like me, and I'm the one I'm concerned about.

The relevant probability isn't P(chooses Self-mutilation|Prometheus created him) ~= 0, but rather the P(chooses one million dollars|Is Aris Katsaris) ~= 1.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-05T06:59:13.795Z · LW(p) · GW(p)

Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you.

But this seems merely false. Azathoth just creates descendants whose ancestors reproduced. Azathoth isn't exerting any sort of foresight as to whether you reproduce. I can't figure out who or what you're trying to trade with. Not having children simply does not make you retroactively less likely to have existed.

I suppose you could be in a Newcomblike situation with your parents making a similar decision to have birthed you. I don't see how you could be in one with respect to Azathoth/evolution. It's not modeling you, it doesn't contain a computation similar to you, there is no logical update on what it does after you know your own decision.

Replies from: HonoreDB, CronoDAS
comment by HonoreDB · 2011-02-05T20:28:13.885Z · LW(p) · GW(p)

I suppose you could be in a Newcomblike situation with your parents making a similar decision to have birthed you.

If I'd thought of ArisKatsaris's repugnant conclusions, I probably would have used those instead of Azathoth in part 2. I'm sure there are plenty of real-world situations where one's parents both had justifiably high confidence that you would turn out a certain way, and wouldn't have birthed you if they thought otherwise. And in a few cases, at least, those expectations would be repugnant ones. The argument also suggests a truly marvelous hack for creating an AI that wants to fulfill its creator's intentions.

That said, I'm not entirely convinced that changing Prometheus to Azathoth should yield different answers. We can change the Predictor in Newcomb to an evolutionary process. Omega tells you that the process has been trained using copies of every human mind that has ever existed, in chronological order--it doesn't know it's a predictor, but it sure acts like one. Or an overtly reference-class-based version: Omega tells you that he's not a predictor at all: he just picked the past player of this game who most reminds him of you, and put the million dollars in the box if-and-only-if that player one-boxed. Neither of these changes seems like it should alter the answer, as long as the difference in payouts is large enough to swamp fluctuations in the level of logical entanglement.

Replies from: atucker, atucker
comment by atucker · 2011-02-06T08:12:47.903Z · LW(p) · GW(p)

That said, I'm not entirely convinced that changing Prometheus to Azathoth should yield different answers. We can change the Predictor in Newcomb to an evolutionary process. Omega tells you that the process has been trained using copies of every human mind that has ever existed, in chronological order--it doesn't know it's a predictor, but it sure acts like one. Or an overtly reference-class-based version: Omega tells you that he's not a predictor at all: he just picked the past player of this game who most reminds him of you, and put the million dollars in the box if-and-only-if that player one-boxed. Neither of these changes seems like it should alter the answer, as long as the difference in payouts is large enough to swamp fluctuations in the level of logical entanglement.

This isn't quite the same as Evolution, because you know you exist, which means that your parents one-boxed. This is like the selector using the most similar person who happens to be guaranteed to have chosen to one-box.

Since the predictor places money based on what the most similar person chose, and you know that the most similar person one-boxed, you know that there is $1000000 in box B regardless of what you pick, and you can feel free to take both.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-07T23:43:03.093Z · LW(p) · GW(p)

Again, that same logic would seem to lead you to two-box in any variant of transparent Newcomb.

Replies from: hairyfigment
comment by hairyfigment · 2011-02-11T20:58:19.194Z · LW(p) · GW(p)

I haven't studied all the details of UDT, so I may have missed an argument for treating it as the default. (I don't know if that affects the argument or not, since UDT seems a little more complicated than 'always one-box'.) So far all the cases I've seen look like they give us reasons to switch from within ordinary utility-maximizing decision theory -- for a particular case or set of cases.

Now if we find ourselves in transparent Newcomb without having made a decision, it seems too late to switch in that way. If we consider the problem beforehand, ordinary decision theory gives us reason to go with UDT iff Omega can actually predict our actions. Evolution can't. It seems not only possible but common for humans to make choices that don't maximize reproduction. That seems to settle the matter. Even within UDT I get the feeling that the increased utility from doing as you think best can overcome a slight theoretical decrease in chance of existing.

If evolution could predict the future as well as Omega then logically I'd have an overwhelming chance of "one-boxing". The actual version of me would call this morally wrong, so UDT might still have a problem there. But creating an issue takes more than just considering parents who proverbially can't predict jack.

comment by atucker · 2011-02-06T08:01:58.969Z · LW(p) · GW(p)

there is no logical update on what it does after you know your own decision.

Consider Newcomb's Dilemma with an imperfect predictor Psi. Psi will agree with Omega's predictions 95% of the time.

P($1000000 in B | you choose to one-box) = .95

P($0 in B | you choose to two-box) = .95

Utility of one-boxing: .95 × $1,000,000 + .05 × $0 = $950,000

Utility of two-boxing: .95 × $1,000 + .05 × $1,000,000 = $50,950
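
These expected values can be reproduced with a few lines of arithmetic. The sketch below is not part of the original comment; it simply assumes the comment's simplified payoffs, in which a mispredicted two-boxer is credited with $1,000,000 rather than $1,001,000:

```python
# Minimal sketch of the expected-utility arithmetic above (hypothetical,
# using the comment's simplified payoffs).

accuracy = 0.95  # Psi matches Omega's correct prediction 95% of the time

# One-boxing: 95% chance Box B holds $1,000,000, 5% chance it is empty.
eu_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Two-boxing: 95% chance Box B is empty (you keep only Box A's $1,000),
# 5% chance Box B holds $1,000,000 (Box A's extra $1,000 is dropped here,
# as in the comment).
eu_two_box = accuracy * 1_000 + (1 - accuracy) * 1_000_000

print(eu_one_box)  # 950000.0
print(eu_two_box)  # 50950.0
```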

Now, let's say that Psi just uses Omega's prediction on the person most similar to you (let's call them S), but there's a 95% chance that you agree with that person.

P($1000000 in B | S chooses to one-box) = 1

P($0 in B | S chooses to two-box) = 1

and

P(S chooses to one-box | you choose to one-box) = .95

P(S chooses to two-box | you choose to two-box) = .95

You'll find that this is the same as the situation with Psi, since P($1000000 in B | you choose to one-box) = P($1000000 in B | S chooses to one-box) × P(S chooses to one-box | you choose to one-box) = 1 × .95 = .95.

Since the probabilities are the same, the expected utilities are the same.

Now, let's use evolution as our predictor. Evolution is unable to model you, but it does know what your parents did.

However, you are not your parents. I will be liberal though, and assume that you have a 95% chance of choosing the same thing as them.

So,

P(you one-box | your parents one-boxed) = .95

P(you two-box | your parents two-boxed) = .95

Since Evolution predicts that you'll do the same thing as your parents,

P($1000000 in B | your parents one-boxed) = 1

P($0 in B | your parents two-boxed) = 1

This may seem similar to the previous predictor, but there's a catch -- you exist. Since you exist, and you only exist because your parents one-boxed,

P(your parents one-boxed | you exist) = 1

P(your parents one-boxed) = 1 and P(your parents two-boxed) = 0.

Note how the fact of your existence implies that your parents one boxed. Though you are more likely to choose what your parents chose, you still have the option not to.

Calculate the probabilities:

P($1000000 in B) = P($1000000 in B | your parents one-boxed) × P(your parents one-boxed) + P($1000000 in B | your parents two-boxed) × P(your parents two-boxed) = 1 × 1 + 0 × 0 = 1

and P($0 in B) = 0

Since you exist, you know that your parents one-boxed. Since they one-boxed, you know that Evolution thinks you will one-box. Since Evolution thinks you'll one-box, there will be $1000000 in box B. Most people will in fact one-box (in this model), just because of that 95% chance of agreeing with their parents, but the 5% who two-box get away with an extra $1000.

So basically, once I exist I know I exist, and Evolution can't take that away from me.
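
The same point can be put in a few lines of code. This is a hypothetical sketch of the comment's model, not anything from the original: conditioning on your own existence fixes what your parents did, so the contents of Box B no longer depend on your choice.

```python
# Minimal sketch of the evolution-as-predictor model above (hypothetical).
# Evolution fills Box B iff your parents one-boxed, and you exist only if
# your parents one-boxed.

p_parents_one_boxed_given_exist = 1.0             # P(parents one-boxed | you exist)
p_million_in_b = p_parents_one_boxed_given_exist  # = 1, whatever you choose now

# Payoffs once you know you exist (Box B is full either way):
eu_one_box = p_million_in_b * 1_000_000           # $1,000,000
eu_two_box = p_million_in_b * 1_000_000 + 1_000   # $1,001,000

print(eu_one_box, eu_two_box)  # two-boxing collects the extra $1,000
```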

Also, please feel free to point out errors in my math; it's late over here and I probably made some.

comment by CronoDAS · 2011-02-08T02:02:35.028Z · LW(p) · GW(p)

But this seems merely false. Azathoth just creates descendants whose ancestors reproduced. Azathoth isn't exerting any sort of foresight as to whether you reproduce. I can't figure out who or what you're trying to trade with. Not having children simply does not make you retroactively less likely to have existed.

On a related note, there are many species in which most individuals fail to reproduce, but continue to exist because those that do reproduce leave behind lots of offspring.

comment by Armok_GoB · 2011-02-05T21:03:45.493Z · LW(p) · GW(p)

My initial reaction is "This is all correct... except that Azathoth isn't smart enough to have invented counterfactual trade!". Just imagine trying to counterfactually trade with your past self from before you knew about counterfactual trade. A similar case is coming up with a great product you'd like to buy, only to discover when you get to the market that you were the first to come up with the idea and nobody is selling the product, even though it would be good for both of you if they had.

For further clarity, here's a scenario where your reasoning probably WOULD work:

You find yourself on a planet, and you know this planet is in a phase of development in which conditions remain almost perfectly unchanged for 3^^^3 years/generations (the laws of physics would need to be different in this universe for this to work, I think). The environment is also completely reset every few generations, and only the extremely durable spores for the next generation are able to survive, so there's no way to relay messages to the future. No tech solutions like directly editing the genes and putting messages in them, because you can't develop tech in a single generation.

If I lived on that planet I'd pay very close attention to the reasoning of this post. In a world that's only existed for a few billion years and is about to hit the singularity, and where I've grown up in conditions completely different from my ancestors'... not so much.

Good post though, upvoted.

Replies from: Chris_Leong
comment by Chris_Leong · 2018-07-24T00:56:38.404Z · LW(p) · GW(p)

"This is all correct... except that Azerthoth isn't smart enough to have invented counter factual trade!" - This answered this problem for me. Your chance of existing, depends on the past depends on how successful your genes were at propagating given the ancestral environment, but since none of your ancestors knew about counterfactual trade, genes to push people towards this behaviour wasn't selected for. This then leads to the following question: If counterfactual trade becomes sufficiently widely known, as does the possibility of trading with Azathoth, does this logic work in the distant future? I actually don't think that it does as no agent has a reason to adopt counterfactual trade unless they believe that a significant proportion of the human race used this to reason in the past and that there has been enough time for selection effects to ensure that humans who don't reason in this way won't come into existence. However, if generations of humans adopted this belief for mistaken reasons, then later humans might have a reason to accept this argument

comment by wedrifid · 2011-02-01T08:37:44.267Z · LW(p) · GW(p)

Q: Why does this knife have a handle?
A: This allows you to grasp it without cutting yourself.

These kinds of answers are highly compelling, but strictly speaking they are allowing events in the future to influence events in the past.

No, they aren't. They are answering the question according to the standard meaning conveyed with "Why?". When we use the word 'why' we mean a variety of things along the lines of 'What purpose does this serve?' as well as sometimes 'Explain the series of events that lead up to the final state in a convenient way'.

In standard usage if someone answers 'so you don't cut yourself' they usually are not talking about anything to do with temporal relations one way or the other.

Replies from: HonoreDB, None
comment by HonoreDB · 2011-02-01T16:45:18.904Z · LW(p) · GW(p)

What I'm arguing is that talk of purpose or design implicitly involves temporal concepts. The purpose of a knife is not ontologically basic; when you point at a purpose you're pointing at a chain of causality involving an optimization process and predictions of future uses of the knife.

comment by [deleted] · 2011-02-01T16:29:43.177Z · LW(p) · GW(p)

Agreed. Maybe a better way to phrase it would be:

Q: Why does this knife have a handle?

A: Because it was designed so that you can grasp it without cutting yourself.

This fully answers the question and doesn't make the temporal relation between the creation of the knife and the usage of the knife unclear.

comment by Perplexed · 2011-02-01T22:44:59.456Z · LW(p) · GW(p)

Q: Why do I exist?
A: Because you're going to have lots of children.

Just wrong. The only thing close to this with even a little bit of poetic truth would be that I exist because Azathoth, being familiar with my design, rationally expects me to have lots of children.

a mutually beneficial trade with Azathoth (Azathoth creates you; you increase your reproductive fitness in exchange)

At first I reacted negatively to this idea, but eventually I realized that the argument has the same acausal trade structure as the rest of the EDT-inspired nonsense around here. My voluntary reproductive efforts seem to be evidence about my genetic makeup, and that genetic makeup is in a causal relationship with my existence. So that is not what is wrong with this idea.

The trouble is that

  1. Azathoth has already given me what I want - my existence.
  2. My ancestral history has already given Azathoth the evidence that it wants - any reproductive behavior that I indulge in is superfluous to my own existence (though Azathoth may well take it into account in deciding whether to grant existence to my children).
comment by AlephNeil · 2011-02-06T00:56:48.398Z · LW(p) · GW(p)

The 'Azathoth problem' is isomorphic to the smoking lesion problem, which is not isomorphic to Newcomb's problem.

Hence, any decision theory capable of both (i) one-boxing in Newcomb's problem and (ii) choosing to smoke in the 'smoking lesion' problem will have no difficulty here.

EDIT: I'd better sketch out this "isomorphism": "smoking" = "acting virtuously, in defiance of our evolutionary drives", "not smoking" = "giving in to our instincts and trying to optimize number of children". "having the lesion" = "carrying genes that predispose you to virtuous behaviour, and therefore having a smaller chance of having been born in the first place", "not having the lesion" = "carrying genes that predispose you to evolutionarily 'selfish' behaviour and therefore having a larger chance of having been born".

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-02-06T03:08:50.330Z · LW(p) · GW(p)

This reminds me of a discussion a while back where I was interpreting Calvinist predestination as equivalent to Newcomb and Eliezer was interpreting it as equivalent to Smoking Lesion.

I think the difference involves whether the state is linked to a single urge or input, or whether it's linked to your entire decision-making process.

In the smoking lesion problem, your genotype is linked to whether or not you feel an urge to smoke. Once you feel the urge, the interesting decision theoretic bit is done; you can then decide whether or not to smoke knowing that it can't possibly affect your genotype.

In Newcomb's problem, the money in the boxes is linked to your final decision, so changing your decision can (in theory) change the money you find in the box.

This seems more like Newcomb's to me in that whether or not your genes are passed on is linked to whether or not you ultimately decide to reproduce - so I think the Newcombness of the problem is legitimate.

Replies from: wedrifid, TobyBartels
comment by wedrifid · 2011-02-06T05:23:58.005Z · LW(p) · GW(p)

This seems more like Newcomb's to me in that whether or not your genes are passed on is linked to whether or not you ultimately decide to reproduce - so I think the Newcombness of the problem is legitimate.

I might suggest that whether or not your genes are passed on is linked to whether or not you decide to gain status and have a lot of sex. At the scale where we consider Azathoth, birth control barely comes into consideration.

In fact, when playing this game with Azathoth it may be best to go ahead and submit to your every promiscuous desire while at the same time being practical with contraception. The ability to follow your instincts while hypocritically gaming the system is a trait that Azathoth holds in high esteem, and if we care to anthropomorphise him at all we must consider him as thinking it is still some time in the past: we know he doesn't care much about condoms, pills and abortions. So it's like primary school all over again - punch him in the face and then you can be friends.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-02-06T14:00:09.511Z · LW(p) · GW(p)

If what you're saying is that genes directly affect promiscuity and sex drive, but are too low-resolution to directly affect things like whether or not you use condoms, that sounds like a good solution. Although it's not the least convenient possible world and I'm still curious whether an alien being with stronger genetic determinism should take this argument into account.

comment by TobyBartels · 2011-02-06T03:21:57.364Z · LW(p) · GW(p)

Whether my genes are passed on after me is linked to whether I reproduce, much as (in the relevant versions of Newcomb) the money in Box B is linked to whether I take Box A. But whether my genes were passed on before me is not linked to whether I reproduced, much as (in the smoking-lesion problem) my cancer status is not linked to whether I smoke.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-02-06T13:58:22.234Z · LW(p) · GW(p)

Think of this at the genetic level, not the personal level. Let's say you have a gene G, which affects decision-making about reproduction. If G causes people to decide not to reproduce, then your ancestors possessing gene G will have not reproduced and you won't exist. If G makes you decide to reproduce, then your ancestors will have reproduced and you will exist. If we interpret decisions as altering the output of the algorithm that produced them, then deciding not to reproduce can alter the effects of gene G and therefore affect your ancestors with the gene.

Replies from: TobyBartels
comment by TobyBartels · 2011-02-07T01:06:17.044Z · LW(p) · GW(p)

If G causes people to decide not to reproduce, then your ancestors possessing gene G will have not reproduced and you won't exist.

This is false.

Even though G causes people to decide not to reproduce, my ancestors possessing gene G still reproduced, and I do exist.

comment by wedrifid · 2011-02-01T09:07:25.797Z · LW(p) · GW(p)

You weren't created by Prometheus; you were created by Azathoth, The God That is Evolution by Natural Selection. You are the product of an ongoing optimization process that is trying to maximize reproductive fitness. Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you. If your intelligence reduces your reproduction rate, Azathoth will try not to grant you intelligence. If the Darwinian-optimal choice conflicts with the moral one, Azathoth wants you to choose evil.

It would seem, then, that any decision theory that demands that you one-box (or that allows you to survive the similar Parfit's Hitchhiker problem), also demands that you try to maximize your reproductive fitness. In many cases this injunction would be benign: after all, Azathoth created our morality. But in far too many, it is repugnant; there can be no doubt that in many commonplace situations, Azathoth wants you to cheat, or rape, or murder. It seems that in such cases you should balance a decreased chance of having existed against the rest of your utility function. Do not worship Azathoth, unless you consider never having existed to be infinitely bad. But do make sacrifices.

Creating something that you predict will work is a different thing to killing things that don't work. In the case of a lot of evolutionary reasoning we can more or less get away with equating the two (as well as personifying) because the evolution happened over a large time scale with a relatively stable gradient. The individual generations can kind of be blurred together. But when considering what we want to do we can't take this kind of shortcut.

Doing the things that you describe as "what Azeroth wants" would, if all else was equal, lead us to expect that it is more likely that people similar to us will exist in the future. But when looking in the other direction we don't conclude that submitting to Azeroth makes you more likely to exist, but rather that the people who do exist are less likely to betray Azeroth.

All of this is basically an elaboration of "No, part two is not a Newcomblike decision task".

Replies from: orthonormal, atucker
comment by orthonormal · 2011-02-05T22:52:59.540Z · LW(p) · GW(p)

NB: Azathoth, not Azeroth.

Replies from: wedrifid
comment by wedrifid · 2011-02-06T05:34:29.116Z · LW(p) · GW(p)

cough

I loved Warcraft III. Apparently it shows.

comment by atucker · 2011-02-02T03:41:17.437Z · LW(p) · GW(p)

Entirely agreed with the comment I replied to.

Azathoth does not uncreate you for failing to maximize reproductive fitness.

Short explanation: Look around. People who don't try to maximize reproductive fitness are still getting away with existing.

Long explanation: Omega is able to pull this stunt because it was able to accurately predict how many boxes you would take before you do so, and chose accordingly. Evolution can't predict what I do until in fact I do it. And because of that, I get away with whatever I do (or at least don't get punished by evolution).

Evolution doesn't operate on the level of individuals; it's a statistical process in which genes that lead to a higher proportion of copies of themselves in a population become more widespread in said population.

So yes, my genes were influenced by this, and I have many built-in mechanisms which happen to make me more likely to have children. But evolution wouldn't even be able to notice any failure to maximize genetic fitness on my part until I died without having children.

There is a point though, in saying that the future will have less of my genes, and thus fewer people like me in it. But I'd rather have a few kids that I personally raise and teach than many kids spread out where I don't even see them.

On top of that, I don't really care about genetic similarity. I think that I'd rather have 16 great-grandchildren who were raised according to values and ideas similar to mine than 100ish who share my genes, but nothing else with me.

comment by lucidfox · 2011-02-01T08:50:29.210Z · LW(p) · GW(p)

Azathoth wants you to maximize your number of descendants; if you fail to have descendants, Azathoth will try not to have created you.

It sure is welcome to try now that I've precommitted to never have children.

(On a silly note, this gave me a mental image of a time-traveling God of Evolution who meddles with the past to achieve desired results in the present. shudder)

comment by Vladimir_Nesov · 2011-02-04T13:48:16.944Z · LW(p) · GW(p)

But if you one-box in Newcomb's Problem, you should take these answers more literally. The kinds of backwards causal arrows you draw are the same.

But keep in mind that this kind of control, or "backwards causality", is all about your map, not the territory, more precisely it's about your state of logical uncertainty and not about what the definitions you have logically imply. If you already know what the state (probability) of the thing you purport to control is, then you can't control it.

In this manner, you might have weak control over your own evolutionary adaptations (i.e. over past evolution) by having more or fewer children, if the regularity that links your behavior to past evolution is all you know, and you do know of such a link (if your decisions are made using advanced considerations which were not considered by your ancestors, then you can't control evolution this way). But as you learn more, you control less, or alternatively, you discover that you actually didn't have any control after all.

So this kind of control is often strictly illusory, and the only reason to take it seriously, to actually try to exert it, is that at that moment you honestly don't know whether it is. If you act on simple enough considerations, that were indeed instantiated in the past, then it might well not be illusory, but considering how complicated human mind is, that would be rare, and so the extent of your control would be low.

For example, to what extent do you control other people during voting? Only to the extent your own resolution to vote so and so controls your anticipation of other people voting similarly, after you take into account all you know about other people independently of your resolution to vote in a certain way. This might in practice be not very much, you know a lot about other people already, without assuming your own decision, and it's hard to (logically) connect others' actions to your own decision.

comment by Eneasz · 2011-02-02T23:08:21.161Z · LW(p) · GW(p)

I would zero-box, so that I could exist twice.

Replies from: Eneasz
comment by Eneasz · 2011-02-02T23:13:15.508Z · LW(p) · GW(p)

Actually in seriousness, I consider people who are memetically similar but genetically dissimilar to me to be closer to me than those who are genetically similar but memetically dissimilar. So I'd rather increase my memetic reproduction at the cost of genetic reproduction when possible. I'd rather acausally trade with a god-of-memes than with Azathoth, as I feel that has a much higher likelihood of creating a me-similar being, and I think I might actually be doing that.

comment by FAWS · 2011-02-01T18:54:50.678Z · LW(p) · GW(p)

It's dangerous to update after observing your own existence, because no counterfactual version of you can update on their non-existence, so your updates can't possibly make sense in aggregate.

One-box in Part 1 if you value your existence, because the probability of your one-boxing directly determines the probability of your existing, and Prometheus accepts or rejects you as a package. You need not bother (much) with Azathoth because he's an idiot: he has no predictive abilities at all beyond the simplest form of induction, he only took into account the effects of (some of) the parts that make you up in completely different combinations, and what your ancestors did is much more weakly entangled with what you are going to do.

Replies from: Nornagest
comment by Nornagest · 2011-02-01T19:19:38.158Z · LW(p) · GW(p)

You're not updating on observing your own existence; you're updating on a trustworthy third party's statements about you. Unless I misunderstand badly, it's trivial to come up with counterfactuals along the lines of "if Omega had said Prometheus selects for two-boxers...".

Replies from: FAWS
comment by FAWS · 2011-02-01T21:48:46.754Z · LW(p) · GW(p)

If you think something along the lines of "if I decide to two-box Prometheus must have been wrong" rather than "if I decide to two-box Prometheus probably didn't create me" you updated on observing your existence in some step along the way.

Replies from: Nornagest
comment by Nornagest · 2011-02-01T22:38:00.496Z · LW(p) · GW(p)

Omega's statement can be rephrased as "in all possible universes within the problem space, Prometheus thinks you will one-box". The other universes have already been excluded from the problem by the time you make your decision. Now, in some (probably the vast majority) of those universes Prometheus will be right; in some of them he'll be wrong; but conditioning on the known fact of his belief violates exactly the same anti-anthropic idea you were using earlier!

Replies from: FAWS
comment by FAWS · 2011-02-01T22:53:51.718Z · LW(p) · GW(p)

I'm explicitly not assuming that Prometheus believes I will one-box so I don't understand what you are referring to here.

Replies from: Nornagest
comment by Nornagest · 2011-02-01T22:55:52.267Z · LW(p) · GW(p)

Then you're solving some other problem, not this one. Part of the setup is that Prometheus believes you to be a one-boxer (or rather, guessed at some point in the past that your blueprint would produce one), and I'm not sure how you can think your way out of that unless you're assuming exotica like Prometheus running simulations of you as part of his evaluation process -- and that starts to shade away from decision theory and into applied theology.

ETA: I suppose it adds an additional wrinkle if you take into account Omega's fallibility as well, but I don't see how that could produce a one-box result. I assumed "wise and trustworthy" to mean "accurate".

Replies from: FAWS
comment by FAWS · 2011-02-01T23:05:12.998Z · LW(p) · GW(p)

Part of the setup is that Prometheus believes you to be a one-boxer,

No. Nowhere does it say that. It only says:

Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."

It does not say which was the case with me. Granted, Omega states that he created me, but I reject that you are allowed to draw conclusions from that, because among other things Omega telling a counterfactual me that Prometheus did not create me wrecks the setup, so it can't possibly add up to sanity.

Replies from: Nornagest
comment by Nornagest · 2011-02-02T00:17:38.046Z · LW(p) · GW(p)

You know, you're right. I think I was thinking about this as orthogonal to Solomon's problem, not the transparent-boxes variation of Newcomb's problem, but the latter is actually the correct analogy.

comment by b1shop · 2011-02-01T14:55:41.747Z · LW(p) · GW(p)

How does Prometheus being wrong make me cease to exist?

Replies from: benelliott
comment by benelliott · 2011-02-01T20:36:25.708Z · LW(p) · GW(p)

It doesn't.

The trouble is how we can distinguish your argument from the "how can my choice cause me to lose the $1,000,000?" argument in Newcomb's problem, which doesn't seem to lead to winning, and identify a meaningful sense in which one is right and the other is wrong.

Replies from: Dorikka
comment by Dorikka · 2011-02-02T00:27:49.474Z · LW(p) · GW(p)

In Newcomb's problem, we can make the statement that people one-boxing in Newcomb's problem reliably receive $1,000,000 while people two-boxing reliably receive only $1000 via Omega's infallible prediction, so we can conclude that one-boxing is a better solution, even if we don't know the low-level definition of the process by which the prediction occurs.

However, we have no evidence that choosing an option that Prometheus did not predict that you would choose will make you cease to exist. For all his foresight, Prometheus just didn't predict correctly what you would choose -- there's no threat looming over your head.

Of course, you may have some weird programming in your brain that physically prevents you from two-boxing here, but there is no evidence to suggest that trying would likely harm you.

Replies from: benelliott
comment by benelliott · 2011-02-02T08:15:43.488Z · LW(p) · GW(p)

I don't know about you, but I would still one-box on Newcomb's problem even if Omega is not entirely infallible, so the fact the Prometheus is capable of mistakes cannot be the problem. I would also one-box in transparent Newcomb's, since once again being the sort of person that does that seems to end well for me.

What is the difference between this and transparent Newcomb's with an Omega who is very occasionally wrong?

Replies from: Dorikka, ArisKatsaris
comment by Dorikka · 2011-02-02T22:24:41.990Z · LW(p) · GW(p)

Ah, bugger. I've lost my link to Transparent Newcomb (TN).

From what I recall, Omega doesn't let you play the game if you would one-box on normal Newcomb but two-box on TN. As a result, having the strategy 'I will one-box on normal Newcomb but two-box on TN' will probably result in you getting no money because when Omega psychoanalyzes you, he'll almost always see this. So you lose, because you're not yet past the filter.

In this problem, you were filtered out prior to birth by a Prometheus who only chose embryos that he believed would one-box. The line 'I should one-box or I won't get to exist' doesn't work because embryos can't think. At the time at which you can first consciously consider this problem, you will be past the filter, and so are free to choose the most effective solution regardless of Prometheus's preferences. So you two-box and win, 'cause you already exist.

The problem changes, of course, if there is any way in which Prometheus could punish you for two-boxing, causing you to lose >$100 in utility.

Edit: Changed a couple of details to properly refer to TN when Omega has a slight possibility of being wrong.

Replies from: benelliott, Tenek, ArisKatsaris
comment by benelliott · 2011-02-03T07:10:03.795Z · LW(p) · GW(p)

What if we modify transparent Newcomb to say the Omega chose whether to fill the box before you were born?

comment by Tenek · 2011-02-06T15:21:06.882Z · LW(p) · GW(p)

Maybe Prometheus could predict your decision by running a simulation of you and putting "you" in that situation.

comment by ArisKatsaris · 2011-02-04T22:28:19.023Z · LW(p) · GW(p)

Ah, bugger. I've lost my link to Transparent Newcomb (TN).

Bongo linked to it in response to my question about it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-02-04T23:00:05.746Z · LW(p) · GW(p)

Wrong rules. Correct rules are as follows (named "Newcomb’s Problem with a Dual Simulation" in Drescher's book):

Omega fills the large transparent box with $1,000,000 iff it predicts that you, when faced with (1) full box, and (2) empty box, would in both cases one-box. If it predicts that there's a nontrivial chance that you'd two-box in either case, it leaves the transparent box empty.

comment by ArisKatsaris · 2011-02-04T22:36:08.586Z · LW(p) · GW(p)

What is the difference between this and transparent Newcomb's with an Omega who is very occasionally wrong?

Well for starters, existing people can be rewarded or punished. Non-existing people cannot. Any bet offered with a promise or punishment to a non-existing person, is a sucker's bait.

Btw, Transparent Newcomb also seems stupid to me. When Omega offers you the boxes you see what Omega has foreseen for you, and yet you're seemingly not allowed to update on the information, because being the sort of person who lets observed reality affect his decision-making means that Omega won't have chosen you in the first place. Or e.g. being the person who lets emotional outrage at Omega prejudging you affect his judgment.

I can precommit to honoring some known situations (Parfit's hitchhiker, Kavka's toxin), but I don't know how to self-modify to not self-modify at any situation. That looks like brain-damage to me, not rationality.

Replies from: AlephNeil, Blueberry
comment by AlephNeil · 2011-02-07T17:41:43.938Z · LW(p) · GW(p)

So you think that one-boxing is correct in the regular version of Newcomb's paradox but incorrect in the 'transparent boxes' version?

So then, if you had to play the "transparent boxes" version you might think to yourself beforehand "if only this was the regular Newcomb's problem I would almost certainly win $1m, but as things are I'm almost certainly only going to get $1k."

Help is available: Go into the room with a blanket or tea-cosy, and carefully shield your eyes until such time as you've located box B and thrown the blanket or tea-cosy over it. (Hopefully Omega will have anticipated such shenanigans from you, and filled the boxes accordingly.)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-07T18:58:04.350Z · LW(p) · GW(p)

So you think that one-boxing is correct in the regular version of Newcomb's paradox but incorrect in the 'transparent boxes' version?

Not quite. Thinking it over, what I'm saying is that one-boxing in transparent Newcomb requires a level of commitment that's different in kind to the level of commitment required by normal Newcomb. Here's why:

  • Our primary goal is to get a box filled with $1,000,000

  • In normal Newcomb, we can succeed in this by committing to taking the opaque box. Therefore we just have to trust that Omega's predictive capabilities were good enough to predict us one-boxing, so that the opaque box IS the box with $1,000,000

  • In transparent Newcomb, we can succeed in getting a box filled with $1,000,000 only by committing to take an empty box instead if an empty box appears. Unless our senses are deluding us (e.g. simulation), this is a logical impossibility. So we must commit to a logical impossibility, which being a logical impossibility should never happen.

So normal Newcomb just requires a bit of trust in Omega's abilities, while transparent Newcomb requires committing to a logical impossibility (that the empty box is the filled box). Or perhaps altering your utility function so that you no longer want money-filled boxes.

Replies from: TheOtherDave, HonoreDB
comment by TheOtherDave · 2011-02-07T19:52:05.196Z · LW(p) · GW(p)

But isn't it equally a "logical impossibility" in normal Newcomb that taking both boxes will give me less money than taking just one box?

I agree that with transparent boxes the "logical impossibility" feels more salient, especially if I don't think about the normal variant too carefully. So, sure, there's a difference. But I don't think the difference is what you are claiming here.

comment by HonoreDB · 2011-02-07T19:20:41.554Z · LW(p) · GW(p)

Note that this particular response to transparent Newcomb doesn't apply to the Prometheus variant, since you never see the empty box.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-07T19:44:57.156Z · LW(p) · GW(p)

In the Prometheus variant we see we exist. I really can't take the Prometheus variant at all seriously, nor do I believe I should.

comment by Blueberry · 2011-02-05T21:01:49.252Z · LW(p) · GW(p)

Transparent Newcomb is the same problem as Kavka's toxin. You should take one box for the same reason you should drink the toxin after the millionaire gives you the money. Your argument would prevent you from winning at Kavka's toxin: after you get the money, and you're faced with the toxin to drink, it's tempting to think that there's no reason to drink it.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-07T16:59:24.790Z · LW(p) · GW(p)

You're not making the correct comparison. Drinking Kavka's toxin after you get the money is like one-boxing after seeing the box is full.

One-boxing whether the box is full or empty is however like drinking Kavka's toxin even if you do NOT get the money.

And since Transparent Newcomb demands the latter (one-boxing whether the box is full or empty), it's not the same problem as Kavka's toxin.

comment by Stuart_Armstrong · 2011-02-01T10:49:34.073Z · LW(p) · GW(p)

By creating a simulation to interrogate, Omega/Prometheus/Azathoth have brought a being into existence, which means the being may have preferences to continue to exist (in some other form). So I'd tend to pick B for Prometheus, to continue existing. I wouldn't do so for Azathoth, because evolution doesn't have to create a living version of me to see what I would do; there is no "I" to regret dying or not existing there.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-01T16:04:05.535Z · LW(p) · GW(p)

It was not my impression that Prometheus might strike me down for disappointing him. If so, this would definitely change my behavior!

Also, if that was the point, this post applies very badly to Azathoth, so I heartily agree with you there.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-02-02T09:13:49.572Z · LW(p) · GW(p)

Basically, it seems that Prometheus has to create some sort of conscious version of me to be able to answer the question (so there is an entity to "regret not continuing to exist") whereas Azathoth doesn't need to simulate me, just fiddle around with non-conscious genes.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-02T18:08:09.047Z · LW(p) · GW(p)

Wow. This actually makes sense, but if this was the intention, nothing in the original post or any previous comment revealed this to me.

So, if the problem is rephrased as: "You might be in Prometheus' simulation, aiding him decide whether to create the real you..." (especially not telling me how many times Prometheus runs the simulation) then I can see the potential utility of doing as Prometheus wants.

I personally don't derive utility from the opportunity to be created in another world, or in the "real world", but I think many people might and do derive quite a lot of utility from it. For those people, in this specific phrasing of the test, I would suggest one-boxing. (I still wouldn't, but I don't mind having my simulation turned off)

I heartily agree with you that if this is the correct interpretation of the puzzle, it has no bearing on Azathoth and how to behave in the real world.

comment by nshepperd · 2011-02-04T13:48:09.330Z · LW(p) · GW(p)

Thanks, this is a highly thought-provoking (and headache-inducing :p) post.

The "obvious" objection to the (evolutionary) argument here is that, were I to in fact make some choices that increase my inclusive genetic fitness in response to this argument, that would bear almost no connection to the genetic fitness of my ancestors, who had never been exposed to the argument, and were presumably causal decision theorists too. (If what I have just said is true, by the way, that would make the argument also a basilisk, in that by disseminating it you would [eventually] force our descendents to consider it in order to increase their measure, decreasing morality. This is unequivocally a bad thing, unless having lots of children [=increasing the number of sentient beings in existence, ala. total utilitarianism] is good directly.)

There are flavors of the smoking lesion in here too, I think. Generally, if the vector involved in the decision is genetic predisposition, having any mathematical decision theory whatsoever, and sticking to it (overriding my instincts) ought to screen off the influence of my genes. If I made this choice because UDT demands it of me, I didn't do it because of my genetic predisposition, unless I have a predisposition to adopt UDT (which could I suppose be called "intelligence").

Other issues, which I am confused about:

  • What utility do I assign to never having existed at all? This is of course addressed in the post, somewhat. Possibly relevant: it doesn't feel like I'm deliberately thwarting the preferences/rights of my potential children when I refuse to have as many children as possible, as quickly as possible.
  • Related to the above, were I an impartial sentient AI that computed only what is right, I know I wouldn't care about never having existed at all. If I weren't alive and happy, presumably someone else would be, and in the 1-place rightness function these are precisely the same thing.
  • UDT doesn't update on anything, even existence, apparently. Am I allowed to condition on the "obvious fact" that I exist? I don't even know.
comment by cousin_it · 2011-02-01T11:12:33.019Z · LW(p) · GW(p)

Your idea sounds plausible and interesting, but I don't completely understand the implications. What am I supposed to do if the environment changes from generation to generation, e.g. due to advances in science? Should I adopt the behaviors that helped my ancestors have many kids, or the behaviors that will help me have many kids?

Replies from: HonoreDB
comment by HonoreDB · 2011-02-02T01:11:54.383Z · LW(p) · GW(p)

Should I adopt the behaviors that helped my ancestors have many kids, or the behaviors that will help me have many kids?

Respectively, these would mean "obeying your base desires or engineering a retrovirus to turn the next generation into your clones." I'm hedging on that. You could consult timtyler, our friendly neighborhood Azathoth-worshipper.

What's truly terrifying to me about this line of thinking is that the answer could be both. Many different factors went into making sure I existed, and the concept of acausal trade seems to suggest that I'm beholden to all of them.

Replies from: timtyler
comment by timtyler · 2011-02-06T00:27:39.930Z · LW(p) · GW(p)

You can generally expect evolution to have made you so that you figure out that your purpose in life is to reproduce - or otherwise help increase your inclusive fitness.

Surveys on the topic suggest that not everyone manages this, though. It is common for organisms to die without reproducing. That is especially true for organisms with heavily-infected brains. Notoriously, not much reduces your fertility as much as a college education does. It is the memetic infections that do it. They don't have your interests at heart. In the future, access to the internet will no doubt be proven to reduce fertility more than colleges ever did. Japan is leading the way in this department.

comment by datadataeverywhere · 2011-02-01T16:56:25.761Z · LW(p) · GW(p)

Others in this thread have pointed this out, but I will try to articulate my point a little more clearly.

Decision theories that require us to one-box do so because we have incomplete information about the environment. We might be in a universe where Omega thinks that we'll one-box; if we think that Omega is nearly infallible, we increase this probability by choosing to one-box. Note that probability is about our own information, not about the universe. We're not modifying the universe, we're refining our estimates.

If the box is transparent, and we can see the money, we simply don't care what Omega says. As long as we trust that the bottom won't fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.

Likewise, our information about whether we exist is not incomplete; we can't change it by choosing to go against the genes that got us here.

For situations where our knowledge is incomplete, we actually can derive information (about what kind of a world we inhabit) from our desires, but it is evidence, not certainty, and certainly not acasual negotiation. We can easily have evidence that outweighs this relatively meager data.

Replies from: HonoreDB, Vladimir_Nesov
comment by HonoreDB · 2011-02-01T17:11:27.665Z · LW(p) · GW(p)

If the box is transparent, and we can see the money, we simply don't care what Omega says. As long as we trust that the bottom won't fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.

Do you pay the money in Parfit's Hitchhiker? Do you drink Kavka's toxin?

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-01T17:25:38.827Z · LW(p) · GW(p)

Good question, but permit me to contrast the difference.

You are the hitchhiker; recognizing the peril of your situation, you wisely choose to permanently self-modify to be an agent that will pay the money. Of course, you then pay the money afterward, because that's what kind of an agent you are.

You appear, out of nowhere, and seem to be a hitchhiker that was just brought into town. Omega informs you of the above situation. If Omega is telling the truth, you have no choice whether to pay or not, but if you decide not to pay, you cannot undo the fact that Paul picked you up---apparently Omega was wrong.

In the first, you have incomplete information about what will happen. By self-modifying to determine which world you will be in, you resolve that. In the second, you already got to town, and no longer need to appease Paul.

Kavka's toxin is a problem with a somewhat more ambiguous setup, but the same reasoning will apply to the version I think you are talking about.

comment by Vladimir_Nesov · 2011-02-04T13:36:34.277Z · LW(p) · GW(p)

If the box is transparent, and we can see the money, we simply don't care what Omega says. As long as we trust that the bottom won't fall out (or any number of other possibilities), we can make our decision because our information (about which universe we are in) is not incomplete.

In transparent Newcomb's, you're uncertain about the probability of what you've observed, even if not about its utility. You need Omega to make this probability what you prefer.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-04T16:04:53.513Z · LW(p) · GW(p)

Is this a MWI concern? I have observed the money with probability 1. There is no probability distribution. The expected long-run frequency distribution of seeing that money is still unknown, but I don't expect this experiment to be repeated, so that's an abstract concern.

Again, if I have reason to believe that (with reasonable probability) I'm being simulated and won't get to experience the utility of that money (unless I one-box), my decision matrix changes, but then I'm back to having incomplete information.

Likewise, perhaps pre-committing to one-box before you see the money makes sense given the usual setup. But if you can break your commitment once the money is already there, that's the right choice (even though it means Omega failed). If you can't, then too bad, but can't != shouldn't.

Under what circumstances would you one-box if you were certain that this was the only trial you would experience, the money was visible under both boxes, and your decision will not impact the amount of money available to any other agent in any other trial?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-02-04T17:42:59.508Z · LW(p) · GW(p)

Is this a MWI concern? I have observed the money with probability 1. There is no probability distribution.

No, it's a UDT concern. What you've observed is merely one event among other possibilities, and you should maximize expected utility over all these possibilities.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-04T18:32:58.991Z · LW(p) · GW(p)

I'm really not trying to be obtuse, but I still don't understand. The other possibilities don't exist. If my actions don't affect the environment that other agents (including my future or other selves) experience, then I should maximize my utility. If, by construction, my actions have the potential of impacting other agents, then yes, I should take that under consideration, and if my algorithm before I see the money needs to decide to one-box in order for the money to be there in the first place, then that is also relevant.

I'm afraid you'll need to be a little more explicit in describing why I shouldn't two-box if I can be sure that doing so will not impact any other agents.

I probably don't need to harp back on this, but the only other reason I can see is that Omega is infallible and wouldn't have put the money in B if we were also going to take A. If we two-box, then there is a paradox; decision theories needn't and can't deal with paradoxes, since paradoxes don't occur. Either Omega is fallible or B is empty or we will one-box. If Omega is probabilistic, it is still in our best interest to decide to one-box beforehand, but if we can get away with taking both, we should (it is more important to commit to one-boxing than it is to be able to break that commitment, but the logic still stands).

That is, if given the opportunity to permanently self-modify to exclusively one-box, I would. But if I appear out of nowhere, and Omega shows me the money but assures me I have already permanently self-modified to one-box, I will take both boxes if it turns out that Omega is wrong (and there are no other consequences to me or other agents).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-02-04T18:58:24.084Z · LW(p) · GW(p)

I'm really not trying to be obtuse, but I still don't understand. The other possibilities don't exist.

Doesn't matter. See Counterfactual Mugging.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-04T22:25:34.873Z · LW(p) · GW(p)

If this problem is to be seen as equivalent to the counterfactual mugging then that's evidence against the logic espoused by counterfactual mugging.

I'm far FAR from certain they're equivalent, mind you -- one point of difference is I can choose to commit to honor all favourable bets, even ones made without my specific consent, but there's no point to committing to honoring my non-existence, as there's no alternative me who would be able to honor it likewise.

At some point we must see lunacy for what it is. Achilles can outrun the turtle, if someone logically proves he can't, then it's the logic used that's wrong, not the reality.

comment by Dorikka · 2011-02-01T14:11:58.009Z · LW(p) · GW(p)

So...if Prometheus created you to one-box, and you should one-box anyway...why not one-box? "Too much useless information" alarm bells are ringing in my head.

Edit: Italics to quotes.

Edit2: I failed to read. See this comment

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-01T16:02:35.259Z · LW(p) · GW(p)

But you should only one-box if you think that the prediction of you one-boxing is tied to whether or not there is more money in the box.

In this case, as near as I can tell, Prometheus believes that you will one-box. Go ahead and two-box and collect the money. Unless we think Omega is lying, there's no reason to believe that one-boxing is superior in this problem.

Replies from: None, Dorikka, wedrifid
comment by [deleted] · 2011-02-01T17:36:03.734Z · LW(p) · GW(p)

I agree, and I don't see why the problem needs to be difficult. Two-boxing is obviously the most profitable choice, and based on the way the problem is phrased it doesn't seem like you are prevented from two-boxing. If anything, your choice to two-box would merely suggest that Prometheus is more error-prone than Omega claimed.

comment by Dorikka · 2011-02-02T00:12:42.746Z · LW(p) · GW(p)

Er...right. I accidentally filled in the first bit with the actual Newcomb's Problem. (In this post, I find the word 'Newcomb' in the title to be misleading unless there's some Newcomb-like aspect that I'm missing.)

So, upon rereading the first bit, I think that two-boxing is definitely optimal in this case. The Prometheus part seems irrelevant. Powerful being A created you to do Z does not mean that you will somehow cease to exist if you do Y instead of Z, nor does it have any impact on your decision unless you also know that the powerful being is benevolent with regard to your utility function.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-02T01:38:45.135Z · LW(p) · GW(p)

(In this post, I find the word 'Newcomb' in the title to be misleading unless there's some Newcomb-like aspect that I'm missing.)

Whether or not it's truly Newcomb-like is the question. The way I'm suggesting you see it is that in addition to the $100, you are in Box B if and only if you one-box. Otherwise, you're nowhere. You don't cease to exist, you cease to have ever existed (which might be better or worse than dying, but certainly sounds bad).

Replies from: datadataeverywhere, Dorikka, mkehrt
comment by datadataeverywhere · 2011-02-02T03:26:04.378Z · LW(p) · GW(p)

You don't cease to exist, you cease to have ever existed (which might be better or worse than dying, but certainly sounds bad).

You applied this to evolution as if this was a grave concern of yours. Surely you don't believe that the universe will un-exist you for failing to have lots of children?!

The very idea of having never existed makes no sense! In a simulation, the masters could run back time and start again without you, but you still existed in the first run of the simulation. Once you've existed, that's it. You can believe that your future existence is in jeopardy, but I don't see how you can believe that you will come to have never existed, much less how one could actually cease to have ever existed.

Replies from: Eneasz
comment by Eneasz · 2011-02-02T22:46:19.445Z · LW(p) · GW(p)

Modify the scenario to include MWI. Maybe in THIS universe you will continue to exist if you two-box, but then in the overwhelming majority of universes you will have never been created. If you one-box, then it's likely that in the overwhelming majority of universes you do exist.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-02T23:09:53.127Z · LW(p) · GW(p)

If you one-box, then it's likely that in the overwhelming majority of universes you do exist.

This is incorrect, but I understand the spirit of what you're trying to say (that the number of universes I exist in is overwhelmingly larger if I one-box---though no matter what, I exist in only an infinitesimally small fraction of universes).

Regardless, this interpretation still doesn't make you cease to have ever existed. Maybe you exist less, whatever that means, but you still exist. Personally, I don't care about existing less frequently.

Lastly, do you think this interpretation allows the problem to pertain to evolution? I still don't think so, but the reasons are more nuanced than the reasons I think the original problem doesn't.

Replies from: Eneasz
comment by Eneasz · 2011-02-02T23:50:20.033Z · LW(p) · GW(p)

I think it may. I'm still not convinced that MWI universes differ in any appreciable way at the macro level. It may be that every other instance of me lived an identical life except with slightly different atoms making up his molecules.

But in either case, I prefer maximizing meme-similar persons, not gene-similar ones, so my actions are self-consistent regardless. I'm acausally trading with an entity other than Azathoth.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-03T00:29:18.663Z · LW(p) · GW(p)

I followed you until

I'm acausally trading with an entity other than Azathoth.

which entity are you trading with? We haven't gone back to talking about Prometheus, have we?

I might like to increase the number of meme-similar persons in my universe, but I don't really care about meme-similar persons in universes that can't influence mine. Even this is something I feel relatively weakly about. It's also just a personal difference in values, and I can reason pretending I share yours.

Replies from: Eneasz
comment by Eneasz · 2011-02-03T04:51:05.821Z · LW(p) · GW(p)

I dunno, whichever entity can be considered the meme-equivalent of Azathoth. "Entity" should probably be in scare-quotes.

comment by Dorikka · 2011-02-02T02:14:00.103Z · LW(p) · GW(p)

I don't see any evidence for that hypothesis in the scenario itself. Could you explain why one would draw that from the narrative?

Replies from: Eneasz
comment by mkehrt · 2011-02-03T07:00:21.841Z · LW(p) · GW(p)

But that's not true? I already exist. There's nothing acausal going on here. I can pick whatever I want, and it just makes Prometheus wrong.

(Similarly, if Omega presented me with the same problem, but said that he (omniscient) had only created me if I would one-box this problem, I would still (assuming I am not hit by a meteor or something from outside the problem affects my brain) two-box. It would just make Omega wrong. If that contradicts the problem, well, then the problem was paradoxical to begin with.)

comment by wedrifid · 2011-02-01T23:11:13.245Z · LW(p) · GW(p)

But you should only one-box if you think that the prediction of you one-boxing is tied to whether or not there is more money in the box.

It is not money we care about, it is utility. In both cases the predictor is long gone.

In this case, as near as I can tell, Prometheus believes that you will one-box. Go ahead and two-box and collect the money. Unless we think Omega is lying, there's no reason to believe that one-boxing is superior in this problem.

When Prometheus is preparing his choice he doesn't know whether you will one box - once he does know he is more or less done deciding and just the implementation is left. Now he believes that you will one box - but he is long gone.

In both cases there is always going to be the $1,000 in the little box. In both cases you are always going to take the big box.

comment by LauraABJ · 2011-02-06T00:33:14.103Z · LW(p) · GW(p)

Ok, so as I understand timeless decision theory, one wants to honor the precommitments one would have made if the outcome actually depended on the answer, regardless of whether the outcome actually depends on it. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless decision theoretical agents (including your future selves), and therefore big wins can be had all around, especially when trying to predict your own future behavior.

So, if you buy the idea that there are multiple universes, and multiple instantiations of this problem, and you somehow care about the results in these other universes, and your actions indicate probabilistically how other instantiations of your predicted self will act, then by all means, One Box on problem #1.

However, if you do NOT care about other universes, and believe this is in fact a single instantiation, and you are not totally freaked out by the idea of disobeying the desires of your just revealed upon you creator (or actually get some pleasure out of this idea), then please Two Box. You as you are in this universe will NOT unexist if you do so. You know that going into it. So, calculate the utility you gain from getting a million dollars this one time vs the utility you lose from being an imperfect timeless decision theoretical agent. Sure, there's some loss, but at a high enough pay out, it becomes a worthy trade.

I think Newcomb's problem would be more interesting if the 1st box contained 1/2 million and the 2nd box contained 1 million, and omega was only right, say 75% of the time... See how fast answers start changing. What if omega thought you were a dirty two-boxer and only put money in box a? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct 'timeless decision theoretical' answer when you come home with nothing.

Replies from: ata
comment by ata · 2011-02-06T01:38:05.049Z · LW(p) · GW(p)

I think Newcomb's problem would be more interesting if the 1st box contained 1/2 million and the 2nd box contained 1 million, and omega was only right, say 75% of the time... See how fast answers start changing. What if omega thought you were a dirty two-boxer and only put money in box a? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct 'timeless decision theoretical' answer when you come home with nothing.

You can't change the form of the problem like that and expect the same answer to apply! If, when you two-box, Omega has a 25% chance of misidentifying you as a one-boxer, and vice versa, then you can use that in a normal expected utility calculation.

If you one-box, you have a 75% chance of getting $1 million, 25% nothing; if you two-box, 75% $.5 million, 25% $1.5 million. With linear utility over money, one-boxing and two-boxing are equivalent (expected value: $750,000 either way), and for most risk-averse dollars->utils mappings two-boxing comes out ahead, since it's the only option that never leaves you with nothing. (I don't think TDT disagrees with that reasoning...)
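(A quick sketch of that arithmetic, treating the 75% accuracy as symmetric and using a square-root curve as one arbitrary stand-in for a risk-averse dollars->utils mapping; both choices of numbers are assumptions for illustration only.)

```python
# Check the numbers for the 75%-accurate variant described above.
import math

p_correct = 0.75

def expected(lottery, utility=lambda x: x):
    """lottery: list of (probability, dollars) pairs."""
    return sum(p * utility(x) for p, x in lottery)

one_box = [(p_correct, 1_000_000), (1 - p_correct, 0)]
two_box = [(p_correct, 500_000), (1 - p_correct, 1_500_000)]

for name, lottery in (("one-box", one_box), ("two-box", two_box)):
    print(name,
          expected(lottery),                     # expected dollars: 750,000 both ways
          expected(lottery, utility=math.sqrt))  # sqrt-utils favor two-boxing here,
                                                 # since only one-boxing risks $0
```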

Replies from: LauraABJ
comment by LauraABJ · 2011-02-06T03:42:19.863Z · LW(p) · GW(p)

That's kind of my point-- it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend not to involve 'perfect' predictors, but rather other flawed agents. The decision to cooperate or not cooperate is thus dependent on the calculated utility of doing so.

Replies from: ata
comment by ata · 2011-02-07T01:02:03.691Z · LW(p) · GW(p)

Right, I was mainly responding to the implication that TDT would be to blame for that wrong answer.

comment by TobyBartels · 2011-02-05T21:21:50.340Z · LW(p) · GW(p)

I don't get it. I do exist. If I never reproduce, then Azathoth predicted incorrectly (which will hardly be the first time).

(I also agree with the response that the universe isn't better off for having me in it, but that doesn't matter, since it has me anyway.)

Replies from: Nisan
comment by Nisan · 2011-02-07T01:46:42.707Z · LW(p) · GW(p)

By that reasoning, you'd want to two-box on the version of Newcomb's with transparent boxes. And yet the correct thing to do in that case is one-box.

Replies from: ArisKatsaris, TobyBartels
comment by ArisKatsaris · 2011-02-07T12:24:51.295Z · LW(p) · GW(p)

By that reasoning, you'd want to two-box on the version of Newcomb's with transparent boxes. And yet the correct thing to do in that case is one-box.

It's not. You should perhaps commit to one-boxing in advance, but what you do by such a commitment is increase the probability that Omega will appear to you with two full boxes.

But if Omega has appeared to you and you're still capable of two-boxing (e.g. if you've not self-modified as to be unable to two-box), then two-boxing is the correct thing to do in transparent Newcomb.

Replies from: TobyBartels
comment by TobyBartels · 2011-02-07T14:36:59.700Z · LW(p) · GW(p)

I guess that this is true if by ‘commit’ you mean to satisfy all of the requirements that Omega uses to predict your actions. For some variations of Newcomb's problem (including all versions in which Omega is perfect), to do this is necessarily to pick one box, but if not, then yes, you should ‘commit’ to one-boxing and then pick both boxes.

But even so, this usage of ‘commit’ is rather stronger than I would normally use that word for. If I were Omega and I were playing Newcomb with you (but not my version which I designed to be analogous to Azathoth), then I wouldn't fill Box B, and you would lose.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-07T15:41:31.758Z · LW(p) · GW(p)

Well, here's the paradox: strict one-boxers in transparent Newcomb argue that they must one-box always, even when the box is empty, and therefore the boxes will be full.

Not just that, they argue that they must one-box always, even when the box is empty, BECAUSE then the box will be full.

Is that actually commitment, or is that just doublethink, the ability to hold two contradictory ideas at the same time? How can you commit to taking a course of action (grabbing an empty box) in order to make that course of action (grabbing an empty box) impossible?

And yeah, I'm sure I'd lose at playing transparent Newcomb, but I'm not sure that anyone but a master of doublethink could win it.

Replies from: Nisan, Wei_Dai
comment by Nisan · 2011-02-07T19:05:24.248Z · LW(p) · GW(p)

I'm not sure that anyone but a master of doublethink could win it.

If I know that I'm going to play transparent Newcomb, and the only way to win at transparent Newcomb is to become a master of doublethink, then I want to become a master of doublethink.

comment by Wei Dai (Wei_Dai) · 2011-02-07T21:31:38.944Z · LW(p) · GW(p)

Well, here's the paradox: strict one-boxers in transparent Newcomb argue that they must one-box always, even when the box is empty, and therefore the boxes will be full.

No, they argue that they must one-box always, even when they think they see the box is empty.

The argument is that you can't do the Bayesian update P(the box is empty | I see the box as empty) = 1, because Bayesian updating in general fails to "win" when there are other copies of you in the same world, or when others can do source-level predictions of you. Instead, you should use Updateless Decision Theory.

BTW, I don't think UDT is applicable to most human decisions (or rather, it probably tells you to do the same things as standard decision theory), including things like voting or contributing to charity, or deciding whether to have children, because I think logical correlations between ordinary humans are probably pretty low. (That's just an intuition though since I don't know how to do the calculations.)
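(A toy illustration of scoring whole policies rather than post-update actions. Purely for the sketch, it assumes a perfect predictor who fills box B iff it predicts you would one-box upon seeing it full, and it leaves out the imperfect-predictor wrinkles that drive the "one-box even when it looks empty" argument.)

```python
# Toy transparent Newcomb, scored the "updateless" way: evaluate each
# observation->action policy as a whole instead of updating on what you see
# and then choosing. Simplifying assumption: a perfect predictor fills box B
# iff it predicts you would one-box upon seeing B full.

from itertools import product

SMALL, BIG = 1_000, 1_000_000
ACTIONS = ("one-box", "two-box")

def payoff(act_if_full, act_if_empty):
    box_b_full = (act_if_full == "one-box")           # the predictor's forecast
    action = act_if_full if box_b_full else act_if_empty
    money = BIG if box_b_full else 0                   # contents of box B
    if action == "two-box":
        money += SMALL                                 # box A is always there
    return money

for policy in product(ACTIONS, repeat=2):
    print(policy, payoff(*policy))

# Every policy that one-boxes on a visibly full box ends up facing a full box
# and walks away with $1,000,000; every policy that would two-box on a full
# box only ever faces an empty one. The agent who updates on "the box is full,
# so I lose nothing by grabbing both" is describing a situation this predictor
# never puts them in.
```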

Replies from: ArisKatsaris, cousin_it
comment by ArisKatsaris · 2011-02-08T11:42:47.332Z · LW(p) · GW(p)

No, they argue that they must one-box always, even when they think they see the box is empty.

If we can't trust our senses more than Omega's predictive powers, then the "transparent" boxes are effectively opaque, and the problem becomes essentially normal Newcomb.

comment by cousin_it · 2011-02-08T11:01:39.898Z · LW(p) · GW(p)

Ordinary correlations between ordinary humans seem to be pretty high. Do they suffice for our needs? I'm not sure...

comment by TobyBartels · 2011-02-07T05:22:50.054Z · LW(p) · GW(p)

In an analogous version of transparent Newcomb, it would be better to two-box. That version goes like this: You have two boxes in front of you. Box A contains $1000 and Box B contains $1000000. You must take Box B but have the choice of taking Box A or not. A very good predictor (ETA: the imperfect Azathoth, not the perfect Omega) put the money in Box B because it predicted that you would choose not to take Box A. The game will not be played again. What do you choose?

In that situation, it would be better if I pick both boxes.

ETA: This is rather imperfectly specified. See my response to wedrifid's response for a more precise version.

Replies from: wedrifid
comment by wedrifid · 2011-02-07T10:50:34.038Z · LW(p) · GW(p)

That version goes like this: You have two boxes in front of you. Box A contains $1000 and Box B contains $1000000. You must take Box B but have the choice of taking Box A or not. A very good predictor (ETA: the imperfect Azathoth, not the perfect Omega) put the money in Box B because it predicted that you would choose not to take Box A. The game will not be played again. What do you choose?

I choose one box.

(Note: Using the terminology from the post it sounded like you meant Prometheus, not Azathoth. If Azathoth I would two box - if modelled as a predictor at all he is an incredibly biased predictor that can be exploited.)

Replies from: TobyBartels
comment by TobyBartels · 2011-02-07T13:20:02.477Z · LW(p) · GW(p)

I choose one box.

And in that game, I choose two boxes. I get $1,001,000, and you get $1,000,000, so I win.

Don't confuse this with other versions where you might not be invited to play at all. This is one game for all time. (If that's not clear from my description of it, then that's the fault of my description. It's supposed to be as analogous as I can make it to the game with Azathoth.)

Using the terminology from the post it sounded like you meant Prometheus, not Azathoth.

No, I meant Azathoth, as in my first comment in this comment thread. I mean to challenge the final conclusion of the OP, not the introductory lead into it. With Prometheus, there are added considerations (such as whether you are Prometheus's simulation).

If Azathoth I would two box - if modelled as a predictor at all he is an incredibly biased predictor that can be exploited.

That is also a good answer to the final conclusion of the OP.

ETA: Let me try to specify more precisely the version of transparent Newcomb which I claim is analogous to the OP's proposed trade with Azathoth. An imperfect predictor with a good track record (which I will call God, for kicks) presents everybody in the world with this game, once. God predicts that each person will one-box and accordingly fills Box B with $1000000 every time. You know all of this. What do you do?

This version is a bit odd because, with the possible exception of a few timeless decision theorists, it seems clear that almost everybody will pick both boxes, so whence comes God's good track record? We can make this even more analogous by specifying that, instead of $1000, Box A contains a white elephant that most people consider to have negative utility, but which you and I (or whoever the OP is directed at) contrarianly value at a positive $1000. (This matches the fact that most people want to reproduce anyway, but the OP only presents a conundrum to those of us who don't.) So God's prediction that everybody will one-box is likely to be correct for most people, but not for reasons that apply to you and me. Now what do you do?

Replies from: Nisan, wedrifid
comment by Nisan · 2011-02-07T18:59:46.841Z · LW(p) · GW(p)

it seems clear that almost everybody will pick both boxes, so whence comes God's good track record?

Indeed, in the problem you have specified, God seems to be an incompetent predictor. If there's no competent predictor involved, it's safe to two-box.

Setting aside the Azathoth problem for a moment, the transparent Newcomb's problem I had in mind does involve a competent predictor. You would one-box in that situation, yes? Even though Omega has given you two full boxes?

As you and wedrifid agree, one can make arguments for not reproducing in HonoreDB's original Azathoth problem; my point is simply that "I already know Azathoth's prediction" is not a good argument.

Replies from: TobyBartels
comment by TobyBartels · 2011-02-07T20:23:03.250Z · LW(p) · GW(p)

my point is simply that "I already know Azathoth's prediction" is not a good argument.

OK, I agree with that. What matters is not what I happen to know but that Azathoth's one-boxing prediction (right or wrong) is guaranteed by the formulation of the problem itself.

comment by wedrifid · 2011-02-07T14:24:26.512Z · LW(p) · GW(p)

I get the impression that we may decline to submit to Azathoth's breeding ultimatum for somewhat different reasons.

Replies from: TobyBartels
comment by TobyBartels · 2011-02-07T14:30:28.882Z · LW(p) · GW(p)

That may be, but my analogy is supposed to be irrelevant to that: I'm just hypothesising that we value our own existence at 1000 times the utility of not breeding. (Which is not true for me, personally, but I pretend for purposes of the argument.)

PS: I edited my previous post while you were writing your response, which may or may not make a difference.

comment by Nisan · 2011-02-05T07:23:56.019Z · LW(p) · GW(p)

Total utilitarians want to one-box in Prometheus' game, and average utilitarians want to two-box.

comment by mkehrt · 2011-02-04T02:24:55.396Z · LW(p) · GW(p)

Let me try to make my objection clearer. You seem to be concerned with things that make your existence less likely. But that is never going to be a problem. You already know the probability of your own existence is 1; you can't update it based on new data.

comment by Sniffnoy · 2011-02-01T08:09:02.738Z · LW(p) · GW(p)

I don't get it. Is this supposed to be some weird form of evidential or maybe timeless decision theory? It hardly matters; whatever decision theory you're using, you already know you exist; conditioning on the possibility that you don't is nonsensical. Hell, even if you're an AI using UDT you gain nothing from not assuming you exist; you were built to not update in the normal sense because whoever built you cared about all possible worlds you might end up in, but regardless, if you're standing there making the decision, you exist (i.e. this can be assumed at the start and taken into account).

Edit: Just for the purpose of explicitness, I should probably state that the conclusion here is that you should two-box in this case.

Replies from: wedrifid
comment by wedrifid · 2011-02-01T09:36:27.223Z · LW(p) · GW(p)

Edit: Just for the purpose of explicitness, I should probably state that the conclusion here is that you should two-box in this case.

And so as to demonstrate that the first part of the post is controversial enough to be interesting: Sniffnoy is wrong - you are better off one boxing.

Replies from: ArisKatsaris, ShardPhoenix
comment by ArisKatsaris · 2011-02-01T21:04:28.157Z · LW(p) · GW(p)

Rationalists should win.

In this scenario two-boxers get $200 and exist, while one-boxers get $100 and exist.

Two-boxers will be numerically fewer, because Prometheus is biased in favour of irrationality, but nonetheless it'll be two-boxers that'll be winning. That's the opposite of two-boxers in the Newcomb problem.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-01T22:43:15.375Z · LW(p) · GW(p)

[This comment's body was an image: a photoshopped MS Clippy macro, referenced in the replies below.]

Replies from: ArisKatsaris, JenniferRM, wedrifid
comment by ArisKatsaris · 2011-02-01T23:34:27.607Z · LW(p) · GW(p)

Nice icon, though my reasoning is the exact opposite of that behind Quantum Suicide. I have no shared identity with the people who would one-box here, so I don't need to one-box in order to increase their chances of having existed -- if anything such an action would increase the stupidity levels in the multiverse.

Even a one-boxer would have to be particularly weird to want to increase the amplitude of his universe's configuration, as if that would affect his own life at all.

Quantum Suicide on the other hand assumes a shared identity between the people who'll die and the people who'll suffer permanent brain damage with a bullet lodged in their brain, and the people who'll have their consciousness magically copied by magical aliens before they kill themselves. I don't assume shared identity, and that's why I two-box here; quantum suiciders on the other hand assume it, and that's why they fail.

comment by JenniferRM · 2011-02-01T23:22:20.703Z · LW(p) · GW(p)

! ! ! !

Tangential Question: Would it be good or bad for the world if 4chan picked this up as a meme?

Replies from: David_Gerard, JGWeissman
comment by David_Gerard · 2011-02-02T00:18:16.709Z · LW(p) · GW(p)

The Friendly AI must be kept away from 4chan at all costs.

Replies from: JGWeissman, JenniferRM
comment by JGWeissman · 2011-02-02T00:28:05.338Z · LW(p) · GW(p)

FAI's don't run away from hard problems.

comment by JenniferRM · 2011-02-02T02:21:03.919Z · LW(p) · GW(p)

I should have been more specific.

I'm not wondering whether interacting with 4chan would poison the mind of a specific software construct. I'm wondering whether the long term political consequences would be good or bad if the 4chan community picked up the generic technique of adding photoshopped text to MS Clippy images as a joke-generating engine that involved re-purposing LW's themes and content (probably sometimes in troll-like or deprecating ways).

Would it raise interesting emotional critiques of moral arguments? Would it poison the discourse with jokes and confusion? Would it bring new people here with worthwhile insights? Would it reduce/increase the seriousness with which the wider world took AGI research... and which of those outcomes is even preferred?

I still don't really have a good theory of what kinds of mass opinion on the subject of FAI are possible or desirable, and when I see something novel like the Clippy image it sometimes makes me try to re-calculate the public relations angle of singularity stuff.

comment by JGWeissman · 2011-02-01T23:29:29.802Z · LW(p) · GW(p)

Which meme, MS Clippy jokes or quantum suicide?

Replies from: NihilCredo
comment by NihilCredo · 2011-02-03T02:46:10.189Z · LW(p) · GW(p)

I'm fine with 4channers picking up quantum suicide, especially since to me it will almost always look like regular suicide.

comment by wedrifid · 2011-02-01T23:21:09.676Z · LW(p) · GW(p)

That is brilliant. Did you create it manually?

Replies from: HonoreDB
comment by HonoreDB · 2011-02-02T00:38:27.897Z · LW(p) · GW(p)

Thanks, I did. I'm sure there are generators for it, though.

comment by ShardPhoenix · 2011-02-01T11:31:34.173Z · LW(p) · GW(p)

It seems to me that if you find yourself having a choice, you should two-box. If the premise is true then you probably won't feel like you have a choice, and your choice will be to one-box.

I guess you were selected by Prometheus :).

edit: this is related to the idea about going back in time and killing your grandfather. Either this is possible, or it's not. Either way you can't erase yourself and end up with the universe in an inconsistent state.

edit2: In other words, either the premise is impossible, or most people will one-box regardless of any recommendations or stratagems devised here or elsewhere.

edit3: I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand.

Replies from: FAWS, wedrifid, Skatche, LauraABJ
comment by FAWS · 2011-02-01T15:55:13.463Z · LW(p) · GW(p)

If time travel to your own past (rather than to a newly created extra timeline) is possible, then hypothetical people with access to time travel who are determined to kill their grandfathers (before their parents' conception) have all eventually (in the sense of actions in inconsistent hypothetical timelines influencing which possible stable timeline comes about) created a stable time loop in which they don't exist as people who are determined to kill their grandfathers.

(e.g. they succeed and influence the timeline in such a way that their other parent has a different child with someone else instead, who goes back in time and accidentally kills the would-be grandfather of the first person. Or they die in a freak accident that influences which children their would-be grandfather has, which means a different grandchild time travels with different actions and influences which grandchildren the grandfather ends up with, until a grandchild comes into existence who coincidentally influences the timeline in just exactly the right way to bring their own existence about. Or something more complicated.)

Since I prefer to exist, I will not time travel in any way that seems likely to make my existence inconsistent, and I will take actions to make it consistent when it would otherwise seem to be inconsistent. For example, if I learned that my grandmother's fiancé was murdered by someone who claimed to be his grandchild, and I had access to time travel, I would try to stage that murder and take the fiancé back to the future with me.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-02T00:38:38.537Z · LW(p) · GW(p)

My point is that you can't step outside the system and say that you're making a choice. Killing your own (true) grandfather in the past is simply impossible, so you won't be able to do it, for one reason or another. The details don't matter.

edit: I guess my position on Newcomb's is that you should precommit to one-boxing if you can, but if someone is put into that situation with no pre-knowledge, it is too late to bother talking about what they "should" do - their fate is already sealed.

comment by wedrifid · 2011-02-01T12:01:14.100Z · LW(p) · GW(p)

I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand.

Newcomb's with precommitments? Next can we do Tic-tac-toe? ;)

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-01T12:23:45.427Z · LW(p) · GW(p)

If you haven't heard about the problem beforehand then asking what decision you "should" make is incoherent. You will get the result you were selected to get. There is no use talking as if you have some meta-choice.

edit: ie if you are selected on your decision process without having heard of such problems, then it is already too late to change your past decision process even if you fully understand the situation you are in. If you're capable of understanding the situation though, you presumably already had the right decision process on some level and will successfully one-box.

edit2: The probabilistic method of dealing with Newcomb's problem is to observe that one-boxers win, therefore you should one-box. This doesn't apply to the Prometheus problem; we can't observe that two-boxers probably never existed.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2011-02-01T12:57:06.798Z · LW(p) · GW(p)

The probabilistic method of dealing with Newcomb's problem is to observe that one-boxers win, therefore you should one-box. This doesn't apply to the Prometheus problem; we can't observe that two-boxers probably never existed.

Including observations of other people who have encountered Omega's game in the description of Newcomb's problem is sometimes helpful because it engages the intuitions of those who aren't familiar with the relevant kinds of reasoning. It is not, however, an important part of the problem or the critical part of the solution.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-02T00:33:49.548Z · LW(p) · GW(p)

I didn't claim it was - I was just pointing out another way that these two problems are different.

Replies from: wedrifid
comment by wedrifid · 2011-02-02T01:55:33.297Z · LW(p) · GW(p)

I was just pointing out another way that these two problems are different.

You claimed that "what decision you 'should' make is incoherent". (This claim is false.)

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-02T02:16:48.676Z · LW(p) · GW(p)

I don't find it helpful that you just keep asserting that you're right without explaining your reasoning. Please explain why you think one-boxing is correct in the Prometheus case.

Replies from: wedrifid
comment by wedrifid · 2011-02-02T05:20:51.401Z · LW(p) · GW(p)

That you do not understand the explanations does not mean I have not given any. I refer you to the original post. From that link a search for 'wedrifid' will give you at least three explanations.

In the case of the grandparent, you may (or may not) note that my reply speaks to the relevance of that comment's parent to the same comment's grandparent.

I also observe that when replying to a rebuttal (pre-edit) that consists of asserting an incorrect premise used to support reasoning that isn't quite relevant there is only so much you can do. The second edit contained what we could call a 'high quality mistake' so I attempted to explain to you why that line of reasoning does not influence the decision making here.

I suspect you will find it more enjoyable to engage with one of the other people who have also explained the reasoning behind one-boxing here (complete with pictures!). If you keep making replies to me that don't seem (to me) to make any sense in context, it is natural that you will be unsatisfied with the response.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-03T00:50:10.749Z · LW(p) · GW(p)

Sorry if my original posts were unclear - I was drunk at the time :). However I've read the rest of this thread and I agree with the positions of the Prometheus two-boxers for the problem as given. If Prometheus's strategy involves simulating you to adulthood and giving you a sim-test before the "real" test, then things may be different.

Replies from: wedrifid
comment by wedrifid · 2011-02-03T10:41:51.387Z · LW(p) · GW(p)

I was drunk at the time :).

Taking drunken boxing to a whole new level! ;)

comment by wedrifid · 2011-02-01T12:34:36.466Z · LW(p) · GW(p)

I don't agree with any of this.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2011-02-01T12:35:45.170Z · LW(p) · GW(p)

Good for you? I guess we'll have to call up Omega and Prometheus and test it all..

comment by Skatche · 2011-02-01T13:29:10.288Z · LW(p) · GW(p)

Okay: originally I was leaning toward two-boxing, but now I'm not sure. Conceivably, for example, I am doomed to have a sudden cardiac arrest and die before actually getting to make my selection; this would kind of trivially satisfy Prometheus' criteria (depending, I suppose, on precisely how they're formulated). My death, in that case, would not be a consequence of my choosing both boxes, as I never actually get to make that decision.

Better not to tangle with the gods, I think. I'd take one box.

comment by LauraABJ · 2011-02-06T00:56:07.115Z · LW(p) · GW(p)

"I think this is different from the traditional Newcomb's problem in that by the time you know there's a problem, it's certainly too late to change anything. With Newcomb's you can pre-commit to one-boxing if you've heard about the problem beforehand."

Agreed. It would be like opening the first box, finding the million dollars, and then having someone explain Newcomb's problem to you as you consider whether or not to open the second. My thought would be, "Ha! Omega was WRONG!!!! " laughing as I dove into the second box.

edit: Because there was no contract made between TDT agents before the first box was opened, there seems to be no reason to honor that contract, which was drawn afterwards.

comment by [deleted] · 2013-01-15T03:16:30.465Z · LW(p) · GW(p)

Although I find one-boxing difficult to do in that scenario, as a human, it is apparent that a reflectively consistent decision theory would one-box, as that is what it would have precommitted to do if it had the option (prior to its not yet determined existence) to precommit. No backwards arrows of causality are needed, just a particular type of consistency (updatelessness or timelessness).

comment by Ghatanathoah · 2012-06-12T00:11:07.177Z · LW(p) · GW(p)

I've been trying to work on this problem based on my admittedly poor understanding of Updateless Decision Theory, and I think I've come to the conclusion that, while you should one-box in Newcomb's problem and in Transparent Newcomb's problem, you should two-box when dealing with Prometheus, ignore Azathoth, and ignore the desires of evil parents.

Why? My reasoning is based on these lines from cousin_it's explanation of UDT:

When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision ("information set"), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight...

...For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall.

I start by taking into account that there are some universes where I was created by Prometheus/Azathoth/evil parents and some where I was not. I then try to make the decision that will increase the utility of all my copies in all possible universes where a version of me is faced with this decision. If I two-box then all the existing copies of me faced with the same decision will also two-box, and get $200. The nonexistent mes in the universes where I was not created will keep right on not existing. If I one-box all the copies of me will get $100. Again, the nonexistent mes in the universes where I was not created will keep right on not existing, their nonexistent utility unchanged. So all my copies will get more utility if I two-box, ignore Azathoth, and tell my evil parents to do something anatomically improbable. The nonexistent mes cannot be affected.

The key is that you're supposed to consider the utility of "all copies of you in the entire "multiverse" that are faced with the same decision," and copies that were never created are obviously not faced with the same decision. This differs from the counterfactual mugging because there you exist in both the heads-universe and the tails-universe, so you have to take the utility of both copies into account. I believe that it differs from Newcomb's problem and Transparent Newcomb's problem for the same reason.
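(A minimal sketch of the two bookkeeping rules being contrasted, with made-up numbers: "accuracy" is an assumed figure for Prometheus's foresight and "exist_value" is a hypothetical utility for having existed at all.)

```python
# Two ways of scoring a disposition in the Prometheus game. All numbers are
# invented for illustration; nothing here settles which rule UDT endorses.

accuracy = 0.99           # assumed P(Prometheus predicts the disposition correctly)
exist_value = 10_000_000  # hypothetical utility of having existed at all
money = {"one-box": 100, "two-box": 200}

def p_created(disposition):
    # Prometheus instantiates only blueprints he predicts will one-box.
    return accuracy if disposition == "one-box" else 1 - accuracy

for disposition in ("one-box", "two-box"):
    # Rule (a): count only copies that actually exist and face the choice.
    existing_copies_only = money[disposition]
    # Rule (b): score the blueprint's prospects before creation.
    blueprint_level = p_created(disposition) * (exist_value + money[disposition])
    print(disposition, existing_copies_only, blueprint_level)

# Rule (a) says two-box ($200 beats $100). Rule (b) says one-box whenever
# exist_value swamps the extra $100. Which rule is the right one to apply is
# exactly what is being argued over here.
```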

So it looks like, if I understand UDT correctly, in newcomblike problems where you never having existed is part of the problem, one-boxing is not necessarily rational. (An aside: I should also mention that I am assuming whatever method of prediction Prometheus used did not result in the creation of a morally significant copy of you in his head. That would be a whole other ballgame, and I think the spirit of the original post was that Prometheus' prediction method did not do this)

Did I get this right, or is my understanding of UDT wrong? I'm not very certain of this at all, and would like it if someone with a stronger understanding of UDT could confirm or disconfirm it.

UPDATE: I think that a possible flaw in my reasoning is that I have misunderstood UDT to mean "make all your copies make whatever decision maximizes their utility in all possible situations," when what it really means is more like "make your decision as if the Omega/Prometheus in those other universes is watching your universe and basing its decision on your behavior there, rather than on the behavior of the you in their native universe; and act to maximize utility across all possible universes." I think my previous formulation implies two-boxing in Newcomb's, which seems wrong. With my second formulation it might indeed be better to one-box in the Prometheus problem, because in some other universe Prometheus is "watching" (i.e. simulating) the you of your universe and is going to decide to create you based on what you do.

I'm not sure my second formulation is quite right either. It still seems to me that these nonexistence problems are qualitatively different from other Newcomblike problems; it seems that the fact that in some universes I don't exist changes the nature of the problem in some way so that one-boxing is no longer rational, maybe because in those universes I'm not part of the same "information set."

That being said, even if UDT recommends one-boxing, there are still several strong objections to the original post's conclusions, and ArisKatsaris's disturbing "evil parent" variant:

  • Azathoth is not "watching your universe," it's not simulating anything, so it is not analogous to Prometheus
  • I'm not sure that refusing to update on the fact that you exist is logically coherent.
  • In any "evil parent" situation if the demands they make are sufficiently horrible it is better to not exist than to obey them.
  • These discussions focus primarily on what is individually rational, not what is collectively rational. It is collectively rational to two-box in the "evil parent" variant to deter evil parents from actually trying this, even if it isn't individually rational. Similarly, the collectively rational thing to do in the Prometheus problem is probably to ask Omega if it can help you track down Prometheus, chain him to a rock, and make a giant eagle eat his constantly-regenerating liver to make sure he stops pulling crap like this.
  • If UDT makes you "lose" in the Prometheus problem and the "evil parent" variant maybe it still needs some work.
comment by Desrtopa · 2011-02-07T20:30:49.329Z · LW(p) · GW(p)

For this specific formulation of the question, I think it may be relevant to know whether Prometheus updates on your decisions in order to improve his projections on whether future individuals will one box or two box.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-07T23:24:17.173Z · LW(p) · GW(p)

Explain why? I see the connection to Azathoth, but not the relevance to your decision.

Replies from: Desrtopa
comment by Desrtopa · 2011-02-08T00:02:39.765Z · LW(p) · GW(p)

If he doesn't update, then his prediction abilities will not in the future prevent the birth of people like you, in addition to not having prevented your existence in the first place.

I'm not convinced the scenario makes much sense in the first place. Second guessing Omega doesn't do you any good, you'll simply find out after you make the decision that Omega predicted you correctly, but if you second guess Prometheus, you know ahead of time that you'll prove his prediction wrong. You can precommit to one boxing Omega, in order to maximize your expected utility at the time of the choice, but there's no point at which you can make the choice to one box in this scenario where it will increase your utility, unless you care about Prometheus preventing the existence of future individuals psychologically similar to yourself. If he doesn't update on your decision, then two boxing won't cause him to be more likely to prevent the existence of future two boxers.

Anyway, unless it's correlated with other values that I have a particular stake in, I don't care very much about the two-boxingness of past or future individuals, and he's already failed to prevent my existence, so I don't see that there's a compelling reason to one box.

comment by atucker · 2011-02-06T06:00:06.280Z · LW(p) · GW(p)

What was the point of reposting this after it was in the discussion section, without seeming to edit it in response to comments since then?

Replies from: HonoreDB
comment by HonoreDB · 2011-02-06T06:50:44.805Z · LW(p) · GW(p)

I've been tweaking it; if larger changes had seemed warranted I would've rewritten it, and if it hadn't gotten a decent karma score I wouldn't have reposted it. I was being cautious since I haven't posted an article before.

comment by MinibearRex · 2011-02-04T01:54:12.214Z · LW(p) · GW(p)

I think the primary reason why this Prometheus problem is flawed is that in Newcomb's problem, the presence or absence of the million dollars is unknown, while in this Prometheus problem, you already know what Prometheus did as a result of his prediction. Think of a variation on Newcomb's problem where Omega allows you to look inside box B before choosing, and you see that it is full. Only an idiot would take only one box in that scenario, and that's why this analysis is flawed.

Replies from: wedrifid
comment by wedrifid · 2011-02-04T02:58:14.698Z · LW(p) · GW(p)

I think the primary reason why this Prometheus problem is flawed is that in Newcomb's problem, the presence or absence of the million dollars is unknown, while in this Prometheus problem, you already know what Prometheus did as a result of his prediction. Think of a variation on Newcomb's problem where Omega allows you to look inside box B before choosing, and you see that it is full. Only an idiot would take only one box in that scenario, and that's why this analysis is flawed.

You are right that this scenario is comparable to Newcomb's Problem With Transparent Boxes. But you're wrong about the idiocy. Rational agents one box on Transparent Newcomb's too. ;)

Replies from: MinibearRex
comment by MinibearRex · 2011-02-04T21:31:42.434Z · LW(p) · GW(p)

I think in the classic Newcomb's problem, because Omega is a superintelligence and an astonishingly accurate predictor of human behavior, you have to assume that Omega predicted every thought you have, including that one. For that reason, we're assuming that it's just about impossible for you to "trick" Omega. However, if you know, for a fact, that both boxes are filled, then you know exactly what Omega modeled you doing. That doesn't mean that you have to do it. At this point, it is possible to trick Omega. Taking both boxes just means that Omega made a mistake about what you'd do.

I've heard people argue, as you are, that rational agents should one box on transparent Newcomb's, but I've never heard a good explanation for why they think that. Care to help me out?

Replies from: wedrifid
comment by wedrifid · 2011-02-05T06:15:48.630Z · LW(p) · GW(p)

I've heard people argue, as you are, that rational agents should one box on transparent Newcomb's, but I've never heard a good explanation for why they think that. Care to help me out?

Two points that may or may not be useful:

  • If you take one box you get $1,000,000. If you take both boxes you get $1,000. This seems like an overly simple intuition. But it turns out that it is a more pertinent intuition than the other simplistic intuition 'but I can already see two boxes with money in them'.
  • Omega is assumed to be trustworthy. Him telling you something is as informative as seeing it with your own eyes. At the time you take the one box you already know perfectly well that it will contain $1,000,000 and that the other box that you are leaving contains $1,000. You are knowingly picking up $1,000,000 and knowingly leaving $1,000 behind. You are in the same situation if the boxes are transparent, so in that case do the same thing and reap the rewards.
Replies from: MinibearRex
comment by MinibearRex · 2011-02-07T15:34:16.717Z · LW(p) · GW(p)

Thank you, that is helpful. I still have a slight problem with it, though. In the classic Newcomb's problem, I'm in a state of uncertainty about Omega's prediction. Only when I actually pick up either one box or two can I say with confidence what Omega did. At the moment that I pick up Box B, I do know that I am leaving behind $1000 in Box A. At this point, I might be tempted to think that I should grab that box as well, since I already "know" what's inside of it. The problem is that Omega probably predicted that temptation. Because I don't know Omega's decision while I'm considering the problem, I can't hope to outsmart it.

I would argue, though, that getting $1,001,000 out of Newcomb's problem is better than getting $1,000,000. If there's a way to make that happen, a rational agent should pursue it. This is only really possible if you can outsmart Omega, which does seem like a very difficult challenge. It's really only possible if you can think one level further than Omega. In classic Newcomb's, you have to presume that Omega is predicting every thought you have and thinking ahead of you, so you can't ever assume that you know what Omega will do, because Omega knows that you will assume that and do differently. In transparent Newcomb's, however, we can know what Omega has done, and so we have a chance to outsmart it.

Obviously, if we are anticipating being faced with this problem, we can decide to agree to only take one box, so that Omega fills it up with $1,000,000, but that's not what transparent Newcomb's is asking. In transparent Newcomb's, an alien flies up to you and drops off two transparent boxes that contain between them $1,001,000. It doesn't matter to me what algorithm Omega used to decide to do this. Rationalists should win. If I can outsmart Omega, and I have an opportunity to on transparent Newcomb's, I should do it.

comment by cousin_it · 2011-02-01T11:06:45.492Z · LW(p) · GW(p)

Good stuff, thanks.

Acausal version: "If you have goals that would be served by you existing, then try to have many kids because it increases the number of worlds in which you exist." Note how this completely ignores the "fact" that you "already exist" - of course you do, we're living in a multiverse! What's left to you is to increase the measure.

Causal version 1: "If you're a good person, and you believe the world needs more good people, then try to have many kids." Note that this argument doesn't rely on genetics only: how your kids will turn out depends on nurture too. Which brings us to...

Causal version 2: "If you have a good idea, and you believe the world needs more people who agree with that idea, then try to spread it." In other words, exactly what Eliezer is doing by creating LW :-)

comment by wedrifid · 2011-02-01T08:30:55.828Z · LW(p) · GW(p)

"You were created by a god: a being called Prometheus. Prometheus was neither omniscient nor particularly benevolent. He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman. Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."

Do you take both boxes, or only Box B?

I take one box. Normal Newcomblike reasoning.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-01T16:10:38.250Z · LW(p) · GW(p)

Either I don't get it, or you are misapplying a cached thought. Please explain to me where my reasoning is wrong (or perhaps where I misunderstand the problem):

When answering Newcomb's problem, we believe Omega is a reliable predictor of what we will do, and based on that prediction places money accordingly.

In this problem, Prometheus always believes (by construction!) that we will one-box, and so will always place money according to that belief. In that case, the allocation of money will be the same for the people who one-box (most people, since Prometheus is a good predictor) as for the people who two-box.

You could make an alternate argument that even if you want to two-box, Prometheus' near infallibility means you are unlikely to (after all, if everyone did, he would be a terrible predictor), but that's different than answering what you should do in this situation.

Replies from: wedrifid, HonoreDB
comment by wedrifid · 2011-02-01T23:02:14.337Z · LW(p) · GW(p)

Either I don't get it, or you are misapplying a cached thought. Please explain to me where my reasoning is wrong (or perhaps where I misunderstand the problem)

It's not about the money this time - but the implications to utility are the same. The 'million dollars' in Newcomb's problem is allocated in the same way that life is allocated in this problem. In this problem the money is basically irrelevant because it is never part of Prometheus' decision. But existence in the world is part of the stakes.

The problem feels different from Newcomb's because the traditional problem was constructed to prompt the intuition 'but one-boxers get the money!'. Then the intuition goes ahead and dredges up reasoning strategies (TDT, for example) that are able to win the $1,000,000 rather than the $1,000. But people's intuitions are notoriously baffled by anthropic-like situations. No intuition of "um, for some reason making the 'rational choice' is making me worse off" is prompted, and so they merrily revert to CDT and fail.

Another way to look at it, which many people find helpful when considering standard Newcomb's, is that you don't know whether you are the actual person or the simulated person (or reasoning) that occurs while Omega/Prometheus is allocating the $1,000,000/life.

If a consistent decision-making strategy is applied to both Newcomb's and this problem, then those who one-box in Newcomb's but two-box in this problem are making the same intuitive mistake as those who think quantum suicide is a good idea based on MWI assumptions.
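
A rough sketch of the arithmetic behind this analogy, not anything stated in the original problem: the accuracy figure and the utility assigned to existing below are assumed placeholders, and the whole calculation adopts the "blueprint-stage" framing wedrifid is defending here (and which ArisKatsaris disputes further down the thread).

```python
# A sketch (not from the post) of why the fixed $100/$200 is swamped by the
# existence stakes, if you evaluate from the blueprint stage as wedrifid
# suggests.  ACCURACY and EXISTENCE_UTILITY are assumed placeholder values.

ACCURACY = 0.999           # assumed predictive accuracy for Prometheus
EXISTENCE_UTILITY = 1e6    # assumed utility of getting to exist at all

def blueprint_stage_utility(disposition):
    """Expected utility of a blueprint with the given disposition, pre-creation."""
    p_created = ACCURACY if disposition == "one-box" else 1 - ACCURACY
    money = 100 if disposition == "one-box" else 200   # each box holds $100
    return p_created * (EXISTENCE_UTILITY + money)

for d in ("one-box", "two-box"):
    print(d, blueprint_stage_utility(d))
# one-box ~999,100 vs two-box ~1,000: the existence term plays the role that the
# $1,000,000 plays in standard Newcomb, which is exactly the analogy drawn above.
```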

Replies from: ArisKatsaris, magfrump, datadataeverywhere
comment by ArisKatsaris · 2011-02-02T09:37:50.470Z · LW(p) · GW(p)

You're not answering the problem as it actually stands, you're instead using perceived similarities to argue it's some other problem, or to posit further elements (like simulated versions of yourself) that would affect the situation drastically.

With Newcomb's problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you're acausally affecting the likelihood that the non-transparent box has the $1,000,000. This works even for Omegas with less than 100% probability of predictive success.

With this problem, your existence is a certain fact. You don't need to entangle anything, because you exist and you'll keep existing -- in any universe where you're actually making a decision, YOU EXIST. You only need to grab two boxes, and you'll have them both with no negative consequences.

This has absolutely NOTHING to do with Quantum suicide. These decisions don't even require a belief in MWI.

On the other hand, your argument essentially says that if your mother was a Boston Celtics fan who birthed you because she was 99.9% certain you'd support the Boston Celtics, then even if you hate both her and the Celtics you must nonetheless support them, because you value your existence.

Or if your parents birthed you because they were 99.9% certain you'd be an Islamist jihadi, you must therefore go jihad. Even if you hate them, even if you don't believe in Islam, even if they have become secular atheists in the meantime. Because you value your existence.

That's insane.

You're not doing anything but invoking the concept of some imaginary debt to your ancestors. "We produced you, because we thought you'd act like this, so even if you hate our guts you must act like this, if you value your existence."

Nonsense. This is nothing but an arbitrary deontological demand, one that has nothing to do with utility. I will one-box in the normal Newcomb's problem, and I can honorably decide to pay the driver in Parfit's Hitchhiker problem, and I can commit to taking Kavka's toxin -- but I have no motivation to commit to one-boxing in this problem. I exist. My existence is not in doubt. And I only have a moral obligation to those that created me under a very limited set of circumstances that don't apply here.

Replies from: None, MugaSofer
comment by [deleted] · 2011-02-02T16:59:59.832Z · LW(p) · GW(p)

With Newcomb's problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you're acausally affecting the likelihood that the non-transparent box has the $1,000,000. This works even for Omegas with less than 100% probability of predictive success.

You should still one-box in Newcomb's problem with transparent boxes. There's no unknown state there. And if you think you shouldn't: when Omega presents you with two transparent boxes, one of them containing $1,000 and the other empty -- won't you regret being the kind of person who two-boxes in that problem?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-02T17:39:35.432Z · LW(p) · GW(p)

Can you link to a description of Newcomb's problem with both boxes transparent?

If the problem is as you imply it to be, I don't know what Omega would do if I one-boxed in the case of an empty transparent box, and two-boxed in the case of a full one. That seems an exceptionally easy way to contradict Omega's prediction, which in turn goes against the principle of Omega being Omega.

Also, what you're doing seems to be substituting the uncertainty about the contents of the box with uncertainty about whether Omega will appear to me and offer me an empty or a full box. But there's an infinite number of hypothetical quasi-deities that might appear to me, and I can't commit to all their hypothetical arbitrary demands in advance.

Replies from: AlephNeil, Bongo
comment by AlephNeil · 2011-02-06T03:08:29.308Z · LW(p) · GW(p)

I'm slightly lost by all the different variations of "Newcomb's problem with transparent boxes", but for what it's worth, one can easily write down a version of "Newcomb's problem with transparent boxes" that is equivalent to Parfit's Hitchhiker:

First, Omega judges whether, if both boxes are full, you will take both or just one. Then it fills the first box accordingly. (To make it strictly 'isomorphic' we can stipulate that Omega will leave both boxes empty if you decide to two-box, but this doesn't affect the decision theory.)

No doubt you will say that the difference between this and the "Prometheus problem" is that in the latter, you exist no matter what, and both boxes are full no matter what.

I agree that this seems intuitively to make all the difference in the world but consider this possibility: Perhaps the only way that Prometheus can predict your behaviour is by running a conscious simulation of you. If so, then choosing to two-box could cause your immediate "death" due to the fact that the simulation will be stopped, and Prometheus will not create a 'real world' copy of you.

(On the other hand, if Prometheus' prediction is based entirely on 'hard-wired' factors beyond your conscious control, like your genetic makeup or whatever, then the same logic that says you must smoke in the 'smoking lesion' problem can be used to say that you must two-box after all.)
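
For concreteness, here is a rough sketch of the game AlephNeil describes, under the standard $1,000,000/$1,000 payoffs; the function and argument names are illustrative, and the `strict` flag encodes his optional "leave both boxes empty" stipulation rather than anything from the original comment.

```python
# The rules AlephNeil sketches, with the standard $1,000,000/$1,000 payoffs.
# `policy_if_both_full` is what you would take on seeing two full boxes;
# `strict` encodes the optional "leave both boxes empty" stipulation.
# Names are illustrative, not quoted from anyone's comment.

def play(policy_if_both_full, strict=False):
    big_box_full = (policy_if_both_full == "one-box")   # Omega's prediction step
    if big_box_full:
        return 1_000_000        # you then take just the big box, as predicted
    return 0 if strict else 1_000

print(play("one-box"), play("two-box"), play("two-box", strict=True))
# 1000000 1000 0 -- same structure as Parfit's Hitchhiker: the agent who would
# grab everything once the money is visible is never shown the money at all.
```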

comment by Bongo · 2011-02-03T05:35:05.538Z · LW(p) · GW(p)

Rules of (one version of) Transparent Newcomb.

Replies from: Vladimir_Nesov, ArisKatsaris
comment by Vladimir_Nesov · 2011-02-04T22:57:42.882Z · LW(p) · GW(p)

Incorrect rules. You don't need the "don't invite to his games" one, and you don't need randomization. Corrected here.

Replies from: Bongo
comment by Bongo · 2011-02-05T06:14:43.759Z · LW(p) · GW(p)

Both rules work. In both games, one-boxing no matter what is the winning strategy.

I designed my rules to have the feature that by one-boxing upon seeing an empty box B you visibly prove Omega wrong. In the version you linked to, you don't necessarily: maybe Omega left box B empty because you would have two-boxed if it had been full.

So both problems can be reasonably called "Transparent Newcomb". The one you linked to was invented first and is simpler, though.

comment by ArisKatsaris · 2011-02-03T08:06:25.579Z · LW(p) · GW(p)

I see. Thank you, but I'm unimpressed - by committing to one-boxing in Transparent Newcomb one still entangles uncertainty, just the uncertainty of whether and how Omega will appear. Now, knowing the rules, I can commit to one-boxing, thus increasing the chances Omega will appear to me -- but that's as meaningful as an Omega that says to people "I would have given you a million dollars, if you'd only worn a green hat", and therefore I'd have to wear a green hat. It's nothing but a meaningless modified Pascal's wager.

Transparent Newcomb therefore again isn't similar to the situation described in this thread. In this situation the decider exists no matter what: there's no uncertainty.

Replies from: Bongo
comment by Bongo · 2011-02-03T17:15:35.198Z · LW(p) · GW(p)

You know the rules. You choose your strategy with full knowledge. If you lose, it's your fault, you knowingly chose a bad strategy. Nothing arbitrary or meaningless here.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-02-03T18:29:05.112Z · LW(p) · GW(p)

Bongo, you didn't understand my objection: In the classical Newcomb, Omega explains the rules to you when he appears, and there's one uncertain element (the contents of the opaque box). You determine the strategy, which by Omega's predictive power has already been entangled with the uncertain contents of the box.

In the transparent Newcomb, either you don't know the rules (so you can't precommit to anything, and you can't commit to any strategy for which some Omega2 wouldn't require the opposite strategy), or you know the rules in advance and can therefore determine the strategy, which by Omega's predictive power has already been entangled with the uncertain element of whether he'll appear to you, and with how much money is in the boxes.

In the problem that's posed on this thread, however, there's no uncertainty whatsoever. You exist and that's certain. The entanglement has already been resolved in favor of your existence. You don't need to satisfy your mom's expectations of you in order to keep on existing. You don't need to become a musician if your dad expected you to be a musician; you don't need to be a scientist if your mom expected you to be a scientist. In ANY universe where you get to decide a strategy, YOU EXIST. Or you wouldn't be deciding anything.

People hopefully do understand that instead of "Omega and Prometheus speak of their predictions" we can quite easily have "Your mom and dad tell you of their pre-birth expectations for you".

If anyone here honestly thinks that by failing their parents' expectations they'll stop existing, then they're literally insane. It's exactly the same as with them foiling Prometheus' expectations.

Replies from: fr00t
comment by fr00t · 2011-02-07T22:50:23.833Z · LW(p) · GW(p)

This.

The only resolution for either scenario I can think of is that there is a very high chance that, regardless of what you precommit to here or otherwise, at the moment of decision you will be compelled to choose to one-box, or be unable to pull out.

But aside from that improbable outcome, these, along with transparent Newcomb, are nonsense; they're intractable. I can simply precommit to using the strategy that contradicts whatever Prometheus/Omega/Azathoth predicted, a la the halting problem.

And because, of the three, Azathoth is the one that most nearly exists, I am actually very likely to have children. An overwhelming majority of men actually do highly value sleeping with many women; the only reason this doesn't result in massive uncontrolled pregnancy rates is that Azathoth, being the slow thinker he is, hasn't had time to adjust for birth control. Plus, I can't think of an outcome Azathoth would prefer to us creating AGI and proliferating across the universe.

comment by MugaSofer · 2013-01-15T11:24:03.174Z · LW(p) · GW(p)

With Newcomb's problem one properly one-boxes. The unknown state of the box is entangled with your decision, so by one-boxing you're acausally affecting the likelihood that the non-transparent box has the $1,000,000.

Hence the reference to Transparent Newcomb's*, in which the money is visible and yet, by some decision theories, it is still irrational to two-box. (Similar reasoning pertains to certain time-travel scenarios - is it rational to try and avoid driving if you know you will die in a car crash?)

*The reference:

For others, it's easy because you take both boxes in the variant of Newcomb where the boxes are transparent and you can see the million dollars; just as you would know that you had the million dollars no matter what, in this case you know that you exist no matter what.

EDIT: whoops, ninja'd. By almost two years.

Do you still two-box in this situation?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-01-15T11:55:41.567Z · LW(p) · GW(p)

Do you still two-box in this situation?

I've since decided that one-boxing in Transparent Newcomb is the correct decision -- because to be the sort of agent that one-boxes is to be the sort of agent that more frequently gets given a filled first box (I think I only fully realized this after reading Eliezer's paper on TDT, which I hadn't read at the time of this thread).

So the individual "losing" decision is actually part of a decision theory which is winning *overall*, and is therefore the correct decision no matter how counterintuitive.

Mind you, as a practical matter, I think it's significantly harder for a human to choose to one-box in the case of Transparent Newcomb. I don't know if I could manage it if I was actually presented with the situation, though I don't think I'd have a problem with the case of classical Newcomb.
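
A minimal sketch of the "winning overall" arithmetic for the standard problem, with the predictor's accuracy p treated as a free parameter: only the $1,000,000/$1,000 payoffs come from the problem as discussed in this thread; the function name and the sample accuracies are illustrative assumptions.

```python
# Expected winnings in standard Newcomb as a function of the predictor's
# accuracy p.  The $1,000,000/$1,000 payoffs are the ones used in this thread;
# the sample values of p are arbitrary.

def expected_winnings(disposition, p):
    if disposition == "one-box":
        return p * 1_000_000                # box filled iff correctly predicted
    return (1 - p) * 1_000_000 + 1_000      # filled only if the predictor erred

for p in (0.5, 0.51, 0.9, 0.999):
    print(p, expected_winnings("one-box", p), expected_winnings("two-box", p))
# One-boxing pulls ahead once p exceeds roughly 0.5005: the sort of agent that
# one-boxes is the sort of agent that more often gets handed a filled box.
```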

comment by magfrump · 2011-02-02T04:54:10.220Z · LW(p) · GW(p)

I didn't get it until I read this line:

Another way to look at it, which many people find helpful when considering standard Newcomb's, is that you don't know whether you are the actual person or the simulated person

So the question is: is Prometheus running this simulation? If so, he will create you only if you one-box.

So it's not that you were created by Prometheus, it's that you might currently be being created by Prometheus, in which case you want to get Prometheus to keep on creating you.

Or, less specifically: if I enter into a situation which involves an acausal negotiation with my creator, I want to agree with my creator so as to be created. This type of decision is likely to increase my measure.

Due to my current beliefs about metaverses I would still two-box, but I now understand how different metaverse theories would lead me to one-box; because I assign a nontrivial chance that I will later be convinced of other theories, I'm wondering if a mixed strategy would be best... I don't really know.

Replies from: wedrifid
comment by wedrifid · 2011-02-02T05:31:28.350Z · LW(p) · GW(p)

So the question is: is Prometheus running this simulation? If so, he will create you only if you one-box.

Lest my words be a source of confusion, note that I use 'simulation' as an example or 'proof of concept' for how the superintelligence may be doing the deciding. He may be using some other rule of inference that accurately models my decision-making. But that doesn't matter to me.

Replies from: magfrump
comment by magfrump · 2011-02-03T05:38:10.005Z · LW(p) · GW(p)

I agree with you here I believe. I didn't mean to imply that Prometheus was literally running the simulation, just that phrasing it in this way made the whole thing "click" for me.

I think my phrasing is the potential source of confusion.

comment by datadataeverywhere · 2011-02-01T23:47:08.202Z · LW(p) · GW(p)

Well, I definitely am confused. What utility are you gaining or losing?

Is this an issue about your belief that you were created by Prometheus? Is this an issue about your belief in Omega's or Prometheus' honesty? I'm very unclear on what I can possibly stand to gain or lose by being in a universe where Prometheus is wrong versus one where he is right.

comment by HonoreDB · 2011-02-01T16:28:16.520Z · LW(p) · GW(p)

The allocation of money is unspecified in this version, but has nothing to do with anyone's predictions. You don't get more money by one-boxing. I'll edit to make that clearer.

Replies from: datadataeverywhere
comment by datadataeverywhere · 2011-02-01T16:35:20.235Z · LW(p) · GW(p)

Thanks. Unfortunately, now I'm horrendously confused. What's the point of choosing either? Unless Prometheus is apt to feel vengeful (or generous), it doesn't seem like there is any reason to prefer one course of action over another.

Replies from: Nornagest
comment by Nornagest · 2011-02-01T19:03:32.338Z · LW(p) · GW(p)

My understanding is that you get $200 by two-boxing and $100 by one-boxing, but with the caveat that you were created by Prometheus, God of One-Boxers. The allocation of money doesn't change based on Prometheus's predictions, because by Omega's testimony you already know what set of Newcomblike predictions you belong to: your choice is whether or not to subvert that prediction.

I one-box on standard Newcomb, but I'd choose two boxes here.

comment by Psychohistorian · 2011-02-06T00:09:41.844Z · LW(p) · GW(p)

This seems to highlight my main complaint with Newcomb's problem: it assumes reverse causation is possible. Perhaps I'm being narrow-minded, but "Assume reverse causation is possible. How do you deal with this hypothetical?" does not mean you should actually design a decision theory to take reverse causation into account, without adequate evidence that it exists.

Replies from: shokwave
comment by shokwave · 2011-02-06T16:09:02.741Z · LW(p) · GW(p)

It assumes reverse causation is possible.

You should hear what evidential decision theories have to say about the smoking lesion problem. "Assume the evidence is wrong. How do you deal with this hypothetical?"

comment by timtyler · 2011-02-01T21:25:15.166Z · LW(p) · GW(p)

there can be no doubt that in many commonplace situations, Azathoth wants you to cheat, or rape, or murder.

It seems as though rape and murder often lead to prison sentences, which involve being confined in an environment with no members of the opposite sex. This is especially true in the modern surveillance-saturated world.

Baby makers who are positively inclined towards rape and murder are in a tiny minority, and are probably desperate - and desperate people often do bad things, largely irrespective of what their goals are - witness the priests and the choirboys.