Two-boxing, smoking and chewing gum in Medical Newcomb problems

post by Caspar Oesterheld (Caspar42) · 2015-06-29T10:35:58.162Z · LW · GW · Legacy · 93 comments

I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work.

Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B, otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take box A and B)? Here's a causal diagram for the problem:



Since Omega does not do much other than translating your genes into money under a box, it does not seem to hurt to leave it out:


I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?)

Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get at least the additional $1K; the two-boxing gene is like the CGTA gene; the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:

As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: The intuition is that neither smoking nor chewing gum gives the agent additional information.
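
To make the EDT intuition concrete, here is a minimal sketch (my own illustration; the conditional probabilities are assumptions, only the payoffs come from the problem statement) of how an EDT agent would evaluate the genetic Newcomb problem by conditioning on its own action:

```python
# Minimal EDT sketch of the genetic Newcomb problem (GNP). The conditional
# probabilities below are assumptions for illustration; only the payoffs
# ($1M in box B, $1K in box A) come from the post.

def edt_value(action, p_gene_given_action):
    """Expected payoff of an action, treating the action as evidence about the gene."""
    p_gene = p_gene_given_action[action]   # P(two-boxing gene | action)
    p_million = 1 - p_gene                 # box B is filled iff the gene is absent
    box_a = 1_000 if action == "two-box" else 0
    return p_million * 1_000_000 + box_a

p_gene_given_action = {"two-box": 0.9, "one-box": 0.1}
for action in ("one-box", "two-box"):
    print(action, edt_value(action, p_gene_given_action))
# one-box: 900,000 vs. two-box: 101,000 -- EDT one-boxes, and the same
# conditioning would make it refuse to smoke or chew gum in the medical versions.
```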

93 comments

comment by Wei Dai (Wei_Dai) · 2015-06-30T07:04:46.523Z · LW(p) · GW(p)

I think UDT reasoning would go like this (if translated to human terms). There are two types of mathematical multiverse, only one of which is real (i.e., logically consistent). You as a UDT agent get to choose which one. In the first one, UDT agents one-box in this Genetic Newcomb Problem (GNP), so the only genes that statistically correlate with two-boxing are those that create certain kinds of compulsions overriding deliberate decision making, or genes for other decision procedures that are not logically correlated with UDT. In the second type of mathematical multiverse, UDT agents two-box in GNP, so the list of genes that correlate with two-boxing also includes genes for UDT.

Which type of multiverse is better? It depends on how Omega chooses which gene to look at, which is not specified in the OP. To match the Medical Newcomb Problem as closely as possible, let's assume that in each world (e.g., Everett branch) of each multiverse, Omega picks a random gene to look at (from a list of all human genes), and puts $1M in box B for you if you don't have that gene. You live in a world where Omega happened to pick a gene that correlates with two-boxing. Under this assumption, the second type of multiverse is better because the number and distribution of boxes containing $1M is exactly the same in both multiverses, but in the second type of multiverse UDT agents get the additional $1K.
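
A small numerical sketch of this comparison (my own illustration; the probability p is an arbitrary placeholder, the point being only that it is the same in both multiverses under the random-gene assumption):

```python
# Under the random-gene assumption, whether your box B contains $1M does not
# depend on what UDT outputs, so the probability p is the same in both
# multiverses. p = 0.5 is an arbitrary placeholder.

p = 0.5
udt_one_boxes = p * 1_000_000            # first multiverse: leave box A behind
udt_two_boxes = p * 1_000_000 + 1_000    # second multiverse: same boxes, plus $1K

print(udt_one_boxes, udt_two_boxes)      # the two-boxing multiverse is $1K better
```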

I presume that most LWers would one-box.

I think the reason we have an intuition that we should one-box in the GNP is that when we first read the story, we implicitly assume something else about what Omega is doing. For example, suppose instead of the above, in each world Omega looks at the most common gene correlated with two-boxing and puts $1M in box B if you don't have that gene. If the gene for UDT is the most common such gene in the second multiverse (where UDT two-boxes), then the first multiverse is better because it has more boxes containing $1M, and UDT agents specifically all get $1M instead of $1K.

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2015-06-30T08:48:54.731Z · LW(p) · GW(p)

Thank you for this elaborate response!!

Omega picks a random gene to look at (from a list of all human genes), and puts $1M in box B for you if you don't have that gene

Why would Omega look at other human genes and not the two-boxing (correlated) gene(s) in any world?

Under this assumption, the second type of multiverse is better because the number and distribution of boxes containing $1M is exactly the same in both multiverses, but in the second type of multiverse UDT agents get the additional $1K.

Maybe I overlook something or did not describe the problem very well, but in the second multiverse UDT agents two-box, therefore UDT agents (probably) have the two-boxing gene and don't get the $1M. In the first multiverse, UDT agents one-box, therefore UDT agents (probably) don't have the two-boxing gene and get the $1M. So, the first multiverse seems to be better than the second.

I think the reason we have an intuition that we should one-box in the GNP is that when we first read the story, we implicitly assume something else about what Omega is doing. For example, suppose instead of the above, in each world Omega looks at the most common gene correlated with two-boxing and puts $1M in box B if you don't have that gene.

Yes, this is more or less the scenario I was trying to describe. Specifically, I wrote:

Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B, otherwise box B is empty.

So, it's part of the GNP that Omega has looked at the "two-boxing gene" or (more realistically perhaps) the "most common gene correlated with two-boxing".

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2015-06-30T11:04:01.122Z · LW(p) · GW(p)

Why would Omega look at other human genes and not the two-boxing (correlated) gene(s) in any world?

I was trying to create a version of the problem that corresponds more closely to MNP, where the fact that a single gene correlates with both chewing gum and abscess is a coincidence, not the result of some process looking for genes correlated with chewing gum, and giving people with those genes abscesses.

Maybe I overlook something or did not describe the problem very well, but in the second multiverse UDT agents two-box, therefore UDT agents (probably) have the two-boxing gene and don't get the $1M. In the first multiverse, UDT agents one-box, therefore UDT agents (probably) don't have the two-boxing gene and get the $1M. So, the first multiverse seems to be better than the second.

Do you see that, assuming Omega worked the way I described, the number and distribution of boxes containing $1M is exactly the same in the two multiverses, and therefore the second multiverse is better?

So, it's part of the GNP that Omega has looked at the "two-boxing gene" or (more realistically perhaps) the "most common gene correlated with two-boxing".

I think this is what makes your version of GNP different from MNP, and why we have different intuitions about the two cases. If there is someone or something who looked at the most common gene correlated with two-boxing (because it was the most common gene correlated with two-boxing, rather than due to a coincidence), then by changing whether you two-box, you can change whether other UDT agents two-box, and hence which gene is the most common gene correlated with two-boxing, and hence which gene Omega looked at, and hence who gets $1M in box B. In MNP, there is no corresponding process searching for genes correlated with gum chewing, so you can't try to influence that process by choosing to not chew gum.

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2015-06-30T15:45:11.891Z · LW(p) · GW(p)

Do you see that, assuming Omega worked the way I described, the number and distribution of boxes containing $1M is exactly the same in the two multiverses, and therefore the second multiverse is better?

Yes, I think I understand that now. But in your version the two-boxing gene has practically no influence on whether the $1M is in box B, because Omega usually looks at other, random genes. Would that even be a Newcomblike problem?

I think this is what makes your version of GNP different from MNP, and why we have different intuitions about the two cases. If there is someone or something who looked at the most common gene correlated with two-boxing (because it was the most common gene correlated with two-boxing, rather than due to a coincidence), then by changing whether you two-box, you can change whether other UDT agents two-box, and hence which gene is the most common gene correlated with two-boxing, and hence which gene Omega looked at, and hence who gets $1M in box B.

In EY's chewing gum MNP, it seems like CGTA both causes the throat abscess and influences people to chew gum. (See p. 67 of the TDT paper.) (It gets much more complicated if evolution has only produced a correlation between CGTA and another chewing-gum gene.) The CGTA gene is always read, copied into RNA etc., ultimately leading to throat abscesses. (The rest of the DNA is used, too, but only determines the size of your nose etc.) In the GNP, the two-boxing gene is always read by Omega and translated into an amount of money in box B. (Omega can look at the rest of the DNA, too, but does not care.) I don't get the difference yet, unfortunately.

In MNP, there is no corresponding process searching for genes correlated with gum chewing, so you can't try to influence that process by choosing to not chew gum.

I don't understand UDT yet, but it seems to me that in the chewing gum MNP, you could choose not to chew gum, thereby changing whether other UDT agents chew gum, and hence whether UDT agents' genes contain CGTA. Unless you know that CGTA has no impact on how you ultimately resolve this problem, which is not stated in the problem description and which would make EDT also chew gum.

comment by Unknowns · 2015-06-29T16:52:08.444Z · LW(p) · GW(p)

The general mistake that many people are making here is to think that determinism makes a difference. It does not.

Let's say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Note that determinism is irrelevant. If a program couldn't use a decision theory or couldn't make a choice just because it is a deterministic program, then no AI will ever work in the real world, and there is no reason that people should work in the real world either.

Also note that the only good decision in these cases is to one-box, even though the programs are 100% deterministic.
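
Here is a minimal sketch (my own construction, not part of the comment) of this setup: the players are deterministic, input-free programs, and Omega's "reading of the source code" is modeled by simply running the program, which for a deterministic program amounts to the same thing:

```python
# Deterministic, argument-free players; Omega predicts by running them.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play(agent):
    predicted = agent()                              # Omega's prediction
    box_b = 1_000_000 if predicted == "one-box" else 0
    choice = agent()                                 # the agent's actual play
    box_a = 1_000 if choice == "two-box" else 0
    return box_b + box_a

print(play(one_boxer), play(two_boxer))              # 1000000 vs. 1000
```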

Replies from: ike, Jiro
comment by ike · 2015-06-29T17:29:16.637Z · LW(p) · GW(p)

You're describing regular Newcomb, not this gene version. (Also note that Omega needs to have more processing power than the programs to do what you want it to do, just as in the human version.) The analogue would be defining a short program that Omega will run over the AI's code and that correctly predicts what the AI will output 99% of the time. Then it becomes a question of whether any given AI can outwit the program. If an AI thinks the program won't work on it, for whatever reason (by which I mean "conditioning on myself picking X doesn't cause my estimate of the prediction program outputting X to change, and vice-versa"), it's free to choose whatever it wants to.

Getting back to humans, I submit that a certain class of people who actually think about the problem will induce a far greater failure rate in Omega, and that this severs the causal link between my decision and Omega's, in the same way that an AI might be able to predict that the prediction program won't work on it.

As I said elsewhere, were this incorrect, my position would change, but then you probably aren't talking about "genes" anymore. You shouldn't be able to get 100% prediction rates from only genes.

Replies from: Unknowns
comment by Unknowns · 2015-06-29T17:31:46.667Z · LW(p) · GW(p)

It should be obvious that there is no difference between regular Newcomb and genetic Newcomb here. I examine the source code to see whether the program will one-box or not; that is the same as looking at its genetic code to see if it has the one-boxing gene.

Replies from: Jiro
comment by Jiro · 2015-06-29T21:32:01.731Z · LW(p) · GW(p)

Regular Newcomb requires that, for certain decision algorithms, Omega solve the halting problem. Genetic Newcomb requires that Omega look for the gene, something which he can always do. The "regular equivalent" of genetic Newcomb is that Omega looks at the decision maker's source code, but it so happens that most decision makers work in ways which are easy to analyze.

Replies from: g_pepper, Jiro, Jiro
comment by g_pepper · 2015-07-01T18:44:01.693Z · LW(p) · GW(p)

Regular Newcomb requires that, for certain decision algorithms, Omega solve the halting problem.

How so? I have not been able to come up with a valid decision algorithm that would require Omega to solve the halting problem. Do you have an example?

Replies from: Jiro
comment by Jiro · 2015-07-01T20:28:36.733Z · LW(p) · GW(p)

"Predict what Omega thinks you'll do, then do the opposite".

Which is really what the halting problem amounts to anyway, except that it's not going to be spelled out; it's going to be something that is equivalent to that but in a nonobvious way.

Saying "Omega will determine what the agent outputs by reading the agent's source code", is going to implicate the halting problem.

Replies from: g_pepper
comment by g_pepper · 2015-07-02T03:36:30.784Z · LW(p) · GW(p)

Predict what Omega thinks you'll do, then do the opposite

I don't know if that is possible given Unknowns' constraints. Upthread Unknowns defined this variant of Newcomb as:

Let's say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.

Since the player is not allowed to look at its own (or, presumably, Omega's) code, it is not clear to me that it can implement a decision algorithm that will predict what Omega will do and then do the opposite. However, if you remove Unknowns' restrictions on the players, then your idea will cause some serious issues for Omega! In fact, a player that can predict Omega as effectively as Omega can predict the player seems like a reductio ad absurdum of Newcomb's paradox.

Replies from: Jiro
comment by Jiro · 2015-07-02T04:42:00.039Z · LW(p) · GW(p)

If Omega is a program too, then an AI that is playing can have a subroutine that is equivalent to "predict Omega". The AI doesn't have to actually look at its own source code to do things that are equivalent to looking at its own source code--that's how the halting problem works!

If Omega is not a program and can do things that a program can't do, then this isn't true, but I am skeptical that such an Omega is a meaningful concept.

Of course, the qualifier "deterministic" means that Omega can pick randomly, which the program cannot do, but since Omega is predicting a deterministic program, picking randomly can't help Omega do any better.

comment by Jiro · 2015-07-02T14:05:04.143Z · LW(p) · GW(p)

Now that I think of it, it depends on exactly what it means for Omega to tell that you have a gene for two-boxing. If Omega has the equivalent of a textbook saying "gene AGTGCGTTACT leads to two-boxing" or if the gene produces a brain that is incapable of one-boxing at all in the same way that genes produce lungs that are incapable of breathing water, then what I said applies. If it's a gene for two-boxing because it causes the bearer to produce a specific chain of reasoning, and Omega knows it's a two-boxing gene because Omega has analyzed the chain and figured out that it leads to two-boxing, then there actually is no difference.

(This is complicated by the fact that the problem states that having the gene is statistically associated with two-boxing, which is neither of those. If the gene is only statistically associated with two-boxing, it might be that the gene makes the bearer likely to two-box in ways that are not implicated if the bearer reasons the problem out in full logical detail.)

comment by Jiro · 2015-06-30T21:32:18.055Z · LW(p) · GW(p)

Actually, there's another difference. The original Newcomb problem implies that it is possible for you to figure out the correct answer. With genetic Newcomb, it may be impossible for you to figure out the correct answer.

It is true that having your decision determined by your genes is similar to having your decision determined by the algorithm you are executing. However, we know that both sets of genes can happen, but if your decision is determined by the algorithm you are using, certain algorithms may be contradictory and cannot happen. (Consider an algorithm that predicts what Omega will do and acts based on that prediction.) Although now that I think of it, that's pretty much the halting problem objection anyway.

comment by Jiro · 2015-06-29T21:29:13.246Z · LW(p) · GW(p)

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Omega can solve the halting problem?

comment by Vaniver · 2015-06-29T16:45:38.109Z · LW(p) · GW(p)

I may as well repeat my thoughts on Newcomb's, decision theory, and so on. I come to this from a background in decision analysis, which is the practical version of decision theory.

You can see decision-making as a two-step, three-state problem: the problem statement is interpreted to make a problem model, which is optimized to make a decision.

If you look at the wikipedia definitions of EDT and CDT, you'll see they primarily discuss the optimization process that turns a problem model into a decision. But the two accept different types of problem models; EDT operates on joint probability distributions and CDT operates on causal models. Since the type of the interior state is different, the two imply different procedures to interpret problem statements and optimize those models into decisions.

To compare the two simply, causal models are just more powerful than joint probability distributions, and the pathway that uses the more powerful language is going to be better. A short attempt to explain the difference: in a Bayes net (i.e. just a joint probability distribution that has been factorized in an acyclic fashion), the arrows have no physical meaning--they just express which part of the map is 'up' and which is 'down.' In a causal model, the arrows have physical meaning--causal influence flows only in the direction the arrows point, and so the arrows represent which direction gravity pulls in. One can turn a map upside down without changing its correspondence to the territory; one cannot reverse gravity without changing the territory.

Because there are additional restrictions on how the model can be written, one can get additional information out of reading the model.
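
A small numerical illustration of this difference (my own example with made-up numbers, using a smoking-lesion-like structure in which a gene causes both smoking and cancer while smoking itself has no causal effect): conditioning on an observation and intervening on the same variable give different answers.

```python
# Assumed structure: gene -> smoking, gene -> cancer; no arrow smoking -> cancer.
p_gene = 0.5
p_smoke_given_gene = {True: 0.9, False: 0.1}
p_cancer_given_gene = {True: 0.8, False: 0.1}

# Evidential (conditioning): observing "smokes" is evidence about the gene.
p_smoke = sum((p_gene if g else 1 - p_gene) * p_smoke_given_gene[g] for g in (True, False))
p_gene_given_smoke = p_gene * p_smoke_given_gene[True] / p_smoke
p_cancer_given_smoke = (p_gene_given_smoke * p_cancer_given_gene[True]
                        + (1 - p_gene_given_smoke) * p_cancer_given_gene[False])

# Causal (intervening): do(smoke) leaves the gene's distribution untouched,
# because causal influence only flows along the arrow gene -> smoking.
p_cancer_do_smoke = p_gene * p_cancer_given_gene[True] + (1 - p_gene) * p_cancer_given_gene[False]

print(round(p_cancer_given_smoke, 2), round(p_cancer_do_smoke, 2))  # 0.73 vs. 0.45
```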

comment by Manfred · 2015-06-29T15:24:04.514Z · LW(p) · GW(p)

Hm, this is a really interesting idea.

The trouble is that it's tricky to apply a single decision theory to this problem, because by hypothesis, this gene actually changes which decision theory you use! If I'm a TDT agent, then this is good evidence I have the "TDT-agent gene," but in this problem I don't actually know whether the TDT-gene is the one-box gene or the two-box gene. If TDT provably one-boxes, then its gene is the "one-box gene" and TDT recommends two-boxing - but if it provably two-boxes, then its gene is the "two-box gene" and it gets the bad outcome. This is to some extent an "evil decision problem." Currently I'd one-box, based on some notion of resolving these sorts of problems through more UDT-ish proof-based reasoning (though it has some problems). Or in TDT-language, I'd be 'controlling' whether the TDT-gene was the two-box gene by picking the output of TDT.

However, this problem becomes a lot easier if most people are not actually using any formal reasoning, but are just doing whatever seems like a good idea at the time. Like, the sort of reasoning that leads to people actually smoking. If I'm dropped into this genetic Newcomb's problem, or into the smoking lesion problem, and I learn that almost all people in the data set I've seen were either bad at decision theory or didn't know the results of the data, then those people no longer have quite the same evidential impact about my current situation, and I can just smoke / two-box. It's only when those people and myself are in symmetrical situations (similar information, use similar decision-making processes) that I have to "listen" to them.

Replies from: Caspar42, Unknowns
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T18:53:04.145Z · LW(p) · GW(p)

I am not entirely sure I understand your TDT analysis; maybe that's because I don't understand TDT that well. I assumed that TDT would basically just do what CDT does, because there are no simulations of the agent involved. Or do you propose that checking for the gene is something like simulating the agent?

This is to some extent an "evil decision problem."

It does not seem to be more evil than Newcomb's problem, but I am not sure what you mean by "evil". For every decision theory, it is possible, of course, to set up some decision problem where this decision theory loses. Would you say that I set up the "genetic Newcomb problem" specifically to punish CDT/TDT?

Replies from: Manfred
comment by Manfred · 2015-06-29T23:54:13.859Z · LW(p) · GW(p)

because there are no simulations of the agent involved.

The role that would normally be played by simulation is here played by a big evidential study of what people with different genes do. This is why it matters whether the people in the study are good decision-makers or not - only when the people in the study are in a position similar to my own do they fulfill this simulation-like role.

It does not seem to be more evil than Newcomb's problem, but I am not sure what you mean by "evil". For every decision theory, it is possible, of course, to set up some decision problem where this decision theory loses. Would you say that I set up the "genetic Newcomb problem" specifically to punish CDT/TDT?

Yeah, that sentence is phrased poorly, sorry. But I'll try to explain. The easy way to construct an evil decision problem (say, targeting TDT) is to figure out what action TDT agents take, and then set the hidden variables so that that action is suboptimal - in this way the problem can be tilted against TDT agents even if the hidden variables don't explicitly care that their settings came from this evil process.

In this problem, the "gene" is like a flag on a certain decision theory that tells what action it will take, and the hidden variables are set such that people with that decision theory (the decision theory that people with the one-box gene use) act suboptimally (people with the one-box gene who two-box get more money). So this uses very similar machinery to an evil decision problem. The saving grace is that the other action also gets its own flag (the two-box gene), which has a different setting of the hidden variables.

Replies from: Caspar42, Unknowns
comment by Caspar Oesterheld (Caspar42) · 2015-06-30T20:17:09.835Z · LW(p) · GW(p)

The role that would normally be played by simulation is here played by a big evidential study of what people with different genes do. This is why it matters whether the people in the study are good decision-makers or not - only when the people in the study are in a position similar to my own do they fulfill this simulation-like role.

Yes, the idea is that they are sufficiently similar to you so that the study can be applied (but also sufficiently different to make it counter-intuitive to say it's a simulation). The subjects of the study may be told that there already exists a study, so that their situation is equivalent to yours. It's meant to be similar to the medical Newcomb problems in that regard.

I briefly considered the idea that TDT would see the study as a simulation, but discarded the possibility, because in that case the studies in classic medical Newcomb problems could also be seen as simulations of the agent to some degree. The "abstract computation that an agent implements" is a bit vague, anyway, I assume, but if one were willing to go this far, is it possible that TDT collapses into EDT?

Replies from: Manfred
comment by Manfred · 2015-07-01T03:03:58.844Z · LW(p) · GW(p)

Under the formulation that leads to one-boxing here, TDT will be very similar to EDT whenever the evidence is about the unknown output of your agent's decision problem. They are both in some sense trying to "join the winning team" - EDT by expecting the winning-team action to make them have won, and TDT only in problems where what team you are on is identical to what action you take.

comment by Unknowns · 2015-06-30T03:54:49.272Z · LW(p) · GW(p)

This is not an "evil decision problem" for the same reason original Newcomb is not, namely that whoever chooses only one box gets the reward, not matter what process he uses.

comment by Unknowns · 2015-06-29T15:42:21.134Z · LW(p) · GW(p)

Yes, all of this is basically correct. However, it is also basically the same in the original Newcomb, although somewhat more intuitive. In the original problem Omega decides whether or not to put in the one million depending on its estimate of what you will do, which likely depends on "what kind of person" you are, in some sense. And being this sort of person is also going to determine what kind of decision theory you use, just as the gene does in the genetic version. The original Newcomb is more intuitive, though, because we can more easily accept that "being such and such a kind of person" could make us use a certain decision theory, than that a gene could do the same thing.

Even the point about other people knowing the results or using certain reasoning is the same. If you find an Omega in real life, but find out that all the people being tested so far are not using any decision theory, but just choosing impulsively, and Omega is just judging how they would choose impulsively, then you should take both boxes. It is only if you know that Omega tends to be right no matter what decision theory people are using that you should choose the one box.

comment by OrphanWilde · 2015-06-30T14:54:54.746Z · LW(p) · GW(p)

Upvoting: This is a very good post which has caused everybody's cached decision-theory choices to fail horribly because they're far too focused on getting the "correct" answer and then proving that answer correct and not at all focused on actually thinking about the problem at hand. Enthusiastic applause.

comment by hairyfigment · 2015-06-29T20:45:27.820Z · LW(p) · GW(p)

The OP does not sufficiently determine the answer, unless we take its simplified causal graph as complete, in which case I would two-box. I hope that if in fact "most LWers would one-box," we would only do so because we think Omega would be smarter than that.

comment by Brian_Tomasik · 2015-06-29T22:15:10.147Z · LW(p) · GW(p)

I assume that the one-boxing gene makes a person generically more likely to favor the one-boxing solution to Newcomb. But what about when people learn about the setup of this particular problem? Does the correlation between having the one-boxing gene and inclining toward one-boxing still hold? Are people who one-box only because of EDT (even though they would have two-boxed before considering decision theory) still more likely to have the one-boxing gene? If so, then I'd be more inclined to force myself to one-box. If not, then I'd say that the apparent correlation between choosing one-boxing and winning breaks down when the one-boxing is forced. (Note: I haven't thought a lot about this and am still fairly confused on this topic.)

I'm reminded of the problem of reference-class forecasting and trying to determine which reference class (all one-boxers? or only grudging one-boxers who decided to one-box because of EDT?) to apply for making probability judgments. In the limit where the reference class consists of molecule-for-molecule copies of yourself, you should obviously do what made the most of them win.

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2015-06-30T16:08:48.080Z · LW(p) · GW(p)

But what about when people learn about the setup of this particular problem? Does the correlation between having the one-boxing gene and inclining toward one-boxing still hold?

Yes, it should also hold in this case. Knowing about the study could be part of the problem, and the subjects of the initial study could be lied to and told that such a study already exists. The idea of the "genetic Newcomb problem" is that the two-boxing gene is less intuitive than CGTA and that its workings are mysterious. It could make you feel sure that you have or don't have the gene. It could make you comfortable with decision theories whose names start with 'C', make you interpret genetic Newcomb problem studies in a certain way, etc. The only thing that we know is that it causes us to two-box, in the end. For CGTA, on the other hand, we have a very strong intuition that it causes a "tickle" or something similar that could easily be overridden by our knowing about the first study (which correlates chewing gum with throat abscesses). It could not possibly influence what we think about CDT vs. EDT etc.! But this intuition is not part of the original description of the problem.

Replies from: Brian_Tomasik, Brian_Tomasik
comment by Brian_Tomasik · 2015-07-01T21:52:11.985Z · LW(p) · GW(p)

If there were a perfect correlation between choosing to one-box and having the one-box gene (i.e., everyone who one-boxes has the one-box gene, and everyone who two-boxes has the two-box gene, in all possible circumstances), then it's obvious that you should one-box, since that implies you must win more. This would be similar to the original Newcomb problem, where Omega also perfectly predicts your choice. Unfortunately, if you really will follow the dictates of your genes under all possible circumstances, then telling someone what she should do is useless, since she will do what her genes dictate.

The more interesting and difficult case is when the correlation between gene and choice isn't perfect.

comment by Brian_Tomasik · 2015-07-01T21:51:57.203Z · LW(p) · GW(p)

(moved comment)

comment by philh · 2015-06-29T16:22:11.216Z · LW(p) · GW(p)

I think we need to remember here the difference between logical influence and causal influence?

My genes can cause me to be inclined towards smoking, and my genes can cause me to get lesions. If I choose to smoke, not knowing my genes, then that's evidence for what my genes say, and it's evidence about whether I'll get lesions; but it doesn't actually causally influence the matter.

My genes can incline me towards one-boxing, and can incline Omega towards putting $1M in the box. If I choose to two-box despite my inclinations, then that provides me with evidence about what Omega did, but it doesn't causally influence the matter.

If I don't know which of two worlds I'm in, I can't increase the probability of one by saying "in world A, I'm more likely to do X than in world B, so I'm going to do X". If nothing else, if I thought that worked, then I would do it whatever world I was in, and it would no longer be true.

In standard Newcomb, my inclination to one-box actually does make me one-box. In this version, my inclination to one-box is just a node that you've labelled "inclination to one-box", and you've said that Omega cares about the node rather than about whether or not I one-box. But you're still permitting me to two-box, so that node might just as well be "inclination to smoke".

Replies from: Unknowns
comment by Unknowns · 2015-06-29T16:37:45.461Z · LW(p) · GW(p)

In the original Newcomb's problem, am I allowed to say "in the world with the million, I am more likely to one-box than in the world without, so I'm going to one-box"? If I thought this worked, then I would do it no matter what world I was in, and it would no longer be true...

Except that it is still true. I can definitely reason this way, and if I do, then of course I had the disposition to one-box, and of course Omega put the million there; because the disposition to one-box was the reason I wanted to reason this way.

And likewise, in the genetic variant, I can reason this way, and it will still work, because the one-boxing gene is responsible for me reasoning this way rather than another way.

Replies from: philh
comment by philh · 2015-06-29T17:09:55.364Z · LW(p) · GW(p)

In the original, you would say: "in the world where I one-box, the million is more likely to be there, so I'll one-box".

the one-boxing gene is responsible for me reasoning this way rather than another way.

If there's a gene that makes you think black is white, then you're going to get killed on the next zebra crossing. If there's a gene that makes you misunderstand decision theory, you're going to make some strange decisions. If Omega is fond of people with that gene, then lucky you. But if you don't have the gene, then acting like you do won't help you.

Another reframing: in this version, Omega checks to see if you have the photic sneeze reflex, then forces you to stare at a bright light and checks whether or not you sneeze. Ve gives you $1k if you don't sneeze, and independently, $1M if you have the PSR gene.

If I can choose whether or not to sneeze, then I should not sneeze. Maybe the PSR gene makes it harder for me to not sneeze, in which case I can be really happy that I have to stifle the urge to sneeze, but I should still not sneeze.

But if the PSR gene just makes me sneeze, then why are we even asking whether I should sneeze or not?

Replies from: Unknowns
comment by Unknowns · 2015-06-29T17:14:31.670Z · LW(p) · GW(p)

I think this is addressed by my top level comment about determinism.

But if you don't see how it applies, then imagine an AI reasoning like you have above.

"My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I'm lucky. But if he's not, then acting like I have the kind of programming he likes isn't going to help me. So why should I one-box? That would be acting like I had one-box programming. I'll just take everything that is in both boxes, since it's not up to me."

Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million.

Replies from: philh, Creutzer
comment by philh · 2015-06-30T14:46:10.750Z · LW(p) · GW(p)

So I think where we differ is that I don't believe in a gene that controls my decision in the same way that you do. I don't know how well I can articulate myself, but:

As an AI, I can choose whether my programming makes me one-box or not, by one-boxing or not. My programming isn't responsible for my reasoning, it is my reasoning. If Omega looks at my source code and works out what I'll do, then there are no worlds where Omega thinks I'll one-box, but I actually two-box.

But imagine that all AIs have a constant variable in their source code, unhelpfully named TMP3. AIs with TMP3=true tend to one-box in Newcomblike problems, and AIs with TMP3=false tend to two-box. Omega decides whether to put in $1M by looking at TMP3.

(Does the problem still count as Newcomblike? I'm not sure that it does, so I don't know if TMP3 correlates with my actions at all. But we can say that TMP3 correlates with how AIs act in GNP, instead.)

If I have access to my source code, I can find out whether I have TMP3=true or false. And regardless of which it is, I can two-box. (If I can't choose to two-box, after learning that I have TMP3=true, then this isn't me.) Since I can two-box without changing Omega's decision, I should.

Whereas in the original Newcomb's problem, I can look at my source code, and... maybe I can prove whether I one- or two-box. But if I can, that doesn't constrain my decision so much as predict it, in the same way that Omega can; the prediction of "one-box" is going to take into account the fact that the arguments for one-boxing overwhelm the consideration of "I really want to two-box just to prove myself wrong". More likely, I can't prove anything. And I can one- or two-box, but Omega is going to predict me correctly, unlike in GNP, so I one-box.

The case where I don't look at my source code is more complicated (maybe AIs with TMP3=true will never choose to look?), but I hope this at least illustrates why I don't find the two comparable.

(That said, I might actually one-box, because I'm not sufficiently convinced of my reasoning.)

Replies from: Unknowns
comment by Unknowns · 2015-07-01T04:54:19.907Z · LW(p) · GW(p)

"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.

As you note, if an AI could read its source code and saw that it says "one-box", then it will still one-box, because it simply does what it is programmed to do. This first of all violates the conditions as proposed (I said the AIs cannot look at their source code, and Caspar42 stated that you do not know whether or not you have the gene).

But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says "one-box", then you could still two-box, so it couldn't work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then suddenly become overwhelmed with a sudden desire to one-box. Perhaps it would be because you would think again and change your mind. But one way or another you would end up one-boxing. And this "doesn't constrain my decision so much as predict it", i.e. obviously both in the case of the AI and in the case of the gene, in reality causality does indeed go from the source code to one-boxing, or from the gene to one-boxing. But it is entirely the same in both cases -- causality runs only from past to future, but for you, it feels just like a normal choice that you make in the normal way.

Replies from: philh
comment by philh · 2015-07-01T09:25:50.733Z · LW(p) · GW(p)

I was referring to "in principle", not to reality.

You believe that if you saw you had the gene that says "one-box", then you could still two-box

Yes. I think that if I couldn't do that, it wouldn't be me. If we don't permit people without the two-boxing gene to two-box (the question as originally written did, but we don't have to), then this isn't a game I can possibly be offered. You can't take me, and add a spooky influence which forces me to make a certain decision one way or the other, even when I know it's the wrong way, and say that I'm still making the decision. So again, we're at the point where I don't know why we're asking the question. If not-me has the gene, he'll do one thing; if not, he'll do the other; and it doesn't make a difference what he should do. We're not talking about agents with free action, here.

Again, I'm not sure exactly how this extends to the case where an agent doesn't know whether they have the gene.

Replies from: Unknowns
comment by Unknowns · 2015-07-01T11:51:54.536Z · LW(p) · GW(p)

What if we take the original Newcomb, then Omega puts the million in the box, and then tells you "I have predicted with 100% certainty that you are only going to take one box, so I put the million there?"

Could you two-box in that situation, or would that take away your freedom?

If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same.

If you say you could not, why would that be you when the genetic case would not be?

Replies from: philh
comment by philh · 2015-07-01T13:41:21.382Z · LW(p) · GW(p)

Unless something happens out of the blue to force my decision - in which case it's not my decision - then this situation doesn't happen. There might be people for whom Omega can predict with 100% certainty that they're going to one-box even after Omega has told them his prediction, but I'm not one of them.

(I'm assuming here that people get offered the game regardless of their decision algorithm. If Omega only makes the offer to people whom he can predict certainly, we're closer to a counterfactual mugging. At any rate, it changes the game significantly.)

Replies from: Unknowns
comment by Unknowns · 2015-07-01T14:03:05.820Z · LW(p) · GW(p)

I agree that in reality it is often impossible to predict someone's actions if you are going to tell them your prediction. That is why it is perfectly possible that the situation where you know which gene you have is impossible. But in any case this is all hypothetical because the situation posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.

EDIT: You're really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision theoretic terms. Here you're arguing not about the decision theory issue, but whether or not the situations involved are possible in reality. If Omega can't predict with certainty when he tells his prediction, then I can equivalently say that the gene only predicts with certainty when you don't know about it. Knowing about the gene may allow you to two-box, but that is no different from saying that knowing Omega's decision before you make your choice would allow you to two-box, which it would.

Basically anything said about one case can be transformed into the other case by fairly simple transpositions. This should be obvious.

Replies from: philh
comment by philh · 2015-07-01T14:38:22.572Z · LW(p) · GW(p)

Sorry, tapping out now.

EDIT: but brief reply to your edit: I'm well aware that you think they're the same, and telling me that I'm not getting the point is super unhelpful.

comment by Creutzer · 2015-06-30T10:45:02.666Z · LW(p) · GW(p)

Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million.

Then you're talking about an evil decision problem. But neither in the original nor in the genetic Newcomb's problem is your source code investigated.

Replies from: Unknowns
comment by Unknowns · 2015-06-30T11:14:38.035Z · LW(p) · GW(p)

No, it is not an evil decision problem, because I did that not because of the particular reasoning, but because of the outcome (taking both boxes).

The original does not specify how Omega makes his prediction, so it may well be by investigating source code.

comment by Unknowns · 2015-06-29T10:55:52.298Z · LW(p) · GW(p)

I have never agreed that there is a difference between the smoking lesion and Newcomb's problem. I would one-box, and I would not smoke. Long discussion in the comments here.

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T11:16:58.690Z · LW(p) · GW(p)

Interesting, thanks! I thought that it was more or less consensus that the smoking lesion refutes EDT. So, where should I look to see EDT refuted? Absent-minded driver, Evidential Blackmail, counterfactual mugging or something else?

Replies from: Unknowns, hairyfigment
comment by Unknowns · 2015-06-30T03:59:47.518Z · LW(p) · GW(p)

Yes, as you can see from the comments on this post, there seems to be some consensus that the smoking lesion refutes EDT.

The problem is that the smoking lesion, in decision theoretic terms, is entirely the same as Newcomb's problem, and there is also a consensus that EDT gets the right answer in the case of Newcomb.

Your post reveals that the smoking lesion is the same as Newcomb's problem and thus shows the contradiction in that consensus. Basically there is a consensus but it is mistaken.

Personally I haven't seen any real refutation of EDT.

comment by hairyfigment · 2015-06-29T20:05:07.384Z · LW(p) · GW(p)

That does seem like the tentative consensus, and I was unpleasantly surprised to see someone on LW who would not chew the gum.

We should be asking what decision procedure gives us more money, e.g. if we're writing a computer program to make a decision for us. You may be tempted to say that if Omega is physical - a premise not usually stated explicitly, but one I'm happy to grant - then it must be looking at some physical events linked to your action and not looking at the answer given by your abstract decision procedure. A procedure based on that assumption would lead you to two-box. This thinking seems likely to hurt you in analogous real-life situations, unless you have greater skill at lying or faking signals than (my model of) either a random human being or a random human of high intelligence. Discussing it, even 'anonymously', would constitute further evidence that you lack the skill to make this work.

Now TDT, as I understand it, assumes that we can include in our graph a node for the answer given by an abstract logical process. For example, to predict the effects of pushing some buttons on a calculator, we would look at both the result of a "timeless" logical process and also some physical nodes that determine whether or not the calculator follows that process.

Let's say you have a similar model of yourself. Then, if and only if your model of the world says that the abstract answer given by your decision procedure does not sufficiently determine Omega's action, a counterfactual question about that answer will tell you to two-box. But if Omega when examining physical evidence just looks at the physical nodes which (sufficiently) determine whether or not you will use TDT (or whatever decision procedure you're using), then presumably Omega knows what answer that process gives, which will help determine the result. A counterfactual question about the logical output would then tell you to one-box. TDT I think asks that question and gets that answer. UDT I barely understand at all.

(The TDT answer to the OP's problem depends on how we interpret "two-boxing gene".)

comment by Kindly · 2015-06-29T19:21:54.009Z · LW(p) · GW(p)

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's problem, and so on; if, despite all that, the box I choose has been determined since my birth, then none of these things (none of the things that make up me!) are a factor at all. Either my reasoning process is overridden in one specific case, or it is irreparably flawed to begin with.

Replies from: Unknowns
comment by Unknowns · 2015-06-30T03:47:36.207Z · LW(p) · GW(p)

This is like saying "if my brain determines my decision, then I am not making the decision at all."

Replies from: Kindly
comment by Kindly · 2015-06-30T05:08:29.801Z · LW(p) · GW(p)

Not quite. I outlined the things that have to be going on for me to be making a decision.

Replies from: Unknowns
comment by Unknowns · 2015-06-30T07:11:54.408Z · LW(p) · GW(p)

You cannot assume that any of those things are irrelevant or that they are overridden just because you have a gene. Presumably the gene is arranged in coordination with those things.

comment by D_Malik · 2015-06-29T15:14:06.048Z · LW(p) · GW(p)

I think two-boxing in your modified Newcomb is the correct answer. In the smoking lesion, smoking is correct, so there's no contradiction.

One-boxing is correct in the classic Newcomb because your decision can "logically influence" the fact of "this person one-boxes". But your decision in the modified Newcomb can't logically influence the fact of "this person has the two-boxing gene".

Replies from: Unknowns
comment by Unknowns · 2015-06-29T15:17:09.274Z · LW(p) · GW(p)

Under any normal understanding of logical influence, your decision can indeed "logically influence" whether you have the gene or not. Let's say there is a 100% correlation between having the gene and the act of choosing -- everyone who chooses the one box has the one-boxing gene, and everyone who chooses both boxes has the two-boxing gene. Then if you choose to one-box, this logically implies that you have the one-boxing gene.

Or do you mean something else by "logically influence" besides logical implication?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-30T14:00:00.447Z · LW(p) · GW(p)

No, your decision merely reveals what genes you have; it cannot change what genes you have.

Replies from: Unknowns
comment by Unknowns · 2015-06-30T14:11:04.474Z · LW(p) · GW(p)

Even in the original Newcomb you cannot change whether or not there is a million in the box. Your decision simply reveals whether or not it is already there.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-30T14:49:54.585Z · LW(p) · GW(p)

In the original Newcomb, causality genuinely flowed in the reverse. Your decision -did- change whether or not there is a million dollars in the box. The original problem had information flowing backwards in time (either through a simulation which, for practical purposes, plays time forward, then goes back to the origin, or through an omniscient being seeing into the future, however one wishes to interpret it).

In the medical Newcomb, causality -doesn't- flow in the reverse, so behaving as though causality -is- flowing in the reverse is incorrect.

Replies from: g_pepper, Unknowns
comment by g_pepper · 2015-07-01T14:55:36.871Z · LW(p) · GW(p)

In the original Newcomb, causality genuinely flowed in the reverse. Your decision -did- change whether or not there is a million dollars in the box.

I don't know about that. Nozick's article from 1969 states:

Suppose a being in whose power to predict your choices you have enormous confidence. ... You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being's prediction about your choice in the situation to be discussed will be correct.

Nothing in that implies that causality flowed in the reverse; it sounds like the being just has a really good track record.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-07-01T15:46:16.336Z · LW(p) · GW(p)

The "simulation" in this case could entirely be in the Predictor's head. But I concede the point, and shift to a weaker position:

In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it's predicting your decision, and is for all intents and purposes taking your decision into account. (Yes, it -could- be using genetics, but there's no reason to elevate that hypothesis.)

In the medical Newcomb problem, the nature of the boxes is decided by your genetics, which have a very strong correlation with your decision; it is still predicting your decision, but by a known algorithm which doesn't take your decision into account.

Your decision in the first case should account for the possibility that it accurately predicts your decision - unless you place .1% or greater odds on it mis-calculating your decision, you should one-box. [Edited: Fixed math error that reversed calculation versus mis-calculation.]

Your decision in the second case should not - your genetics are already what your genetics are, and if your genetics predict two-boxing, you should two-box because $1,000 is better than nothing, and if your genetics predict one-boxing, you should two-box because $1,001,000 is better than $1,000,000.

Replies from: g_pepper
comment by g_pepper · 2015-07-01T17:30:21.413Z · LW(p) · GW(p)

In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it's predicting your decision, and is for all intents and purposes taking your decision into account.

Actually, I am not sure about even that weaker position. The Nozick article stated:

If a state is part of the explanation of deciding to do an action (if the decision is made) and this state is already fixed and determined, then the decision, which has not yet been made, cannot be part of the explanation of the state's obtaining. So we need not consider the case where prob(state/action) is in the basic explanatory theory, for an already fixed state.

It seems to me that with this passage Nozick explicitly contradicts the assertion that the being is "taking your decision into account".

Replies from: OrphanWilde
comment by OrphanWilde · 2015-07-01T19:05:26.265Z · LW(p) · GW(p)

It is taking its -prediction- of your decision into account in the weaker version, and is good enough at prediction that the prediction is analogous to your decision (for all intents and purposes, taking your decision into account). The state is no longer part of the explanation of the decision, but rather the prediction of that decision, and the state derived therefrom. Introduce a .0001% chance of error and the difference is easier to see; the state is determined by the probability of your decision, given the information the being has available to it.

(Although, reading the article, it appears reverse-causality vis a vis the being being God is an accepted, although not canonical, potential explanation of the being's predictive powers.)

Imagine a Prisoner's Dilemma between two exactly precise clones of you, with one difference: One clone is created one minute after the first clone, and is informed the first clone has already made its decision. Both clones are informed of exactly the nature of the test (that is, the only difference in the test is that one clone makes a decision first). Does this additional information change your decision?

comment by Unknowns · 2015-07-01T04:46:24.310Z · LW(p) · GW(p)

In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot "genuinely flow in reverse" in any circumstances whatsoever. Rather in the original Newcomb, Omega looks at your disposition, one that exists at the very beginning. If he sees that you are disposed to one-box, he puts the million. This is just the same as someone looking at the source code of an AI and seeing whether it will one-box, or someone looking for the one-boxing gene.

Then, when you make the choice, in the original Newcomb you choose to one-box. Causality flows in only one direction, from your original disposition, which you cannot change since it is in the past, to your choice. This causality is entirely the same as in the genetic Newcomb. Causality never goes any direction except past to future.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-07-01T13:59:11.590Z · LW(p) · GW(p)

Hypotheticals are not required to follow the laws of reality, and Omega in the original problem is definitionally prescient - he knows what is going to happen. You can invent whatever reason you would like for this, but causality flows not directly from your current state of being, but from your current state of being to your future decision, and from that to Omega's decision right now. Because Omega's decision on what to put in the boxes is predicated, not on your current state of being, but on your future decision.

comment by SilentCal · 2015-06-29T17:25:17.417Z · LW(p) · GW(p)

I think your last paragraph is more or less correct. The way I'd show it would be to place a node labelled 'decision' between the top node and the left node, representing a decision you make based on decision-theoretical or other reasoning. There are then two additional questions: 1) Do we remove the causal arrow from the top node to the bottom one and replace it with an arrow from 'decision' to the bottom? Or do we leave that arrow in place? 2) Do we add a 'free will' node representing some kind of outside causation on 'decision', or do we let 'decision' be determined solely by the top node?

In the smoking problem, there's usually no causal arrow from 'decision' to the bottom; in Newcomb's, there usually is. Thus, if we're assuming an independent 'free will' node, you should one-box but smoke.
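
For concreteness, here is a minimal sketch of the two structures being contrasted; the node names and the helper function are illustrative additions, not part of the original diagrams ("outcome" stands for cancer in the smoking lesion, or the contents of box B in the genetic Newcomb problem):

```python
# Smoking-lesion-style: the top node (gene) causes both the decision
# and the outcome; the decision has no arrow into the outcome.
smoking_lesion_graph = {
    "gene": ["decision", "outcome"],
    "decision": [],
}

# Newcomb-style: the arrow into the outcome runs through the decision
# (the predictor tracks the decision itself).
newcomb_graph = {
    "gene": ["decision"],
    "decision": ["outcome"],
}
# Question 2 above would add an extra parent, "free_will" -> "decision".

def influences(graph, source, target):
    """True if there is a directed path from source to target."""
    frontier, seen = [source], set()
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        if node not in seen:
            seen.add(node)
            frontier.extend(graph.get(node, []))
    return False

# The distinction drawn here is whether this path exists:
# influences(smoking_lesion_graph, "decision", "outcome")  -> False  (so smoke)
# influences(newcomb_graph, "decision", "outcome")         -> True   (so one-box)
```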

Unknowns seems to be pushing the case where there's no 'free will' node; your decision is purely a product of the top node. In that case, I say the answer is not to waste any mental effort trying to decide the right choice because whether or not you deliberate doesn't matter. If you could change the top node, you'd want to change it so that you one-box and don't smoke, but you can't. If you could change your decision node without changing the top node, you'd want to change it so you one-box and smoke, but you can't do that either. There's no meaningful answer to which impossible change is the 'right' one.

Replies from: Unknowns
comment by Unknowns · 2015-06-29T17:29:10.270Z · LW(p) · GW(p)

This is like saying a 100% deterministic chess-playing computer shouldn't look ahead, since it cannot affect its actions. That will result in a bad move. And likewise, just doing what you feel like here will result in smoking, since you (by stipulation) feel like doing that. So it is better to deliberate about it, like the chess computer, and choose both to one-box and not to smoke.

comment by [deleted] · 2015-06-29T16:20:58.947Z · LW(p) · GW(p)

I would one-box if I had the one-boxing gene, and two-box if I had the two-boxing gene. I don't know what decision-making theory I'm using, because the problem statement didn't specify how the gene works.

I don't really see the point of asking people with neither gene what they'd do.

Replies from: Caspar42, Unknowns
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T18:13:09.954Z · LW(p) · GW(p)

Maybe I should have added that you don't know which gene you have before you make the decision, i.e. before you one-box or two-box.

Replies from: None
comment by [deleted] · 2015-06-29T19:10:59.427Z · LW(p) · GW(p)

I wasn't assuming that I knew beforehand.

It's just that, if I have the one-boxing gene, it will compel me (in some manner not stated in the problem) to use a decision algorithm which will cause me to one-box, and similarly for the two-boxing gene.

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T19:44:03.188Z · LW(p) · GW(p)

Ah, okay. Well, the idea of my scenario is that you have no idea how any of this works. So, for example, the two-boxing gene could make you feel 100% sure that you have (or don't have) the gene, so that two-boxing seems like the better decision. So, until you actually make a decision, you have no idea which gene you have. (Preliminary decisions, as in Eells' tickle defense paper, are also irrelevant.) So, you have to make some kind of decision. The moment you one-box, you can be pretty sure that you don't have the two-boxing gene, since it did not manage to trick you into two-boxing, which it usually does. So, why not just one-box and take the money? :-)
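
To make "pretty sure" concrete, here is a quick Bayes update; the specific probabilities are illustrative assumptions, not numbers from the post:

```python
# Illustrative numbers only; the problem statement does not specify them.
p_gene = 0.5                   # prior probability of having the two-boxing gene
p_onebox_given_gene = 0.05     # the gene "usually" tricks its carriers into two-boxing
p_onebox_given_no_gene = 0.95  # without the gene, one-boxing is the usual outcome

# P(two-boxing gene | I just one-boxed), by Bayes' rule
p_onebox = (p_onebox_given_gene * p_gene
            + p_onebox_given_no_gene * (1 - p_gene))
p_gene_given_onebox = p_onebox_given_gene * p_gene / p_onebox
print(p_gene_given_onebox)  # ~0.05 under these assumptions
```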

Replies from: None
comment by [deleted] · 2015-06-29T20:00:42.863Z · LW(p) · GW(p)

My problem with all this is: if hypothetical-me's decisionmaking process is determined by genetics, why are you asking real-me what the decisionmaking process should be?

Real-me can come up with whatever logic and arguments, but hypothetical-me will ignore all that and choose by some other method.

(Traditional Newcomb is different, because in that case hypothetical-me can use the same decisionmaking process as real-me)

Replies from: Caspar42, Unknowns
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T20:09:44.937Z · LW(p) · GW(p)

So, what if one day you learned that hypothetical-you is the actual-you, that is, what if Omega actually came up to you right now and told you about the study etc. and put you into the "genetic Newcomb problem"?

Replies from: None
comment by [deleted] · 2015-06-29T22:01:41.099Z · LW(p) · GW(p)

Well, I can say that I'd two-box.

Does that mean I have the two-boxing gene?

comment by Unknowns · 2015-06-30T03:53:03.429Z · LW(p) · GW(p)

Hypothetical-me can use the same decisionmaking process as real-me in the genetic Newcomb, just as in the original. This simply means that the real you stands for a hypothetical you who has the gene that makes you choose what the real you chooses, using the same decision process that the real you uses. Since you say you would two-box, the hypothetical you has the two-boxing gene.

I would one-box, and hypothetical me has the one-boxing gene.

comment by Unknowns · 2015-06-29T16:33:55.100Z · LW(p) · GW(p)

This is no different from responding to the original Newcomb's by saying "I would one-box if Omega put the million, and two-box if he didn't."

Both in the original Newcomb's problem and in this one you can use any decision theory you like.

Replies from: None
comment by [deleted] · 2015-06-29T16:54:38.838Z · LW(p) · GW(p)

There is a difference - in the gene case, there is a causal pathway, via brain chemistry or whatnot, from the gene to the decision. In the original Newcomb problem, Omega's prediction does not cause the decision.

Replies from: Unknowns
comment by Unknowns · 2015-06-29T16:57:05.240Z · LW(p) · GW(p)

Even in the original Newcomb's problem there is presumably some causal pathway from your brain to your decision. Otherwise Omega wouldn't have a way to predict what you are going to do. And in this respect there is no difference between "your brain" in the original version and the "gene" in this one.

In neither case does Omega cause your decision; your brain causes it in both cases.

comment by ike · 2015-06-29T14:55:05.514Z · LW(p) · GW(p)

I would two-box in that situation. Don't see a problem.

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T15:47:35.284Z · LW(p) · GW(p)

Well, the problem seems to be that this will not give you the $1M, just like in Newcomb's original problem.

Replies from: ike
comment by ike · 2015-06-29T15:51:47.937Z · LW(p) · GW(p)

Wait, you think I have the two-boxing gene? If that's the case, one-boxing won't help me; there's no causal link between my choice and which gene I have, unlike standard Newcomb, in which there is a causal link between my choice and the contents of the box, given TDT's definition of "causal link".

Replies from: Unknowns
comment by Unknowns · 2015-06-29T16:01:08.321Z · LW(p) · GW(p)

Sure there is a link. The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

In the standard Newcomb, if you one-box, then you had the disposition to one-box, and Omega put the million.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Replies from: ike
comment by ike · 2015-06-29T16:22:56.772Z · LW(p) · GW(p)

The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

OP here said (emphasis added)

A study shows that *most* people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted from genes alone with perfect accuracy; if you stipulate that it can, my answer would be different.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Wrong; it's perfectly possible to have the one-boxing gene but two-box.

(If the facts were as stated in the OP, I'd actually expect conditioning on certain aspects of my decision-making process to remove the correlation; that is, people who think similarly to me would show less correlation between gene and choice. If that prediction were stipulated away, my choice *might* change; it depends on exactly how that was formulated.)

Replies from: Caspar42, Unknowns
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T18:39:27.867Z · LW(p) · GW(p)

OP here said (emphasis added)

A study shows that *most* people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted from genes alone with perfect accuracy; if you stipulate that it can, my answer would be different.

So, as soon as it's not 100% of two-boxers who have the two-boxing gene, but only 99.9%, you assume that you are in the 0.1%?

Replies from: ike
comment by ike · 2015-06-29T20:12:46.291Z · LW(p) · GW(p)

So, as soon as it's not 100% of two-boxers who have the two-boxing gene, but only 99.9%, you assume that you are in the 0.1%?

You didn't specify any numbers. If the actual number were 99.9%, I'd consider that strong evidence against some of my beliefs about the relationship between decisions and genes. I was implicitly assuming a somewhat lower number (around 70%), which would be more compatible, in which case I would expect to be part of that 30% (with greater than 30% probability).

If the number were, in fact, 99.9%, I'd have to assume that genes in general are far more closely tied to the specifics of how we think than I currently believe, and that might be enough to make this an actual Newcomb's problem. The mechanism making it equivalent to Newcomb's would be that it creates a causal link, in TDT terms, from my reaching an opinion to my having a certain gene. "Gene" would then be another word for "brain state", as I've said elsewhere on this post.
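
For concreteness, a naive EDT calculation that takes the population-level correlation at face value looks like the sketch below; whether that correlation should still apply to you after conditioning on your own decision process is exactly what is in dispute here. The $1M/$1K payoffs are the standard ones; everything else is illustrative.

```python
def edt_values(correlation):
    """Naive EDT expected values, treating the population-level
    correlation between gene and choice as if it applied to you."""
    p_million_if_onebox = correlation      # P(no two-boxing gene | I one-box)
    p_million_if_twobox = 1 - correlation  # P(no two-boxing gene | I two-box)
    ev_onebox = p_million_if_onebox * 1_000_000
    ev_twobox = p_million_if_twobox * 1_000_000 + 1_000
    return ev_onebox, ev_twobox

print(edt_values(0.70))   # roughly 700k vs 301k: one-boxing already looks better
print(edt_values(0.999))  # roughly 999k vs 2k: overwhelmingly so
```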

comment by Unknowns · 2015-06-29T16:32:27.389Z · LW(p) · GW(p)

This is confusing the issue. I would guess that the OP wrote "most" because Newcomb's problem is sometimes put in such a way that the predictor is only right most of the time.

And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.

But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb's problem and in this case.

Replies from: ike
comment by ike · 2015-06-29T17:14:17.316Z · LW(p) · GW(p)

And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.

Exactly; but since the vast majority of players won't do this, Omega can still be right most of the time.

But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb's problem and in this case.

Can you formulate that scenario, then, or point me to somewhere it's been formulated? It would have to be a world with very different cognition from ours, if genes can determine choice 100% of the time; arguably, genes in that world would correspond to brain states in our world in a predictive sense, in which case this collapses to regular Newcomb, and I'd one-box.

The problem presented by the gene-scenario, as stated by OP, is

Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p.67) chewing gum problem?

However, as soon as you add in a 100% correlation, it becomes very different, because certain outcomes are no longer possible. If the smoking lesion problem were also 100%, then I'd agree that you shouldn't smoke, because whatever "gene" we're talking about can be completely identified (in a sense) with the brain state that leads to my decision.

Replies from: Unknowns
comment by Unknowns · 2015-06-29T17:26:37.083Z · LW(p) · GW(p)

You are right that 100% correlation requires an unrealistic situation. This is true also in the original Newcomb, i.e. we don't actually expect anything in the real world to be able to predict our actions with 100% accuracy. Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.

The genetic Newcomb requires an even more unrealistic scenario, since in the real world genes do not predict actions with anything close to 100% certitude. I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn't one.

Replies from: ike
comment by ike · 2015-06-29T17:34:55.298Z · LW(p) · GW(p)

Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations.

We could, but I'm not going to think about those unless the problem is stated a bit more precisely, so we don't get caught up in arguing over the exact parameters again. The details of how exactly Omega determines what to do are very important. I've actually said elsewhere that if you didn't know how Omega did it, you should try to put probabilities on the different possible methods and do an EV calculation based on that; is there any way that can fail badly?
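
A minimal sketch of what that calculation could look like; the hypotheses about Omega's method and every probability below are invented purely for illustration:

```python
# Each hypothesis: (prior probability of the hypothesis,
#                   P(box B holds $1M | I one-box),
#                   P(box B holds $1M | I two-box)).
# These numbers are made up; the point is only the shape of the calculation.
hypotheses = [
    (0.5, 0.99, 0.01),  # Omega effectively simulates my deliberation
    (0.3, 0.60, 0.40),  # Omega uses a crude genetic/demographic proxy
    (0.2, 0.50, 0.50),  # Omega is guessing; my choice carries no news about the box
]

def expected_value(one_box):
    total = 0.0
    for p_hyp, p_if_one, p_if_two in hypotheses:
        p_million = p_if_one if one_box else p_if_two
        payoff = p_million * 1_000_000 + (0 if one_box else 1_000)
        total += p_hyp * payoff
    return total

print(expected_value(one_box=True), expected_value(one_box=False))
# Whichever is larger is the recommended action under this mixture.
```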

(Also, if there was any chance of Omega existing and taking cues from our public announcements, the obvious rational thing to do would be to stop talking about it in public.)

I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn't one.

I think people may have been trying to solve the case mentioned in OP, which is less than 100%, and does have a difference.

comment by Gurkenglas · 2015-06-29T15:22:43.992Z · LW(p) · GW(p)
comment by shminux · 2015-06-29T15:07:20.637Z · LW(p) · GW(p)

Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.

EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one-boxing case. It just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they could help it.

Replies from: Caspar42, Unknowns, Unknowns
comment by Caspar Oesterheld (Caspar42) · 2015-06-29T15:26:17.959Z · LW(p) · GW(p)

I am sorry, but I am not sure what you mean by that. If you are a one-boxing agent, then both boxes of Newcomb's original problem contain a reward, assuming that Omega is a perfect predictor.

comment by Unknowns · 2015-06-29T15:21:56.555Z · LW(p) · GW(p)

What are you talking about? In the original Newcomb problem both boxes contain a reward whenever Omega predicts that you are going to choose only one box.

comment by Unknowns · 2015-06-29T17:01:38.266Z · LW(p) · GW(p)

Re: the edit. Two-boxing is strictly better from a causal decision theorist's point of view, but that is the same here and in Newcomb.

But from a sensible point of view, rather than the causal theorist's point of view, one-boxing is better, because you get the million, both here and in the original Newcomb, just as in the AI case I posted in another comment.

comment by OrphanWilde · 2015-06-29T14:53:23.735Z · LW(p) · GW(p)

Anybody who one-boxes in the genetic variant of the Omega problem is reversing causal flow.

Replies from: Unknowns
comment by Unknowns · 2015-06-29T15:24:06.901Z · LW(p) · GW(p)

Why? They one-box because they have the gene. So no reversal. Just as in the original Newcomb problem they choose to one-box because they were the sort of person who would do that.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-30T13:57:05.057Z · LW(p) · GW(p)

From the original post:

A study shows that most people who two-box in Newcomblike problems as the following have a certain gene (and one-boxers don't have the gene).

If you one-box, you may or may not have the gene, but whether or not you have the gene is entirely irrelevant to what decision you should make. If, confronted with this problem, you say "I'll one-box", you're attempting to reverse causal flow - to determine your genetic makeup via the decision you make, as opposed to the decision you make being determined by your genetic makeup. There is zero advantage conferred by declaring yourself a one-boxer in this arrangement.