Newcomb versus dust specks

post by ike · 2016-05-12T03:02:29.720Z · LW · GW · Legacy · 104 comments

You're given the option to torture everyone in the universe, or inflict a dust speck on everyone in the universe. Either you are the only one in the universe, or there are 3^^^3 perfect copies of you (far enough apart that you will never meet.) In the latter case, all copies of you are chosen, and all make the same choice. (Edit: if they choose specks, each person gets one dust speck. This was not meant to be ambiguous.)

As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

What do you do?

How does your answer change if the predictor made the copies of you conditional on their prediction?

How does your answer change if, in addition to that, you're told you are the original?

104 comments

Comments sorted by top scores.

comment by OrphanWilde · 2016-05-12T17:25:05.522Z · LW(p) · GW(p)

3^^^3 dust specks in everybody's eye?

So basically we're talking about turning all sentient life into black holes, or torturing everybody?

I mean, it depends on how good the torture we're talking about is, and how long it will last. If it's permanent and unchanging, eventually people will get used to it/evolve past it and move on. If it's short-term, eventually people will get past it. So in either of those cases, torture is the obvious choice.

If, on the other hand, it's permanent and adaptive such that all life is completely and totally miserable for perpetuity, and there is nothing remotely good about living, oblivion seems the obvious choice.

comment by gjm · 2016-05-12T11:05:46.167Z · LW(p) · GW(p)

Meta: I thought posting in Main was disabled?

[EDITED: so far as I know, it still is; this post is in fact in Discussion rather than Main; I was misled by a quirk of the LW user interface. I think following that link may also illustrate what confused me.]

Replies from: ike
comment by ike · 2016-05-12T11:46:35.809Z · LW(p) · GW(p)

I posted this to discussion.

Replies from: gjm
comment by gjm · 2016-05-12T14:23:46.384Z · LW(p) · GW(p)

Hmm, so you did. It turns out that

  • for any Discussion-section post under /r/discussion/lw/[code]/[title], the exact same post appears at /lw/[code]/[title] (and, when viewed there, looks as if it is in Main because the "MAIN" at the top is boldfaced and the "DISCUSSION" is not);
  • when you get a reply to a comment on a post in the Discussion section, in your inbox the "Context" link attached to that reply goes to the "Main-ified" version.

... which is how I came to think your post was in Main rather than Discussion. Sorry about that.

comment by HungryHobo · 2016-05-12T10:36:33.565Z · LW(p) · GW(p)

This seems like a weird mishmash of other hypotheticals on the site; I'm not really seeing the point of parts of your scenario.

Replies from: gjm
comment by gjm · 2016-05-12T11:02:06.320Z · LW(p) · GW(p)

I think the point may be: LW orthodoxy, in so far as there is such a thing, says to choose SPECKS over TORTURE [EDITED to add:] ... no, wait, I mean the exact opposite, TORTURE over SPECKS ... and ONE BOX over TWO BOXES, and that combining these in ike's rather odd scenario leads to the conclusion that we should prefer "torture everyone in the universe" over "dust-speck everyone in the universe" in that scenario, which might be a big enough bullet to bite to make some readers reconsider their adherence to LW orthodoxy.

My own view on this, for what it's worth, is that all my ethical intuitions -- including the one that says "torture is too awful to be outweighed by any number of dust specks" and the one that says "each of these vastly-many transitions from which we get from DUST SPECKS to TORTURE is a strict improvement" -- have been formed on the basis of experiences (my own, my ancestors', earlier people in the civilization I'm a part of) that come nowhere near to this sort of scenario, and I don't trust myself to extrapolate. If some incredibly weird sequence of events actually requires me to make such a choice for real then of course I'll have to make it (for what it's worth, I think I would choose TORTURE and ONE BOX in the separate problems and DUST SPECKS in this one, the apparent inconsistency notwithstanding, not least because I don't think I could ever actually have enough evidence to know something was a truly perfect truthful predictor) but I think its ability to tell me anything insightful about my values, or about the objective moral structure of the universe if it has one, is very very very doubtful.

Replies from: polymathwannabe, 9eB1
comment by polymathwannabe · 2016-05-12T21:49:38.869Z · LW(p) · GW(p)

LW orthodoxy, in so far as there is such a thing, says to choose SPECKS over TORTURE

No, Eliezer and Hanson are anti-specks.

Replies from: gjm
comment by gjm · 2016-05-12T22:58:09.168Z · LW(p) · GW(p)

Wow, did I really write that? It's the exact opposite of what I meant. Will fix.

comment by 9eB1 · 2016-05-12T15:28:30.455Z · LW(p) · GW(p)

I think your explanation may be correct, but I don't understand why torture would be the intuitive answer even so. First, if I select torture, everyone in the universe gets tortured, which means I get tortured. If instead I select dust speck, I get a dust speck, which is vastly preferable. Second, I would prefer a universe with a bunch of me to one with just me, because I'm pretty awesome so more me is pretty much just better. Basically I just fail to see a downside to the dust speck scenario.

Replies from: gjm, ike
comment by gjm · 2016-05-12T17:34:17.545Z · LW(p) · GW(p)

The downside to the dust speck scenario is that lots and lots and lots of you get dust-specked. But yes, I think the thought experiment is seriously impaired by the fact that the existence of more copies of you is likely a bigger deal than whether they get dust-specked.

Perhaps we can fix it, as follows: Omega has actually set up two toy universes, one with 3^^^3 of you who may or may not get dust-specked, one with just one of you who may or may not get tortured. Now Omega tells you the same as in ike's original scenario, except that it's "everyone sharing your toy universe" who will be either tortured or dust-specked.

Replies from: ike
comment by ike · 2016-05-12T18:28:08.395Z · LW(p) · GW(p)

But yes, I think the thought experiment is seriously impaired by the fact that the existence of more copies of you is likely a bigger deal than whether they get dust-specked.

The idea was that your choice doesn't change the number of people, so this shouldn't affect the answer.

Replies from: gjm
comment by gjm · 2016-05-12T19:58:09.920Z · LW(p) · GW(p)

That seems, if you don't mind my saying so, an odd thing to say when discussing a version of Newcomb's problem. ("Your choice doesn't change what's in the boxes, so ...")

Replies from: ike
comment by ike · 2016-05-12T21:02:59.403Z · LW(p) · GW(p)

In the first version, there's no causal relation between your choice and the number of people in the world. In the third, there is, and in the middle one, anthropics must also be considered.

I gave multiple scenarios to make this point.

If the predictor in Newcomb doesn't touch the boxes but just tells you that they predict your choice is the same as what's in the box, it turns into the smoking lesion scenario.

comment by ike · 2016-05-12T15:41:23.029Z · LW(p) · GW(p)

Specks is supposed to be the intuitive answer.

Second, I would prefer a universe with a bunch of me to one with just me, because I'm pretty awesome so more me is pretty much just better.

That's why I gave scenarios where your choice doesn't causally determine the number of people, which is where Newcomblike scenarios come in.

comment by ArisKatsaris · 2016-05-12T07:04:39.028Z · LW(p) · GW(p)

Well I personally don't want to be tortured, so I choose the dust speck.

Even if I wasn't personally involved, and I was to decide on morality alone rather than personal interest, average utilitarianism tells me that I should choose the dust speck. (Better that 100% of all people suffer from a dust speck, than 100% of all people suffer from torture)

Replies from: gjm
comment by gjm · 2016-05-12T11:05:14.211Z · LW(p) · GW(p)

Do you generally endorse average utilitarianism? E.g., if you can press a button to create a new world, completely isolated from all others, containing 10^10 people 10x happier than typical present-day Americans, do you press it if what currently exists is a world with 10^10 people only 9x happier than typical present-day Americans and refrain from pressing it if it's 11x instead?
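
A quick way to see what's at stake in this question is to compute the averages directly. A minimal sketch (not anything gjm specified), with made-up units in which a typical present-day American's happiness is 1.0:

```python
# Hedged illustration of the button scenario above, with invented units:
# a typical present-day American's happiness = 1.0.
def average_utility(populations):
    """populations: list of (number of people, per-person utility) pairs."""
    people = sum(n for n, _ in populations)
    utility = sum(n * u for n, u in populations)
    return utility / people

new_world = (10**10, 10.0)                      # the world the button would create

baseline_9x = [(10**10, 9.0)]                   # existing world at 9x
print(average_utility(baseline_9x),             # 9.0
      average_utility(baseline_9x + [new_world]))   # 9.5 -> pressing raises the average

baseline_11x = [(10**10, 11.0)]                 # existing world at 11x
print(average_utility(baseline_11x),            # 11.0
      average_utility(baseline_11x + [new_world]))  # 10.5 -> the same button lowers it
```

The created lives are identical in both cases; only the pre-existing average changes, which is the feature of average utilitarianism the question is probing.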

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-13T19:45:20.362Z · LW(p) · GW(p)

The answer is complex:

  • First of all, the creation of people is a complex moral decision. Whether you espouse average utilitarianism or total utilitarianism or whatever other decision theory, if you ask someone "Would you press a button that would create a person", they'd normally be HESITANT, no matter whether you said it would be a very happy person or a moderately happy person. We tend to think of creating people as a big deal, that brings a big responsibility.

  • Secondly, my average utilitarianism is about the satisfaction of preferences, not happiness. This may seem a nitpick, though.

  • Thirdly, I can't help but notice that you're using the example of the creation of a world that in reality would increase average utility, even as you're using a hypothetical that states that in that particular case it would decrease average utility. This feels like a scenario designed to confuse the moral intuition into giving the wrong answer.

So using the current reality instead (rather than the one where people are 9x happier): Would I choose to create another universe happier than this one? In general, yes. Would I choose to create another universe, half as happy as this one? In general, no, not unless there's some additional value that the presence of that universe would provide to us, enough so that it would make up for the loss in average utility.

Replies from: gjm, Jiro
comment by gjm · 2016-05-13T23:31:54.749Z · LW(p) · GW(p)

the creation of people is a complex moral decision

True enough. But it seems to me that hesitation in such cases is usually because of uncertainty either about whether the new people would really have good lives or about their effect on others around them. In the scenarios I described, everyone involved gets a good life when all their interactions with others are taken into account. So yeah, creating lives is complex, but I don't see that that invalidates my question at all.

preferences, not happiness

That happens to be my, er, preference too. And I do think it's a nitpick; we can just take "10x happier" as a sort of shorthand for some corresponding statement about preferences.

designed to confuse the moral intuition

I promise I had absolutely no such intention. I took the levels higher than typical ones in our world to avoid distracting digressions about whether the typical life in our world is in fact better than nothing. (Note that this isn't the same question as whether it's worth continuing such a life once it's already in progress.)

Your example of a world half as happy as this seems like it has a similar but opposite problem: depending on what "half as happy" actually means, you might be describing a change that would be rejected by total utilitarianism as well as average. That's the problem I was trying to avoid.

comment by Jiro · 2016-05-13T20:51:55.843Z · LW(p) · GW(p)

Would I choose to create another universe happier than this one? In general, yes.

Okay, now I reveal that just yesterday we discovered yet another universe which already exists and is a lot happier than the one you would choose to create. In fact it's so much happier that creating that universe would now drive the average down instead of up.

If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-14T14:52:12.142Z · LW(p) · GW(p)

If you're using average utility, then whether this discovery has been made affects whether you want to create that other universe. Is that correct?

With the standard caveats, yes that seems reasonable. Given the existence of that ultrahappy universe an average human life will be more likely to exist in happier circumstances than the ones in the multiversal reality I'd create if I chose to add that less-than-averagely-happy universe.

Same way as I'd not take 20% of actual existing happy people and force them to live less happy lives.

Think about all sentient lives as if they were part of a single mind, called "Sentience". We design portions of Sentience's life. We want as much a proportion of Sentience's existence to be as happy as possible, satisfying Sentience's preferences.

comment by woodchopper · 2016-05-17T08:59:17.172Z · LW(p) · GW(p)

This doesn't seem very coherent.

As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.

OK. Then that means if I choose torture, I am alone. If I choose the dust specks, I am not alone. I don't want to be tortured, and don't really care about 3^^^3 people getting dust specks in their eyes, even if they're all 'perfect copies of me'. I am not a perfect utilitarian.

A perfect utilitarian would choose torture though, because one person getting tortured is technically not as bad from a utilitarian point of view as 3^^^3 dust specks in eyes.
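
The arithmetic being gestured at here only requires that the number of speck recipients exceed whatever finite torture-to-speck disutility ratio you assume. A toy sketch with invented numbers (and a stand-in population, since 3^^^3 itself is unrepresentably large):

```python
# Toy totals with invented disutilities; the point is only that any finite
# torture/speck ratio is swamped once the population is large enough.
TORTURE = 10**9          # assumed: one torture ~ a billion specks' worth of disutility
SPECK = 1
POPULATION = 10**100     # stand-in for 3^^^3, which is vastly larger still

torture_total = 1 * TORTURE            # torture branch: the predictor says you are alone
specks_total = POPULATION * SPECK      # specks branch: every copy gets one speck

print(torture_total < specks_total)    # True: total utilitarianism prefers the single torture
```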

comment by RowanE · 2016-05-14T16:53:38.684Z · LW(p) · GW(p)

The way the problem reads to me, choosing dust specks means I live in a universe where 3^^^3 of me exist, and choosing torture means 1 of me exist. I prefer that more of myself exist than not, so I should choose specks in this case.

In a choice between "torture for everyone in the universe" and "specks for everyone in the universe", the negative utility of the former obviously outweighs that of the latter, so I should choose specks.

I don't see any incongruity or reason to question my beliefs? I suppose it's meant to be implied that it's other selves that exist because of the size of the universe, so there's either one of "everyone in the universe" or 3^^^3 copies of everyone, but in that case my other selves are too far outside my light-cone for "iff you are alone" to be a prediction that makes sense.

comment by OrphanWilde · 2016-05-13T15:34:19.784Z · LW(p) · GW(p)

For the case that dust specks aren't additive, assuming we treat copies of me as distinct entities with distinct moral weight, 3^^^3 copies of me is either a net negative - as a result of 3^^^3 lives not worth living - or a net positive - as a result of an additional 3^^^3 lives worth living. The point of the dust speck is that it has only a negligible effect; the weight of the dust speck moral issue is completely subsumed by the weight of the duplicate people issue.

If we don't treat them as distinct moral entities, well, the duplication and the dust speck don't enter into it.

I don't think your conceptual problem sufficiently isolates whatever moral quandary you're trying to express; there's just too much going on here.

Replies from: ike
comment by ike · 2016-05-13T17:10:06.727Z · LW(p) · GW(p)

3^^^3 copies of me is either a net negative - as a result of 3^^^3 lives not worth living - or a net positive - as a result of an additional 3^^^3 lives worth living. The point of the dust speck is that it has only a negligible effect; the weight of the dust speck moral issue is completely subsumed by the weight of the duplicate people issue.

If you smoke in the smoking lesion scenario, then you shouldn't choose your action here based on how many people would exist, because they would exist anyway. (At least in the first of three cases.)

Replies from: OrphanWilde
comment by OrphanWilde · 2016-05-13T17:33:05.158Z · LW(p) · GW(p)

Either you misunderstand the smoking lesion scenario and the importance of the difference between a correlation and a perfect predictor, or you're just trolling the board by throwing every decision theory edge case you can think of into a single convoluted mess.

Replies from: ike
comment by ike · 2016-05-13T19:33:27.438Z · LW(p) · GW(p)

I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?

As long as the predictor doesn't cause anything but merely informs, they're equivalent to the gene. The reason why one-boxing is correct is because your choice causes the money, while the reason smoking is correct is because your choice doesn't cause cancer.
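
A rough way to see the two recommendations side by side is to write out the expected utilities each theory computes. This is only a sketch with invented payoffs (enjoyment of smoking +10, cancer -1000, the opaque box $1,000,000, the visible box $1,000) and a made-up prior p_lesion:

```python
# Sketch of why CDT says "smoke" but EDT says "don't", and why conditioning on
# the choice favors one-boxing. All numbers are invented for illustration.

# --- Smoking lesion (gene perfectly correlated with smoking and with cancer) ---
p_lesion = 0.5                          # assumed prior; your decision can't change it
cdt_smoke = 10 + p_lesion * -1000       # CDT: intervening on "smoke" leaves p_lesion alone
cdt_dont  =  0 + p_lesion * -1000
edt_smoke = 10 + 1.0 * -1000            # EDT: conditioning on "smoke" implies the lesion
edt_dont  =  0 + 0.0 * -1000
print(cdt_smoke > cdt_dont)   # True  -> CDT smokes
print(edt_smoke > edt_dont)   # False -> EDT abstains

# --- Newcomb (perfect predictor) ---
# The prediction tracks the choice, so conditioning on one-boxing implies the
# million is there; that is the sense in which the choice "causes" the money.
one_box = 1.0 * 1_000_000
two_box = 1_000 + 0.0 * 1_000_000
print(one_box > two_box)      # True -> one-box
```

The asymmetry ike describes is exactly the difference between the two smoking-lesion lines and the Newcomb lines: the same conditioning step is doing the work, but only in Newcomb does the choice stand in a causal (or logical, on TDT) relation to the payoff.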

Replies from: OrphanWilde, entirelyuseless
comment by OrphanWilde · 2016-05-16T16:03:10.792Z · LW(p) · GW(p)

I may be misunderstanding something, but isn't the standard LW position on smoking to smoke even if the gene's correlation to smoking and cancer is 1?

If the mutual correlation to both is 1, you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke, in which case you shouldn't smoke. At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative. Perfect predictors necessarily make a mess of causality.

Replies from: ike, entirelyuseless
comment by ike · 2016-05-16T17:09:27.500Z · LW(p) · GW(p)

you will smoke if and only if you have the gene, and you will have the gene if and only if you smoke, in which case you shouldn't smoke

This implicitly assumes EDT.

At the point at which the gene is a perfect predictor, if you have a genetic test and you don't have the gene, and then smoke

But that's not what CDT counterfactuals do. You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction. If you would actually smoke, then yes, but counterfactuals don't imply there's any chance of it happening in reality.
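
"Cutting off previous nodes" is the usual graph-surgery picture of a CDT counterfactual: conditioning on the action treats it as evidence about its causes, while intervening severs the arrow from cause to action. A minimal sketch on the lesion graph (Lesion -> Smoke, Lesion -> Cancer), with made-up probabilities:

```python
# Conditioning vs. intervening on the graph Lesion -> Smoke, Lesion -> Cancer.
# All probabilities are invented; the correlation is set to 1 as in the thread.
p_lesion = 0.5
p_smoke_given_lesion  = {True: 1.0, False: 0.0}
p_cancer_given_lesion = {True: 1.0, False: 0.0}

lesion_prior = [(True, p_lesion), (False, 1 - p_lesion)]

# Observational (EDT-style): P(cancer | smoke), by Bayes
p_smoke = sum(p * p_smoke_given_lesion[l] for l, p in lesion_prior)
p_cancer_given_smoke = sum(p * p_smoke_given_lesion[l] * p_cancer_given_lesion[l]
                           for l, p in lesion_prior) / p_smoke

# Interventional (CDT-style): do(smoke) cuts the Lesion -> Smoke arrow,
# so the lesion keeps its prior probability whatever you decide.
p_cancer_do_smoke = sum(p * p_cancer_given_lesion[l] for l, p in lesion_prior)

print(p_cancer_given_smoke)  # 1.0: observing your choice is evidence about the lesion
print(p_cancer_do_smoke)     # 0.5: intervening on your choice is not
```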

Replies from: OrphanWilde
comment by OrphanWilde · 2016-05-16T18:30:43.048Z · LW(p) · GW(p)

This implicitly assumes EDT.

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

But that's not what CDT counterfactuals do.

CDT assumes causality makes sense in the universe. Your hypotheticals don't take place in a universe with the kind of causality causal decision theory depends upon.

You cut off previous nodes. As the choice to smoke doesn't causally affect the gene, smoking doesn't counterfactually contradict the prediction.

In the case of a perfect predictor, yes, smoking specifies which gene you have. You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe. You're a part of the universe; if the universe has a law (which it does, in our hypotheticals), the law applies to you, too.

We have a perfect predictor. We do something the perfect predictor predicted we wouldn't. There is a contradiction there, in case you didn't notice; either it's not, in fact, the perfect predictor we specified, or we didn't do the thing. One or the other. And our hypothetical universe is constructed such that the perfect predictor is a perfect predictor; therefore, we don't get to violate its predictions.

Replies from: ike
comment by ike · 2016-05-16T18:41:31.055Z · LW(p) · GW(p)

No it doesn't. It assumes a "perfect predictor" is what it is. I don't give a damn about evidence - we're specifying properties of a universe here.

You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.

You don't get to say "Everybody who smokes has this gene" as a property of the universe, and then pretend to be an exception to a property of the universe because you have a bizarre and magical agency that gets to bypass properties of the universe.

In other words, you're denying the exact thing that CDT asserts.

There is a contradiction there

Which is what a counterfactual is.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions, your arguments literally map directly onto EDT arguments).

Replies from: OrphanWilde
comment by OrphanWilde · 2016-05-16T19:20:13.676Z · LW(p) · GW(p)

You said "you shouldn't smoke", which is a decision-theoretical claim, not a specification. It's consistent with EDT, but not CDT.

No it isn't, it's a statement about the universe: If you smoke, you'll get lesions. It's written into the specification of the universe; what decision theory you use doesn't change the characteristics of the universe.

In other words, you're denying the exact thing that CDT asserts.

No. You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory. Causality in our hypothetical works differently.

Which is what a counterfactual is.

No it isn't.

Whatever your theory is, it is denying core claims that CDT makes, so you're denying CDT (and implicitly assuming EDT as the method for making decisions, your arguments literally map directly onto EDT arguments).

No it isn't. In terms of CDT, we can say that smoking causes the gene; this isn't wrong, because, according to the universe, anybody who smokes has the gene; if they didn't, they do now, because the correlation is guaranteed by the laws of the universe. No matter how much work you prepared to ensure you didn't have the gene in advance of smoking, the law of the universe says you have it now. No matter how many tests you ran, they were all wrong.

It may seem unintuitive and bizarre, because our own universe doesn't behave this way - but when you find yourself in an alien universe, stomping your foot and insisting that the laws of physics should behave the way you're used to them behaving is a fast way to die. Once you introduce a perfect predictor, the universe must bend to ensure the predictions work out.

Replies from: ike
comment by ike · 2016-05-16T20:18:36.680Z · LW(p) · GW(p)

You don't get to specify a universe without the kind of causality that the kind of CDT we use in our universe depends on, and then claim that this says something significant about decision theory.

What kind of causality is this, given that you assert that the correct thing to do in smoking lesions is refrain from smoking, and smoking lesions is one of the standard things where CDT says to smoke?

"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.

In terms of CDT, we can say that smoking causes the gene

CDT asserts the opposite, and so if you claim this then you disagree with CDT.

You don't understand what counterfactuals are.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-05-16T20:59:28.997Z · LW(p) · GW(p)

What kind of causality is this, given that you assert that the correct thing to do in smoking lesions is refrain from smoking, and smoking lesions is one of the standard things where CDT says to smoke?

Recursive causality.

"A causes B, therefore B causes A" is a fallacy no matter what arguments you put forward.

Perfect mutual correlation means both that A->B and that B->A.

CDT asserts the opposite, and so if you claim this then you disagree with CDT.

No it doesn't.

You don't understand what counterfactuals are.

A counterfactual is a state of existence which is not true of the universe. It is not a contradiction.

comment by entirelyuseless · 2016-05-16T16:23:36.067Z · LW(p) · GW(p)

"If you have a genetic test and you don't have the gene, and then smoke - then the genetic test produced a false negative."

If Omega makes the mistake of telling someone else that he predicted that you will one-box, and that person tells you, so you then take both boxes, knowing that the million is already there, then Omega's prediction was wrong.

Omega can be a perfect predictor, but he cannot tell you his prediction, at least not if you work the way normal humans do. Likewise, a gene could be a perfect predictor, but not if you know about it, at least not if you work the way normal humans do.

Replies from: OrphanWilde
comment by OrphanWilde · 2016-05-16T18:14:01.984Z · LW(p) · GW(p)

Trial problem:

Omega appears before you, and gives you a pencil. He tells you that, in universes in which you break this pencil in half in the next twenty seconds, the universe ends immediately. Not as a result of your breaking the pencil - it's pure coincidence that in all universes in which you break the pencil, the universe ends, and in all universes in which you don't, it doesn't.

Do you break the pencil in half? It's not like you're changing anything by doing so, after all; some set of universes will end, some set won't, and you aren't going to change that.

You're just deciding which set of universes you happen to occupy. Which implies something.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-16T19:54:44.730Z · LW(p) · GW(p)

I don't break the pencil. But I already pointed out in Newcomb and in the Smoking Lesion that I don't care if I can change anything or not. So I don't care here either.

comment by entirelyuseless · 2016-05-14T00:42:45.238Z · LW(p) · GW(p)

We've had this discussion before. When you one-box, your choice does not cause the money. The money is already there or it is not. Causality does not go backwards in time.

In other words, Newcomb and the smoking lesion are identical in logical form.

Replies from: ArisKatsaris, ike
comment by ArisKatsaris · 2016-05-14T14:37:17.254Z · LW(p) · GW(p)

When you one-box, your choice does not cause the money.

Your decision algorithm will cause the choice. The prediction of that choice, by someone who knows your decision algorithm, will have caused the money to be there.

If you want the money you should therefore be a decision algorithm that makes the choice whose prediction will cause the money.
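
One way to read this is that Omega runs (something equivalent to) your decision algorithm to fill the boxes, so the algorithm's output determines its own payoff. A toy sketch with the usual $1,000,000 / $1,000 amounts, in which the perfect-predictor assumption is modelled by literally running the same algorithm twice:

```python
# Toy Newcomb: the prediction is made by running the agent's own algorithm,
# so whichever policy you are, the payoff already reflects that policy.
def payoff(decision_algorithm):
    prediction = decision_algorithm()                      # Omega's perfect prediction
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = decision_algorithm()                          # your actual choice
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(payoff(lambda: "one-box"))   # 1000000
print(payoff(lambda: "two-box"))   # 1000
# Being the kind of algorithm that one-boxes is what gets the million put there.
```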

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-14T15:20:59.733Z · LW(p) · GW(p)

You cannot make yourself into a certain decision algorithm, just as you cannot make yourself have or not have a lesion.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-15T11:56:42.648Z · LW(p) · GW(p)

You cannot make yourself into a certain decision algorithm

What, is this some sort of objection where you believe that determinism means we don't make 'real' choices?

You could be convinced by my words and make yourself into a person who chooses to one-box. Or you could refuse to be convinced and remain a person who chooses to two-box.

Granted, by being "convinced" or "not convinced" it means that you're already the decision algorithm that would make that choice. So what? Whether you'll be convinced or not still affects your decision algorithm from then on.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T14:18:29.151Z · LW(p) · GW(p)

No, I don't believe that determinism means we don't make real choices. But it is also true, as you note yourself, that if I am convinced by your words, then I was already the kind of person who would be convinced, and I did not make myself into that sort of person. And likewise for the opposite case.

But I am consistent: I believe we make real choices even if Omega predicts our actions, and I also believe we make real choices even if a lesion causes them. The people arguing against my position are saying we don't make real choices in the second case, so they are the ones raising the determinism objection.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-17T19:29:52.451Z · LW(p) · GW(p)

Okay, can you just state clearly whether you one-box or two-box, and whether you smoke or not-smoke in the smoking lesion problem, so that I understand what your position is, before trying to understand why it is?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-18T14:07:06.987Z · LW(p) · GW(p)

I take the one box in Newcomb, and I do not smoke in the smoking lesion.

My position is that they are the same problem. The million is already there or it is not, and the lesion is there or it is not. I cannot change that in either case. But I still make a real decision, one that will be correlated with the outcome, and I choose the winning one.

Replies from: Pimgd, Lumifer
comment by Pimgd · 2016-05-19T07:57:06.826Z · LW(p) · GW(p)

I can't even begin to model myself as "liking" smoking - it gives a disgusting smell that clings to everything and even being near second-hand smoke makes for uncomfortable breathing. If I try to model myself as someone who likes smoking, I don't see myself living, because I've been altered beyond recognition.

Add to that that it seems to be a problem without a correct answer ("yes" seems to be the preferred option, given that there is no statement that you prefer smoking without cancer over smoking with cancer, so "you prefer to smoke" + "some cancer-related stuff that you may or may not have an opinion about" = "go smoke already"; but that isn't clearly the correct answer either, because from another worldview, "to smoke is to admit that you have this genetic flaw and thus you have cancer"), and I have massive problems when it comes to understanding this sort of thing.

This question seems to have the same thing going on - pick one! A) "everyone is tortured" or B) "everyone gets a dust speck". But wait, there's some numbers going on in the background where there's either a lot of clones of you or only one of you. And if everyone gets tortured then there's only one of you. Here it is left unsaid that torture is far far far worse than the dust speck for a single individual, but the issue remains: I see "Do a really really really bad thing" or "Do a meh thing" and then some fancy attempts to trip up various logic systems - What about the logic that, hey, A is always worse than B? ... I guess you could fix this by there being OTHER people present, so that it's a "you get tortured" vs "you and everyone else (3^^^3) get a dust speck"... but then there'd be loopholes in the region of "yes, but my preferences prefer a world where there are people other than me, so I'll take torture if that means I get to exist in such a world".

As for one-box/two-box, I'd open B up, and if it was empty I'd take the contents of A home. If it contained the cash, well, I dunno. I guess I'd leave the 1000 behind, if the whole "if you take both then B is empty" idea was true. Maybe it's false. Maybe it's true! Regardless of that, I just got a million bucks, and an extra $1000, well, that's not all that much after receiving a whole million. (Yes, you could do stuff with that money, like buying malaria nets or something, but I am not an optimal rational agent, my thinking capacity is limited, and I'd rather bank the $1m than get tripped up by $1000 because I got greedy). ... weirdly enough, if you change the numbers so that A contained $1000 and B contained $1001, I'd open up B first... and then regardless of seeing the money, I'd take A home too.

Feel free to point out the holes in my thinking - I'd prefer examples that are not too "out there" because my answers tend to not be based on the numbers but on all the circumstances around it - that $1m would see me work on what I'd want to work on for the rest of my life, and that $1000 would reduce the time I'd need to spend working for doing what I wanna do by about a month (or 3 weeks).

Replies from: gjm, Pimgd, Pimgd
comment by gjm · 2016-05-19T14:47:15.510Z · LW(p) · GW(p)

I can't even begin to model myself as "liking" smoking

Then for the "smoking lesion" problem to be any use to you, you need to perform a sort of mental translation in which it isn't about smoking but about some other (perhaps imaginary) activity that you do enjoy but is associated with harmful outcomes. Maybe it's eating chocolate and the harmful outcome is diabetes. Maybe it's having lots of sex and the harmful outcome is syphilis. Maybe it's spending all your time sitting alone and reading and the harmful outcome is heart disease. The important thing is to keep the structure of the thing the same: doing X is associated with bad outcome Y, it turns out (perhaps surprisingly) that this is not because X causes Y but because some other thing causes both X and Y, you find yourself very much wanting to do X, so now what do you do?

Replies from: Jiro
comment by Jiro · 2016-05-21T19:21:45.449Z · LW(p) · GW(p)

"Having a smoking lesion makes you choose smoking" is vague. Does it make you choose smoking by increasing the utility you gain from smoking, but not affecting your ability to reason based on this utility? Or does it make you choose smoking by affecting your ability to do logical reasoning?

In the former case, switching from nonsmoking to smoking because you made a logical conclusion should not affect your chances of dying, even though switching to smoking in general should affect your chance of dying.

In the latter case, switching to smoking should affect your chance of dying, but you are then asking a question which presupposes under some circumstances that you can't answer it.

comment by Pimgd · 2016-05-19T09:01:21.725Z · LW(p) · GW(p)

I went looking around on Wikipedia and found Kavka's toxin puzzle which seems to be about "you can get a billion dollars if you intend to drink this poison (which will hurt a lot for a whole day similar to the worst torture imaginable but otherwise leave no lasting effects) tomorrow evening, but I'll pay you tonight"... but there I don't get the paradox either - what's stopping you from creating a sub agent (informing a friend) with the task of convincing you not to drink AFTER you've gotten the money? ... Possibly by force. Possibly by relying on saying things in a manner that you don't know that he knows he has to do this. Possibly with a whole lot of actors. Like scheduling a text "I am perfectly fine, there is nothing wrong with me" to parents and friends to be sent tomorrow morning.

Of course, this relies on my ability to raise the probability of intervention, but that seems like an easier challenge than engaging in willful doublethink... ... or you'd perhaps add various chemicals to your food the next day - I know I can be committed to an idea (I will do this task tonight), come home, eat dinner, and then I'd be totally uncommitted (that task can wait, I will play games first).

... A billion is a lot of money, perhaps I'd drink the poison and then have a hired person drug me to a coma, to be awoken the next day? You could hire a lot of medical staff with that kind of money.

Yet I get the feeling that all these "creative" solutions are not really allowed. Why is that?

Replies from: gjm, Lumifer, ArisKatsaris
comment by gjm · 2016-05-19T14:40:19.763Z · LW(p) · GW(p)

all these "creative" solutions are not really allowed. Why is that?

Because the point of these questions isn't to challenge you to find a good answer, it's that the process of answering them may lead to insight into your actual value system, understanding of causation, etc. Finding clever ways around the problem is a bit like cheating in an optician's eye test[1]: sure, maybe you can do that, but the result will be that you get less effective eyesight correction and end up worse off.

[1] e.g., maybe you have found a copy of whatever chart they use and memorized the letters on it.

So, e.g., the point of the toxin puzzle is to ask: can you, really, form an intention to do something when you know that when the time comes you will be able to choose and will have no reason to choose to do it and much reason not to? That's an interesting psychological and/or philosophical question. You can avoid answering it by saying "well, I'd find a way to make taking the toxin not actually do me any harm", and that might be an excellent idea if you ever find yourself in that bizarre situation -- but the point of the question isn't to plan for an actual future where you encounter a quirkily sadistic but generous billionaire, it's to help clarify your thinking about what happens when you form an intention to do something.

Of course you may repurpose the question, and then your "clever" answers may be entirely to the point. Suppose you decide that no, you cannot form an intention to do something that you will have good reason to choose not to do; well, situations might arise where it would be useful to do that (even though the precise situation Kavka describes is unlikely), so it's reasonable to think about how you might make it possible, and then some "clever" answers may become relevant. But others probably won't, and the "get drugged into a coma" solution is probably one of those.

(Incidentally, in the original puzzle the amount of money was a million rather than a billion. That's probably still enough to hire someone to drug you into a coma.)

Replies from: Pimgd
comment by Pimgd · 2016-05-19T23:07:40.790Z · LW(p) · GW(p)

It is indeed a million, woops. Thanks for explaining in detail about the purpose of such questions. I find that I get into "come up with a clever answer" mode faster if the question has losses - not getting money is "meh", a day worth of excruciating pain in exchange for money, well, that needs a workaround!

As for the puzzle itself, I don't know if I can form such an intention... but I seem to be really good at it in real life. I call it procrastinating. I make a commitment that fails to account for time discounting and then I end up going to bed later than I wanted. After dinner I intended to go to bed early; at midnight I wanted to see another episode. So apparently it's possible.

comment by Lumifer · 2016-05-19T14:29:15.722Z · LW(p) · GW(p)

Yet I get the feeling that all these "creative" solutions are not really allowed. Why is that?

There are reasons.

comment by ArisKatsaris · 2016-05-19T12:58:37.484Z · LW(p) · GW(p)

What's stopping you from creating a sub agent (informing a friend) with the task of convincing you not to drink AFTER you've gotten the money? ...

Like Odysseus with the Sirens, you'd have to "create a subagent"/hire a friend to convince you not to drink, before you intend to drink it, then actually change your intentions and want to drink it.

This doesn't seem possible for a human mind, though of course it's easier to imagine for artificial minds that can be edited at will.

Replies from: Pimgd
comment by Pimgd · 2016-05-19T14:00:15.704Z · LW(p) · GW(p)

How is it not possible? When force is allowed, the hired people could simply physically restrain me - I'd fight them with tooth and nail, their vastly superior training would have me on the floor within a minute, after which I'd be kept separate from the vial of toxin for the remainder of the day. ... Although I guess "separation for a period of time"-based arguments rely on you both being obsessive AND pedantic enough to not care about it on the next day. Being really passionate about something and then dropping the issue the next day because the window of opportunity has been closed is ... unlikely to occur, so my solution might end up making me rich but leaving me in the looney-bin.

I think a better argument against my ideas is logistics - how could I acquire everything I need in a span of (at most) 23 hours? (The wording is such that tonight, as the day turns, you must intend to take the poison.) A middle class worker generally doesn't have ties to any mercenaries, and payment isn't given until the morning after your intent has to be made.

I get your point, though - convincing someone to later convince you already carries massive penalties ("Why are you acting so weird?"), the situation carries massive penalties ("And you believe this guy?", "For HOW MUCH?!")...

My argument basically rests on turning the whole thing into a game: "Design a puzzle you cannot get out of. Then, a few minutes before midnight (to be safe), start doing your utmost best to break this puzzle."

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-19T17:52:27.415Z · LW(p) · GW(p)

How is it not possible? When force is allowed, the hired people

Why would you hire people to stop you from drinking it, if you intend to drink it, since you know that hiring such people will increase the chances you will end up not drinking it?

I get your point, though - convincing someone to later convince you already carries massive penalties

NO! That's not my point. My point isn't whether it's expensive or difficult to hire someone, My point is that you don't want to hire someone. Because you intend to drink the toxin, and hiring someone to stop you from doing that doesn't match your intention.

Replies from: Pimgd
comment by Pimgd · 2016-05-19T22:44:20.099Z · LW(p) · GW(p)

If I intend to do my best at an exam tomorrow, but stay up late playing games, does this somehow lift my intention to do well on my exam?

By the original problem statement, I have to have the intention of taking the poison AT midnight. Rephrased - when it is midnight, I must intend to take the poison that next day. BEFORE midnight, it is allowed to have OTHER intentions. I intend to use that time to set up hurdles for myself - and then to try my hardest. It would be especially helpful if these hurdles are also things like tricking myself into thinking that it won't actually hurt (via a tranquilizer to put me under straight afterward, for instance).

I know it sounds like doublethink, but that's only if you think there is no difference between me before midnight and me after midnight.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-20T06:44:58.784Z · LW(p) · GW(p)

By the original problem statement, I have to have the intention of taking the poison AT midnight. Rephrased - when it is midnight, I must intend to take the poison that next day. BEFORE midnight, it is allowed to have OTHER intentions. I intend to use that time to set up hurdles for myself - and then to try my hardest.

If you can change your intentions like that, that's indeed a fine solution.

I'm not sure that a human mind can, though. Most human minds anyway.

comment by Pimgd · 2016-05-19T08:25:15.506Z · LW(p) · GW(p)

I get the feeling maybe this ought to be two comments, one on the main thread and one here. But they're too entangled.

comment by Lumifer · 2016-05-18T14:30:27.766Z · LW(p) · GW(p)

But I still make a real decision

Leaving Newcomb aside for the moment, in the smoking lesion case your decision is predetermined and you have no choice in the matter. I don't see how that counts as "a real decision".

Replies from: ArisKatsaris, entirelyuseless
comment by ArisKatsaris · 2016-05-18T20:10:50.106Z · LW(p) · GW(p)

"your decision is predetermined and you have no choice in the matter."

Is LW now populated by the sort of people who haven't even heard of compatibilism and of the idea that determinism not only doesn't contradict having a choice, but is actually fundamental to the process of decision-making? You can only "choose" if your values and personality can determine the outcome.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T20:18:53.521Z · LW(p) · GW(p)

By "heard of", do you actually mean "agree with"?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-18T20:26:18.322Z · LW(p) · GW(p)

No, I meant that you seemed confused by the fact that someone can think of 'predetermination' as compatible with choice, to the point that you seemingly felt that saying "your decision is predetermined and you have no choice in the matter" was an argument... and you "don't see" how such predetermined choices are real choices.

It's fine if you state your position, but you bring confusion when you present it in terms of ignorance and failure to understand the other one's position.

Basically you spoke like I'd expect someone to speak who had indeed never heard of compatibilism, not merely disagreed with it.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T20:58:58.957Z · LW(p) · GW(p)

you seemingly felt that saying "your decision is predetermined and you have no choice in the matter" was an argument

I was not trying to change entirelyuseless' mind. I was trying to figure out where exactly the disagreements between us are.

and you "don't see" how such predetermined choices are real choices

No, I do not see that. Is there anything wrong with that?

you bring confusion when you present it in terms of ignorance and failure to understand the other one's position.

LOL. Are you quite sure I am allowed to disagree with compatibilism for reasons other than being a confused ignorant fool?

Basically you spoke like I'd expect someone to speak who had indeed never heard of compatibilism, not merely disagreed with it.

Well, you speak like someone who does not understand why people could possibly disagree with compatibilism.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-18T22:16:25.868Z · LW(p) · GW(p)

Look, say whatever you like, I was genuinely, truly, sincerely, honestly not able to distinguish you from someone who hadn't even heard of compatibilism.

Feel free to mock and jeer and lol as you like, but take nonetheless this datapoint into consideration, about what you were communicating to me.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T22:59:21.045Z · LW(p) · GW(p)

I was genuinely, truly, sincerely, honestly not able to distinguish you from someone who hadn't even heard of compatibilism.

If you think that's a problem, you probably should fix it :-) Or at least take this datapoint into consideration :-P

about what you were communicating to me

Actually, I was having a reasonable conversation with entirelyuseless when you jumped in and sneered at that "sort of people", those uncouth peasants whose presence pollutes the rarefied air of LW with crass ignorance...

Replies from: ArisKatsaris
comment by ArisKatsaris · 2016-05-19T19:45:16.640Z · LW(p) · GW(p)

and sneered at that "sort of people", those uncouth peasants whose presence pollutes the rarefied air of LW with crass ignorance...

Well, see, you understood I was sneering at you.

I, on the other hand, still don't understand whether you were pretending at ignorance of compatibilism as a weird debating tactic ("1. Pretend that I don't know there exist people who think determinism is compatible with free will, 2. ??? 3. Profit!") or I just misread you that way.

Replies from: Lumifer
comment by Lumifer · 2016-05-19T20:07:31.042Z · LW(p) · GW(p)

Saying "But Bud Light is a bad beer" is not "pretending at ignorance" that there are people who like and drink Bud Light. It expresses my position which, absent other indicators, does NOT imply that all other positions are wrong and mistaken.

Speaking in expressions like "Bud Light is a bad beer, however I acknowledge the existence of people who like Bud Light and accept that there is nothing inherently wrong with them liking Bud Light and, moreover, the expression of my position should not be taken as disparagement of those aforementioned people who like Bud Light" is a bit unwieldy.

comment by entirelyuseless · 2016-05-18T14:48:28.974Z · LW(p) · GW(p)

I agree that this is what most people think, but it is a mistake.

I don't agree to leave Newcomb aside in considering this, because my position is that they are the same problem. If I have no choice in the smoking lesion, I have no choice in Newcomb.

Consider the Newcomb case.

I exist, and my brain and body are in a certain condition. I did not put them in that condition. I cannot make them not have been in that condition.

Omega looks at me. Using the condition of my brain and body -- conditions over which I have no control whatsoever -- he determines whether I am going to choose one box or two boxes. He has 100% accuracy, and this implies that the situation is completely determined by the condition of my brain and body.

In other words, "the condition of my brain and body" functions exactly like the lesion. It completely "predetermines" the outcome. If I have no choice in the lesion case, I have no choice in Newcomb.

Nonetheless, I say I have a choice in Newcomb, because the condition of my brain and body imply that I will engage in a certain process of reasoning, considering the alternatives of one boxing and two boxing, and choose one of them.

Likewise, I have a choice in the lesion case, because the lesion implies that I will engage in a certain process of reasoning, considering the alternatives of smoking and not smoking, and choose one of them.

In both cases, the outcome is predetermined. In both cases, the outcome is the result of a choice that results from a process of thought.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T15:10:23.695Z · LW(p) · GW(p)

I don't agree to leave Newcomb aside in considering this, because my position is that they are the same problem.

If they are the same problem, you shouldn't care about leaving one aside. The smoking lesion is a simpler and clearer problem because it doesn't need to postulate a supernatural entity.

In other words, "the condition of my brain and body" functions exactly like the lesion. It completely "predetermines" the outcome.

So you're a determinist. OK.

Nonetheless, I say I have a choice in Newcomb, because the condition of my brain and body imply that I will engage in a certain process of reasoning, considering the alternatives of one boxing and two boxing, and choose one of them.

That, to me, doesn't follow at all. You don't choose, you're just an automaton going through the motions. It is, as you say, similar to the lesion -- there might well be complicated intermediate steps but there is no choice involved. You literally do not have a choice.

In which way is your choice different from the choice of a calculator, which also goes through a bunch of processes before deciding to output 4 as a response to 2+2?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-18T15:37:50.866Z · LW(p) · GW(p)

This is all in the context of discussing Newcomb and the smoking lesion. It is possible that libertarian free will is true. If it is, neither Newcomb nor the smoking lesion is possible in the real world, at least in the 100% way.

So I do not assert that determinism is necessarily true (although I do not know that it is not). But if it is true, it is equally true in Newcomb and in the smoking lesion, and if it is false, it is equally false in both cases.

The situation is different from the calculator because the calculator does not consider various possible answers, but just calculates a single answer directly. However, the determinist choice would be similar to a chess computer, which considers various possible moves, but still computes a determinate outcome.
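
The calculator/chess-engine distinction can be put in code: a deterministic chooser still enumerates alternatives and selects among them. A minimal sketch with an invented utility function:

```python
# A deterministic "chooser": it genuinely weighs several possibilities,
# yet given the same inputs it always selects the same one.
def choose(options, utility):
    considered = [(utility(o), o) for o in options]    # the "considering" step
    return max(considered)[1]                          # determinate selection

utilities = {"one-box": 1_000_000, "two-box": 1_000}   # invented values
print(choose(list(utilities), utilities.get))          # "one-box", every time
# Unlike a bare calculation, the alternatives are represented and compared
# before one is selected, which is all "making a choice" means on this view.
```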

Replies from: Lumifer
comment by Lumifer · 2016-05-18T16:53:24.837Z · LW(p) · GW(p)

This is all in the context of discussing Newcomb and the smoking lesion. It is possible that libertarian free will is true. If it is, neither Newcomb nor the smoking lesion is possible in the real world, at least in the 100% way.

Hold on. Are you saying that determinism is a precondition, an axiom built into the formulation of the Newcomb and the lesion problems? That they make no sense unless you accept determinism?

Besides, I don't think Newcomb is possible in the real world anyway since, again, it requires a supernatural entity.

The situation is different from the calculator because the calculator does not consider various possible answers, but just calculates a single answer directly. However, the determinist choice would be similar to a chess computer

This implies that the gap between a chess computer and a human is smaller than a gap between a calculator and a chess computer. I am not sure I'm willing to accept that :-/

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-18T17:23:29.143Z · LW(p) · GW(p)

Yes, I am saying that 100% predictive accuracy does not make sense apart from determinism. I agree that lower degrees of accuracy could happen without complete determinism. Even lower degrees of accuracy would not necessarily change my decision in the scenarios (although it would change the decision once the degree of accuracy became too low.)

I agree with you about the gap between humans, calculators, and chess computers. I am just saying that "making a choice" just implies considering several possibilities before selecting one of them. So it isn't true that "you don't have a choice" if you consider several possibilities, even if there are reasons why you will definitely choose a particular one.

So for example, even if determinism is false, I am quite sure that I am not going to kill myself tomorrow. That doesn't change the fact that it is one possibility that I could consider. So I have a choice between killing myself and not killing myself, even though I know which one I am going to choose.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T17:40:45.463Z · LW(p) · GW(p)

I am just saying that "making a choice" just implies considering several possibilities before selecting one of them. So it isn't true that "you don't have a choice" if you consider several possibilities,

That's a straightforward logical fallacy. You're saying "If A then B, therefore if B then A" where A="making a choice" and B="considering several possibilities".

Besides, you just moved the heavy-lifting part to the word "considering". If I'm going to count to 3, whether I will consider 2 or 4 (five is right out) is quite irrelevant because I will count to 3 regardless.

A considered alternative is one you could choose, but in the situation we're talking about you could not (since your choice is predetermined). And in this case, it's merely something your attention slides over before settling on the inevitable.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-18T18:19:37.286Z · LW(p) · GW(p)

I am saying "making a choice" is nothing more and nothing less than "considering two or more possibilities and selecting one of them." Each one implies the other.

If you are planning to count to three, you do not consider stopping at two, so there is no choice.

Your objection is that there are not really two or more possibilities, but only one. But that is not the way consideration works. When you consider two possible choices, they are both possible as far as you know, since you do not know which one you are going to choose. So from your point of view, you are making a choice, even if more fundamentally something is determining which choice you are making.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T18:26:51.584Z · LW(p) · GW(p)

I am saying "making a choice" is nothing more and nothing less than "considering two or more possibilities and selecting one of them."

Ah, so you're defining the expression "making a choice" as "considering and selecting". OK.

So from your point of view, you are making a choice

Am I making a choice from an external point of view?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-18T18:49:21.007Z · LW(p) · GW(p)

I'm not sure what you mean by "a choice from an external point of view." Other people can see that you considered several possibilities and selected one of them. It may be (if this deterministic theory is true) that someone can figure out in advance which one you are going to select, and perhaps that person wouldn't describe it as a choice. That's just a question of how they are using the word.

Replies from: Lumifer
comment by Lumifer · 2016-05-18T19:58:40.385Z · LW(p) · GW(p)

I'm not sure what you mean by "a choice from an external point of view."

The usual -- e.g. in the "perfect predictor" version of the Newcomb problem you might think you're making a choice, but Omega knows what you are going to choose and so from its point of view ("external" to you) you don't actually have a choice and will do what you are predetermined to do.

In any case, we've dug down to the more or less standard free-will debate...

comment by ike · 2016-05-14T00:47:51.002Z · LW(p) · GW(p)

I'm referring to TDT, which disagrees.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-14T14:35:11.395Z · LW(p) · GW(p)

Eliezer disagrees, but no formal decision theory disagrees, because the two situations are formally identical.

Replies from: ike
comment by ike · 2016-05-14T17:24:29.993Z · LW(p) · GW(p)

They're formally identical only if you consider the choice to not counterfactually affect the outcome. Asserting that counterfactuals don't go backwards in time makes the choice not affect it, but that's just question begging.

It hasn't been formalized because we don't know how to deal with logical uncertainty fully yet.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-14T21:25:49.117Z · LW(p) · GW(p)

If I have the 100% version of the lesion, it is true to say, "If I had decided not to smoke, I would not have had the lesion," because that is the only way I could have decided not to smoke, in the same way that in Newcomb it is true to say, "If I had picked one-box, I would have been a one-boxer," because that is the only way I could have picked one box.

Replies from: ike
comment by ike · 2016-05-14T21:54:27.288Z · LW(p) · GW(p)

In one there's counterfactual dependence and in the other there isn't. If your model doesn't take counterfactuals into account, then you can't even tell the difference between the smoking-lesion case and the case where smoking really does cause cancer.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T01:49:14.022Z · LW(p) · GW(p)

Exactly. There is no difference; either way you should not smoke.

Also, what do you mean by saying that there is "counterfactual dependence" in one case and not in the other? Do you disagree with my previous comment? Do you think that I would have had the lesion no matter what I decided, in a situation where having the lesion has a 100% chance of causing smoking?

Replies from: ike
comment by ike · 2016-05-15T02:15:20.341Z · LW(p) · GW(p)

So you're not just arguing with Eliezer; you're arguing with the entirety of causal decision theory.

I strongly suspect you don't understand causal decision theory at this point, or counterfactuals as used by it. If this is the case, see https://en.wikipedia.org/wiki/Causal_decision_theory, or http://lesswrong.com/lw/164/timeless_decision_theory_and_metacircular/, or https://wiki.lesswrong.com/wiki/Causal_Decision_Theory

Those links explain it better than I can quickly, but I'll try anyway: counterfactuals ask "if you reached into the universe from outside and changed A, what would happen?" Only things caused by A change, not things merely correlated with A.
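
A minimal sketch of that distinction, assuming a toy world in which a lesion is the only cause of both smoking and cancer (the 0.5 base rate and the function names are invented for illustration):

```python
import random

def sample_world(do_smoke=None):
    """One toy world: the lesion is the sole cause of both smoking and cancer."""
    lesion = random.random() < 0.5
    smoke = lesion if do_smoke is None else do_smoke  # do_smoke = reaching in from outside
    cancer = lesion                                   # cancer is caused by the lesion only
    return lesion, smoke, cancer

N = 100_000

# Conditioning on A (correlation): among worlds where you smoke, cancer is common,
# because smoking is evidence of the lesion.
observed = [sample_world() for _ in range(N)]
smokers = [w for w in observed if w[1]]
print(sum(w[2] for w in smokers) / len(smokers))   # ~1.0

# Intervening on A (changing it from outside): force smoking; the cancer rate stays
# at the base rate, because smoking doesn't cause cancer.
forced = [sample_world(do_smoke=True) for _ in range(N)]
print(sum(w[2] for w in forced) / len(forced))     # ~0.5
```

Conditioning on the act tracks everything correlated with it; intervening only moves the act's downstream effects, and that second notion is the one CDT's counterfactuals use.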

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T02:18:57.054Z · LW(p) · GW(p)

I understand causal decision theory, and yes, I disagree with it. That should be obvious since I am in favor of both one-boxing and not smoking.

(Also, if you reach inside and change your decision in Newcomb, that will not change what is in the box any more than changing your decision will change whether you have a lesion.)

Replies from: ike
comment by ike · 2016-05-15T02:29:02.542Z · LW(p) · GW(p)

So why did you ask me what I meant about counterfactuals? If you take the TDT assumption that identical copies of you counterfactually affect each other, then Newcomb has counterfactual dependence and the lesion case doesn't.

I'm not sure of your point here.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T02:51:39.449Z · LW(p) · GW(p)

I don't think there is any difference even with that assumption. Newcomb and the Lesion are entirely equivalent. Modify it to the situation from the earlier discussion of this topic. The Lesion case works like this: the lesion causes people to take two boxes, and the absence of the lesion causes people to take one box. The other parts are the same, except that Omega just checks whether you have the lesion in order to make his prediction. Then we have the two cases:

  1. Regular Newcomb. I am a certain kind of algorithm, either one that is going to one-box, or one that is going to two-box.
  2. Lesion Newcomb. I either have the lesion and am going to take both boxes, or I don't and am going to take only one.
  3. Regular Newcomb. Omega checks my algorithm and decides whether to put in the million.
  4. Lesion Newcomb. Omega checks the lesion and decides whether to put in the million.
  5. Regular Newcomb. I decide whether to take one or two boxes.
  6. Lesion Newcomb. I decide whether to take one or two boxes.
  7. Regular Newcomb. If I decided to take one box, it turns out that I had the one-boxing algorithm, that Omega predicted it, and I get the million. If I decided to take both boxes, the opposite occurs.
  8. Lesion Newcomb. If I decided to take one box, it turns out that I did not have the lesion, Omega saw that I did not, and I get the million. If I decided to take both boxes, it turns out that I had the lesion, etc.

This is a simple case of substituting terms. The cases are identical.

Replies from: ike
comment by ike · 2016-05-15T03:05:51.132Z · LW(p) · GW(p)

Well, it depends on what procedure Omega uses: you can't change the procedure and assert that the same result obtains! If they predict you by simulating you, that creates a causal dependence, but not if they predict you by your genes or similar. You're not accounting for the causal relationship in your comparison.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T14:13:53.028Z · LW(p) · GW(p)

In the lesion case, I am assuming that the lesion has a 100% chance of causing you to make a certain decision. If that is not assumed, we are not discussing the situation I am talking about.

So the causal process is like this:

  1. Lesion exists.
  2. Lesion causes certain thought process (e.g. "I really, really want to smoke. And according to TDT, I should smoke, because smoking doesn't cause cancer. So I think I will.")
  3. Thought process causes smoking and lesion causes cancer.

I just simulated the lesion process by thinking about it. Omega does the same thing; the details of 2 are irrelevant, as long as we know that the lesion will cause a thought process that will cause smoking.

Replies from: ike
comment by ike · 2016-05-15T14:56:52.846Z · LW(p) · GW(p)

In the lesion case, I am assuming that the lesion has a 100% chance of causing you to make a certain decision.

Sure.

Omega does the same thing; the details of 2 are irrelevant, as long as we know that the lesion will cause a thought process that will cause smoking.

The details of 2 are irrelevant, but the details of how Omega works are relevant. If Omega checks for the lesion, then your choice has no counterfactual causal effect on Omega. If Omega simulates your mind, then your choice does have a counterfactual causal effect.

Lesion -> thought process -> choice.

TDT says to choose as if you're determining the outcome of your thought process. If Omega predicts from your thought process, your optimal choice differs from the case where Omega predicts from the lesion.
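
A rough sketch of that asymmetry with the standard Newcomb payoffs (the bookkeeping is deliberately simplified, and the fixed prediction below is just an assumption for the example): when Omega simulates the thought process, the prediction moves with the counterfactual choice; when Omega reads the lesion, it stays fixed.

```python
def payoff(choice, prediction):
    """Standard Newcomb payoffs: the opaque box holds $1M iff one-boxing was predicted."""
    million = 1_000_000 if prediction == "one-box" else 0
    thousand = 1_000 if choice == "two-box" else 0
    return million + thousand

choices = ("one-box", "two-box")

# Omega simulates my thought process: in the counterfactual where I choose differently,
# the prediction moves with the choice.
print({c: payoff(c, prediction=c) for c in choices})
# {'one-box': 1000000, 'two-box': 1000}  -> one-boxing comes out ahead

# Omega reads the lesion: the prediction is settled before I deliberate, so it is held
# constant across my counterfactual choices (say the lesion predicts two-boxing).
fixed_prediction = "two-box"
print({c: payoff(c, prediction=fixed_prediction) for c in choices})
# {'one-box': 0, 'two-box': 1000}  -> two-boxing comes out ahead
```

Whether the prediction is allowed to move with the counterfactual choice is exactly the causal relationship at issue.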

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T15:10:07.915Z · LW(p) · GW(p)

So you're saying that if Omega predicts from your thought process, you choose one-boxing or not smoking, but if Omega predicts directly from the lesion, you choose two-boxing or smoking?

I don't see how that is relevant. The description I gave above still applies. If you choose one-boxing / not smoking, it turns out that you get the million and didn't have the lesion. If you choose two-boxing / smoking, it turns out that you don't get the million, and you had the lesion. This is true whether you followed the rule you suggest or any other. So if TDT recommends smoking when Omega predicts from the lesion, then TDT gives the wrong answer in that case.

Replies from: ike
comment by ike · 2016-05-15T15:28:50.338Z · LW(p) · GW(p)

If you choose one-boxing / not smoking, it turns out that you get the million and didn't have the lesion. If you choose two-boxing / smoking, it turns out that you don't get the million, and you had the lesion.

Well as I said above, this ignores causality. Of course if you ignore causality, you'll get the EDT answers.

And if you define the right answer as the EDT answer, then whenever it differs from another decision theory you'll think the other theory gets the wrong answer.

None of this is particularly interesting, and I already made these points above.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T16:24:27.277Z · LW(p) · GW(p)

When you say, "this ignores causality," do you intend to assert the opposite of the statements I made?

Do you think that if a lesion has a 100% chance to cause you to decide to smoke, and you do not decide to smoke, you might have the lesion anyway?

Replies from: ike
comment by ike · 2016-05-15T16:43:13.519Z · LW(p) · GW(p)

Do you think that if a lesion has a 100% chance to cause you to decide to smoke, and you do not decide to smoke, you might have the lesion anyway?

No. But the counterfactual probability of having the lesion given that you smoke is identical to the counterfactual probability given that you don't smoke. This follows directly from the meaning of counterfactuals, and you claimed to know what they are. Are you just arguing against the idea of counterfactual probability playing a role in decisions?
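
As a sketch of what that means: the CDT-style counterfactual overwrites the act and leaves everything upstream of it alone, so the lesion comes out the same under either act (a made-up two-variable world, nothing more):

```python
def counterfactual(world, act):
    """CDT-style surgery: overwrite the act, leave its causes untouched."""
    w = dict(world)
    w["smoke"] = act
    return w

world = {"lesion": True, "smoke": True}   # one concrete possible world (100% case)

print(counterfactual(world, True)["lesion"])    # True
print(counterfactual(world, False)["lesion"])   # True -- the same either way
```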

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T17:21:46.874Z · LW(p) · GW(p)

"Counterfactual probability", in the way you mean it here, should not play a role in decisions where your decision is an effect of something else without taking that thing into account.

In other words, the counterfactual you are talking about is this: "If I could change the decision without the lesion changing, the probability of having the lesion is the same."

That's true, but entirely irrelevant to any reasonable decision, because the decision cannot be different without the lesion being different.

Replies from: ike
comment by ike · 2016-05-15T17:30:51.176Z · LW(p) · GW(p)

So all you're doing is denying CDT and asserting EDT is the only reasonable theory, like I thought.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-15T17:37:49.960Z · LW(p) · GW(p)

I'm denying CDT, but it is a mistake to equate CDT with Eliezer's opinion anyway. CDT says you should two-box in Newcomb; Eliezer says you should one-box (and he is right about that.)

More specifically: you assert that in Newcomb, you cause Omega's prediction. That's wrong. Omega's prediction is over and done with, a historical fact. Nothing you can do will change that prediction.

Instead, it is true that "Thinking AS THOUGH I could change Omega's prediction will get good results, because I will choose to take one-box, and it will turn out that Omega predicted that."

It is equally true that "Thinking AS THOUGH I could change the lesion will get good results, because I will choose not to smoke, and it will turn out that I did not have the lesion."

In both cases your real causality is zero. In both cases thinking as though you can cause something has good results.
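
For concreteness, here is the "thinking as though" calculation done evidentially, with invented utilities; in the 100% case the act is perfect evidence about the lesion and about Omega's prediction:

```python
# Invented utilities: +10 for the pleasure of smoking, -1000 for cancer,
# and the usual $1,000,000 / $1,000 for the two Newcomb boxes.
# Condition on each act (perfect evidence in the 100% case), then compare.

smoking_case = {
    "smoke": 10 - 1000,    # smoking reveals the lesion, hence cancer
    "don't smoke": 0,      # not smoking reveals no lesion
}
newcomb_case = {
    "one-box": 1_000_000,  # one-boxing reveals that the million was placed
    "two-box": 1_000,      # two-boxing reveals that it was not
}

print(max(smoking_case, key=smoking_case.get))   # don't smoke
print(max(newcomb_case, key=newcomb_case.get))   # one-box
```

The same rule, applied the same way, recommends not smoking and one-boxing; that is the parallel.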

Replies from: ike
comment by ike · 2016-05-15T17:40:00.926Z · LW(p) · GW(p)

I'm not equating them. TDT is CDT with some additional claims about causality for logical uncertainties.

You deny those claims, but causality doesn't matter to you anyway, because you deny CDT.

comment by Luke_A_Somers · 2016-05-12T22:00:01.506Z · LW(p) · GW(p)

It makes a huge difference whether the dust speck choices add up or not. If they do, OrphanWilde's objection applies and the only path to survival is to be tortured.

If they don't, so each one of me gets one dust speck total, then dust specks for sure. All of the copies of me (whether there is just one of us or 3^^^3 of us) are experiencing what amounts to a choice between individually being dust-specked or individually being tortured. We get what we ask for either way, and no one else is actually impacted by the choice.

There's no need to drag average utilitarianism in.

Replies from: HungryHobo
comment by HungryHobo · 2016-05-13T13:06:35.985Z · LW(p) · GW(p)

Computational theory of identity, so some large number of exact copies of the same individual experiencing the same thing don't sum; they only count as one instance?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2016-05-13T16:55:03.252Z · LW(p) · GW(p)

That too. But my reasoning holds in the more general case, where instead of 3^^^3 copies of me, there are 3^^^3 entities from the pool of people who would choose specks.

comment by CronoDAS · 2016-05-12T16:57:06.225Z · LW(p) · GW(p)

I choose torture if and only if I'm alone. Otherwise the predictor would be wrong, contrary to the assumptions of the hypothetical. But I'd rather be in the world where dust specks gets chosen.

Replies from: ike
comment by ike · 2016-05-12T17:34:39.487Z · LW(p) · GW(p)

You don't know whether you're alone.

Replies from: CronoDAS
comment by CronoDAS · 2016-05-14T16:04:24.244Z · LW(p) · GW(p)

Doesn't matter - I'll still end up doing it, regardless of what algorithm I try to implement!

Replies from: entirelyuseless
comment by entirelyuseless · 2016-05-14T21:42:40.379Z · LW(p) · GW(p)

It is true in general that you will end up implementing the algorithm that you actually are. That doesn't mean you don't have to make a decision.

comment by Furcas · 2016-05-13T15:51:44.186Z · LW(p) · GW(p)

IMO since people are patterns (and not instances of patterns), there's still only one person in the universe regardless of how many perfect copies of me there are. So I choose dust specks. Looks like the predictor isn't so perfect. :P