Comments

Comment by brianm on 2011 Survey Results · 2011-12-14T15:03:43.302Z · LW · GW

I would expect the result to be a more accurate estimate of the likelihood of success, combined with more sign-ups. 2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information.

I don't think it's true that everyone starts from "that won't ever work" - we know some people think it might work, and we may be inclined to some wishful thinking or susceptibility to hype that inflates our estimate above the conclusion we'd reach if we invested the time to consider the issue in more depth. It's also worth noting that we're not comparing the general public to those who've seriously considered signing up, but the lesswrong population, who are probably a lot more exposed to the idea of cryonics.

I'd agree that it's not what I would have predicted in advance (having no particular expectation that the likelihood assigned would go up rather than down with more research), but it would be predictable for someone proceeding from the premise that the lesswrong community overestimates the likelihood of cryonics success compared to those who have done more research.

Comment by brianm on Mundane Magic · 2011-04-30T22:38:21.515Z · LW · GW

The "Humans Are Special" trope gives a lot of examples of this. Reputedly, it was a premise that John Campbell, editor of Astounding Science Fiction, was very fond of, accounting for its prevalence.

Comment by brianm on Offense versus harm minimization · 2011-04-21T08:51:00.198Z · LW · GW

and as such makes bullets an appropriate response to such acts, whereas they were not before.

Ah, I think I've misunderstood you - I thought you were talking about the initiating act (ie. that it was as appropriate to initiate shooting someone as to insult them), whereas you're talking about the response to the act: that bullets are an appropriate response to bullets, therefore if interchangeable, they're an appropriate response to speech too. However, I don't think you can take the first part of that as given - many (including me) would disagree that bullets are an appropriate response to bullets, holding instead that they're only an appropriate response in the specific case of averting an immediate threat (ie. shoot if it prevents killing, but oppose applying the death penalty once out of danger), and some pacifists may disagree even with violence to prevent other violence.

However, it seems that it's the initiating act that's the issue here: is it any more justified to cause offence than to shoot someone? I think it could be argued that they are equivalent issues, though of much lesser intensity (ie. back to continuums, not bright lines).

Comment by brianm on Offense versus harm minimization · 2011-04-20T10:20:43.009Z · LW · GW

If they are interchangeable it follows that answering an argument with a bullet may be the efficient solution.

That's clearly not the case. If they're interchangeable, it merely means they'd be equally appropriate, but that doesn't say anything about their absolute level of appropriateness. If neither is an appropriate response, that's just as interchangeable as both being appropriate - and it's clearly the more restrictive route that's being advocated here (ie. moving such speech into the bullet category, rather than moving the bullet category into the region of such speech).

The brits are feeling the pain of a real physical assault, under the skin.

So what distinguishes that from emotional pain? It's all electrochemistry in the end, after all. Would things change if it were extreme emotional torment being inflicted by pictures of salmon, rather than pain receptors being stimulated? Eg. inducing a state equivalent to clinical depression, or the feeling of having been dumped by a loved one. I don't see an inherent reason to treat these differently - there are occasions where I'd gladly have traded such feelings for a kick in the nuts, so from a utilitarian perspective they seem to be at least as bad.

The intensity in this case is obviously different - offence vs depression is a big gap - so it may be fine to say that one's OK and the other not because it falls within a tolerable level, but that certainly moves away from the notion of a bright line towards a grey continuum.

A crucial difference is that we can change our minds about what offends us but we cannot choose not to respond to electrodes

This is a better argument (indeed it's one brought up by the post). I'm not sure it's entirely valid though, for the reasons Yvain gave there. We can't entirely choose what hurts us without much better control over our emotional state than I, at least, possess. If I were brought up in a society where this was the ultimate taboo, I don't think I could simply choose not to be offended, any more than I could choose to be offended by such images now. You say "It is within my power to feel zero pain from anything you might say", but I'll tell you, it's not within mine. That may be a failing, but it's one shared by billions. Further, I'm not sure it would be justified to go around insulting random strangers on the grounds that they can choose to take no harm, which suggests to me that offending is certainly not morally neutral.

Personally, I think one answer we could give to why the situations are different is a more pragmatic one. Accept that causing offence is indeed a bad action, but that it's justified collateral damage in support of a more important goal. Ie. free speech is important enough that we need to establish that even trying to prevent it will be met by an indiscriminate backlash doing the exact opposite. (Though there are also pragmatic grounds to oppose this, such as that it's manipulable by rabble-rousers for political ends.)

Comment by brianm on Offense versus harm minimization · 2011-04-19T14:26:47.811Z · LW · GW

Is that justified though? Suppose a subset of the British go about demanding restrictions on salmon image production. Would that justify you going out of your way to promote the production of such images, making them more likely to be seen by the subset not making such demands?

Comment by brianm on Offense versus harm minimization · 2011-04-19T14:22:53.322Z · LW · GW

But the argument here is going the other way - less permissive, not more. The equivalent analogy would be:

To hold that speech is interchangeable with violence is to hold that certain forms of speech are no more an appropriate answer than a bullet.

The issue at stake is why. Why is speech OK, but a punch not? Presumably because one causes physical pain and the other not. So, in Yvain's salmon situation, when such speech does now cause pain, should we treat it the same as violence or differently? Why or why not? What then about other forms of mental torment, such as emotional pain, hurt feelings or offence? There are times I've had my feelings hurt by mere words that, frankly, I'd have gladly exchanged for a kicking, so mere intensity doesn't seem to be the relevant criterion. So what is, and why is it justified?

To just repeat "violence is different from speech" is to duck the issue, because you haven't answered this why question, which was the whole point of bringing it up.

Comment by brianm on The Presumptuous Philosopher's Presumptuous Friend · 2009-10-07T11:44:21.684Z · LW · GW

Newcomb's scenario has the added wrinkle that event B also causes event A

I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decision criteria weren't known to the person I was testing).

Am I violating causality by doing this? Clearly not - my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you'd decide one way is also what causes you to act one way. As I get better and better, nothing changes, nor do I see why something would if I am able to simulate you perfectly, achieving 100% accuracy (some degree of determinism is assumed there, but then it's already in the original thought experiment if we assume literally 100% accuracy).

Assuming I'm understanding it correctly, the same would be true for a manipulationist definition. If we can manipulate your mental state, we'd change both the prediction (assuming Omega factors in this manipulation) and the decision, thus your mental state is a cause of both. However if we could manipulate your action without changing the state that causes it in a way that would affect Omega's prediction, our actions would not change the prediction. In practice, this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.
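
A toy rendering of this common-cause structure (my own sketch, with made-up numbers - the 0.7 base rate is invented, the 90% accuracy echoes the figure above): a single "disposition" variable drives both the prediction and the choice, so the two line up ~90% of the time without the choice causing the prediction.

    import random

    def disposition():
        # Hypothetical base rate of "one-boxer by temperament" (invented number).
        return random.random() < 0.7

    def predict(d, accuracy=0.9):
        # An imperfect Omega: reads the disposition and errs 10% of the time.
        return d if random.random() < accuracy else not d

    def choose(d):
        # The agent's actual choice flows from the same disposition.
        return d

    trials = 100_000
    agree = sum(predict(d) == choose(d) for d in (disposition() for _ in range(trials)))
    print(agree / trials)  # ~0.9: prediction and choice correlate via the common cause

    # "Intervening" on the choice alone (forcing two-boxing, say) without touching
    # the disposition leaves predict(d) unchanged - only changing d moves both.

Nothing here runs backwards in time; the prediction is just another downstream effect of the same upstream state.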

Comment by brianm on The Presumptuous Philosopher's Presumptuous Friend · 2009-10-06T12:01:28.350Z · LW · GW

I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.

Comment by brianm on Privileging the Hypothesis · 2009-09-29T12:49:15.808Z · LW · GW

It's not actually putting it forth as a conclusion though - it's just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.
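
A small illustration of what that looks like for the rational being (my own sketch, with invented numbers): merely naming a suspect carries no evidence, so the probability assigned to them stays at the base rate until actual evidence arrives.

    population = 100_000           # hypothetical town size
    prior = 1 / population         # P(any given person is the killer), before evidence

    def update(p, likelihood_ratio):
        # Odds-form Bayesian update: evidence moves the probability, naming doesn't.
        odds = p / (1 - p) * likelihood_ratio
        return odds / (1 + odds)

    p_named = update(prior, likelihood_ratio=1)        # someone merely names a suspect
    p_evidence = update(prior, likelihood_ratio=1000)  # eg. a matching fingerprint
    print(p_named)      # unchanged: still ~1e-05
    print(p_evidence)   # ~0.0099: raised, but only because evidence was supplied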

Comment by brianm on Avoiding doomsday: a "proof" of the self-indication assumption · 2009-09-24T12:55:18.223Z · LW · GW

Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)

Comment by brianm on Avoiding doomsday: a "proof" of the self-indication assumption · 2009-09-24T09:47:49.098Z · LW · GW

The doomsday argument makes the assumptions that:

  1. We are randomly selected from all the observers who will ever exist.
  2. The number of observers increases exponentially, such that roughly 2/3 of all who have ever lived are alive in any particular generation.
  3. They are wiped out by a catastrophic event, rather than slowly dwindling or declining in some other way.

(Now those assumptions are a bit dubious - things change if, for instance, we develop life extension tech or otherwise increase the rate of growth so that a higher than 2/3 proportion will live in future generations (eg. if the next generation is immortal, they're guaranteed to be the last, and we're much less likely to be in the current one, depending on how long people are likely to survive after that). Alternatively, growth could plateau or fluctuate around the carrying capacity of a planet if most potential observers never expand beyond this.) However, assuming they hold, I think the argument is valid.

I don't think your situation alters the argument; it just changes some of the assumptions. At point D, it reverts to the original doomsday scenario, and the odds switch back.

At D, the point where you're made aware, you know that you're among the people who survive. Only 50% of the people who ever existed in this scenario learn this, and 99% of them are blue-doors. Looking only at the people at this point changes the selection criteria - you're only picking from survivors, never from those who are now dead, despite the fact that they are real people we could have been. If those could be included in the selection (as they are if you give them the information and ask them before they would have died), the situation would remain as in A-C.

Not creating the losing potential people at all makes this more explicit. If we're randomly selecting from people who ever exist, we'll only ever pick those who get created, who will be predominantly blue-doors if we run the experiment multiple times.

Comment by brianm on Counterfactual Mugging v. Subjective Probability · 2009-07-21T12:15:28.585Z · LW · GW

The various Newcomb situations have fairly direct analogues in everyday things like ultimatum situations, or promise keeping. They alter them to reduce the number of variables - the "certainty of trusting the other party" dial gets turned up to 100% for Omega, "expectation of repeat" down to 0, etc. - in order to evaluate how to think about such problems when we cut out certain factors.

That said, I'm not actually sure what this question has to do with Newcomb's paradox / counterfactual mugging, or what exactly is interesting about it. If it's just asking "what information do you use to calculate the probability you plug into the EU calculation?" and Newcomb's paradox is just being used as one particular example of it, I'd say that the obvious answer is "the probability you believe it is now." After all, that's going to already be informed by your past estimates, and any information you have available (such as that community of rationalists and their estimates). If the question is something specific to Newcomb's paradox, I'm not getting it.

Comment by brianm on Shut Up And Guess · 2009-07-21T11:36:54.918Z · LW · GW

I think the problem is that people tend to conflate intention with effect, often with dire results (eg. "Banning drugs == reducing harm from drug use"). Thus when they see a mechanism in place that seems intended to penalise guessing, they assume that it's the same as actually penalising guessing, and that anything that shows otherwise must be a mistake.

This may explain the "moral" objection of the one student: the test attempts to penalise guessing, so working against this intention is "cheating" by exploiting a flaw in the test. With the no-penalty multiple choice, there's no such intent, so the assumption is that the benefits of guessing are already factored in.

This may not in fact be as silly as it sounds. Suppose that the test is unrelated to mathematics, and that there is no external motive to doing well. Eg. you are taking a test on Elizabethan history with no effect on your final grade, and want to calibrate yourself against the rest of the class. Here, this kind of scoring is a flaw, because the test isn't measuring solely what it intends to, but is biased towards those who spot this advantage. If you are interested solely in an accurate result, and you think the rest of the class won't realise the advantage of guessing, taking the extra marks would just introduce noise, so it is not to your advantage to take them.

For a mathematics or logic based test, the extra benefit could be considered an extra, hidden question. For something else, it could be considered as immoral as taking advantage of any other unintentional effect (a printing error that adds a detectable artifact on the right answer for instance). Taking advantage of it means you are getting extra marks for something the test is not supposed to be counting. I don't think I'd consider it immoral (certainly not enough to forgo the extra marks in something important), but Larry's position may not be as inconsistent as you think.
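
For concreteness, here's a generic expected-marks calculation for guessing (my own illustration; the exact scoring rule in the post may differ): assume +1 for a correct answer, a penalty of -p for a wrong one, and that you can narrow the question down to m candidates.

    def guess_ev(m, penalty):
        # Expected marks from guessing uniformly among m remaining options.
        return (1 / m) * 1 + (1 - 1 / m) * (-penalty)

    # Guessing gains marks whenever penalty < 1 / (m - 1):
    print(guess_ev(m=4, penalty=0.25))  # 0.0625 > 0: worth guessing even when clueless
    print(guess_ev(m=4, penalty=1/3))   # 0.0: this penalty exactly cancels blind guessing
    print(guess_ev(m=2, penalty=1/3))   # ~0.33 > 0: eliminating options tips it further

Whether exploiting a positive-EV rule like this is fair game or an artefact of the test design is exactly the judgement call described above.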

Comment by brianm on Hardened Problems Make Brittle Models · 2009-05-07T16:53:26.412Z · LW · GW

I don't see the purpose of such thought experiments as being to model reality (we've already got a perfectly good actual reality for that), but to simplify it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key aspect: it reduces the number of variables.

Reality is messy, and while we have to deal with it eventually, it's useful to consider simpler, more comprehensible models, and then gradually introduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily set certain variables (such as predictive ability) to 100% or 0% simply to remove that aspect from consideration.

This does give a fundamentally unrealistic situation, but that's really the point - they are our equivalent of spherical cows. Dealing with all those variables at once is too hard. In the situations where it isn't, and we have "real" situations we can fruitfully consider, there's no need for the thought experiment in the first place. Once we understand the simpler system, we have somewhere to start from when we add the complexity back in.

Comment by brianm on Re-formalizing PD · 2009-04-30T11:12:08.689Z · LW · GW

Ah sorry, I'd thought this was in relation to the source available situation. I think this may still be wrong however. Consider the pair of programs below:

A:
    return Strategy.Defect;

B:
    // Cooperate outright half the time...
    if (random(0, 1.0) < 0.5) { return Strategy.Cooperate; }

    // ...otherwise keep simulating the opponent playing against B until it cooperates.
    while (true)
    {
        if (simulate(other, self) == Strategy.Cooperate) { return Strategy.Cooperate; }
    }

simulate(A,A) terminates immediately. simulate(B,B) terminates eventually (with probability 1). simulate(B,A) will not terminate 50% of the time.
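
A runnable rendering of this pair (my own sketch, not part of the original pseudocode), with a call budget standing in for non-termination:

    import random

    COOPERATE, DEFECT = "Cooperate", "Defect"

    def simulate(program, opponent, budget):
        # budget is a shared one-element countdown; exhausting it stands in
        # for a simulation that never terminates.
        budget[0] -= 1
        if budget[0] <= 0:
            raise RuntimeError("simulation did not terminate within budget")
        return program(opponent, budget)

    def program_a(opponent, budget):
        # A: unconditionally defect.
        return DEFECT

    def program_b(opponent, budget):
        # B: half the time cooperate outright; otherwise keep simulating the
        # opponent (playing against B) until that simulation cooperates.
        if random.random() < 0.5:
            return COOPERATE
        while True:
            if simulate(opponent, program_b, budget) == COOPERATE:
                return COOPERATE

    def run(p, q):
        try:
            return simulate(p, q, budget=[1000])
        except RuntimeError:
            return "did not terminate (budget exhausted)"

    print(run(program_a, program_a))  # Defect - terminates immediately
    print(run(program_b, program_b))  # Cooperate - terminates with probability 1
    print(run(program_b, program_a))  # ~50% Cooperate, ~50% budget exhausted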

Comment by brianm on Re-formalizing PD · 2009-04-30T09:44:04.171Z · LW · GW

I don't think this holds. It's clearly possible to construct code like:

if (other_src == my_sourcecode) { return Strategy.COOPERATE; }  // an exact copy of me: cooperate
// Otherwise, cooperate iff the opponent cooperates when simulated against me:
if (simulate(other_src, my_sourcecode) == Strategy.COOPERATE)
{
  return Strategy.COOPERATE;
}
else
{
  return Strategy.DEFECT;
}

B is similar, with slightly different logic in the second part (even a comment difference would suffice).

simulate(A,A) and simulate(B,B) clearly terminate, but simulate(A,B) still calls simulate(B,A) which calls simulate(A,B) ...
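
A runnable version of this mutual-simulation trap (again my own sketch), where "source code" is approximated by a tag and a depth cap stands in for the infinite regress:

    COOPERATE, DEFECT = "COOPERATE", "DEFECT"
    MAX_DEPTH = 100  # stands in for "recurses forever"

    def simulate(program, opponent, depth=0):
        if depth > MAX_DEPTH:
            raise RecursionError("mutual simulation never bottoms out")
        return program["run"](program, opponent, depth)

    def quining_cooperator(self, other, depth):
        # Cooperate with an exact copy of my source; otherwise cooperate iff a
        # simulation of the opponent (playing against me) cooperates.
        if other["source"] == self["source"]:
            return COOPERATE
        if simulate(other, self, depth + 1) == COOPERATE:
            return COOPERATE
        return DEFECT

    # A and B differ only in their "source" (eg. a comment), not in behaviour.
    A = {"source": "A", "run": quining_cooperator}
    B = {"source": "B", "run": quining_cooperator}

    print(simulate(A, A))  # COOPERATE - self-recognition terminates immediately
    print(simulate(B, B))  # COOPERATE - likewise
    try:
        simulate(A, B)     # A simulates B, which simulates A, which ...
    except RecursionError as err:
        print("simulate(A, B):", err)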

Comment by brianm on Formalizing Newcomb's · 2009-04-06T12:19:42.017Z · LW · GW

Type 3 is just impossible.

No - it just means it can't be perfect. A scanner that works 99.9999999% of the time is effectively indistinguishable from a 100% one for the purpose of the problem. One that is 100% accurate except in the presence of recursion is completely identical if we can't construct such a scanner ourselves.

My prior is justified because a workable Omega of type 3 or 4 is harder for me to imagine than 1 or 2. Disagree? What would you do as a good Bayesian?

I would one-box, but I'd do so regardless of the method being used, unless I was confident I could bluff Omega (which would generally require Omega-level resources on my part). It's just that I don't think the exact implementation Omega uses (or even whether we know the method) actually matters.

Comment by brianm on Formalizing Newcomb's · 2009-04-06T11:55:25.324Z · LW · GW

Aren't these rather ducking the point? The situations all seem to be assuming that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I'd say the relevant cases are:

3b) Omega uses a scanner, but we don't know how the scanner works (or we'd be Omega-level entities ourselves).

5) Omega is using one of the above methods, or one we haven't thought of, but we don't know which. For all we know he could be reading the answers we gave on this blog post, and is just really good at guessing who will stick by what they say, and who won't. Unless we actually know the method with sufficient confidence to risk losing the million, we should one-box. ([Edit]: Originally wrote two-box here - I meant to say one-box)

Comment by brianm on Precommitting to paying Omega. · 2009-03-22T10:14:11.194Z · LW · GW

It doesn't seem at all sensible to me that the principle of "acting as one would formerly have liked to have precommitted to acting" should have unbounded utility.

Mostly agreed, though I'd quibble that it does have unbounded utility, but that I probably don't have unbounded capability to enact the strategy. If I were capable of (cheaply) compelling my future self to murder in situations where it would be a general advantage to precommit, I would.

Comment by brianm on Counterfactual Mugging · 2009-03-22T09:53:52.479Z · LW · GW

From my perspective now, I expect the reality to be the winning case 50% of the time because we are told this as part of the question: Omega is trustworthy and said it tossed a fair coin. In the possible futures where such an event could happen, 50% of the time my strategy would have paid off to a greater degree than it would lose the other 50% of the time. If Omega did not toss a fair coin, then the situation is different, and my choice would be too.

There is no value in being the kind of person who globally optimizes because of the expectation to win on average.

There is no value in being such a person if they happen to lose, but that's like saying there's no value in being a person who avoids bets that lose on average, by pointing only at the one-in-several-million case where they would have won the lottery. On average they'll come out ahead, just not in the specific situation that was described.

Comment by brianm on Individual Rationality Is a Matter of Life and Death · 2009-03-21T20:34:38.102Z · LW · GW

Rationality can be life and death, but that applies to collective and institutional decisions just as much as to our individual ones. Arguably more so: the decisions made by governments, cultures and large institutions have far larger effects than any decision I'll ever make. Investment in improving my individual rationality is more valuable purely due to self-interest - we may invest more in providing a 1% improvement to our own lives than we do in reducing collective decision-making mistakes that cost thousands of lives a year. But survival isn't the only goal we have! Even if it were, there are good reasons to put more emphasis on collective rational decision making - the decisions of others can also affect us.

Comment by brianm on Tolerate Tolerance · 2009-03-21T12:07:48.115Z · LW · GW

I think there are other examples with just as much agreement on their wrongness, many of which have a much lower degree of investment even for their believers. Astrology for instance has many believers, but they tend to be fairly weak beliefs, and don't produce such a defensive reaction when criticized. Lots of other superstitions also exist, so sadly I don't think we'll run out of examples any time soon.

Comment by brianm on Precommitting to paying Omega. · 2009-03-21T11:49:21.701Z · LW · GW

It all depends on how the hack is administered. If future-me does think rationally, he will indeed come to the conclusion that he should not pay. Any brain-hack that will actually be successful must then be tied to a superseding rational decision or to something other than rationality. If not tied to rationality, it needs to be a hardcoded response, immediately implemented, rather than one that is thought about.

There are obvious ways to set up a superseding condition: put $101 in escrow, hire an assassin to kill you if you renege - but obviously the cost of doing this now is far higher than is justified by the probability of the situation, so we need something completely free. One option is to tie it to something internally valued. Eg. you value your given word or self-honesty sufficiently that living with yourself after compromising it is worse than a negative $100 utility. (This only scales to the point where you value integrity, however: you may be able to live with yourself better after finding you're self-deluding than after murdering 15 people to prove a point.)

Had we access to our own source code, and the capacity for self-modification, we could put a hardcoded path in place for when this decision arises. Currently we have to work with the hardware we have, but I believe our brains do have mechanisms for binding our future selves to decisions that will be irrational at the time. Making credible threats requires us to back up what we say, even to someone we will never encounter again afterwards, so similar situations (without the absolute predictive ability) are quite common in life. I know in the past I have acted perversely against my own self-interest to satisfy a past decision / issued threat. In most cases this should be considered irrationality to be removed from myself, but I think I can reuse the same mechanism to achieve an improvement here.

Obviously I can only guess whether this will in fact work in practice. I believe it will for the $100 case, but suspect that with some of the raised stakes examples given (committing murder etc), my future self may wiggle out of the emotional trap I've set for him. This is a flaw with my brain-hacking methods however - hardcoding would still be the right thing to do if possible, if the payoff were one that I would willingly trade the cost for.

Comment by brianm on Precommitting to paying Omega. · 2009-03-21T00:06:28.306Z · LW · GW

Yes, exactly. I think this post by MBlume gives the best description of the most general such hack needed:

If there is an action to which my past self would have precommited, given perfect knowledge, and my current preferences, I will take that action.

By adopting and sticking to such a strategy, I will on average come out ahead in a wide variety of Newcomblike situations. Obviously the actual benefit of such a hack is marginal, given the unlikeliness of an Omega-like being appearing, and of me believing it. Since I've already invested the effort in considering the optimal route for the thought experiment, though, I believe I am now in fact hacked to hardcode the future-irrational decision if it does occur.

Comment by brianm on Counterfactual Mugging · 2009-03-20T23:55:30.783Z · LW · GW

If you think that through and decide that way, then your precommitting method didn't work. The idea is that you must somehow now prevent your future self from behaving rationally in that situation - if they do, they will perform exactly the thought process you describe. The method of doing so, whether making a public promise (and valuing your spoken word more than $100), hiring a hitman to kill you if you renege or just having the capability of reliably convincing yourself to do so (effectively valuing keeping faith with your self-promise more than $100) doesn't matter so long as it is effective. If merely deciding now is effective, then that is all that's needed.

If you do then decide to take the rational course in the losing coinflip case, it just means you were wrong by definition about your commitment being effective. Luckily in this one case, you found it out in the loss case rather than the win case. Had you won the coin flip, you would have found yourself with nothing though.

Comment by brianm on Precommitting to paying Omega. · 2009-03-20T23:17:52.242Z · LW · GW

Yes - it is effectively the organisational level of such a brain hack (though it would be advantageous if the officers were performing such a hack on their own brains, rather than being irrational in general - rationality in other situations is a valuable property in those with their fingers on the button.)

In the MAD case, it is deliberately arranged that retaliation is immediate and automatic

Isn't that exactly the same as the desired effect of your brain-hack in the mugging situation? Instead of removing the ability to not retaliate, we want to remove the ability to not pay. The methods differ (selecting pre-hacked / appropriately damaged brains to make the decisions, versus hacking our own), but the outcome seems directly analogous. Nor is there any further warning: the mugging situation finds you directly in the loss case (as you'd presumably be directly in the win case if the coin flip went differently) potentially before you'd even heard of Omega. Any brain-hacking must occur before the situation comes up unless you're already someone who would pay.

Comment by brianm on Precommitting to paying Omega. · 2009-03-20T20:48:52.005Z · LW · GW

But that fooling can only go so far. The better your opponent is at testing your irrational mask, the higher the risk of them spotting a bluff, and thus the narrower the gap between acting irrational and being irrational. Only by being irrational can you be sure they won't spot the lie.

Beyond a certain payoff ratio, the risk from being caught out lying is bigger than the chance of having to carry through. For that reason, you end up actually appointing officers who will actually carry through - even to the point of blind-testing them with simulated attacks and removing from such positions those who don't fire (even if not firing was the right choice), and letting your opponent know and verify this as much as possible.

Comment by brianm on Precommitting to paying Omega. · 2009-03-20T17:54:37.721Z · LW · GW

That would seem to be a very easy thing for them to test. Unless we keep committing atrocities every now and again to fool them, they're going to work out that it's false. Even if they do believe us (or it's true), that would itself be a good argument that our leaders would want to start the war - leading to the conclusion that they should do so to get the first-strike advantage, maximising their chances.

It would seem better to convince them in some way that doesn't require us to pay such a cost, if possible - and to convince the enemy that we're generally rational, reasonable people except in the circumstance where they attack us.

Comment by brianm on Precommitting to paying Omega. · 2009-03-20T16:47:06.417Z · LW · GW

I don't think that's true. I mentioned one real-world case that is very close to the hypothesised game in the other post: the Mutually Assured Destruction policy, or ultimatums in general.

First note that Omega's perfection as a predictor is not necessary. With an appropriate payoff matrix, even a 50.1% accurate Omega doesn't change the optimal strategy. (One proviso is that the method of prediction must be non-spoofable. For example, I could perhaps play Omega with a 90% success rate, but knowing that I don't have access to brain-scanning abilities, you could probably conclude that I'm using more mundane methods (like reading the responses people give on blog posts about Newcomb's paradox) and so would be able to fool me. Though this might not hurt my percentage much if I predict that people smart enough to do this will two-box, it does change the optimal strategy, because you now know you've already lost no matter what.)
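
To make the 50.1% figure concrete, here is a quick expected-value check (my own sketch, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, and an Omega that predicts either choice correctly with probability p):

    def ev_one_box(p):
        # Correct prediction (prob p): the opaque box holds $1M. Wrong: it's empty.
        return p * 1_000_000 + (1 - p) * 0

    def ev_two_box(p):
        # Correct prediction (prob p): opaque box empty, keep the $1,000.
        # Wrong prediction (prob 1-p): $1,000,000 + $1,000.
        return p * 1_000 + (1 - p) * 1_001_000

    for p in (0.5, 0.501, 0.9, 0.999):
        print(p, ev_one_box(p), ev_two_box(p), ev_one_box(p) > ev_two_box(p))

    # One-boxing pulls ahead once p > 1_001_000 / 2_000_000, i.e. just over 50.05%,
    # which is why even a 50.1%-accurate predictor is enough to tip the decision.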

With MAD, the situation is similar:

  • In the event that the enemy launches a nuclear attack, it is irrational (from a life-valuing perspective) to destroy millions of innocent civilians when it won't help you. This corresponds to the "pay $100 when the coin comes up tails" case.

  • Prior to war, it is advantageous to you for the enemy to predict that you would destroy the world. If he believes that, then a first strike is a net loss for him, so he doesn't destroy your half of the world (the "win $10000" case).

The object then is to convince the enemy that you would make the irrational decision in the loss case. But you must assume an intelligent enemy, with access to whatever technology or situations might possibly be developed or occur in the future (truth drugs? kidnapping a decision-maker, fooling them in a controlled environment and seeing what they do? something we haven't even thought of?). The only way to be sure that no test will reveal you're bluffing is not to bluff.

Comment by brianm on Counterfactual Mugging · 2009-03-20T14:48:51.853Z · LW · GW

Then take my bet situation. I announce your attendance, and cut you in with a $25 stake in attending. I don't think it would be unusual to find someone who would indeed appear 99.99% of the time - does that mean that person has no free will?

People are highly, though not perfectly, predictable in a large number of situations. Revealing knowledge about the prediction complicates things by adding feedback to the system, but there are lots of cases where it still doesn't change matters much (or even increases predictability). There are obviously some situations where this doesn't happen, but for Newcomb's paradox, all that is needed is a predictor for the particular situation described, not any general situation. (In fact Newcomb's paradox is equally broken by a similar revelation of knowledge. If Omega were to reveal its prediction before the boxes are chosen, a person determined to do the opposite of that prediction opens it up to a simple Epimenides paradox.)

Comment by brianm on Counterfactual Mugging · 2009-03-20T12:25:56.435Z · LW · GW

To make that claim, you do need to first establish that he would accept a bet of 15 lives vs some reward in the first place, which I think is what he is claiming he would not do. There's a difference between making a bet and reneging, and not accepting the bet. If you would not commit murder to save a million lives in the first place, then the refusal is for a different reason than just the fact that the stakes are raised.

Comment by brianm on Counterfactual Mugging · 2009-03-20T12:17:13.985Z · LW · GW

At that point, it's no longer a precommittal - it's how you face the consequences of your decision whether to precommit or not.
Note that the hypothetical loss case presented in the post is not in fact the decision point - that point is when you first consider the matter, which is exactly what you are doing right now. If you would really change your answer after considering the matter, then having now done so, have you changed it?

If you want to obtain the advantage of someone who makes such a precommitment (and sticks to it), you must be someone who would do so. If you are not such a person (and given your answer, you are not), it is advantageous to change yourself into such a person, by making that precommitment (or better, a generalised "I will always take the path that would have maximised returns across the distribution of counterfactual outcomes in Newcomblike situations") immediately.

Such commitments change the dynamics of many such thought experiments, but usually they require that the commitment be known to the other person, and enforced in some way (the way to win at Chicken is to throw your steering wheel out the window). Here though, Omega's knowledge of us removes the need for an explicit announcement, and it is in our own interests to be self-enforcing (or rather, we wish to reliably enforce the decision on our future selves), or we will not receive the benefit. For that reason, a silent decision is as effective as having a conversation with Omega and telling it how we decide.

Explicitly announcing our decision thus only has an effect insofar as it keeps your future self honest. Eg. if you know you wouldn't keep to a decision idly arrived at, but value your word such that you would stick to doing what you said you would despite its irrationality in that case, then it is currently in your interest to give your word. It's just as much in your interest to give your word now though - make some public promise that you would keep. Alternatively, if you have sufficient mechanisms in your mind to commit to such future irrational behaviour without a formal promise, it becomes unnecessary.

Comment by brianm on Counterfactual Mugging · 2009-03-20T09:38:36.226Z · LW · GW

The problem only asks about what you would do in the failure case, and I think this obscures the fact that the relevant decision point is right now. If you would refuse to pay, that means that you are the type of person who would not have won had the coin flip turned out differently, either because you haven't considered the matter (and luckily turn out to be in the situation where your choice worked out better), or because you would renege on such a commitment when it occurred in reality.

However, at this point the coin flip hasn't been made. The globally optimal person to be right now is one who does precommit and doesn't renege. This person will come out behind in the hypothetical case, as it requires locking ourselves into the bad choice for that situation, but by being a person who would act "irrationally" at that point, they will outperform a non-committer/reneger on average.

Comment by brianm on Counterfactual Mugging · 2009-03-19T21:27:01.159Z · LW · GW

Sure - all bets are off if you aren't absolutely sure Omega is trustworthy.

I think this is a large part of the reason why the intuitive answer we jump to is rejection. Being told we believe a being making such extraordinary claims is different from actually believing them (especially when the claims may have unpleasant implications for our beliefs about ourselves), so we have a tendency to consider the problem with the implicit doubt we hold for everyday interactions lurking in our minds.

Comment by brianm on Counterfactual Mugging · 2009-03-19T21:11:24.118Z · LW · GW

That level of precommitting is only necessary if you are unable to trust yourself to carry through with a self-imposed precommitment. If you are capable of this, you can decide now to act irrationally in certain future decisions in order to benefit to a greater degree than someone who can't. If the temptation to go back on your self-promise is too great in the failure case, then you would have lost in the win case - you are simply a fortunate loser who found out the flaw in his promise in the case where being flawed was beneficial. It doesn't change the fact that being capable of this decision would be a better strategy on average. Making yourself conditionally less rational can actually be a rational decision, and so the ability to do so can be a strength worth acquiring.

Ultimately the problem is the same as that of an ultimatum (eg. MAD). We want the other party to believe we will carry through even if it would be clearly irrational to do so at that point. As your opponent becomes better and better at predicting, you must become closer and closer to being someone who would make the irrational decision. When your opponent is sufficiently good (or you have insufficient knowledge as to how they are predicting), the only way to be sure is to be someone who would actually do it.

Comment by brianm on Counterfactual Mugging · 2009-03-19T16:56:42.950Z · LW · GW

Yes, then, following the utility function you specified, I would gladly risk $100 for an even chance at $10000. Since Omega's omniscient, I'd be honest about it, too, and cough up the money if I lost.

If it's rational to do this when Omega asks you in advance, isn't it also rational to make such a commitment right now? Whether you make the commitment in response to Omega's notification, or on a whim when considering the thought experiment in response to a blog post, makes no difference to the payoff. If you now commit to "if this exact situation comes up, I will pay the $100 if I lose the coinflip", and p(x) is the probability of this situation occurring, you will achieve a net gain of $4950*p(x) over a non-committer (a very small number admittedly, given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it's positive).
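
A quick check of the $4950 figure (my own sketch, using the stated fair coin, the $10000 reward in the winning branch and the $100 payment in the losing one):

    p_heads = 0.5
    ev_committed = p_heads * 10_000 + (1 - p_heads) * (-100)  # pays up when it loses
    ev_refuser = p_heads * 0 + (1 - p_heads) * 0               # predicted not to pay, so never rewarded
    print(ev_committed - ev_refuser)  # 4950.0 - multiply by p(x) for the expected edge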

Given that someone who makes such a precommitment comes out ahead of someone who doesn't - shouldn't you make such a commitment right now? Extend this and precommit to always performing the action that would maximise your average returns in all such Newcomblike situations, and you're going to come off even better on average.

Comment by brianm on Counterfactual Mugging · 2009-03-19T14:07:23.460Z · LW · GW

Chances are I can predict such a response too, and so won't tell you of my prediction (or will tell you in such a way that you will be more likely to attend: eg. "I've a $50 bet you'll attend tomorrow. Be there and I'll split it 50:50"). It doesn't change the fact that in this particular instance I can foretell the future with a high degree of accuracy. Why then would it violate free will if Omega could predict your actions in this different situation (one where he's also able to predict the effects of telling you) to a similar precision?

Comment by brianm on Counterfactual Mugging · 2009-03-19T14:02:34.940Z · LW · GW

I would one-box on Newcombe, and I believe I would give the $100 here as well (assuming I believed Omega).

With Newcomb's, if I want to win, my optimal strategy is to mimic as closely as possible the type of person Omega would predict would take one box. However, I have no way of knowing what would fool Omega: indeed, if it is a sufficiently good predictor there may be no such way. Clearly then the way to be "as close as possible" to a one-boxer is to be a one-boxer. A person seeking to optimise their returns will be a person who wants their response to such stimulus to be "take one box". I do want to win, so I do want my response to be that, so it is: I'm capable of locking in my decisions (making promises) in ways that forgo short-term gain for longer-term benefit.

The situation here is the same, even though I have already lost. It is beneficial for me to be that type of person in general (obscured by the fact that the situation is so unlikely to occur). Were I not the type of person who made the decision to pay out on a loss, I would be the type of person who lost $10000 in an equally unlikely circumstance. Locking that response in now as a general response to such occurrences means I'm more likely to benefit than those who don't.

Comment by brianm on Counterfactual Mugging · 2009-03-19T13:40:38.652Z · LW · GW

Not really - all that is necessary is that Omega is a sufficiently accurate predictor that the payoff matrix, taking this accuracy into account, still amounts to a win for the given choice. There is no need to be a perfect predictor. And if an imperfect, 99.999% predictor violates free will, then it's clearly a lost cause anyway (I can predict many of people's behaviours with similar precision based on no more evidence than their past behaviour and speech, never mind godlike brain introspection). Do you have no "choice" in deciding to come to work tomorrow, if I predict based on your record that you're 99.99% reliable? Where is the cut-off at which free will gets lost?