Desirable Dispositions and Rational Actions

post by RichardChappell · 2010-08-17T03:20:06.657Z · LW · GW · Legacy · 184 comments

A common background assumption on LW seems to be that it's rational to act in accordance with the dispositions one would wish to have. (Rationalists must WIN, and all that.)

E.g., Eliezer:

It is, I would say, a general principle of rationality - indeed, part of how I define rationality - that you never end up envying someone else's mere choices.  You might envy someone their genes, if Omega rewards genes, or if the genes give you a generally happier disposition.  But [two-boxing] Rachel, above, envies [one-boxing] Irene her choice, and only her choice, irrespective of what algorithm Irene used to make it.  Rachel wishes just that she had a disposition to choose differently.

And more recently, from AdamBell:

I [previously] saw Newcomb’s Problem as proof that it was sometimes beneficial to be irrational. I changed my mind when I realized that I’d been asking the wrong question. I had been asking which decision would give the best payoff at the time and saying it was rational to make that decision. Instead, I should have been asking which decision theory would lead to the greatest payoff.

Within academic philosophy, this is the position advocated by David Gauthier.  Derek Parfit has constructed some compelling counterarguments against Gauthier, so I thought I'd share them here to see what the rest of you think.

First, let's note that there definitely are possible cases where it would be "beneficial to be irrational".  For example, suppose an evil demon ('Omega') will scan your brain, assess your rational capacities, and torture you iff you surpass some minimal baseline of rationality.  In that case, it would very much be in your interests to fall below the baseline!  Or suppose you're rewarded every time you honestly believe the conclusion of some fallacious reasoning.  We can easily multiply cases here.  What's important for now is just to acknowledge this phenomenon of 'beneficial irrationality' as a genuine possibility.

This possibility poses a problem for the Eliezer-Gauthier methodology. (Quoting Eliezer again:)

Rather than starting with a concept of what is the reasonable decision, and then asking whether "reasonable" agents leave with a lot of money, start by looking at the agents who leave with a lot of money, develop a theory of which agents tend to leave with the most money, and from this theory, try to figure out what is "reasonable".

The problem, obviously, is that it's possible for irrational agents to receive externally-generated rewards for their dispositions, without this necessarily making their downstream actions any more 'reasonable'.  (At this point, you should notice the conflation of 'disposition' and 'choice' in the first quote from Eliezer.  Rachel does not envy Irene her choice at all.  What she wishes is to have the one-boxer's dispositions, so that the predictor puts a million in the first box, and then to confound all expectations by unpredictably choosing both boxes and reaping the most riches possible.)

To illustrate, consider (a variation on) Parfit's story of the threat-fulfiller and threat-ignorer.  Tom has a transparent disposition to fulfill his threats, no matter the cost to himself.  So he straps on a bomb, walks up to his neighbour Joe, and threatens to blow them both up unless Joe shines his shoes.  Seeing that Tom means business, Joe sensibly gets to work.  Not wanting to repeat the experience, Joe later goes and pops a pill to acquire a transparent disposition to ignore threats, no matter the cost to himself. The next day, Tom sees that Joe is now a threat-ignorer, and so leaves him alone.

So far, so good.  It seems this threat-ignoring disposition was a great one for Joe to acquire.  Until one day... Tom slips up.  Due to an unexpected mental glitch, he threatens Joe again.  Joe follows his disposition and ignores the threat.  BOOM.

Here Joe's final decision seems as disastrously foolish as Tom's slip-up.  It was good to have the disposition to ignore threats, but that doesn't necessarily make it a good idea to act on it.  We need to distinguish the desirability of a disposition to X from the rationality of choosing to do X.

184 comments

Comments sorted by top scores.

comment by Perplexed · 2010-08-17T05:23:10.532Z · LW(p) · GW(p)

Thanks for posting. Your analysis is an improvement over the LW conventional wisdom, but you still don't get it right, where right, to me, means the way it is analyzed by the guys who won all those Nobel prizes in economics. You write:

First, let's note that there definitely are possible cases where it would be "beneficial to be irrational".

But in every example you supply, what you really want is not exactly to be irrational; rather it is to be believed irrational by the other player in the game. But you don't notice this because in each of your artificial examples, the other player is effectively omniscient, so the only way to be believed irrational is to actually be irrational. But then, once the other player really believes, his strategies and actions are modified in such a way that your expected behavior (which would have been irrational if the other player had not come to believe you irrational) is now no longer irrational!

But, better yet, let's Taboo the word irrational. What you really want him to believe is that you will play some particular strategy. If he does, in fact, believe it, then he will choose a particular strategy, and your own best response is to use the strategy he believes you are going to use. To use the technical jargon, you two are in a Nash equilibrium.
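To make the best-response logic concrete, here is a minimal sketch in Python (the payoffs are hypothetical numbers of my own, chosen only to illustrate the point, not anything from the post): if Tom is sure Joe will ignore threats, not threatening is his best response, and given no threat, ignoring is (weakly) Joe's best response, so the pair is a Nash equilibrium.

```python
# Minimal sketch: find the pure-strategy Nash equilibria of a toy threat game.
# Payoffs are hypothetical, chosen only to illustrate the best-response logic.
# Each entry is (Tom's payoff, Joe's payoff).
payoffs = {
    ("threaten", "comply"): (2, -1),
    ("threaten", "ignore"): (-10, -10),   # the bomb goes off
    ("abstain",  "comply"): (0, 0),
    ("abstain",  "ignore"): (0, 0),
}

def is_nash(tom, joe):
    # A profile is a Nash equilibrium if neither player gains by deviating alone.
    tom_ok = all(payoffs[(tom, joe)][0] >= payoffs[(alt, joe)][0]
                 for alt in ("threaten", "abstain"))
    joe_ok = all(payoffs[(tom, joe)][1] >= payoffs[(tom, alt)][1]
                 for alt in ("comply", "ignore"))
    return tom_ok and joe_ok

print([profile for profile in payoffs if is_nash(*profile)])
# [('threaten', 'comply'), ('abstain', 'ignore')]
```

Which equilibrium gets played depends entirely on what each player believes the other will do, which is the point: the useful thing is the belief, not any "irrationality".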

So, the standard Game Theory account is based on the beliefs each player has about the other player's preferences and strategies. And, because it deals with (Bayesian) belief, it is an incredibly flexible explanatory framework. Pick up a standard textbook or reference and marvel at the variety of applications that are covered rigorously, quantitatively, and convincingly.

I suspect that the LW interest in scenarios involving omniscient agents arises from considerations of one AI program being able to read another program's source code. However, I don't understand why there is an assumption of determinism. For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing? [Edited several times for minor cleanups]

Replies from: SilasBarta, komponisto, timtyler, JamesAndrix, wedrifid, KrisC
comment by SilasBarta · 2010-08-20T15:53:39.138Z · LW(p) · GW(p)

But in every example you supply, what you really want is not exactly to be irrational; rather it is to be believed irrational by the other player in the game.

I don't think that's the real problem: after all, Parfit's Hitchhiker and Newcomb's problem also eliminate this distinction by positing an Omega that will not be wrong in its predictions.

The real problem is that Chappell has delineated a failure mode that we don't care about. TDT/UDT are optimized for situations in which the world only cares about what you would do, not why you decide to do so. In Chappell's examples, there's no corresponding action that forms the basis of the failure; the "ritual of cognition" alone determines your punishment.

The EY article he linked to ("Newcomb's Problem and the Regret of Rationality") makes the irrelevance of these cases very clear:

Next, let's turn to the charge that Omega favors irrationalists. I can conceive of a superbeing who rewards only people born with a particular gene, regardless of their choices. I can conceive of a superbeing who rewards people whose brains inscribe the particular algorithm of "Describe your options in English and choose the last option when ordered alphabetically," but who does not reward anyone who chooses the same option for a different reason. But Omega rewards people who choose to take only box B, regardless of which algorithm they use to arrive at this decision, and this is why I don't buy the charge that Omega is rewarding the irrational. Omega doesn't care whether or not you follow some particular ritual of cognition; Omega only cares about your predicted decision.

...It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way - without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning. [bold added]

So Chappell has not established a benefit to being irrational, and any multiplication of his examples would be predicated on the same error.

Of course, as I said here, it's true that there are narrow circumstances where the decision theory "always jump off the nearest cliff" will win -- but it won't win on average, and any theory designed specifically for such scenarios will quickly lose.

(I really wish I had joined this conversation earlier to point this out.)

comment by komponisto · 2010-08-17T08:53:04.708Z · LW(p) · GW(p)

For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing?

It's not from LW, but here's Scott Aaronson:

(Incidentally, don’t imagine you can wiggle out of this by basing your decision on a coin flip! For suppose the Predictor predicts you’ll open only the first box with probability p. Then he’ll put the $1,000,000 in that box with the same probability p. So your expected payoff is 1,000,000p^2 + 1,001,000p(1-p) + 1,000(1-p)^2 = 1,000,000p + 1,000(1-p), and you’re stuck with the same paradox as before.)
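For concreteness, here is a minimal sketch (my own check, not Aaronson's) that just evaluates the expected payoff from the quote and its simplified form; the quadratic terms cancel, so the payoff is strictly increasing in p and the coin flip buys you nothing over simply one-boxing:

```python
# Expected payoff when you one-box with probability p and the Predictor fills
# the opaque box with the same probability p (the empty-box/one-box outcome
# pays 0 and is omitted from the sum), as in Aaronson's quote.
def expected_payoff(p):
    return (1_000_000 * p ** 2           # box filled, you take only it
            + 1_001_000 * p * (1 - p)    # box filled, you take both
            + 1_000 * (1 - p) ** 2)      # box empty, you take both

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    simplified = 1_000_000 * p + 1_000 * (1 - p)
    assert abs(expected_payoff(p) - simplified) < 1e-6
    print(p, expected_payoff(p))
```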

comment by timtyler · 2010-08-17T06:15:53.361Z · LW(p) · GW(p)

Has anyone written anything on LW about responding to Omega by randomizing?

Yes. It is often explicitly ruled out by the supplied scenario.

comment by JamesAndrix · 2010-08-17T07:03:24.063Z · LW(p) · GW(p)

But, better yet, let's Taboo the word irrational.

You get a point for that.

comment by wedrifid · 2010-08-17T05:42:32.587Z · LW(p) · GW(p)

For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing?

Yes, back when we discussed Newcomblike problems frequently I more or less used a form letter to reply to that objection. Any useful treatment of Newcomblike problems will specify explicitly or implicitly how Omega will handle (quantum) randomness if it is allowed. The obvious response for Omega is to either give you nothing (or maybe a grenade!) for being a smart ass or, more elegantly, handle the reward in a manner commensurate with the probabilities. If probabilistic decisions are to be allowed, then an Omega that can handle probabilistic decisions quite clearly needs to be supplied.

Thanks for posting. Your analysis is an improvement over the LW conventional wisdom, but you still don't get it right, where right, to me, means the way it is analyzed by the guys who won all those Nobel prizes in economics.

I downvoted the parent. How on earth is Perplexed comparing LW conventional wisdom to that of Nobel prize winning economists when he thinks coin tossing is a big deal?

Replies from: Perplexed
comment by Perplexed · 2010-08-17T06:42:22.150Z · LW(p) · GW(p)

Any useful treatment of Newcomblike problems will specify explicitly or implicitly how Omega will handle (quantum) randomness.

At the risk of appearing stupid, I have to ask: exactly what is a "useful treatment of Newcomb-like problems" used for?

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast? Jaynes says not to include impossible propositions among the conditions in a conditional probability. Bad things happen if you do. Impossible things need to have zero-probability priors. Omega just has no business hanging around with honest Bayesians.

When I read that you all are searching for improved decision theories that "solve" the one-shot prisoner's dilemma and the one-shot Parfit hitchhiker, I just cringe. Surely you shouldn't change the standard, well-established, and correct decision theories. If you don't like the standard solutions, you should instead revise the problems from unrealistic one-shots to more realistic repeated games or perhaps even more realistic games with observers - observers who may play games with you in the future.

In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.

Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.

Replies from: cousin_it, JamesAndrix, Kaj_Sotala, prase, Mitchell_Porter, thomblake, timtyler, ocr-fork, timtyler, Emile
comment by cousin_it · 2010-08-17T10:11:11.931Z · LW(p) · GW(p)

Here's another way of looking at the situation that may or may not be helpful. Suppose I ask you, right here and now, what you'd do in the hypothetical future Parfit's Hitchhiker scenario if your opponent was a regular human with Internet access. You have several options:

  1. Answer truthfully that you'd pay $100, thus proving that you don't subscribe to CDT or EDT. (This is the alternative I would choose.)

  2. Answer that you'd refuse to pay. Now you've created evidence on the Internet, and if/when you face the scenario in real life, the driver will Google your name, check the comments on LW and leave you in the desert to die. (Assume the least convenient possible world where you can't change or delete your answer once it's posted.)

  3. Answer that you'd pay up, but secretly plan to refuse. This means you'd be lying to us here in the comments - surely not a very nice thing to do. But if you subscribe to CDT with respect to utterances as well as actions, this is the alternative you're forced to choose. (Which may or may not make you uneasy about CDT.)

Replies from: TobyBartels, Perplexed
comment by TobyBartels · 2010-08-18T05:34:47.445Z · LW(p) · GW(p)

Answer that you'd pay up, but secretly plan to refuse. This means you'd be lying to us here in the comments - surely not a very nice thing to do. But if you subscribe to CDT with respect to utterances as well as actions, this is the alternative you're forced to choose. (Which may or may not make you uneasy about CDT.)

What makes me uneasy is the assumption I wouldn't want to pay $100 to somebody who rescued me from the desert. Given that, lying to people whom I don't really know should be a piece of cake!

comment by Perplexed · 2010-08-17T18:20:41.227Z · LW(p) · GW(p)

I would of course choose option #1, adding that, due to an affliction giving me a trembling hand, I tend to get stranded in the desert and the like a lot and hence that I would appreciate it if he would spread the story of my honesty among other drivers. I might also promise to keep secret the fact of his own credulity in this case, should he ask me to. :)

I understand quite well that the best and simplest way to appear honest is to actually be honest. And also that, as a practical matter, you never really know who might observe your selfish actions and how that might hurt you in the future. But these prudential considerations can already be incorporated into received decision theory (which, incidentally, I don't think matches up with either CDT or EDT - at least as those acronyms seem to be understood here.) We don't seem to need TDT and UDT to somehow glue them in to the foundations.

Hmmm. Is EY perhaps worried that an AI might need even stronger inducements toward honesty? Maybe it would, but I don't see how you solve the problem by endowing the AI with a flawed decision theory.

comment by JamesAndrix · 2010-08-17T07:15:18.640Z · LW(p) · GW(p)

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

...What?

Also, it doesn't matter if he's impossible. He's an easy way to tack on arbitrary rules to hypotheticals without overly tortured explanations, because people are used to getting arbitrary rules from powerful agents.

It's also impossible for a perfectly Absent Minded Driver to come to one of only two possible intersections with 3 destinations with known payoffs and no other choices. To say nothing of the impossibly horrible safety practices of our nation's hypothetical train system.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T07:47:51.110Z · LW(p) · GW(p)

it doesn't matter if he's impossible

Are you sure? I'm not objecting to the arbitrary payoffs or complaining because he doesn't seem to be maximizing his own utility. I'm objecting to his ability to predict my actions. Give me a scenario which doesn't require me to assign a non-zero prior to woo and in which a revisionist decision theory wins. If you can't, then your "improved" decision theory is no better than woo itself.

Regarding the Absent Minded Driver, I didn't recognize the reference. Googling, I find a .pdf by one of my guys (Nobelist Robert Aumann) and an LW article by Wei-Dai. Cool, but since it is already way past my bedtime, I will have to read them in the morning and get back to you.

Replies from: thomblake, Kingreaper, Kevin, Perplexed
comment by thomblake · 2010-08-17T17:55:23.906Z · LW(p) · GW(p)

I'm objecting to his ability to predict my actions. Give me a scenario which doesn't require me to assign a non-zero prior to woo

The only 'woo' here seems to be your belief that your actions are not predictable (even in principle!). Even I can predict your actions within some tolerances, and we do not need to posit that I am a superintelligence! Examples: you will not hang yourself to death within the next five minutes, and you will ever make another comment on Less Wrong.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T20:49:42.442Z · LW(p) · GW(p)

...you will ever make another comment on Less Wrong.

"ever"? No, "never".

Replies from: thomblake
comment by thomblake · 2010-08-18T00:43:44.469Z · LW(p) · GW(p)

Wha?

In case it wasn't clear, it was a one-off prediction and I was already correct.

Replies from: Perplexed
comment by Perplexed · 2010-08-19T02:51:18.074Z · LW(p) · GW(p)

In case mine wasn't clear, it was a bad Gilbert & Sullivan joke. Deservedly downvoted. Apparently.

Replies from: Alicorn, Cyan
comment by Alicorn · 2010-08-19T02:55:51.076Z · LW(p) · GW(p)

You need a little more context/priming or to make the joke longer for anyone to catch this. Or you need to embed it in a more substantive and sensible reply. Otherwise it will hardly ever work.

Replies from: Perplexed
comment by Perplexed · 2010-08-19T04:56:38.436Z · LW(p) · GW(p)

Counterexample

Replies from: Alicorn
comment by Alicorn · 2010-08-19T04:59:51.882Z · LW(p) · GW(p)

I'd call that a long joke, wouldn't you?

Replies from: Perplexed
comment by Perplexed · 2010-08-19T05:05:29.895Z · LW(p) · GW(p)

See what I mean? I made it long and it still didn't work. :)

comment by Cyan · 2010-08-19T04:51:22.042Z · LW(p) · GW(p)

I wasn't sure, so I held off posting my reply (a decision I now regret). It would have been, "Well, hardly ever."

comment by Kingreaper · 2010-08-18T00:53:56.717Z · LW(p) · GW(p)

I'm objecting to his ability to predict my actions.

Why? What about you is fundamentally logically impossible to predict?

Do you not find that you often predict the actions of others? (ie. giving them gifts that you know they'll like) And that others predict your reactions? (ie. choosing not to give you spider-themed horror movies if you're arachnophobic)

comment by Kevin · 2010-08-18T01:16:37.750Z · LW(p) · GW(p)

Give me a scenario which doesn't require me to assign a non-zero prior to woo and in which a revisionist decision theory wins.

Omega is a perfect super-intelligence, existing in a computer-simulation-like universe that can be modeled by a set of physical laws and a very long string of random numbers. Omega knows the laws and the numbers.

comment by Perplexed · 2010-08-17T18:56:35.576Z · LW(p) · GW(p)

Ok, I've read the paper (most of it) and Wei-Dai's article now. Two points.

  1. In a sense, I understand how you might think that the Absent Minded Driver is no less contrived and unrealistic than Newcomb's Paradox. Maybe different people have different intuitions as to which toy examples are informative and which are misleading. Someone else (on this thread?) responded to me recently with the example of frictionless pulleys and the like from physics. All I can tell you is that my intuition tells me that the AMD, the PD, frictionless pulleys, and even Parfit's Hitchhiker all strike me as admirable teaching tools, whereas Newcomb problems and the old questions of irresistible force vs. immovable object in physics are simply wrong problems which can only create confusion.

  2. Reading Wei-Dai's snarking about how the LW approach to decision theory (with zero published papers to date) is so superior to the confusion in which mere misguided Nobel laureates struggle - well, I almost threw up. It is extremely doubtful that I will continue posting here for long.

Replies from: Wei_Dai, cousin_it, JamesAndrix, Perplexed, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-18T00:00:11.478Z · LW(p) · GW(p)

It wasn't meant to be a snark. I was genuinely trying to figure out how the "LW approach" might be superior, because otherwise the most likely explanation is that we're all deluded in thinking that we're making progress. I'd be happy to take any suggestions on how I could have reworded my post so that it sounded less like a snark.

Replies from: Perplexed
comment by Perplexed · 2010-08-20T23:39:52.892Z · LW(p) · GW(p)

Wei-Dai wrote a post entitled "The Absent-Minded Driver" which I labeled "snarky". Moreover, I suggested that the snarkiness was so bad as to be nauseating, so as to drive reasonable people to flee in horror from LW and SIAI. I here attempt to defend these rather startling opinions. Here is what Wei-Dai wrote that offended me:

This post examines an attempt by professional decision theorists to treat an example of time inconsistency, and asks why they failed to reach the solution (i.e., TDT/UDT) that this community has more or less converged upon. (Another aim is to introduce this example, which some of us may not be familiar with.) Before I begin, I should note that I don't think "people are crazy, the world is mad" (as Eliezer puts it) is a good explanation. Maybe people are crazy, but unless we can understand how and why people are crazy (or to put it more diplomatically, "make mistakes"), how can we know that we're not being crazy in the same way or making the same kind of mistakes?

The paper that Wei-Dai reviews is "The Absent-Minded Driver" by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:

(Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don't think we want to call these people "crazy".)

Wei-Dai then proceeds to give a competent description of the problem and the standard "planning-optimality" solution of the problem. Next comes a description of an alternative seductive-but-wrong solution by Piccione and Rubinstein. I should point out that everyone - P&R, Aumann, Hart, and Perry, Wei-Dai, me, and hopefully you who look into this - realizes that the alternative P&R solution is wrong. It gets the wrong result. It doesn't win. The only problem is explaining exactly where the analysis leading to that solution went astray, and in explaining how it might be modified so as to go right. Making this analysis was, as I see it, the whole point of both papers - P&R and Aumann et al. Wei-Dai describes some characteristics of Aumann et al.'s corrected version of the alternate solution. Then he (?) goes horribly astray:

In problems like this one, UDT is essentially equivalent to planning-optimality. So why did the authors propose and argue for action-optimality despite its downsides ..., instead of the alternative solution of simply remembering or recomputing the planning-optimal decision at each intersection and carrying it out?

But, as anyone who reads the paper carefully should see, they weren't arguing for action-optimality as the solution. They never abandoned planning optimality. Their point is that if you insist on reasoning in this way (and Selten's notion of "subgame perfection" suggests some reasons why you might!), then the algorithm they call "action-optimality" is the way to go about it.

But Wei-Dai doesn't get this. Instead we get this analysis of how these brilliant people just haven't had the educational advantages that LW folks have:

Well, the authors don't say (they never bothered to argue against it), but I'm going to venture some guesses:

  • That solution is too simple and obvious, and you can't publish a paper arguing for it.
  • It disregards "the probability of being at X", which intuitively ought to play a role.
  • The authors were trying to figure out what is rational for human beings, and that solution seems too alien for us to accept and/or put into practice.
  • The authors were not thinking in terms of an AI, which can modify itself to use whatever decision theory it wants to.
  • Aumann is known for his work in game theory. The action-optimality solution looks particularly game-theory like, and perhaps appeared more natural than it really is because of his specialized knowledge base.
  • The authors were trying to solve one particular case of time inconsistency. They didn't have all known instances of time/dynamic/reflective inconsistencies/paradoxes/puzzles laid out in front of them, to be solved in one fell swoop.

Taken together, these guesses perhaps suffice to explain the behavior of these professional rationalists, without needing to hypothesize that they are "crazy". Indeed, many of us are probably still not fully convinced by UDT for one or more of the above reasons.

Let me just point out that the reason it is true that "they never argued against it" is that they had already argued for it. Check out the implications of their footnote #4!

Ok, those are the facts, as I see them. Was Wei-Dai snarky? I suppose it depends on how you define snarkiness. Taboo "snarky". I think that he was overbearingly condescending without the slightest real reason for thinking himself superior. "Snarky" may not be the best one-word encapsulation of that attitude, but it is the one I chose. I am unapologetic. Wei-Dai somehow came to believe himself better able to see the truth than a Nobel laureate in the Nobel laureate's field. It is a mistake he would not have made had he simply read a textbook or taken a one-semester course in the field. But I'm coming to see it as a mistake made frequently by SIAI insiders.

Let me point out that the problem of forgetful agents may seem artificial, but it is actually extremely important. An agent with perfect recall playing the iterated PD, knowing that it is to be repeated exactly 100 times, should rationally choose to defect. On the other hand, if he cannot remember how many iterations remain to be played, and knows that the other player cannot remember either, he should cooperate by playing Tit-for-Tat or something similar.
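As a minimal illustration of what is at stake (the payoff numbers T=5, R=3, P=1, S=0 are the textbook ones, chosen by me for this sketch, not taken from any paper under discussion), here is the 100-round gap between mutual Tit-for-Tat and mutual defection; the sketch does not reproduce the backward-induction argument for defecting under a known horizon, it only shows the payoff that argument gives up:

```python
# Hypothetical illustration with textbook PD payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history_b)   # each strategy sees only the opponent's past moves
        b = strategy_b(history_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opponent: "C" if not opponent else opponent[-1]
always_defect = lambda opponent: "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```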

Well, that is my considered response on "snarkiness". I still have to respond on some other points, and I suspect that, upon consideration, I am going to have to eat some crow. But I'm not backing down on this narrow point. Wei-Dai blew it in interpreting Aumann's paper. (And also, other people who know some game theory should read the paper and savor the implications of footnote #4. It is totally cool).

Replies from: Tyrrell_McAllister, Wei_Dai
comment by Tyrrell_McAllister · 2010-08-20T23:49:27.175Z · LW(p) · GW(p)

The paper that Wei-Dai reviews is "The Absent-Minded Driver" by Robert J. Aumann, Sergiu Hart, and Motty Perry. Wei-Dai points out, rather condescendingly:

(Notice that the authors of this paper worked for a place called Center for the Study of Rationality, and one of them won a Nobel Prize in Economics for his work on game theory. I really don't think we want to call these people "crazy".)

How is Wei Dai being condescending there? He's pointing out how weak it is to dismiss people with these credentials by just calling them crazy. ETA: In other words, it's an admonishment directed at LWers.

That, at any rate, was my read.

Replies from: Perplexed
comment by Perplexed · 2010-08-21T00:24:20.941Z · LW(p) · GW(p)

I'm sure it would be Wei-Dai's read as well. The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary. I'm not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.

Replies from: wedrifid, Tyrrell_McAllister
comment by wedrifid · 2010-08-21T01:27:07.299Z · LW(p) · GW(p)

Are you essentially saying you are nauseated because Wei Dai disagreed with the authors?

Replies from: Perplexed
comment by Perplexed · 2010-08-21T03:54:51.419Z · LW(p) · GW(p)

No. Not at all. It is because he disagreed through the wrong channels, and then proceeded to propose rather insulting hypotheses as to why they had gotten it wrong.

Just read that list of possible reasons! And there are people here arguing that "of course we want to analyze the cause of mistakes". Sheesh. No wonder folks here are so in love with Evolutionary Psychology.

Ok, I'm probably going to get downvoted to hell because of that last paragraph. And, you know what, that downvoting impulse due to that paragraph pretty much makes my case for why Wei Dai was wrong to do what he did. Think about it.

Replies from: wedrifid
comment by wedrifid · 2010-08-21T05:02:45.515Z · LW(p) · GW(p)

Ok, I'm probably going to get downvoted to hell because of that last paragraph. And, you know what, that downvoting impulse due to that paragraph pretty much makes my case for why Wei Dai was wrong to do what he did. Think about it.

Interestingly enough I think that it is this paragraph that people will downvote, and not the one above. Mind you, the premise in "No wonder folks here are so in love with Evolutionary Psychology." does seem so incredibly backward that I almost laughed.

No. Not at all. It is because he disagreed through the wrong channels, and then proceeded to propose rather insulting hypotheses as to why they had gotten it wrong.

I can understand your explanation here. Without agreeing with it myself I can see how it follows from your premises.

comment by Tyrrell_McAllister · 2010-08-21T00:37:51.241Z · LW(p) · GW(p)

I'm having trouble following you.

I'm sure it would be Wei-Dai's read as well.

Are you saying that you read him differently, and that he would somehow be misinterpreting himself?

The thing is, if Wei-Dai had not mistakenly come to the conclusion that the authors are wrong and not as enlightened as LWers, that admonishment would not be necessary.

The admonishment is necessary if LWers are likely to wrongly dismiss Aumann et al. as "crazy". In other words, to think that the admonishment is necessary is to think that LWers are too inclined to dismiss other people as crazy.

I'm not saying he condescends to LWers. I say he condescends to the rest of the world, particularly game theorists.

I got that. Who said anything about condescending to LWers?

Replies from: Perplexed
comment by Perplexed · 2010-08-21T00:53:28.520Z · LW(p) · GW(p)

I'm having trouble following you.

I'm sure it would be Wei-Dai's read as well.

Are you saying that you read him differently, and that he would somehow be misinterpreting himself?

Huh?? Surely, you troll. I am saying that Wei-Dai's read would likely be the same as yours: that he was not condescending; that he was in fact cautioning his readers against looking down on the poor misguided Nobelists who, after all, probably had good reasons for being so mistaken. There, but for the grace of EY, go we.

Or was I really that unclear?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-08-21T01:29:07.400Z · LW(p) · GW(p)

Condescension is a combination of content and context. When you isolated that quote as especially condescending, I thought that you read something within it that was condescending. I was confused, because the quote could just as well have come from a post arguing that LWers ought to believe that Aumann et al. are right.

It now looks like you and I read the intrinsic meaning of the quote in the same way. The question then is, does that quote, placed in context, somehow make the overall post more condescending than it already was? Wei had already said that his treatment of the AMD was better than that of Aumann et al. He had already said that these prestigious researchers got it wrong. Do you agree that if this were true, if the experts got it wrong, then we ought to try to understand how that happened, and not just dismiss them as crazy?

Whatever condescension occurred, it occurred as soon as Wei said that he was right and Aumann et al. were wrong. How can drawing a rational inference from that belief make it more condescending?

Replies from: wedrifid, Perplexed
comment by wedrifid · 2010-08-21T01:56:25.167Z · LW(p) · GW(p)

In this light I can see where 'condescension' fits in. There is a difference between 'descending to be with' and just plain 'being way above'. For example we could label "they are wrong" as arrogant, "they are wrong but we can empathise with them and understand their mistake" as condescending and "They are wrong, that's the kind of person Nobel prizes go to these days?" as "contemptuous" - even though they all operate from the same "I consider myself above in this instance" premise. Wei's paragraph could then be considered to be transferring weight from arrogance and contempt into condescension.

(I still disapprove of Perplexed's implied criticism.)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-08-21T02:03:57.232Z · LW(p) · GW(p)

Okay, I can see this distinction. I can see how, as a matter of social convention, "they are wrong but we should understand their mistake" could come across as more condescending than just "they are wrong". But I really don't like that convention. If an expert is wrong, we really do have an obligation to understand how that happened. Accepting that obligation shouldn't be stigmatized as condescending. (Not that you implied otherwise.)

comment by Perplexed · 2010-08-21T03:30:33.402Z · LW(p) · GW(p)

the question then is, does that quote, placed in context, somehow make the overall post more condescending than it already was?

"They are probably not crazy" strikes me as "damning with faint praise". IMHO, it definitely raises the overall condescension level.

Whatever condescension occurred, it occurred as soon as Wei said that he was right and Aumann et al. were wrong.

No. Peons claim lords are wrong all the time. It is not even impolite, if you are willing to admit your mistake and withdraw your claim reasonably quickly.

Condescension starts when you attempt to "charitably" analyze the source of the error.

Do you agree that if this were true, if the experts got it wrong, then we ought to try to understand how that happened, and not just dismiss them as crazy?

Of course. But if I merely had good reason to believe they were wrong, then my most urgent next step would be to determine whether it were true that they got it wrong. I would begin by communicating with the experts, either privately or through the peer-reviewed literature, so as to get some feedback as to whether they were wrong or I was mistaken. If it does indeed turn out that they were wrong, I would let them take the first shot at explaining the causes of their mistake. I doubt that I would try to analyze the cause of the mistake myself unless I were a trained historian dealing with a mistake at least 50 years old. Or, if I did try (and I probably have), I would hope that someone would point out my presumption.

comment by Wei Dai (Wei_Dai) · 2010-08-21T00:56:49.830Z · LW(p) · GW(p)

Preliminary notes: You can call me "Wei Dai" (that's firstname lastname). "He" is ok. I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole's "Game Theory" and Joyce's "Foundations of Causal Decision Theory" as two of the few physical books that I own.

Their point is that if you insist on reasoning in this way, (and Seldin's notion of "subgame perfection" suggests some reasons why you might!) then the algorithm they call "action-optimality" is the way to go about it.

I can't see where they made this point. At the top of Section 4, they say "How, then, should the driver reason at the action stage?" and go on directly to describe action-optimality. If they said something like "One possibility is to just recompute and apply the planning-optimal solution. But if you insist ..." please point out where. See also page 108:

In our case, there is only one player, who acts at different times. Because of his absent-mindedness, he had better coordinate his actions; this coordination can take place only before he starts out, at the planning stage. At that point, he should choose p*1. If indeed he chose p*1, there is no problem. If by mistake he chose p*2 or p*3, then that is what he should do at the action stage. (If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do.)

If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?

I also do not see how subgame perfection is relevant here. Can you explain?

Let me just point out that the reason it is true that "they never argued against it" is that they had already argued for it. Check out the implications of their footnote #4!

This footnote?

Formally, (p*, p*) is a symmetric Nash equilibrium in the (symmetric) game between ‘‘the driver at the current intersection’’ and ‘‘the driver at the other intersection’’ (the strategic form game with payoff functions h.)

Since p* is the action-optimal solution, they are pointing out the formal relationship between their notion of action-optimality and Nash equilibrium. How is this footnote an argument for "it" (it being "recomputing the planning-optimal decision at each intersection and carrying it out")?

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-08-21T01:26:16.951Z · LW(p) · GW(p)

I have taken a graduate level course in game theory (where I got a 4.0 grade, in case you suspect that I coasted through it), and have Fudenberg and Tirole's "Game Theory" and Joyce's "Foundations of Causal Decision Theory" as two of the few physical books that I own.

Ok, so it is me who is convicted of condescending without having the background to justify it. :( FWIW I have never taken a course, though I have been reading in the subject for more than 45 years.

My apologies. More to come on the substance.

comment by Perplexed · 2010-08-21T02:19:09.619Z · LW(p) · GW(p)

Relevance of Subgame perfection. Selten suggested subgame perfection as a refinement of Nash equilibrium which requires that decisions that seemed rational at the planning stage ought to still seem rational at the action stage. This at least suggests that we might want to consider requiring "subgame perfection" even if we only have a single player making two successive decisions.

Relevance of Footnote #4. This points out that one way to think of problems where a single player makes a series of decisions is to pretend that the problem has a series of players making the decisions - one decision per player, but that these fictitious players are linked in that they all share the same payoffs (but not necessarily the same information). This is a standard "trick" in game theory, but the footnote points out that in this case, since both fictitious players have the same information (because of the absent-mindedness) the game between driver-version-1 and driver-version-2 is symmetric, and that is equivalent to the constraint p1 = p2.

Does Footnote #4 really amount to "they had already argued for [just recalculating the planning-optimal solution]"? Well, no it doesn't really. I blew it in offering that as evidence. (Still think it is cool, though!)

Do they "argue for it" anywhere else? Yes, they do. Section 5, where they apply their methods to a slightly more complicated example, is an extended argument for the superiority of the planning-optimal solution to the action-optimal solutions. As they explain, there can be multiple action-optimal solutions, even if there is only one (correct) planning-optimal solution, and some of those action-optimal solutions are wrong *even though they appear to promise a higher expected payoff than does the planning optimal solution.

I can't see where they made this point. At the top of Section 4, they say "How, then, should the driver reason at the action stage?" and go on directly to describe action-optimality. If they said something like "One possibility is to just recompute and apply the planning-optimal solution. But if you insist ..." please point out where. See also page 108:

In our case, there is only one player, who acts at different times. Because of his absent-mindedness, he had better coordinate his actions; this coordination can take place only before he starts out at the planning stage. At that point, he should choose p1. If indeed he chose p1, there is no problem. If by mistake he chose p2 or p3, then that is what he should do at the action stage. (If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do.)

If Aumann et al. endorse using planning-optimality at the action stage, why would they say the driver has some hard thinking to do? Again, why not just recompute and apply the planning-optimal solution?

I really don't see why you are having so much trouble parsing this. "If indeed he chose p1, there is no problem" is an endorsement of the correctness of the planning-optimal solution. The sentence dealing with p2 and p3 asserts that, if you mistakenly used p2 for your first decision, then your best follow-up is to remain consistent and use p2 for your remaining two choices. The paragraph you quote to make your case is one I might well choose myself to make my case.

Edit: There are some asterisks in variable names in the original paper which I was unable to make work with the italics rules on this site. So "p2" above should be read as "p*2".

Replies from: Wei_Dai, WrongBot
comment by Wei Dai (Wei_Dai) · 2010-08-21T02:27:43.762Z · LW(p) · GW(p)

It is a statement that the planning-optimal action is the correct one, but it's not an endorsement that it is correct to use the planning-optimality algorithm to compute what to do when you are already at an intersection. Do you see the difference?

ETA (edited to add): According to my reading of that paragraph, what they actually endorse is to compute the planning-optimal action at START, remember that, then at each intersection, compute the set of action-optimal actions, and pick the element of the set that coincides with the planning-optimal action.

BTW, you can use "\" to escape special characters like "*" and "_".

Replies from: Perplexed
comment by Perplexed · 2010-08-21T02:42:44.787Z · LW(p) · GW(p)

Thx for the escape character info. That really ought to be added to the editing help popup.

Yes, I see the difference. I claim that what they are saying here is that you need to do the planning-optimal calculation in order to find p*1 as the unique best solution (among the three solutions that the action-optimal method provides). Once you have this, you can use it at the first intersection. But at the other intersections, you have some choices: either recalculate the planning-optimal solution each time, or write down enough information so that you can recognize that p*1 is the solution you are already committed to among the three (in section 5) solutions returned by the action-optimality calculation.

ETA in response to your ETA. Yes they do. Good point. I'm pretty sure there are cases more complicated than this perfectly amnesiac driver where that would be the only correct policy. (ETA:To be more specific, cases where the planning-optimal solution is not a sequential equilibrium). But then I have no reason to think that UDT would yield the correct answer in those more complicated cases either.

Replies from: Wei_Dai, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-21T03:20:54.986Z · LW(p) · GW(p)

I deleted my previous reply since it seems unnecessary given your ETA.

I'm pretty sure there are cases more complicated than this perfectly amnesiac driver where that would be the only correct policy. (ETA:To be more specific, cases where the planning-optimal solution is not a sequential equilibrium).

What would be the only correct policy? What I wrote after "According to my reading of that paragraph"? If so, I don't understand your "cases where the planning-optimal solution is not a sequential equilibrium". Please explain.

Replies from: Perplexed
comment by Perplexed · 2010-08-21T03:43:24.663Z · LW(p) · GW(p)

What would be the only correct policy? What I wrote after "According to my reading of that paragraph"?

Yes.

If so, I don't understand your "cases where the planning-optimal solution is not a sequential equilibrium". Please explain.

I would have thought it would be self explanatory.

It looks like I will need to construct and analyze examples slightly more complicated than the Absent Minded Driver. That may take a while. Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?

Replies from: Tyrrell_McAllister, Wei_Dai
comment by Tyrrell_McAllister · 2010-08-21T14:53:14.549Z · LW(p) · GW(p)

Questions before I start: Does UDT encompass game theory, or is it limited to analyzing single-player situations? Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?

Wei has described a couple versions of UDT. His descriptions seemed to me to be mathematically rigorous. Based on Wei's posts, I wrote this pdf, which gives just the definition of a UDT agent (as I understand it), without motivation or justification.

The difficulty with multiple agents looks like it will be very hard to get around within the UDT framework. UDT works essentially by passing the buck to an agent who is at the planning stage*. That planning-stage agent then performs a conventional expected-utility calculation.

But some scenarios seem best described by saying that there are multiple planning-stage agents. That means that UDT is subject to all of the usual difficulties that arise when you try to use expected utility alone in multiplayer games (e.g., prisoners dilemma). It's just that these difficulties arise at the planning stage instead of at the action stage directly.


*Somewhat more accurately, the buck is passed to the UDT agent's simulation of an agent who is at the planning stage.
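As a toy gloss (my own, and much cruder than Wei's actual definition) of what "passing the buck to the planning stage" amounts to computationally: score whole policies, i.e. maps from observations to actions, by expected utility under the world model, commit to the best one once, and at action time just look up what it says. All names and payoffs below are made up for the illustration.

```python
import itertools

# Toy world model: (probability, observation the agent will get, utility of each action).
WORLDS = [
    (0.6, "rain", {"umbrella": 1.0, "no_umbrella": -1.0}),
    (0.4, "sun",  {"umbrella": -0.2, "no_umbrella": 0.5}),
]
OBSERVATIONS = ("rain", "sun")
ACTIONS = ("umbrella", "no_umbrella")

def expected_utility(policy):
    # A policy maps observation -> action; it is scored once, at the planning stage.
    return sum(prob * utils[policy[obs]] for prob, obs, utils in WORLDS)

# Enumerate every deterministic policy and commit to the best one up front.
policies = [dict(zip(OBSERVATIONS, actions))
            for actions in itertools.product(ACTIONS, repeat=len(OBSERVATIONS))]
best_policy = max(policies, key=expected_utility)

def act(observation):
    # No fresh optimization at action time: just execute the precomputed plan.
    return best_policy[observation]

print(best_policy, act("rain"))
```

The multi-agent difficulty shows up exactly here: if several such planners are choosing policies that depend on each other's choices, the single argmax above no longer makes sense on its own.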

comment by Wei Dai (Wei_Dai) · 2010-08-21T04:21:15.014Z · LW(p) · GW(p)

What I meant was, what point were you trying to make with that statement? According to Aumann's paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution. (My complaint is just that it goes about it in an unnecessarily round-about way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct. That seems to disprove your "only correct policy" claim. I thought your "sequential equilibrium" line was trying to preempt this argument, but I can't see how.

Does UDT encompass game theory, or is it limited to analyzing single-player situations?

Pretty much single-player for now. A number of people are trying to extend the ideas to multi-player situations, but it looks really hard.

Is UDT completely explained in your postings, or is it, like TDT, still in the process of being written up?

No, it's not being written up further. (Nesov is writing up some of his ideas, which are meant to be an advance over UDT.)

Replies from: Perplexed
comment by Perplexed · 2010-08-21T05:03:07.944Z · LW(p) · GW(p)

What I meant was, what point were you trying to make with that statement? According to Aumann's paper, every planning-optimal solution is also an action-optimal solution, so the decision procedure they endorse will end up picking the planning-optimal solution.

My understanding of their paper has changed somewhat since we began this discussion. I now believe that repeating the planning-optimal analysis at every decision node is only guaranteed to give ideal results in simple cases like this one in which every decision point is in the same information set. In more complicated cases, I can imagine that the policy of planning-optimal-for-the first-move, then action-optimal-thereafter might do better. I would need to construct an example to assert this with confidence.

(My complaint is just that it goes about it in an unnecessarily round-about way.) If theirs is a correct policy, then the policy of just recomputing the planning-optimal solution must also be correct.

In this simple example, yes. Perhaps not in more complicated cases.

That seems to disprove your "only correct policy" claim. I thought your "sequential equilibrium" line was trying to preempt this argument, but I can't see how.

And I can't see how to explain it without an example.

Replies from: Wei_Dai, Perplexed, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-22T06:49:30.191Z · LW(p) · GW(p)

While I wait, did you see anything in Aumann's paper that hints at "the policy of planning-optimal-for-the first-move, then action-optimal-thereafter might do better"? Or is that your original research (to use Wikipedia-speak)? It occurs to me that if you're correct about that, the authors of the paper should have realized it themselves and mentioned it somewhere, since it greatly strengthens their position.

Replies from: Perplexed
comment by Perplexed · 2010-08-22T15:56:59.434Z · LW(p) · GW(p)

Answering that is a bit tricky. If I am wrong, it is certainly "original research". But my belief is based upon readings in game theory (including stuff by Aumann) which are not explicitly contained in that paper.

Please bear with me. I have a multi-player example in mind, but I hope to be able to find a single-player one which makes the reasoning clearer.

Regarding your last sentence, I must point out that the whole reason we are having this discussion is my claim to the effect that you don't really understand their position, and hence cannot judge what does or does not strengthen it.

comment by Perplexed · 2010-08-23T15:26:35.440Z · LW(p) · GW(p)

Ok, I now have at least a sketch of an example. I haven't worked it out in detail, so I may be wrong, but here is what I think. In any scenario in which you gain and act on information after the planning stage, you should not use a recalculated planning-stage solution for any decisions after you have acted upon that information. Instead, you need to do the action-optimal analysis.

For example, let us complicate the absent-minded driver scenario that you diagrammed by adding an information-receipt and decision node prior to those two identical intersections. The driver comes in from the west and arrives at a T intersection where he can turn left(north) or right(south). At the intersection is a billboard advertising today's lunch menu at Casa de Maria, his favorite restaurant. If the billboard promotes chile, he will want to turn right so as to have a good chance of reaching Maria's for lunch. But if the billboard promotes enchiladas, which he dislikes, he probably wants to turn the other way and try for Marcello's Pizza. Whether he turns right or left at the billboard, he will face two consecutive identical intersections (four identical intersections total). The day is cloudy, so he cannot tell whether he is traveling north or south.

Working this example in detail will take some work. Let me know if you think the work is necessary.

comment by Wei Dai (Wei_Dai) · 2010-08-21T05:19:27.960Z · LW(p) · GW(p)

Ok, I see. I'll await your example.

comment by Wei Dai (Wei_Dai) · 2010-08-21T03:07:52.637Z · LW(p) · GW(p)

Once you have this, you can use it at the first intersection. But at the other intersections, you have some choices

It is a part of the problem statement that you can't distinguish between being at any of the intersections. So you have to use the same algorithm at all of them.

either recalculate the planning-optimal solution each time

How are you getting this from their words? What about "this coordination can take place only before he starts out at the planning stage"? And "If he chose something else, or nothing at all, then at the action stage he will have some hard thinking to do"? Why would they say "hard thinking" if they meant "recalculate the planning-optimal solution"? (Especially when the planning-optimality calculation is simpler than the action-optimality calculation.)

comment by WrongBot · 2010-08-21T02:44:29.256Z · LW(p) · GW(p)

You can use a backslash to escape special characters in markdown.

If you type \*, that will show up as * in the posted text.

comment by cousin_it · 2010-08-17T21:00:33.915Z · LW(p) · GW(p)

In the comment section of Wei Dai's post in question, taw and pengvado completed his solution so conclusively that if you really take the time to understand the object level (instead of the meta level where some people are a priori smarter because they won a prize), you can't help but feel the snarking was justified :-)

comment by JamesAndrix · 2010-08-17T22:21:16.646Z · LW(p) · GW(p)

1A. It may well be a wrong problem. If so, it ought to be dissolved.

1B. If so, many theorists (including, presumably, Nobel prize winners) have missed it since 1969.

1C. Your intuition should not be considered a persuasive argument, even by you.

2. Even ignoring any singularitarian predictions, given the degree to which knowledge acceleration has already advanced, you should expect to see cases where old standards are blown away with seemingly little effort.

Maybe this isn't one of those cases, but it should not surprise you if we learn that humanity as a whole has done more decision theory in the past few years than in all previous history.

Given that similar accelerations are happening in many fields, there are probably several past-Nobel-level advances by rank amateurs with no special genius.

comment by Perplexed · 2010-08-19T02:49:06.262Z · LW(p) · GW(p)

OK, I've got some big guns pointed at me, so I need to respond. I need to respond intelligently and carefully. That will take some time. Within a week at most.

comment by Wei Dai (Wei_Dai) · 2010-08-18T22:34:34.906Z · LW(p) · GW(p)

A couple more comments:

  1. For a long time I also didn't think that Newcomb's Problem was worth thinking about. Then I read something by Eliezer that pointed out the connection to Prisoner's Dilemma. (According to "Prisoners' Dilemma is a Newcomb Problem", others saw the connection as early as 1969.) See also my "Newcomb's Problem vs. One-Shot Prisoner's Dilemma", where I explored how they are different as well.
  2. I'm curious what you now think about my perspective on the Absent Minded Driver, on both the object level and meta level (assuming I convinced you that it wasn't meant to be a snark). You're the only person who has indicated actually having read Aumann et al.'s paper.
Replies from: Perplexed
comment by Perplexed · 2010-08-20T23:58:24.470Z · LW(p) · GW(p)

The possible connection between Newcomb and PD is seen by anyone who considers Jeffrey's version of decision theory (EDT). So I have seen it mentioned by philosophers long before I had heard of EY. Game theorists, of course, reject this, unless they are analysing games with "free precommitment". I instinctively reject it too, for what that is worth, though I am beginning to realize that publishing your unchangeable source code is pretty much equivalent to free precommitment.

My analysis of your analysis of AMD is in my response to your comment below.

comment by Kaj_Sotala · 2010-08-17T19:55:30.881Z · LW(p) · GW(p)

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast?

Omega is not obviously impossible: in theory, someone could scan your brain and simulate how you react in a specific situation. If you're already an upload and running as pure code, this is even easier.

The question is particularly relevant when trying to develop a decision theory for artificial intelligences: there's nothing impossible about the notion of two adversarial AIs having acquired each other's source code and basing their actions on how a simulated copy of the other would react. If you presume that this scenario is possible, and there seems to be no reason to assume that it isn't, then developing a decision theory capable of handling this situation is an important part of building an AI.
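
As a toy illustration of that last point (entirely my own sketch, not anything from the AI literature): two agents that decide by simulating each other can be made well-defined by giving each a simulation budget and a default move when the budget runs out. Everything below - the agent design, the depth cutoff, the optimistic default - is an assumption chosen purely for illustration.

```python
# Toy sketch: agents that decide by simulating a copy of their opponent,
# with a depth limit to avoid infinite regress. All details here are
# illustrative assumptions, not a description of any proposed decision theory.

def mirror_agent(opponent, depth):
    """Cooperate iff a simulated copy of the opponent, run against this very
    agent with a smaller simulation budget, would cooperate."""
    if depth == 0:
        return "C"  # optimistic default once the simulation budget is exhausted
    return "C" if opponent(mirror_agent, depth - 1) == "C" else "D"

def defect_bot(opponent, depth):
    """Unconditional defector; ignores the opponent entirely."""
    return "D"

if __name__ == "__main__":
    print(mirror_agent(mirror_agent, 3))  # "C": two copies end up cooperating
    print(mirror_agent(defect_bot, 3))    # "D": it defects against a defector
```

(The optimistic default means a copy whose budget has run out can still be exploited by a defector, which is one reason real proposals in this area are considerably more careful.)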

comment by prase · 2010-08-17T09:50:26.127Z · LW(p) · GW(p)

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

What on Earth gives you that impression? I agree that scenarios with Omega will probably have little impact on practical matters, at least in the near future, but quantum woo?

In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.

Why is Omega physically impossible? What is philosophically impossible, in general?

Replies from: Perplexed, Perplexed
comment by Perplexed · 2010-08-17T21:26:36.455Z · LW(p) · GW(p)

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

What on Earth gives you that impression?

Omega makes a decision to put the money in the box, or not. In my model of (MWI) reality, that results in a branching - there are now 2 worlds (one with money, one without). The only problem is, I don't know which world I am in. Next, I decide whether to one-box or to two-box. In my model, that results in 4 possible worlds now. Or more precisely, someone who knows neither my decision nor Omega's would count 4 worlds.

But now we are asked to consider some kind of weird quantum correlation between Omega's choice and my own. Omega's choice is an event within my own past light-cone. By the usual physical assumptions, my choice should not have any causal influence on his choice. But I am asked to believe that if I choose to two-box, then he will have chosen not to leave money, whereas if I just believe as Omega wishes me to believe, then my choice will make me rich by reaching back and altering the past (selecting my preferred history?). And you ask "What on Earth gives me the impression that this is quantum woo?"

Replies from: RobinZ, JamesAndrix, prase, FAWS
comment by RobinZ · 2010-08-17T21:32:35.550Z · LW(p) · GW(p)

Omega makes a decision to put the money in the box, or not. In my model of (MWI) reality, that results in a branching - there are now 2 worlds (one with money, one without). The only problem is, I don't know which world I am in. Next, I decide whether to one-box or to two-box. In my model, that results in 4 possible worlds now. Or more precisely, someone who knows neither my decision nor Omega's would count 4 worlds.

Incorrect. Omega's decision is no more indeterministic than the output of a calculation. Asking (say) me "Does two plus two equal three?" does not create two worlds, one in which I say "yes" and one in which I say "no" - overwhelmingly I will tell you "no".

comment by JamesAndrix · 2010-08-17T22:49:27.730Z · LW(p) · GW(p)

Your model ought to be branching at every subatomic event, not at every conscious intelligent choice.

This makes reality (even humans) predictable.

comment by prase · 2010-08-18T11:16:26.906Z · LW(p) · GW(p)

As others have said, Omega-talk is possible in a purely classical world, and is clearer there. Omega simply scans my brain and deterministically decides whether to put the money in or not. Then I decide whether I take one box or two. To say my choice should not have any causal influence on his choice is misleading, to say the least. It may be true (depending on how exactly one defines causality), but it doesn't exclude correlations between the two choices, simply because they are both consequences of a common cause (the state of my brain and the relevant portion of the world immediately before the scenario began).

There is no need to bring quantumness or even MWI into this scenario, and no certain reason why quantum effects would prevent it from happening. That said, I don't claim that something similar is likely to happen soon.

comment by FAWS · 2010-08-17T21:37:57.862Z · LW(p) · GW(p)

That's only the case if you somehow manage to use a quantum coin in your decision. Your decision could be close enough to deterministic that the measure of the worlds where you decide differently is smaller by a factor of billions or more, and can safely be neglected.

comment by Perplexed · 2010-08-17T18:35:24.083Z · LW(p) · GW(p)

Why is Omega physically impossible?

Because, in predicting my future decisions, he is performing Laplace demon computations based on Heisenberg demon measurements. And physics rules out such demons.

What is philosophically impossible, in general?

Anything which cannot consistently coexist with what is already known to exist.

Replies from: FAWS, prase
comment by FAWS · 2010-08-17T18:46:26.873Z · LW(p) · GW(p)

One possibility: Omega is running this universe as a simulation, and has already run a large number of earlier identical instances.

There may be many less obvious possibilities, even if you require Omega to be certain rather than just very sure.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T20:57:16.372Z · LW(p) · GW(p)

One possibility: Omega is running this universe as a simulation, and has already run a large number of earlier identical instances.

Ok, that is possible, I suppose. Though it does conflict, in a sense, with the claim that he put the money in the box before I made the decision whether to one-box or two-box. Because, in some sense, I already made that decision in all(?) of those earlier identical simulations.

comment by prase · 2010-08-18T10:59:05.298Z · LW(p) · GW(p)

It is far from certain that the decisions made by human brains rely heavily on quantum effects, or that the relevant data can't be obtained by some non-destructive scanning, without Heisenberg-demonic measurements. The Laplace-demon aspect is really a matter of precision. If Omega needed to simulate the brain precisely (unfortunately, the formulations of the paradox here on LW and in the subsequent discussions suggest this), then yes, Omega would have to be a demon. But Newcomb's paradox needn't occur in its idealised version, with 100% success for Omega's predictions, to be valid and interesting. If Omega is right only 87% of the time, the paradox still holds, and I don't see any compelling reason why this should be impossible without postulating demonic abilities.
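
For what it's worth, here is the arithmetic behind that 87% figure (my own sketch; the payoffs are the standard Newcomb amounts, and the calculation conditions on your choice, which is of course exactly the move two-boxers dispute):

```python
# Expected payoffs in Newcomb's problem with an imperfect predictor, treating
# the prediction as correlated with your choice (the evidential reading).
# The standard $1,000,000 / $1,000 amounts are assumed.

def expected_value(accuracy, one_box):
    if one_box:
        # Box B contains $1,000,000 iff you were predicted to one-box.
        return accuracy * 1_000_000
    # Two-boxing: the visible $1,000 for sure, plus $1,000,000 whenever the
    # predictor wrongly expected you to one-box.
    return 1_000 + (1 - accuracy) * 1_000_000

for acc in (0.87, 0.6, 0.51):
    print(acc, expected_value(acc, True), expected_value(acc, False))
# At 87% accuracy: 870,000 vs 131,000. The two choices only break even when
# the predictor's accuracy drops to about 0.5005.
```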

comment by Mitchell_Porter · 2010-08-17T06:54:00.493Z · LW(p) · GW(p)

Have you read the original article? The payoff is less if you follow ordinary decision theory, and yet the whole point of decision theory is to maximize the payoff.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T07:19:55.479Z · LW(p) · GW(p)

Yes, I read that article, and at least a half dozen articles along the same lines, and dozens of pages of commentary. I also remember the first LW article that I read: something about making beliefs "pay rent" in anticipated experiences. Since I don't anticipate receiving a visit from Omega or anyone else who can read my mind, please forgive me if I don't take the superiority of revisionist decision theories seriously. Show me a credible example where it does better. One which doesn't involve some kind of spooky transmission of information backward in time.

Replies from: jimrandomh, Will_Newsome, wedrifid
comment by jimrandomh · 2010-08-17T12:10:58.882Z · LW(p) · GW(p)

Since I don't anticipate receiving a visit from Omega or anyone else who can read my mind, please forgive me if I don't take the superiority of revisionist decision theories seriously.

You are missing the point. Newcomb's problem, and other problems involving Omega, are unit tests for mathematical formalizations of decision theory. When a decision theory gets a contrived problem wrong, we care not because that scenario might appear in real life, but because it demonstrates that the math is wrong in a way that might make it subtly wrong on other problems, too.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T17:13:17.822Z · LW(p) · GW(p)

I think you are missing the point. Newcomb's problem is equivalent to dividing by zero. Decision theories aren't supposed to behave well when abused in this way. If they behave badly on this problem, maybe it is the fault of the problem rather than the fault of the theory.

If someone can present a more robust decision theory, UDT or TDT, or whatever, which handles all the well-formed problems just as well as standard game theory, and also handles the ill-formed problems like Newcomb in accord with EY's intuitions, then I think that is great. I look forward to reading the papers and textbooks explaining that decision theory. But until they have gone through at least some serious process of peer review, please forgive me if I dismiss them as just so much woo and/or vaporware.

Incidentally, I specified "EY's intuitions" rather than "correctness" as the criterion of success, because unless Omega actually appears and submits to a series of empirical tests, I can't imagine a more respectable empirical criterion.

Replies from: timtyler
comment by timtyler · 2010-08-17T17:15:30.877Z · LW(p) · GW(p)

Newcomb's problem is equivalent to dividing by zero.

IMO, you haven't made a case for that - and few here agree with you.

If you really think randomness is an issue, imagine a deterministic program facing the problem, with no good source of randomness to hand.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T19:04:30.488Z · LW(p) · GW(p)

No, randomness is kind of a red herring. I shouldn't have brought it up.

At one point I thought I had a kind of Dutch Book argument against Omega - if he could predict some future "random" event which I intended to use in conjunction with a mixed strategy, then I should be able to profit by making side bets "hedging" my choice with respect to Omega. But when I looked more carefully, it didn't work.

Replies from: timtyler
comment by timtyler · 2010-08-17T19:10:36.722Z · LW(p) · GW(p)

Yay: honesty points!

comment by Will_Newsome · 2010-08-17T07:27:30.921Z · LW(p) · GW(p)

True prisoner's dilemma. Also, the prisoner's dilemma generally. Newcomb just exemplifies the theme.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T08:00:21.703Z · LW(p) · GW(p)

I don't understand. You are answering my "show me"? Standard game theory says to defect in both PD and TPD. You have a revisionist decision theory that does better?

Replies from: Will_Newsome, timtyler
comment by Will_Newsome · 2010-08-17T08:09:04.914Z · LW(p) · GW(p)

TDT does better, yes. My apologies; I'd forgotten the manuscript hasn't yet been released to the public. It should be soon, I think; it's been in the review process for a while. If for some reason Eliezer changed his mind and decided not to publish it then I'd be somewhat surprised. I'm guessing he's nervous because it's his first opportunity to show academia that he's a real researcher and not just a somewhat bright autodidact.

There was a decision theory workshop a few months ago, and a bunch of decision theorists are still working on solving the comparatively much harder problems that were introduced at that time. Decision theory is still unsolved, but UDT/TDT/ADT/XDT are a lot closer to solving it than the ancient CDT/EDT/SDT.

Replies from: None, wedrifid, Perplexed
comment by [deleted] · 2010-08-17T08:33:13.829Z · LW(p) · GW(p)

At the risk of looking stupid:

What are ADT and XDT?

Replies from: Sniffnoy, Will_Newsome
comment by Sniffnoy · 2010-08-18T00:25:12.558Z · LW(p) · GW(p)

For that matter, what's SDT?

Replies from: Perplexed
comment by Perplexed · 2010-08-18T02:08:34.809Z · LW(p) · GW(p)

On the assumption that SDT stands for Sequential Decision Theory, I would like to take a shot at explaining this one, as well as at clarifying the relationship among CDT, EDT, and SDT. Everyone feel free to amend and extend my remarks.

Start with simple Bayesian updating. This is a theory of knowing, not a theory of acting. It helps you to know about the world, but doesn't tell you what to do with your knowledge (other than to get more knowledge). There are two ways you can go from here: SDT and EDT.

SDT is basically game theory as developed by Selten and Harsanyi. It adds agents, actions, and preferences to the world of propositions which exists in simple Bayesianism. Given the preferences of each agent regarding the propositions, and the agents' beliefs about the effects their actions have on the truth or falsehood of propositions regarding which they have preferences, SDT advises each agent on their choice of actions. It is "sequential" because the decisions have to be considered in strict temporal order. For example, in Parfit's hitchhiker problem, both the hitchhiker and the motorist probably wish that the hitchhiker's decision to pay $100 could be made before the motorist's decision whether to offer a ride. But, in SDT, the decisions cannot be made in this reverse order. By the same token, you cannot observe the future before deciding in the present.

If at least some of the agents in SDT believe that some of the other agents are rational, then you have game theory and things can get complicated. On the other hand, if you have only one agent, or if none of the agents believe that the others are rational, then you have classical decision theory which goes back to Wald (1939).

EDT is a variant of single-agent SDT due to Richard Jeffrey (1960s). In it, actions are treated just like any other proposition, except that some agents can make decisions that set action-propositions to be either true or false. The most interesting thing about EDT is that it is relatively "timeless". That is, if X is an action, and A is an agent, then (A does X) might be thought of as a proposition. Using ordinary propositional logic, you can build and reason with compound propositions such as P -> (A does X), (A does X)&Q->P, or (A does X)->(B does Y). The "timeless" aspect to this is that "A does X" is interpreted as "Either A did X, or is currently doing X, or will do X; I don't really care about when it happens".

The thing that makes EDT into a decision theory is the rule which says, roughly, "Act so as to make your preferred propositions true." If EDT worked as well as SDT, it would definitely be considered better, if only because of Ockham's razor. It is an extremely elegant and simple theory. And it does work remarkably well. The most famous case where it doesn't work (at least according to the SDT fans) is Newcomb's problem. SDT says to two-box (because your decision cannot affect Omega's already frozen-in-time decision). EDT says to one-box (because it can't even notice that the causality goes the wrong way). SDT and EDT also disagree regarding the Hitchhiker.

CDT is an attempt to improve on both SDT and EDT. It seems to be a work in progress. There are two variants out there right now - one built by philosophers and the other primarily the work of computer scientist Judea Pearl. (I think I prefer Pearl's version.) CDT helps to clarify the relationship between causation and correlation in Bayesian epistemology (i.e. learning). It also clarifies the relationship between action-based propositions (which are modeled in both SDT and EDT as somehow getting their truth value from the free will of the agents) and other propositions, which get their truth value from the laws of physics. In CDT (Pearl's version, at least) an action can be both free and determined - the flexibility reminds me of the compatibilist dissolution of the free will question which is suggested by the LW sequences.
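
If it helps make the contrast above concrete, here is a minimal numerical sketch (mine, not part of the original comment) of how conditioning on the action (the EDT-style move) and holding the already-fixed box state constant (the causal/SDT-style move) come apart on Newcomb's problem. The 90% predictor accuracy and the payoffs are illustrative assumptions.

```python
# Minimal sketch of the two expected-value calculations on Newcomb's problem.
# Accuracy and payoffs are illustrative assumptions.

ACCURACY = 0.9
PAYOFF = {("one-box", "full"): 1_000_000,
          ("one-box", "empty"): 0,
          ("two-box", "full"): 1_001_000,
          ("two-box", "empty"): 1_000}

def evidential_value(action):
    # Condition on the action: choosing is treated as evidence about the prediction.
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    return p_full * PAYOFF[(action, "full")] + (1 - p_full) * PAYOFF[(action, "empty")]

def causal_value(action, p_full):
    # Hold the (already settled) box state at its prior probability: the action
    # cannot causally affect it, so two-boxing dominates for any fixed p_full.
    return p_full * PAYOFF[(action, "full")] + (1 - p_full) * PAYOFF[(action, "empty")]

actions = ("one-box", "two-box")
print(max(actions, key=evidential_value))                # one-box
print(max(actions, key=lambda a: causal_value(a, 0.5)))  # two-box
```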

I don't know whether that summary answers the question you wanted answered, but I'm pretty sure the corrections I am likely to receive will answer the questions I want answered. :)

[Edit: corrected typos]

comment by Will_Newsome · 2010-08-17T08:37:23.738Z · LW(p) · GW(p)

I think ADT is only described on Vladimir Nesov's blog (if there), and XDT isn't written up anywhere I can find. ADT stands for Ambient Decision Theory. Unfortunately there's no comprehensive and easy summary of any of the modern decision theories anywhere. Hopefully Eliezer publishes his TDT manuscript soon.

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2010-08-17T08:51:45.710Z · LW(p) · GW(p)

I coined the name XDT here. I think Anna Salamon and Steve Rayhawk had come up with essentially the same idea prior to that (and have explored its implications more deeply, but not in published form).

Replies from: None
comment by [deleted] · 2010-08-17T09:00:50.684Z · LW(p) · GW(p)

Thanks. I couldn't find any references to ADT on Vladimir Nesov's blog, but I only had a quick scan, so maybe I missed it; I'll have a better look later. And I can now remember that series of comments on XDT, but my mind didn't connect to it - thanks for the link.

comment by Vladimir_Nesov · 2010-08-17T09:09:29.685Z · LW(p) · GW(p)

DT list, nothing on the blog. Hopefully I'll write up the current variant (which is conceptually somewhat different) in the near future.

comment by wedrifid · 2010-08-17T08:28:31.397Z · LW(p) · GW(p)

TDT does better, yes. My apologies; I'd forgotten the manuscript hasn't yet been released to the public. It should be soon, I think; it's been in the review process for a while.

Wow. I didn't realise Eliezer had decided to actually release something formally. My recollection was that he was refusing to work on it unless someone promised him a PhD.

comment by Perplexed · 2010-08-17T17:25:05.834Z · LW(p) · GW(p)

TDT does better, yes.

Does better how? By cooperating? By achieving a reverse-Omega-like stance and somehow constraining the other player to cooperate, conditionally on cooperating ourselves? I am completely mystified. I guess I will have to wait for the paper(s).

Replies from: timtyler
comment by timtyler · 2010-08-17T17:30:51.250Z · LW(p) · GW(p)

I don't think there are any papers. There's only this ramble:

http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/

As I said, I think your correspondents are in rather a muddle - and are discussing a completely different and rather esoteric PD case - where the agents can see and verify each other's source code.

Replies from: Perplexed
comment by Perplexed · 2010-08-19T03:51:05.256Z · LW(p) · GW(p)

Thanks for the link. It was definitely telegraphic, but I think I got a pretty good notion where he is coming from with this, and also a bit about where he is going. I'm sure you remember the old days back at sci.bio.evolution talking about the various complications with the gene-level view of selection and Hamilton's rule. Well, give another read to EY's einsatz explanation of TDT:

The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.

Does that remind you of anything? "As you are deciding how the expression of you as a gene is going to affect the organism, remember to take into account that you are deciding for all of the members of your gene clone, and that changing the expression of your clone in other organisms is going to have an impact on the fitness of your own containing organism." Now that is really cool. For the first time I begin to see how different decision theories might be appropriate for different meanings of the term "rational agent".

I can't claim to have understood everything EY wrote in that sketch, but I did imagine that I understood his concerns regarding "counterfactual surgery". I want to get hold of a preprint of the paper when it is ready.

comment by timtyler · 2010-08-17T17:07:28.828Z · LW(p) · GW(p)

I think your correspondents are in rather a muddle - and are discussing a completely different and rather esoteric PD case - where the agents can see and verify each other's source code. In which case, C-C is perfectly possible.

comment by wedrifid · 2010-08-17T08:51:32.098Z · LW(p) · GW(p)

Since I don't anticipate receiving a visit from Omega or anyone else who can read my mind, please forgive me if I don't take the superiority of revisionist decision theories seriously. Show me a credible example where it does better.

You have been given at least one such example in this thread, and even had you not, the process of taking an idealised problem and creating a more mundane example should be one you are familiar with, if you are as well versed in the literature as you claim.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T17:25:51.718Z · LW(p) · GW(p)

Where was I given such an example? The only example I saw was of an unreliable Omega, an Omega who only gets it right 90% of the time.

If that is the example you mean, then (1) I agree that it adds unnecessary complexity by bringing in irrelevant considerations, and (2) I claim it is still f'ing impossible.

comment by thomblake · 2010-08-17T15:36:13.770Z · LW(p) · GW(p)

Impossible things need to have zero-probability priors.

0 and 1 are not probabilities. I certainly don't assign a prior of 0 to Omega's existence; he's not defined in a contradictory fashion, and even if he were, I'd harbor the tiniest bit of doubt that I'm wrong about how contradictions work.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T18:41:43.359Z · LW(p) · GW(p)

I am using sloppy language here, perhaps. But to illustrate my usage, I claim that the probability that 2+2=4 is 1. And that p(2+2=5)=0.

Replies from: thomblake
comment by thomblake · 2010-08-17T18:45:54.720Z · LW(p) · GW(p)

If you were a Bayesian and assigned 0 probability to 2+2=5, you'd be in unrecoverable epistemic trouble if you turned out to be wrong about that. See How to convince me 2+2=3.

Replies from: Perplexed
comment by Perplexed · 2010-08-19T02:04:08.172Z · LW(p) · GW(p)

EY to the contrary, I remain smug in my evaluation p(2+2=5)=0. Of all the evidence that Eliezer offered, the only piece to convince me was the one which demonstrated to me that I was confused about the meaning of the digit 5. Yes, by Cromwell's rule, I think it possible I might be mistaken about how to count. "1, 2, 3, 5, 6, 4, 7", I recite to myself. "Yes, I had been wrong about that. Thanks for correcting me."

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. (It is "Yudkowski", not "Yupkowski".) But in neither case am I in unrecoverable epistemic trouble. Those were typos. Correcting them is a simple search-and-replace, not a Bayesian updating. Or so I understand.

Replies from: WrongBot, Nick_Tarleton
comment by WrongBot · 2010-08-19T02:21:38.606Z · LW(p) · GW(p)

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. (It is "Yudkowski", not "Yupkowski".) But in neither case am I in unrecoverable epistemic trouble. Those were typos. Correcting them is a simple search-and-replace, not a Bayesian updating. Or so I understand.

It's Yudkowsky. Might want to update your general confidence evaluations.

comment by Nick_Tarleton · 2010-08-19T02:14:39.418Z · LW(p) · GW(p)

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. It is "Yudkowski", not "Yupkowski"

Yudkowsky, in fact.

comment by timtyler · 2010-08-17T17:26:40.116Z · LW(p) · GW(p)

If you run out of material, here's an academic paper that claims to resolve many of the same problems as are being addressed on this site:

"DISPOSITION-BASED DECISION THEORY"

comment by ocr-fork · 2010-08-18T06:30:55.151Z · LW(p) · GW(p)

CODT (Cop Out Decision Theory): In which you precommit to every beneficial precommitment.

comment by timtyler · 2010-08-17T17:03:34.855Z · LW(p) · GW(p)

Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast?

This Omega is not impossible.

It says: "Omega has been correct on each of 100 observed occasions so far".

Not particularly hard - if you pick on decision theorists who had previously publicly expressed an opinion on the subject.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T18:27:42.918Z · LW(p) · GW(p)

Ah! So I need to assign priors to three hypotheses: (1) Omega is a magician (i.e., an illusion artist); (2) Omega has bribed people to lie about his past success; (3) he is what he claims.

So I assign a prior of zero probability to hypothesis #3, and cheerfully one-box using everyday decision theory.

Replies from: timtyler
comment by timtyler · 2010-08-17T18:40:49.764Z · LW(p) · GW(p)

First: http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/

You don't seem to be entering into the spirit of the problem. You are "supposed" to reach the conclusion that there's a good chance that Omega can predict your actions in this domain pretty well - from what he knows about you - after reading the premise of the problem.

If you think that's not a practical possibility, then I recommend that you imagine yourself as a deterministic robot - where such a scenario becomes more believable - and then try the problem again.

Replies from: Perplexed
comment by Perplexed · 2010-08-17T21:03:10.362Z · LW(p) · GW(p)

If I imagine myself as a deterministic robot, who knows that he is a deterministic robot, I am no longer able to maintain the illusion that I care about this problem.

Replies from: cousin_it, timtyler
comment by cousin_it · 2010-08-17T21:10:09.747Z · LW(p) · GW(p)

Do you think you aren't a deterministic robot? Or that you are, but you don't know it?

Replies from: Perplexed
comment by Perplexed · 2010-08-19T01:43:10.102Z · LW(p) · GW(p)

It is a quantum universe. So I would say that I am a stochastic robot. And Omega cannot predict my future actions.

comment by timtyler · 2010-08-17T21:56:51.487Z · LW(p) · GW(p)

...then you need to imagine that you made the robot, that it is meeting Omega on your behalf - and that it then gives you all its winnings.

Replies from: TobyBartels
comment by TobyBartels · 2010-08-18T05:41:56.039Z · LW(p) · GW(p)

I like this version! Now the answer seems quite obvious.

In this case, I would design the robot to be a one-boxer. And I would harbour the secret hope that a stray cosmic ray will cause the robot to pick both boxes anyway.

Replies from: timtyler
comment by timtyler · 2010-08-18T06:11:55.818Z · LW(p) · GW(p)

Yes - but you would still give its skull a lead lining - and make use of redundancy to produce reliability...

Replies from: TobyBartels
comment by TobyBartels · 2010-08-18T07:46:15.237Z · LW(p) · GW(p)

Agreed.

comment by Emile · 2010-08-17T09:53:14.527Z · LW(p) · GW(p)

Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.

For what it's worth, I have written programs that cooperate on the prisoner's dilemma if and only if their opponent will cooperate, without caring about the opponent's rituals of cognition, only about his behaviour.

Unfortunately, this margin is too small to contain them - I mean, they're not ready for prime time. I'll probably write up a post on that in the near future.
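
In the meantime, here is one crude construction in the same spirit (my own sketch, certainly not Emile's actual programs): a bot that cooperates exactly with programs whose source text matches its own. Note that this matches on the opponent's text rather than its behaviour, so it is weaker than what Emile describes.

```python
# A crude "clique bot" sketch: cooperate only with exact copies of itself.
# This is my own illustration, not Emile's programs; matching source text is
# cruder than checking the opponent's actual behaviour.

import inspect

def clique_bot(opponent_source: str) -> str:
    """Return 'C' iff the opponent is running this very same program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

if __name__ == "__main__":
    my_source = inspect.getsource(clique_bot)
    print(clique_bot(my_source))                               # "C": copies cooperate
    print(clique_bot("def defect_bot(_):\n    return 'D'\n"))  # "D" against anything else
```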

comment by KrisC · 2010-08-17T07:29:15.796Z · LW(p) · GW(p)

Now that you mention it, I was wondering earlier what would happen if you roll a die and one-box on odd and two-box on even...

Replies from: saturn, thomblake
comment by saturn · 2010-08-19T01:51:24.828Z · LW(p) · GW(p)

The problem states that Omega has never been wrong, which would imply that if there are cases where he can't be certain about his prediction, he won't offer the bargain in the first place.

comment by thomblake · 2010-08-17T15:36:47.403Z · LW(p) · GW(p)

Dice are deterministic, but there's still hope for quantum randomness...

comment by ata · 2010-08-17T04:24:21.541Z · LW(p) · GW(p)

There are a few essential questions here:

  1. Does a reasonable model of reality actually cause us to anticipate any scenarios where it is beneficial to have an irrational disposition?
  2. Are these common enough that choosing to surrender one's rational disposition would have an overall positive expected utility?
  3. If you've gotten far enough to be able to wield rationality skillfully enough to correctly determine the answers to those questions, is it really possible to force yourself to forget how to be that rational, if you decide it would be instrumentally beneficial to do so?

I'm not convinced that the answer to any of these is "yes", and I don't think you've really argued for them. This post would be stronger and more interesting if you attempted to make the point that agents with irrational dispositions do tend to be rewarded, and tend to be rewarded enough that being irrational is worth it.

(As for #3, I think there was an Eliezer post on that or a related issue, not sure what it was called...)

Edit: I think I was thinking of Doublethink (Choosing to be Biased).

Replies from: rwallace, RichardChappell
comment by rwallace · 2010-08-17T13:41:28.230Z · LW(p) · GW(p)

By "irrational", do you mean in the sense of "would pay the $100 as Parfit's Hitchhiker"? If so, then the answer to all three questions is yes: there are lots of scenarios in real life where we are called upon to pay debts both positive and negative (repay favors, retaliate against aggression) and we think the benefit to be gained from doing so will be less than the cost. There are enough such scenarios that a disposition to pay debts without stopping to do utility calculations usually pays off handsomely over a lifetime.

comment by RichardChappell · 2010-08-17T07:29:33.818Z · LW(p) · GW(p)

This post would be stronger and more interesting if you attempted to make the point that agents with irrational dispositions do tend to be rewarded, and tend to be rewarded enough that being irrational is worth it.

But I don't believe such claims are true, so why would I attempt to argue for them? My claim is purely theoretical: we need to distinguish, conceptually, between desirable dispositions and rational actions. It seems to me that many on LW fail to make this conceptual distinction, which can lead to mistaken (or at least under-argued) theorizing about rationality. The dispute between one-boxers and two-boxers is interesting and significant even if both sides agree about most "real world" cases.

Replies from: komponisto
comment by komponisto · 2010-08-17T09:08:38.547Z · LW(p) · GW(p)

My claim is purely theoretical: we need to distinguish, conceptually, between desirable dispositions and rational actions. It seems to me that many on LW fail to make this conceptual distinction, which can lead to mistaken (or at least under-argued) theorizing about rationality

This is because actions only ever arise from dispositions. Yes, given that Omega has predicted you will one-box, it would (as an abstract fact) be to your benefit to two-box; but in order for you to actually two-box, you would have to execute some instruction in your source code, which, if it were present, Omega would have read, and thus would not have predicted that you would one-box.

Hence only dispositions are of interest.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T13:36:30.865Z · LW(p) · GW(p)

Is this the argument?

  1. It is impossible to have the one-boxing disposition and then two-box.
  2. Ought implies can.
  3. Therefore, it is false that someone with a one-boxing disposition ought to two-box.

Or are you agreeing that you ought to two-box, but claiming that this fact isn't interesting because of premise 1?

At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently. (It will entail, for instance, NOT[people who have a decisive loss aversion disposition should follow expected utility theory].)

Replies from: komponisto
comment by komponisto · 2010-08-17T13:55:08.773Z · LW(p) · GW(p)

Or are you agreeing that you ought to two-box, but claiming that this fact isn't interesting because of premise 1?

Yes, if "ought" merely means the outcome would be better, and doesn't imply "can".

At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently

As far as I can tell, it would only have that implication in situations where an outcome depended directly on one's disposition (as opposed to one's actions).

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T14:13:20.080Z · LW(p) · GW(p)

As far as I can tell, it would only have that implication in situations where an outcome depended directly on one's disposition (as opposed to one's actions).

I don't think so:

  1. John has the loss-aversion disposition.
  2. It is impossible to have the loss-aversion disposition and maximize expected utility in case C.
  3. Ought implies can.
  4. Therefore, it is false that John ought to maximize expected utility in case C.

Or, for Newcomb:

  1. It is impossible for someone with the two-boxing disposition to one-box.
  2. Ought implies can.
  3. Therefore, it is false that someone with the two-boxing disposition ought to one-box.
Replies from: komponisto
comment by komponisto · 2010-08-17T14:43:50.789Z · LW(p) · GW(p)

Either "ought" applies to dispositions, or actions, but one mustn't equivocate. If "what John ought to do" means "the disposition John should have", then perhaps John ought to maximize expected utility even if he's not currently so disposed. If the outcomes depend on John's disposition only indirectly via his actions, and his current disposition will lead to a suboptimal action, then we may very well say that John "ought" to do something different, meaning that he should have a different disposition.

If, however, John is involved in a Newcomblike problem where there is a causal arrow leading directly from his disposition to the outcome, and his current disposition is optimal with respect to outcome, then one cannot say that he "ought" to do differently, on this (dispositional) usage of "ought".

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T16:07:45.396Z · LW(p) · GW(p)

Everyone agrees about what the best disposition to have is. The disagreement is about what to do. I have uniformly meant "ought" in the action sense, not the dispositional sense. (FYI: this is always the sense in which philosophers (incl. Richard) mean "ought", unless otherwise specified.)

BTW: I still don't understand the relevance of the fact that it is impossible for people with one-boxing dispositions to two-box. If you don't like the arguments that I formalized for you, could you tell me what other premises you are using to reach your conclusion?

Replies from: komponisto, CarlShulman
comment by komponisto · 2010-08-18T05:46:32.191Z · LW(p) · GW(p)

The disagreement is about what to do. I have uniformly meant "ought" in the action sense, not the dispositional sense. (FYI: this is always the sense in which philosophers (incl. Richard) mean "ought", unless otherwise specified.)

That sense is entirely uninteresting, as I explained in my first comment in this thread. It's the sense in which one "ought" to two-box after having been predicted by Omega to one-box -- a stipulated impossibility.

Philosophers who, after having considered the distinction, remain concerned with the "action" sense, would tend to be -- shall we say -- vehemently suspected of non-reductionist thinking; of forgetting that actions are completely determined by dispositions (i.e. the algorithms running in the mind of the agent).

Having said that, if one does use "ought" in the action sense, then there should be no difficulty in saying that one "ought" to two-box in the situation where Omega has predicted you will one-box. That's just a restatement of the assumption that the outcome of (one-box predicted, two-box) is higher in the preference ordering than that of (one-box predicted, one-box).

Normally, the two meanings of "ought" coincide, because outcomes normally depend on actions that happen to be determined by dispositions, not directly on dispositions themselves. Hence it's easy to be deceived into thinking that the action sense is the appropriate sense of "ought". But this breaks down in situations of the Newcomb type. There, the dispositional sense is clearly the right one, because that's the sense in which you ought to one-box; since the dispositional sense also gives the same answers as the action sense for "normal" situations, we may as well say that the dispositional sense is what we mean by "ought" in general.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-18T07:09:36.393Z · LW(p) · GW(p)

So, you're really interested in this question: what is the best decision algorithm? And then you're interested, in a subsidiary way, in what you ought to do. You think the "action" sense is silly, since you can't run one algorithm and make some other choice.

Your answer to my objection involving the parody argument is that you ought to do something else (not go with loss aversion) because there is some better decision algorithm (that you could, in some sense of "could", use?) that tells you to do something else.

What do you do with cases where it is impossible for you to run a different algorithm? You can't exactly use your algorithm to switch to some other algorithm, unless your original algorithm told you to do that all along, so these cases won't be that rare. How do you avoid the result that you should just always use whatever algorithm you started with? However you answer this objection, why can't two-boxers who care about the "action sense" of ought answer your objection analogously?

comment by CarlShulman · 2010-08-17T16:25:44.995Z · LW(p) · GW(p)

Just take causal decision theory and then crank it with an account of counterfactuals whereby there is probably a counterfactual dependency between your box-choice and your early disposition.

Arntzenius called something like this "counterfactual decision theory" in 2002. The counterfactual decision theorist would assign high probability to the dependency hypotheses "if I were to one-box now then my past disposition was one-boxing" and "if I were to two-box now then my past disposition was two-boxing." She would assign much lower probability to the dependency hypotheses on which her current action is independent of her past disposition (these would be the cognitive glitch/spasm sorts of cases).

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T16:41:35.503Z · LW(p) · GW(p)

I agree that this fact [you can't have a one-boxing disposition and then two-box] could appear as a premise in an argument, together with an alternative proposed decision theory, for the conclusion that one-boxing is a bad idea. If that was the implicit argument, then I now understand the point.

To be clear: I have not been trying to argue that you ought to take two boxes in Newcomb's problem.

But I thought this fact [you can't have a one-boxing disposition and then two-box] was supposed to be part of an argument that did not use a decision theory as a premise. Maybe I was misreading things, but I thought it was supposed to be clear that two-boxers were irrational, and that this should be pretty clear once we point out that you can't have the one-boxing disposition and then take two boxes.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-17T17:06:38.533Z · LW(p) · GW(p)

Not irrational by their own lights. "Take the action such that an unanticipated local miracle causing me to perform that action would be at least as good news as local miracles causing me to perform any of the alternative actions" is a coherent normative principle, even though such miracles do not occur. Other principles with different miracles are coherent too. Arguments for one decision theory or another only make sense for humans because we aren't clean implementations of any of these theories, and can be swayed by considerations like "agents following this rule regularly get rich."

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T17:31:58.918Z · LW(p) · GW(p)

I agree with all of this.

comment by CarlShulman · 2010-08-17T11:02:48.302Z · LW(p) · GW(p)

It was good to have the disposition to ignore threats

But not as good as the disposition to ignore threats, except when the threats are caused by transparently accidental mental glitches (which would not be encouraged by the disposition).

Eliezer's theory is more or less causal decision theory with a different account of dependency hypotheses/counterfactuals. The most relevant philosophical disputes would be about whether to use "local miracle" counterfactuals rather than various backtracking counterfactuals, or logical/mathematical counterfactuals (Eliezer's timeless decision theory idea).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-17T23:27:14.891Z · LW(p) · GW(p)

The most relevant philosophical disputes would be about whether to use "local miracle" counterfactuals rather than various backtracking counterfactuals, or logical/mathematical counterfactuals (Eliezer's timeless decision theory idea).

Or reduce counterfactuals and get them out of the analysis of the problem statement, rather than taking them explicitly as part of the problem statement.

Decision theories that run on explicit notions of dependency only compete with each other on the correctness of informal dependence analysis established by guidelines (specific to a particular theory) for presenting dependencies. And for each such theory, we can find a problem statement where the guidelines collapse. Actual progress requires understanding where dependencies themselves come from (and for now it's UDT/ADT).

comment by PaulAlmond · 2010-08-17T11:58:11.909Z · LW(p) · GW(p)

"Due to an unexpected mental glitch, he threatens Joe again. Joe follows his disposition and ignores the threat. BOOM. Here Joe's final decision seems as disastrously foolish as Tom's slip up."

But of course, the initial decision to take the pill may be rational, and the "final decision" is constrained so much that we might regard it as a "decision" in name only. The way I see it: When Joe takes the pill, he will stop rational versions of Tom from threatening him, meaning he benefits, but will be at increased risk of irrational versions of Tom threatening him, meaning he loses. Whether the decision to take the pill is rational depends on how many rational versions of Tom he thinks are out there and how many irrational ones there are, as well as the relative costs of being forced to shine shoes and being blown up. If Toms tend to be rational, and shining shoes is unpleasant enough, taking the pill may be rational.

This kind of scenario has made me think in the past: could this have contributed to some of our emotional tendencies? At times, we experience emotions that override our rational behavior. Anger is a good example, though gratitude might be as well. There may be times when it is simply not rational, in terms of reward and cost, to hit back at someone who has wronged us, but we may do so anyway because we are angry. However, if we never got angry, and acted rationally all the time, we might be easy targets for people who know that they can wrong us and then retreat to some safe situation where revenge would be irrational. Something that can reduce our rationality, so that we act even when it is not in our interests, might, almost paradoxically, be a good thing for us, because it would make it less rational to attack us like this in the first place. Maybe anger is partly there for that reason - literally to ensure that we will actually do things that get us killed in order to hit back at someone, as a deterrent.

Of course, someone could ask how people are supposed to know we have that tendency - but when people saw anger working in themselves and others, they would generally get the idea; they would understand the consequences of reduced rationality in some situations. It could be argued that the best strategy is to fake your ability to become angry: you become angry in trivial situations, where the cost of the anger is minimal, while in the extreme situation where you are likely to get killed you act rationally. But a problem with this is that it is more complicated behavior, so we might expect it to be harder to evolve in the first place. There would presumably be some kind of balance between real deterrence and fake deterrence at work here.

I can think of real-world examples of this "pill". There is said to be at least one wealthy person who told his family that if he were kidnapped, a ransom was not to be paid under any circumstances. Now, clearly, his family are likely to ignore that and pay: once deterrence has failed, the rational thing is to save his life. That suggests that he may have taken precautions: he may have done his best to make it impossible for his family to pay a ransom.

Replies from: Lightwave, DanielLC, khafra, RichardChappell
comment by Lightwave · 2010-08-17T15:25:23.334Z · LW(p) · GW(p)

This reminds me of a quote from Scott Aaronson's On Self-Delusion and Bounded Rationality:

Two cars race toward each other on an empty freeway; the first to swerve is the chicken. How should you play if you want to preserve both your status and your life? The answer is clear: in full view of your opponent, rip out your car's steering wheel, blindfold yourself, down a bottle of Jack Daniels, scream. If you can persuade your opponent that you're incapable of making the decision to swerve, then he has to swerve. In other words: the stupider, more ignorant, more irrational you can prove you are, the better the chance you have of winning.

comment by DanielLC · 2010-09-05T20:24:10.800Z · LW(p) · GW(p)

We aren't transparent. The only reason to fulfill our threats is so that people will later know that we do, in which case it's totally rational by any decision theory.

comment by khafra · 2010-08-17T13:52:13.535Z · LW(p) · GW(p)

These "pills" and "dispositions" are equivalent to pre-commitments. If you're interested in the math and some interesting examples, I'd suggest reading The Strategy of Conflict.

comment by RichardChappell · 2010-08-17T12:20:55.932Z · LW(p) · GW(p)

Yep (I actually discuss the case of emotions in the linked post!)

comment by [deleted] · 2010-08-17T08:00:19.815Z · LW(p) · GW(p)

Sort of a side note to the main topic of discussion but being as my post was quoted, maybe worth responding:

The great thing about comparing an argument to one in the philosophical literature is that it provides access to a whole range of papers on the issue, so that ideas don't need to be rediscovered. The corresponding bad thing, though, is that it makes it easy to accidentally commit a straw-man attack if the argument isn't actually the same as the one in the literature. So I'll outline my argument (basically, I'll expand on the quote of mine you used).

If we think of rationality as utility maximisation, then two-boxing on Newcomb's Problem seems rational due to the principle of strong dominance, which basically argues that, whatever state the boxes are in, you benefit from two-boxing.

But if the principle of strong dominance is itself an irrational way of making decisions, then a decision made according to this principle is no longer a rational way of behaving. My argument is basically that Newcomb's Problem shows that strong dominance is an irrational way to make decisions, because you do not in fact benefit regardless of circumstances by following strong dominance.

If your decision procedure is irrational then decisions made due to that procedure are no longer rational.

That's not the same as saying irrational people never win. As you note, if you build irrationality into the scenario, then whatever definition of rationality you have, you can be made to lose. Nevertheless, that doesn't change my argument that one-boxing on Newcomb's is rational (and even if you think it is irrational, this particular attack doesn't seem relevant to my argument).

I'm not looking for a decision theory that "wins" in every possible circumstance. I am, however, looking for one that wins in more circumstances than EDT and CDT do.

Replies from: RichardChappell
comment by RichardChappell · 2010-08-17T09:40:03.733Z · LW(p) · GW(p)

Hi Adam, can I ask for a little more clarification here? You write:

My argument is basically that Newcomb's Problem shows that strong dominance is an irrational way to make decisions, because you do not in fact benefit regardless of circumstances by following strong dominance

Newcomb's Problem is a case where Omega punishes those who are disposed to follow strong dominance reasoning. But how, exactly, does it follow from this that dominance reasoning isn't rational? It may just be a case where Omega punishes those who are disposed to reason rationally. (If dominance reasoning is indeed rational, then this is the right way to describe the case.)

Replies from: None
comment by [deleted] · 2010-08-17T09:56:34.904Z · LW(p) · GW(p)

Edit: Hang on, let me try that again before you respond.

I suppose it depends on what you mean by rationality, but it seems to me that the same argument that is often used to make people favour strong dominance (regardless of the world state, strong dominance leads to better outcomes) can actually be used to argue that it's not a very good decision procedure (because there are world states where using this decision procedure does not lead to a better outcome), at least as long as there are decision theories that do lead to better outcomes in general (regardless of the world state, these decision theories lead to better outcomes than other decision theories - or, the weaker but more realistic claim, in more world states these decision theories lead to sensible outcomes).

Just as the rationality of a strong dominance decision is justified by it leading to better outcomes than other decisions, the rationality of a decision theory could be justified by whether it leads to better outcomes than other decision theories.

If that's not what you mean by rationality, that's fine, but then what establishes strong dominance as a rational way of acting, and hence what makes two-boxing on Newcomb's rational? I'm not saying there's no answer to that, but I am saying that I will struggle to respond to your question without knowing how you think about rationality in that situation.

I'm confident you know more about this topic than I do, so I will try to understand your points. But so far I haven't seen anything which would (a) establish a decision based on strong dominance at an individual point in time as rational, without (b) establishing strong dominance as an irrational decision procedure, via a similar argument applied to decision procedures rather than individual decisions.

I'd be interested to know whether you think this is flawed as I'd be happy to either change my mind or learn to explain my reasoning better, depending on what the flaw was.

Replies from: None
comment by [deleted] · 2010-08-17T10:26:03.035Z · LW(p) · GW(p)

Rationality and winning may not be the same thing, but I do think they're linked. If we're asked to judge whether the principle of strong dominance is rational, we say yes because it always leads to the best outcome (leads to "winning"). If we were asked to choose between a 10% chance of winning $100 and a 20% chance, we would say it was rational to choose the 20% chance, once again because there's a higher chance of winning.

In fact, it seems to me that people do judge whether a decision is rational based on whether it leads to "winning", but they get confused by the multiple possible meanings of winning in the case of Newcomb's Problem - which I think comes from conflating two possible questions about the rationality of a decision in the problem (discussed below).

Regardless, even if that's not true, it seems that rationality and winning are at least related.

Now I believe that, in just the same way, the rationality of a decision theory or procedure can be judged on the same basis. So it may be rational to follow TDT instead of CDT (as an example; I'm not getting into the conversation of which is better here) because it may lead to a greater chance of winning. The justification here is just the same as in the strong dominance and lottery examples in the first paragraph.

Which means there are two questions:

  1. What is the rational decision to make in the circumstances? The answer here may well be the strongly dominant decision (two-boxing).
  2. What is the rational decision theory to follow? The answer here might be (for example) TDT, and hence the decision that flows from it is one-boxing.

But that means the question of whether one-boxing or two-boxing is the rational decision in the case of Newcomb's Problem can mean one of two things: (1) Is it a rational decision? (2) Did it follow from a rational decision theory?

Previously, I gave more weight to the second of these and said that, as it followed from a rational decision theory, that was what mattered. I still feel like that's right (like the meta level should override the object level), but I need to think on it more to figure out whether I have a real justification for it. So let's say both levels are equally important. Given that, I would agree that two-boxing is the rational decision.

However, when it comes to creating better or worse decision theories, I think the relevant question is whether the decision theory is rational, not whether the decisions it entails are. After all, we are judging between decision theories and hence the decision theory perspective seems more relevant.

But let's say you totally disagree with my definition of rationality. (My first question would be: how do you define rationality, and how does this lead to strong dominance being seen as rational rather than just as a winning technique? Which is to say, I wonder whether your question can be applied to many things we already see as rational just as easily as to these contested issues - maybe I'm wrong there, though.) Regardless, I think I'm getting too caught up in this question of rationality.

Rationality is an important issue, but I think a decision theory should be about making "winning" decisions, and if rationality and winning aren't even linked in your definitions, then I would say that decision theories are meant to be about how to make decisions. I think their success should be measured by whether they lead to the best outcomes, not by whether they fit some arbitrary (or non-arbitrary, for that matter) definition of rationality.

So let's say two-boxing is the rational decision in Newcomb's Problem. I'm not sure I care. I'm more interested in whether we can come up with a decision theory that leads to a better outcome, and I personally will judge such a decision theory more highly than one that meets such-and-such a definition of rationality but doesn't lead to such results.

Replies from: RichardChappell
comment by RichardChappell · 2010-08-17T12:36:45.887Z · LW(p) · GW(p)

decision theory should be about making “winning” decisions

But remember, in Newcomb the one-boxer wins in virtue of her disposition, not in virtue of her decision per se.

On your broader point, I agree that we need to distinguish the two questions you note, though I find it a little obscure to talk of a "rational decision theory" (I had previously taken this to mean the theory which correctly specifies rational decisions, when you really mean something more like what I'm calling desirable dispositions). I agree with you that one-boxing is the more desirable disposition (or decision-procedure to have inculcated). But it's a separate question what the rational act is; and I think it'd be a mistake to assume that two-boxing can't be a rational choice just because a disposition to so choose would not be rational to inculcate.

when it comes to creating better or worse decision theories, I think the relevant question is whether the decision theory is rational [desirable to inculcate], not whether the decisions it entails are.

Well, I think that depends on one's purposes. If you're interested in creature-building, then I guess you want to know what decision procedure would be best (regardless of the rationality of the decisions it leads to). But if - like me - you're just interested in understanding rationality, then what you want is a criterion or general theory of which particular actions are rational (and why) -- regardless of whether we can reliably implement or follow it.

(See also my previous contrast between the projects of constructing theoretical 'accounts' vs. practical 'instruction manuals'.)

Replies from: None
comment by [deleted] · 2010-08-17T12:45:47.716Z · LW(p) · GW(p)

Yes, I'm willing to concede the possibility that I could be using words in unclear ways and that may lead to problems.

I am interested, though, in how you define a rational decision, if not in terms of which decision leads to the better outcome?

Replies from: Emile, RichardChappell
comment by Emile · 2010-08-17T12:50:29.436Z · LW(p) · GW(p)

Maybe the focus shouldn't be on the decision (or action) that leads to the best outcome, but on the decision procedure (or theory or algorithm) that leads to the best outcome.

If the outcome is entirely independent of the procedure, the difference is unimportant, so you can speak of "rational decision" and "rational decision procedure" interchangeably. But in Newcomb's problem, that's not the case.

Replies from: None
comment by [deleted] · 2010-08-17T12:52:21.342Z · LW(p) · GW(p)

Yes, that's my basic view.

The difficulty in part is that people seem to have different ideas of what it means to be rational.

comment by RichardChappell · 2010-08-17T13:11:19.221Z · LW(p) · GW(p)

I am interested, though, in how you define a rational decision, if not in terms of which decision leads to the better outcome?

That sounds fine to me. (Well, technically I think it's a primitive concept, but that's not important here.) It's applying the term 'rational' to decision theories that I found ambiguous in the way noted.

Replies from: None
comment by [deleted] · 2010-08-17T13:38:18.325Z · LW(p) · GW(p)

Which means that one boxing is the better choice because it leads to the better outcome. I say that slightly tongue in cheek because I know you know that but, at the same time, I don't really understand the position that says:

1.) The rational decision is the one that leads to the better outcome. 2.) In Newcomb's Problem one boxing would actually lead to the better outcome. 3.) But the principle of strong dominance suggests that this shouldn't be the case.

I don't understand how 3, a statement about how things should be, outweighs 2, a statement about how things are.

It seems like the sensible thing to do is say, well, due to point 2, one boxing does lead to the better outcome. Due to point 1, this means one boxing is rational. A side note of this is that strong dominance must not be a rational way of making decisions (at least not in all cases).

Replies from: RichardChappell
comment by RichardChappell · 2010-08-18T01:40:23.517Z · LW(p) · GW(p)

No, the choice of one-boxing doesn't lead to the better outcome. It's one's prior possession of the disposition to one-box that leads to the good outcome. It would be best of all to have the general one-boxing disposition and yet (somehow, perhaps flukily) manage to choose both boxes.

(Compare Parfit's case. Ignoring threats doesn't lead to better outcomes. It's merely the possession of the disposition that does so.)

Replies from: None, None
comment by [deleted] · 2010-08-22T08:01:38.097Z · LW(p) · GW(p)

Okay, so your dispositions are basically the counterfactual "If A occurred then I would do B" and your choice, C, is what you actually do when A occurs.

In the perfect predictor version of Newcomb's, Omega predicts perfectly the choice you make, not your disposition. It may generate its own counterfactual for this ("If A occurs then this person will do B") but that's not to say it cares about your disposition just because the two counterfactuals look similar. Because Omega's prediction of C is perfect, if a stray bolt of lightning hits you and switches your decision, Omega will have taken that lightning into account. You will always be sad if something changes your choice, C, to two boxing, because Omega perfectly predicts C and so will punish you.

Conversely, the rational disposition in Newcomb's isn't to one box. Instead, your disposition has no bearing on Newcomb's except insofar as it is related to C (if you always act in line with your dispositions, for example, then your dispositions matter). It isn't a disposition to one box that leads to Omega loading the boxes a certain way, it's a choice to one box, so your disposition neither helps nor hinders you.

As such, your choice of whether to one or two box is what is relevant. And hence, the choice of one boxing is what leads to the better outcome. Your disposition to one box plays no role whatsoever. Hence, based on the utility-maximising definition of rationality, the rational choice is to one box, because it's this choice itself that leads to the boxes being loaded in a certain way (note on causality at the bottom of the post).

So to restate it in the terms used in the above comments: a prior possession of the disposition to one-box is irrelevant to Newcomb's because Omega is interested in your choices, not your dispositions to choose, and is perfect at predicting your choices, not your dispositions. Flukily choosing two boxes would be bad because Omega would have perfectly predicted the fluky choice and so you would end up losing.

It seems like dispositions distract from the issue here because as humans we think "Omega must use dispositions to predict the choice." But that need not be true. In fact, if dispositions and choices can differ (by a fluke, for example), then it can't be true: Omega can't just be using dispositions to predict choices. It simply predicts the choice using whatever means work.
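Here's a toy sketch of that reading of the problem (the payoffs and the "lightning" probability are illustrative assumptions): Omega models the whole causal process that produces the choice, glitch included, so prediction and choice never come apart and fluky two-boxing never collects $1,001,000.

```python
import random

def final_choice(disposition: str, lightning: bool) -> str:
    """A freak event flips whatever the disposition would otherwise produce."""
    flipped = "two-box" if disposition == "one-box" else "one-box"
    return flipped if lightning else disposition

def play_newcomb(disposition: str) -> int:
    lightning = random.random() < 0.01            # rare glitch
    choice = final_choice(disposition, lightning)
    # Omega predicts the *choice*, by modelling the same process (glitch included).
    prediction = final_choice(disposition, lightning)
    box_a = 1_000_000 if prediction == "one-box" else 0
    return box_a + (1_000 if choice == "two-box" else 0)

# Whenever lightning makes you two-box, Omega saw it coming and left box A empty,
# so a one-boxing disposition never nets the $1,001,000 jackpot on this model.
print(sum(play_newcomb("one-box") for _ in range(10_000)) / 10_000)
```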

If you use disposition simply to mean the decision you would make before you actually make the decision, then you're denying one of the parts of the problem itself in order to solve it - you're denying that Omega is a perfect predictor of choices and suggesting it's only able to predict the way choices would be at a certain time, not the choice you actually make.

This can be extended to the imperfect predictor version of Newcomb's easily enough.

I'll grant you it leaves open the need for some causal explanation, but we can't simply retreat from difficult questions by suggesting that they're not really questions. I.e., we can't avoid needing to account for causality in Newcomb's by simply suggesting that Omega predicts by reading your dispositions rather than predicting C using whatever means it is that gets it right (i.e. taking your dispositions and then factoring in freak lightning strikes).

So far, everything I've said has been weakly defended so I'm interested to see whether this is any stronger or whether I'll be doing some more time thinking tomorrow.

comment by [deleted] · 2010-08-18T06:19:32.150Z · LW(p) · GW(p)

We're going in circles a little, aren't we (my fault, I'll grant). Okay, so there are two questions:

1.) Is it a rational choice to one box? Answer: No. 2.) Is it rational to have a disposition to one box? Answer: Yes.

As mentioned earlier, I think I'm more interested in creating a decision theory that wins than one that's rational. But let's say you are interested in a decision theory that captures rationality: it still seems arbitrary to say that the rationality of the choice is more important than the rationality of the decision theory. Yes, you could argue that choice is the domain of study for decision theory, but the number of decision theorists (outside of LW) that would one box suggests that other people have a different idea of what decision theory would be.

I guess my question is this: Is the whole debate over one or two boxing on Newcomb's just a disagreement over which question decision theory should be studying or are there people who use choice to mean the same thing that you do that think one boxing is the rational choice?

Replies from: Pavitra, RichardChappell, ocr-fork
comment by Pavitra · 2010-08-19T02:34:01.068Z · LW(p) · GW(p)

I don't understand the distinction between choosing to one-box and being the sort of person who chooses to one-box. Can you formalize that difference?

comment by RichardChappell · 2010-08-19T00:46:36.142Z · LW(p) · GW(p)

The latter, I think. (Otherwise, one-boxers would not really be disagreeing with two-boxers. We two-boxers already granted that one-boxing is the better disposition. So if they're merely aiming to construct a theory of desirable dispositions, rather than rational choice, then their claims would be utterly uncontroversial.)

comment by ocr-fork · 2010-08-18T06:27:16.394Z · LW(p) · GW(p)

I thought that debate was about free will.

comment by Vladimir_Nesov · 2010-08-17T19:44:34.223Z · LW(p) · GW(p)

For any given concept of "rational (action)" that's not defined as "(the action) arranging for the best expected winning", you can of course find a situation where that concept and winning are at odds. But if you define them to be the same, it's no longer possible. At that point, you can be taxed for being a given program and not another program (or for the fact that pi is less than 10, for that matter), something you don't control, but such a criterion won't be about the rationality of your decision-making, because it doesn't provide a suggestion one can theoretically use to improve one's performance. For example, "be less rational" would translate as "act in a way that leads to less winning", which is never desirable.

Note the distinction between Eliezer's quote and AdamBell's quote: Eliezer specifically talks about rationality of actions, a notion that's not vulnerable to taxing your algorithm, while AdamBell talks less concretely about "choosing decision theory", which is ambiguous between "choosing your program" (which is not possible), and choosing to self-improve (which is just a very special kind of action).

comment by RichardChappell · 2010-08-17T07:39:59.584Z · LW(p) · GW(p)

I'm curious about the downvotes. Do others disagree with me that Parfit's threat ignorer case (and the distinction it illustrates between evaluating dispositions and actions) is worth considering?

Replies from: FAWS, Will_Newsome, KrisC
comment by FAWS · 2010-08-17T08:44:25.204Z · LW(p) · GW(p)

You can't have a disposition to act in a certain way without counter-factually acting that way. You can't counter-factually act a certain way without actually acting that way in a situation indistinguishable from the counter-factual. What you seem to be talking about appears to be pretending to have a certain disposition (e.g. acting according to that disposition unless the stakes are really high and trying to hide that fact). In other words you are talking about signaling, and I don't think the decision theory discussions here have progressed far enough for complicating the matter by trying to incorporate a theory of signaling to be productive at this point.

(or perhaps you believe in magical acausal free will)

Replies from: RichardChappell, utilitymonster
comment by RichardChappell · 2010-08-17T09:27:05.850Z · LW(p) · GW(p)

No, neither. It's more the idea that certain identifiable dispositions needn't be 100% determinative. I may be disposed to X in C so long as I do X in a sufficiently high proportion of C-situations. But if (say) an unpredictable mental glitch leads me to do otherwise one day, that may well be all the better. My point is then that it would be a mistake to condemn this more-fortunate choice as "irrational", in such cases.

Replies from: FAWS
comment by FAWS · 2010-08-17T09:43:29.015Z · LW(p) · GW(p)

If it's due to a random glitch and not any qualities that you regard as part of defining who you are, I don't see how it could possibly be described as choice. Randomness is incompatible with sensibly defined choice (of course the act of deciding to leave something up to chance is itself a choice, but which of the possible outcomes actually comes about is not).

If your disposition is to flip a quantum coin in certain situations, that is a fact about your disposition. If your disposition is to decide differently in certain high-stakes situations, that also is a fact about your disposition. You may choose to try to hide such facts and pretend your disposition is simpler than it actually is, but that's a question of signaling, not of what your disposition is like. (Of course a disposition to signal in a certain way is also a disposition.)

Replies from: RichardChappell
comment by RichardChappell · 2010-08-17T13:04:46.674Z · LW(p) · GW(p)

It's an interesting question how to draw the line between chosen action and mere behaviour. If the "glitch" occurs at an early enough stage, and the subsequent causal process includes enough of my usual reasons-responsive mechanisms (and so isn't wildly contrary to my core values, etc.), then I don't see why the upshot couldn't, in principle, qualify as "my" choice -- even if it's rather surprising, at least to a casual observer, that I ended up acting contrary to my usual disposition.

Your second point involves the notion of a kind of totalizing, all things considered disposition, such that your total disposition + environmental stimuli strictly entails your response (modulo quantum complications). Granted, the kind of distinction I'm wanting to draw won't be applicable when we're talking about such total dispositions.

But there are cases where it is applicable. In particular, there are cases where everyone involved is less than omniscient (even about such local matters as the precise arrangement of matter in my head). They might have some fantastic knowledge -- e.g. they might know everything there is to know about my brain that can be captured using the language of ordinary folk psychology. This can include various important dispositional facts about me. But if folk psychology is too coarse-grained to capture my total disposition, then we need to distinguish (and separately evaluate) my coarse-grained dispositions from my actual actions.

Replies from: FAWS
comment by FAWS · 2010-08-17T13:51:37.891Z · LW(p) · GW(p)

It's an interesting question how to draw the line between chosen action and mere behaviour. If the "glitch" occurs at an early enough stage, and the subsequent causal process includes enough of my usual reasons-responsive mechanisms (and so isn't wildly contrary to my core values, etc.), then I don't see why the upshot couldn't, in principle, qualify as "my" choice -- even if it's rather surprising, at least to a casual observer, that I ended up acting contrary to my usual disposition.

If your normal decision-making apparatus continues to work afterwards, has the chance to compensate for the glitch, doesn't, and the glitch changes the result, then the decision would have to be almost exactly balanced in the counterfactual case without the glitch. How likely is that? And even so, it doesn't strike me as conceptually all that different from unconsciously incorporating a small random element in the decision-making process right from the start. In either case, the more important the random element, the less accurately the outcome is described as your choice, as far as I'm concerned (maybe some would define the random element as the real you, and not the parts that include your values, experiences, your reasoning ability and so on; or possibly argue that for mysterious reasons they are so conveniently entangled that they are somehow the same thing).

But there are cases where it is applicable. In particular, there are cases where everyone involved is less than omniscient (even about such local matters as the precise arrangement of matter in my head). They might have some fantastic knowledge -- e.g. they might know everything there is to know about my brain that can be captured using the language of ordinary folk psychology. This can include various important dispositional facts about me. But if folk psychology is too coarse-grained to capture my total disposition, then we need to distinguish (and separately evaluate) my coarse-grained dispositions from my actual actions.

But that's just a map-territory difference. If you use disposition as your word for "map of the decision-making process", of course that map will sometimes have inaccuracies. But calling the difference between map and territory "choice" strikes me as... well... it matches the absolutely crazy way some people think about free will, but is worse than useless. Unless you want to outlaw psychology because it's akin to slavery, trying to take away people's choice by understanding them, oh the horror!

Replies from: RichardChappell
comment by RichardChappell · 2010-08-18T02:02:24.932Z · LW(p) · GW(p)

calling the difference between map and territory "choice"

Eh? That's not what I'm doing. I'm pointing out that there's a respectable (coarse-grained) sense of 'disposition' (i.e. tendency) according to which one can have a disposition to X without this necessarily entailing that one will actually do X. (There's another sense of 'total disposition' where the entailment does hold. N.B. We make choices either way, but it only makes sense to separately evaluate choices from coarse-grained dispositions.)

I take these general dispositions to accurately correspond to real facts in the world -- they're just at a sufficiently high level of abstraction that they allow for various exceptions. (Ceteris paribus laws are not, just for that reason, "inaccurate".)

Replies from: Lightwave, FAWS
comment by Lightwave · 2010-08-18T09:21:46.024Z · LW(p) · GW(p)

My take on this is the following: it's easier to see what is meant by disposition if you look at it in terms of AI. Replace the human with an AI, replace "disposition" with "source code", and replace "change your disposition to do some action X" with "rewrite your source code so that it does action X". Of course it would still want to incorporate the probability of a glitch, as someone else already suggested.

If an AI running CDT expects to encounter a Newcomb-like problem, it would be rational for it to self-modify (in advance) to use a decision theory which one-boxes (i.e. the AI will change its disposition).
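A minimal sketch of that self-modification step, under toy assumptions (made-up payoffs, and a predictor that simply reads whatever program the agent is running at scan time); it isn't anyone's actual decision theory, just the "rewrite yourself before the scan" move:

```python
def newcomb_payoff(policy) -> int:
    """Toy Newcomb: the predictor reads the policy itself, so prediction always matches choice."""
    prediction = policy()
    box_a = 1_000_000 if prediction == "one-box" else 0
    choice = policy()
    return box_a + (1_000 if choice == "two-box" else 0)

cdt_policy = lambda: "two-box"        # what CDT recommends at the moment of choice
one_box_policy = lambda: "one-box"    # the candidate replacement "source code"

# Before the scan, the agent keeps whichever program it expects to earn more.
chosen = max([cdt_policy, one_box_policy], key=newcomb_payoff)
print(chosen(), newcomb_payoff(chosen))   # -> one-box 1000000
```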

Replies from: RichardChappell
comment by RichardChappell · 2010-08-19T00:40:44.631Z · LW(p) · GW(p)

Likewise, an AI surrounded by threat-fulfillers would rationally self-modify to become a threat-ignorer. (The debate is not about whether these are desirable dispositions to acquire -- that's common ground.) Do you think it follows from this that the act of ignoring a doomsday threat is also rational?

comment by FAWS · 2010-08-18T07:21:02.479Z · LW(p) · GW(p)

But you use disposition as a word for the map, right? Otherwise why would you have mentioned folk psychology? If so, talking about disposition in games involving other players is talking about signaling.

If not, what would it even mean to act contrary to one's disposition? That there exists a possible coarse-grained model of one's decision-making process that predicts a majority of one's actions (where is the cutoff? 50%? 90%?), but doesn't predict that particular action? How do you know that's not the case for most actions? Or that the mathematically simplest model of one's decision-making process that predicts a high enough percentage of one's actions doesn't predict that particular action?

Replies from: RichardChappell
comment by RichardChappell · 2010-08-19T01:33:50.248Z · LW(p) · GW(p)

No, I referenced folk psychology just to give a sense of the appropriate level of abstraction. I assume that beliefs and desires (etc.) correspond to real (albeit coarse-grained) patterns in people's brains, and so in that sense concern the 'territory' and not just the 'map'. But I take it that these are also not exhaustive of one's total disposition -- human brains also contain a fair bit of 'noise' that the above descriptions fail to capture.

Regardless, this isn't anything to do with signalling, since there's no possibility of manipulated or false belief: it's stipulated that your standing beliefs, desires, etc. are all completely transparent. (And we may also stipulate, in a particular case, that the remaining 'noise' is not something that the agents involved have any changing beliefs about. Let's just say it's common knowledge that the noise leads to unpredictable outcomes in a very small fraction of cases. But don't think of it as the agent building randomness into their source code -- as that would presumably have a folk-psychological analogue. It's more a matter of the firmware being a little unreliable at carrying out the program.)

The upshot, as I see things, is as follows: the vast majority of people who "win" at Newcomb's will be one-boxers. After all, it's precisely the disposition to one-box that is being rewarded. But the predictor (in the variation I'm considering) is not totally omniscient: she can accurately see the patterns in people's brains that correspond to various folk-psychological attributes (beliefs, desires, etc.), but is sometimes confounded by the remaining 'noise'. So it's compatible with having a one-boxing disposition (in the specified sense) that one go on to choose two boxes. And an individual who does this gains the most of all.

(Though obviously one couldn't plan on winning this way, or their disposition would be for two-boxing. But if they have an unexpected and unpredictable 'change of heart' at the moment of decision, my claim is that the resulting decision to two-box is more rather than less rational.)

Replies from: FAWS
comment by FAWS · 2010-08-19T05:35:16.350Z · LW(p) · GW(p)

I still don't see how statements about disposition in your sense are supposed to have an objective truth value (what does someone look like 'visually simplified'?), and why you think this disposition is supposed to correlate better with people's predictions about decisions than the non-random component of the decision-making process (total disposition) does (or why you think this concept is useful if it doesn't), but I suspect discussing this further won't lead anywhere.

Let's try leaving the disposition discussion aside for a moment: You are postulating a scenario where someone spontaneously changes from a one-boxer into a two-boxer after the predictor has already made the prediction, just long enough to open the right hand box and collect the $1000. Is that right? And the question is whether I should regret not being able to change myself back into a one boxer in time to refuse the $1000?

Obviously if my behavior in this case was completely uncorrelated to the odds of finding the $1,000,000 box empty I should not. But the normal assumption for cases where your behavior is unpredictable (e.g. when you are using a quantum coin) is that P(two-box) = P(left box empty). Otherwise I would try to contrive to one-box with a probability of just over 0.5. So the details depend on P.

If P>0.001 (I'm assuming constant utility per dollar, which is unrealistic) my expected dollars before opening the left box have been reduced, and I bitterly regret my temporary lapse from sanity since it might have cost me $1,000,000. The rationale is the same as in the normal Newcomb problem.

If P<0.001 my expected dollars right at that point have increased, and according to some possible decision theories that one-box I should not regret the spontaneous change, since I already know I was lucky. But nevertheless my overall expected payoff in all branches is lower than it would be if temporary lapses like that were not possible. Since I'm a Counterfactual muggee I regret not being able to prevent the two-boxing, but am happy enough with the outcome for that particular instance of me.
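The 0.001 cutoff falls straight out of the expected-dollar arithmetic. A minimal sketch under the assumptions stated above (constant utility per dollar, and P being the probability the $1,000,000 box is empty given the lapse to two-boxing):

```python
def expected_dollars_with_lapse(p_empty: float) -> float:
    """You two-box; with probability p_empty the predictor anticipated it and left the big box empty."""
    return p_empty * 1_000 + (1 - p_empty) * 1_001_000

RELIABLE_ONE_BOXER = 1_000_000  # big box full, you take only it

for p in (0.01, 0.001, 0.0001):
    lapse = expected_dollars_with_lapse(p)
    print(f"P={p}: {lapse:,.0f} with the lapse vs {RELIABLE_ONE_BOXER:,} as a reliable one-boxer")
# 0.01 -> 991,000 (worse); 0.001 -> 1,000,000 (break-even); 0.0001 -> 1,000,900 (better).
# The lapse only helps when P < $1,000 / $1,000,000 = 0.001, which is the cutoff used above.
```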

comment by utilitymonster · 2010-08-17T14:08:19.910Z · LW(p) · GW(p)

You can't have a disposition to act in a certain way without counter-factually acting that way. You can't counter-factually act a certain way without actually acting that way in a situation indistinguishable from the counter-factual.

What is the relevance of this? Are you using this argument? (See comment above.)*

  1. It is impossible to have the one-boxing disposition and then two-box.
  2. Ought implies can.
  3. Therefore, it is false that someone with a one-boxing disposition ought to two-box.

If that isn't your argument, what is the force of the quoted text?

At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently. (It will entail, for instance, NOT[people who have a decisive loss aversion disposition should follow expected utility theory].)

Notice that an analogous argument also cuts the other way:

  1. It is impossible for someone with the two-boxing disposition to one-box.
  2. Ought implies can.
  3. Therefore, it is false that someone with the two-boxing disposition ought to one box.

*I made a similar comment above, but I don't know how to link to it. Help appreciated.

Replies from: TobyBartels, FAWS
comment by TobyBartels · 2010-08-18T06:36:45.317Z · LW(p) · GW(p)

I made a similar comment above, but I don't know how to link to it. Help appreciated.

Type

(See [comment above](http://lesswrong.com/lw/2lg/desirable_dispositions_and_rational_actions/2gg4?c=1).)

to get

(See comment above.)

(I got the URI http://lesswrong.com/lw/2lg/desirable_dispositions_and_rational_actions/2gg4?c=1 from the Permalink on your comment above.)

Replies from: utilitymonster
comment by utilitymonster · 2010-08-18T07:14:59.489Z · LW(p) · GW(p)

Thanks.

comment by FAWS · 2010-08-17T15:12:00.103Z · LW(p) · GW(p)

Making a decision means discovering your disposition (if we are using that word; we could call it something else if that avoids terminology confusion - what I mean is the non-random element of how you react to a specific input) with respect to a certain action. In a certain sense you are your dispositions, and everything else is just meaningless extras (that is, your values, experiences, non-value preferences, reasoning ability etc. collectively form your dispositions and are part of them). Controlling your dispositions is how you control your actions. And your dispositions are what is doing that controlling. Making a choice between A and B doesn't mean letting disposition a and disposition b fight and pick a winner; it means that your preferences regarding A and B are the cause of your disposition being what it is. You can change your disposition vs act X in the sense that your disposition vs any X before time t is Y and your disposition for any X after t is Z, but not in the sense that you can change your disposition vs X at time t from Y to Z. Whatever you actually do (modulo randomness) at time t, that's your one and only disposition vs X at time t.

Assume you prefer red to blue, but more strongly prefer cubes to spheres. When given the choice between a red sphere and a blue cube, and only one of them, you can't just pick a red cube. And it's not the case that you ought to pick red once you already have the cube; that's just nonsense. The problem is more than just impossibility.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T15:54:34.165Z · LW(p) · GW(p)

Whatever you actually do (modulo randomness) at time t, that's your one and only disposition vs X at time t.

Okay, I understand how you use the word "disposition" now. This is not the way I was using the word, but I don't think that is relevant to our disagreement. I hereby resolve to use the phrase "disposition to A" in the same way as you for the rest of our conversation.

I still don't understand how this point suggests that people with one-boxing dispositions ought not to two-box. I can only understand it in one way: as in the argument in my original reply to you. But that argument form leads to this absurd conclusion:

(a) whenever you have a disposition to A and you do A, it is false that you ought to have done something else

In particular, it backfires for the intended argumentative purpose, since it entails that two-boxers shouldn't one-box.

Replies from: FAWS
comment by FAWS · 2010-08-17T16:05:06.723Z · LW(p) · GW(p)

No, when you have disposition a and do A it may be the case that you ought to have disposition b and do B; perhaps disposition a was formed by habit and disposition b would counter-factually have resulted if the disposition had formed on the basis of likely effects and your preferences. What is false is that you ought to have disposition a and do B.

Replies from: utilitymonster
comment by utilitymonster · 2010-08-17T16:17:59.481Z · LW(p) · GW(p)

What is false is that you ought to have disposition a and do B.

OK. So the argument is this one:

  1. According to two-boxers, you ought to (i) have the disposition to one-box, and (ii) take two boxes.
  2. It is impossible to do (i) and (ii).
  3. Ought implies can.
  4. So two-boxers are wrong.

But, on your use of "disposition", two-boxers reject 1. They do not believe that you should have a FAWS-disposition to one-box, since having a FAWS-disposition to one-box just means "actually taking one box, where this is not a result of randomness". Two-boxers think you should non-randomly choose to take two boxes.

ETA: Some two-boxers may hesitate to agree that you "ought to have a disposition to one-box", even in the philosopher's sense of "disposition". This is because they might want "ought" to only apply to actions; such people would, at most, agree that you ought to make yourself a one-boxer.

Replies from: FAWS
comment by FAWS · 2010-08-17T16:53:10.391Z · LW(p) · GW(p)

From the original post:

Rachel does not envy Irene her choice at all. What she wishes is to have the one-boxer's dispositions, so that the predictor puts a million in the first box, and then to confound all expectations by unpredictably choosing both boxes and reaping the most riches possible.

Richard is probably using disposition in a different sense (possibly the model someone has of someone's disposition in my sense) but I believe Eliezer's usage was closer to mine, and either way disposition in my sense is what she would need to actually get the million dollars.

comment by Will_Newsome · 2010-08-17T08:24:20.705Z · LW(p) · GW(p)

It's definitely worth considering; but it seems intuitively clear at least that having the disposition of negotiating with counterfactual terrorists tends to lead to much greater utility loss than being screwed over now and again by terrorists who are mindlessly destructive irrespective of any gains they could make. I'm not sure exactly what argument would lead one to believe that such mindless terrorists are rare; something like Omohundro's basic AI drives might indicate that Bayesian utility-maximizing superintelligences are unlikely to be stubbornly destructive at any rate.

(By the way, I like your blog, and am glad to see you posting here on Less Wrong.)

comment by KrisC · 2010-08-17T07:57:03.723Z · LW(p) · GW(p)

It appears to be typical Cold War reasoning, nothing new. Was MAD rational? Perhaps not in the idealized world of abstraction, but many real-world situations rely on both sides maintaining a feeling of control by threatening each other.

Maybe there is more to the Parfit case? It seems very easy to model and to involve typical day-to-day types of threat assessment.

comment by Oligopsony · 2010-08-17T04:38:41.425Z · LW(p) · GW(p)

In most of these cases we can distinguish further: what is rational is to act in a certain way and to have a certain reputation. This has the benefit of being more airtight - one can argue for a logical relationship between disposition and action. (In Newcomb, the existence of an omniscient agent makes them all equivalent, but weird assumptions lead to weird conclusions.)

comment by cousin_it · 2010-08-17T10:46:31.352Z · LW(p) · GW(p)

Your discussion of the threat game is utterly dissolved by game theory. The game between Tom and Joe has a mixed Nash equilibrium where both make some sort of "probabilistic precommitments", and neither can improve their outcome by changing their "disposition" while taking the other's "disposition" as given.
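A quick numerical check of that kind of claim, with made-up payoffs for the Tom/Joe game (the numbers and the particular equilibria below are illustrative assumptions, not anything from the thread); a profile of mixed strategies is a Nash equilibrium exactly when neither player can gain by deviating to one of their pure strategies:

```python
import numpy as np

# Assumed payoffs: rows = Tom (threaten-and-fulfill, stay quiet), cols = Joe (shine shoes, ignore).
TOM = np.array([[ 1.0, -10.0],
                [ 0.0,   0.0]])
JOE = np.array([[-1.0, -10.0],
                [ 0.0,   0.0]])

def is_equilibrium(p_threaten: float, q_comply: float, tol: float = 1e-9) -> bool:
    """True if neither player can profit by switching to a pure strategy."""
    tom = np.array([p_threaten, 1 - p_threaten])
    joe = np.array([q_comply, 1 - q_comply])
    return (tom @ TOM @ joe >= (TOM @ joe).max() - tol
            and tom @ JOE @ joe >= (tom @ JOE).max() - tol)

# Pure 'threaten/comply', pure 'quiet/ignore', and a profile where Joe randomises his
# "disposition" enough that threatening no longer pays all check out as equilibria here.
for p, q in [(1.0, 1.0), (0.0, 0.0), (0.0, 0.5)]:
    print((p, q), is_equilibrium(p, q))
```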

comment by Psychohistorian · 2010-08-17T23:47:21.432Z · LW(p) · GW(p)

I've been tinkering with the idea of making a top level post on this issue, but figured it would get excessively downvoted. So I'll risk it here.

For any decision theory, isn't there some hypothetical where Omega can say, "I've analyzed your decision theory, and I'm giving you proposition X, such that if you act the way your decision theory believes is optimal, you will lose?" The "Omega scans your brain and tortures you if you're too rational" would be an obvious example of this.

Designing a decision theory around any such problem seems relatively trivial. Recognizing when such a proposition is actually legitimate, on the other hand, seems virtually if not actually impossible. In other words, the evidence one would need about Omega's predictive capacity and honesty is quite staggering. Absent that evidence, you should always two-box. The Counterfactual mugging is even more problematic; the relative chances of running into a trickster versus an honest entity in those circumstances are probably so large that, to the human mind, they may as well be infinite.

If this is correct, then designing an agent to be able to accommodate Newcomb's or the Counterfactual mugging would actually be a reduction in its rationality. These events are so phenomenally unlikely to occur that actually executing the behaviour specified for them would almost certainly be a misfiring. The entity would be better off losing the 1/3^^^3 times when it actually encounters Newcomb's, and winning the remaining ~100% of the time.

In other words, for want of a better term, much of the discussion of decision theory seems masturbatory. You have an existing system. Someone thinks of how to create a problem for your existing system. Someone solves said problem. Someone thinks of a new problem. Repeat ad infinitum. The marginal cases of something like Newcomb's so thoroughly lack any practical consequence as to be wholly irrelevant for any actual entity that needs to make decisions.

I'm entirely open to the idea that I'm wrong and that Newcomblike problems occur, or that maybe there is some uber-decision theory that can never be broken by Omega. But if neither of those conditions are satisfied, this seems like something of a waste of mental effort. Of course, if it's fun to discuss despite being essentially useless, that's cool. It's just best not to pretend otherwise.

Replies from: Caspian, Sniffnoy
comment by Caspian · 2010-08-22T14:23:04.342Z · LW(p) · GW(p)

I think of Omega as a simplified stand-in for other people.

The part about Omega being omniscient and knowably trustworthy isn't solved. But I think the problem of Omega rewarding bizarre irrational behaviour on your part mostly goes away if you assume it's fairly human-like, perhaps following UDT or some other decision theory itself. The human motivation for it posing Newcomb's problem could be that it wants one of the boxes kept closed for some reason, and will reward you for keeping it closed. To make it fit this explanation, Omega should say it doesn't want you to open the box, and preferably give a reason.

Kinds of things the human-like Omega might do:

  • trust you or not based on its prediction of your behaviour.
  • prefer you to be rewarded if you act how it wants.
  • prefer you be punished if you harm it.
  • tell you what it wants of you.

But it should be less likely to reward you for acting irrationally for no reason, or for doing what it wants you not to do.

comment by Sniffnoy · 2010-08-18T00:59:44.232Z · LW(p) · GW(p)

For any decision theory, isn't there some hypothetical where Omega can say, "I've analyzed your decision theory, and I'm giving you proposition X, such that if you act the way your decision theory believes is optimal, you will lose?" The "Omega scans your brain and tortures you if you're too rational" would be an obvious example of this.

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it. Of course this is not necessarily a realistic assumption, but that is, IINM, the problem they're trying to solve.

Replies from: ocr-fork, Psychohistorian
comment by ocr-fork · 2010-08-18T06:22:28.454Z · LW(p) · GW(p)

This isn't obvious. In particular, note that your "obvious example" violates the basic assumption all these attempts at a decision theory are using, that the payoff depends only on your choice and not how you arrived at it.

Omega simulates you in a variety of scenarios. If you consistently make rational decisions he tortures you.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-18T23:47:15.885Z · LW(p) · GW(p)

My reply to this was going to be essentially the same as my comment on bentarm's thread, so I'll just point you there.

comment by Psychohistorian · 2010-08-18T01:33:21.325Z · LW(p) · GW(p)

That does make it somewhat more useful, if that's the constraint under which it's operating. It still strikes me as probable that, insofar as decision theory A+ makes decisions that theory A- does not, there must be some way to reward A- and punish A+. I may well be wrong about this. The other flaw, namely that actual decision makers do not encounter omniscient entities with entirely inscrutable motives that are unwaveringly honest, still seems to render the pursuit futile. It's decidedly less futile if Omega is constrained to outcome-based reward/punishment.