Ordinary and unordinary decision theory

post by JonasMoss · 2022-03-02T11:39:30.522Z · LW · GW · 7 comments

Contents

  Ordinary decision theory
  Newcomb's problem and incompatibility between descriptions
  Counterfactual game-theory
  When the size of the action space affects Ua
7 comments

There are a couple of paradoxes in decision theory, typically involving beings making perfect predictions, copies of yourself, and similar shenanigans. If you're unfamiliar with all of this, take a look at Newcomb's problem [? · GW]. Attempted resolutions of these paradoxes usually involve either the construction of complex decision theories (such as FDT [? · GW] ) or copious hand-waving and informal arguments (the philosophy literature on the subject). 

This post has two messages.

  1. Most decision problems, including paradoxes, are ordinary decision problems.
  2. Unordinary decision problems can be found by testing with Jensen's inequality. These typically fall into two categories:
    1. Counterfactual game-theory and
    2. Precommitment problems, when the size of the action space affects the utility.

I have three guiding principles.

  1. Formulate your problems mathematically. Make sure all your quantities are defined. I'm not talking about complex math here. I'm just talking about doing the work. Sometimes it's not possible to formalize the problem in a way that keeps every aspect of the original, informal project description. And that's ok!
  2. Ignore useless information. Find the minimal description of your problem and solve it.
  3. Take action sets seriously. If a is an allowed action, you can do a, even if it is irrational.

Ordinary decision theory

We're dealing with a rational actor, presumably you, an action space A, and a set of random utilities {U_a : a ∈ A}. I take the action set seriously – it is possible to choose any of the actions in the set. We know the distribution of U_a for every a ∈ A. Let's say that an act is rational if it maximizes the expected utility E(U_a). We'll assume that max_{a ∈ A} E(U_a) exists. (Defining a rational act is hard when a maximum does not exist – for instance, when A = {1, 2, 3, ...} and the act a gives deterministic utility 1 - 1/a – and even harder when the utility function is discontinuous at infinity. See "Satan's apple".)

You may want to pause for a second or two to figure out if you agree with this notion of a rational act. Is it the usual definition? Is it strange in any way?

Let's take a look at a simple example. Suppose you're given the option to flip either (A) a biased coin with probability p of heads or (B) a biased coin with probability q of heads, where p > q ≥ 1/2. In either case, you can bet $1 on either tails or heads, winning $2 if you're right and losing your $1 if you're wrong. Your action space is

A = {(A, heads), (A, tails), (B, heads), (B, tails)}.

The expected utilities are

E(U_(A, heads)) = 3p - 1,  E(U_(A, tails)) = 2 - 3p,
E(U_(B, heads)) = 3q - 1,  E(U_(B, tails)) = 2 - 3q.

Since p is the largest of p, 1 - p, q, and 1 - q, choosing coin A and heads is, unsurprisingly, the rational act. So far, so good.
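The bookkeeping here can be sketched in a few lines. The concrete probabilities p = 0.6 and q = 0.5 below are my own illustrative choices, not part of the problem statement:

```python
# Hypothetical concrete values for the coin example (assumptions, not from the post).
p, q = 0.6, 0.5  # P(heads) for coin A and coin B

def expected_utility(prob_right):
    """Expected utility of betting $1: win $2 if right, lose $1 if wrong."""
    return 2 * prob_right - 1 * (1 - prob_right)

# The action space: which coin to flip, and which side to bet on.
actions = {
    ("A", "heads"): expected_utility(p),
    ("A", "tails"): expected_utility(1 - p),
    ("B", "heads"): expected_utility(q),
    ("B", "tails"): expected_utility(1 - q),
}

# The rational act maximizes expected utility over the action set.
rational_act = max(actions, key=actions.get)
print(rational_act, round(actions[rational_act], 2))
```

With these values, betting heads on coin A yields expected utility 3(0.6) - 1 = 0.8, the maximum over the four actions.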

Ordinary decision theory wants you to choose between different random variables U_a, not just between different acts a and a fixed source of randomness X. Usually, you would be restricted to acting within the context of a single random variable, as in the von Neumann–Morgenstern utility theorem, where we maximize E[u(a, X)]. This distinction usually doesn't matter mathematically, as it's trivial to transform a u(a, X) problem into a U_a problem. It does matter for interpretation, though!

Now take note of the following:

  1. The action a isn't a random variable, hence evidential decision theory cannot be used – at least not without adding more information to the problem. Even if you did decide to add more evidential information to the problem, it wouldn't change the rational act, as it's already defined without it. Adding extraneous information about the population distribution makes the decision problem an âge du capitaine problem (a problem padded with numbers irrelevant to the question asked).
  2. There's no causal structure inherent in a utility maximization problem. There's no way to use statistical counterfactuals, do-calculus, possible world semantics, or whatever your favorite description of causal decision theory uses. Hence causal decision theory cannot be used either. Again, you might add causal information to the problem, but the rational act is already defined in terms of the information we have. Adding causal information won't help you be rational.
  3. Ordinary decision theory is timeless. As is the case with causality, there is no notion of time embedded into the decision problem (A, {U_a}). (Unless you're dealing with a dynamic decision problem.)

Not every problem can be formulated as an ordinary decision problem, either because it is inherently game-theoretical (counterfactual mugging, Psy-kosh's non-anthropic problem) or because it encodes more than a single ordinary decision problem (opaque Newcomb).

Newcomb's problem and incompatibility between descriptions

Consider the following decision problem. You have two acts, a and b. The utilities are deterministic and equal to U_a = $1,000,000 and U_b = $1,000. Which act do you choose? Using the definition of a rational act above, you choose a. This is utterly uncontroversial, isn't it? It turns out it isn't, as adding mathematically irrelevant information to the problem makes the majority of philosophers choose b!

Now, let's add Newcomb's description to the problem, so that a corresponds to one-boxing and b to two-boxing. The mathematical formulation of Newcomb's problem coincides with the setup above: “You have two acts, a and b. The utilities are deterministic, and are equal to U_a = $1,000,000 and U_b = $1,000.” This information was sufficient for a to be rational – adding more information can't change that.
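As a sketch, with the standard Newcomb payoffs, the ordinary formulation really is this small; the causal story about boxes and predictors never enters the computation:

```python
# Deterministic Newcomb as an ordinary decision problem:
# two acts with known utilities, and nothing else.
utilities = {"one-box": 1_000_000, "two-box": 1_000}

# The rational act is simply the argmax over the action set.
rational_act = max(utilities, key=utilities.get)
print(rational_act)
```

Anything we add on top of this table is, mathematically speaking, irrelevant to which act is rational.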

Newcomb's problem feels like a significant problem because it involves a clash between intuitions: on the one hand, you wish to reason causally, which leads to the incorrect application of dominance analysis. On the other hand, you want to do the rational act.

I can see two resolutions:

  (a) You admit that the mathematical description of Newcomb's problem does not match its informal causal structure, hence dominance analysis (or any other kind of causal reasoning) cannot be applied to solve it.
  (b) You accept that an agent can be predicted perfectly.

My impression is that (b) is the favored resolution to Newcomb's problem on LessWrong. While that's an OK resolution, I dislike it. It (informally!) implies that you cannot choose just any action from your action set. As I said, I take action sets seriously (and so should you). The resolution also cannot be generalized to situations with proper randomness involved, as Omega cannot predict your choice if you have access to a source of randomness that he doesn't; yet it's mathematically 100% OK to formulate Newcomb's problem with mixed strategies instead of pure ones. Resolution (a) is better. No one is claiming that Newcomb's problem is possible. It's not possible if there's a true source of randomness that you can observe but Omega can't. The trick is to see through your intuitions, write the problem down, and solve it.

(Newcomb's problem is sometimes used as a short morality tale, as in "LOL! Don't you know the human brain is a machine and can be predicted?!?" Some people have not gotten that message, but I don't think the confusion around the problem is primarily about the possibility of powerful predictors.)

Is causal decision theory useful? Yes, as a device for constructing decision-theoretic problems, but not as a solution method. Writing down small causal graphs helps you think more clearly, but can't ever change the fundamental idea of choosing the act that maximizes expected utility. Causal decision theory is ordinary decision theory – but only if the causal description matches the mathematical description of the problem. In these cases, causal decision theory will add irrelevant, but not misleading, information.

The same cannot be said for evidential decision theory, whose added information is actively misleading (think about, e.g., the smoking lesion).

I insist on using the notation U_a instead of u(a, X), as it clarifies what we are dealing with. You could write down Newcomb's problem using the notation u(a, X), with a complicated enough utility function u and outcome variable X. To write down the deterministic version using u(a, X), we could, for instance, let X be constant and define u(a, X) equal to $1,000,000 if a = one-box and $1,000 otherwise. That's pretty easy, but notice that X is not equal to the amount of money left in the boxes, which we would expect. Now, take a look at a slightly modified Newcomb's problem, where Omega predicts your action with 90% accuracy. Then

E(U_one-box) = 0.9 × 1,000,000 + 0.1 × 0 = 900,000,
E(U_two-box) = 0.9 × 1,000 + 0.1 × 1,001,000 = 101,000.

To write down u(a, X), we can let X be the random variable that equals 1 (Omega predicted correctly) with probability 0.9 and 0 with probability 0.1. Then we define

u(one-box, 1) = 1,000,000,  u(one-box, 0) = 0,
u(two-box, 1) = 1,000,      u(two-box, 0) = 1,001,000.

This formulation is unsatisfying, as X isn't the natural outcome (did Omega put money in the opaque box?). That's why I advocate for U_a.
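A small sketch can make the two notations concrete; it assumes the standard $1,000,000 / $1,000 payoffs and 90% prediction accuracy, and the dictionary encoding is just one possible way to write the functions down:

```python
# U_a notation: each act indexes a distribution over utilities,
# given as (probability, utility) pairs.
U = {
    "one-box": [(0.9, 1_000_000), (0.1, 0)],
    "two-box": [(0.9, 1_000), (0.1, 1_001_000)],
}
EU = {a: sum(p * u for p, u in dist) for a, dist in U.items()}

# u(a, X) notation: X = 1 means "Omega predicted correctly" (probability 0.9).
def u(a, x):
    table = {("one-box", 1): 1_000_000, ("one-box", 0): 0,
             ("two-box", 1): 1_000, ("two-box", 0): 1_001_000}
    return table[(a, x)]

EU2 = {a: 0.9 * u(a, 1) + 0.1 * u(a, 0) for a in ("one-box", "two-box")}

# Both notations yield the same expected utilities; only the interpretation differs.
assert EU == EU2
print(EU)
```

The computation is identical either way; the complaint in the text is about interpretation, since x tells you whether the prediction was correct rather than what is in the boxes.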

Counterfactual game-theory

It's a consequence of Jensen's inequality that, for any random variable X,

E[max_a E(U_a | X)] ≥ max_a E(U_a).
In other words, you cannot expect to lose expected utility by gaining information. A couple of decision-theoretic paradoxes ostensibly violate Jensen's inequality. That is a big deal, since it implies that we are either (i) dealing with an inconsistent decision problem or (ii) dealing with a problem that cannot be formulated as an ordinary decision problem. After all, Jensen's inequality is a theorem.
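Here is a minimal numerical check of this value-of-information inequality on a made-up two-act, binary-signal example (the payoff numbers are mine, chosen only so that the signal is informative):

```python
# Toy check of E[max_a E(U_a | X)] >= max_a E(U_a).
p_x = {0: 0.5, 1: 0.5}  # distribution of the signal X

# Conditional expected utilities E(U_a | X = x): each act is good in one state.
cond_EU = {"a1": {0: 10.0, 1: -5.0},
           "a2": {0: -5.0, 1: 10.0}}

# Right-hand side: pick the best act without seeing X.
rhs = max(sum(p_x[x] * cond_EU[a][x] for x in p_x) for a in cond_EU)

# Left-hand side: see X first, then pick the best act, averaged over X.
lhs = sum(p_x[x] * max(cond_EU[a][x] for a in cond_EU) for x in p_x)

print(lhs, rhs)
assert lhs >= rhs  # Jensen's inequality, here 10.0 >= 2.5
```

Observing X is worth 7.5 units of expected utility in this example; the inequality says that number can never be negative for an ordinary decision problem.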

Recall counterfactual mugging [? · GW]

Omega, a perfect predictor, flips a coin. If it comes up tails Omega asks you for $100. If it comes up heads, Omega pays you $10,000 if it predicts that you would have paid if it had come up tails.

Here's an intuitive analysis of this problem. If you could force yourself to the strategy of paying when the coin comes up tails, then

E(U_pay) = 0.5 × 10,000 + 0.5 × (-100) = 4,950.

On the other hand, refusing to pay is the optimal action conditioned on tails, which yields a utility of 0. But using this strategy makes

E(U_not pay) = 0.5 × 0 + 0.5 × 0 = 0,

which appears to imply that you lose money, in expectation, by gaining information (the outcome of the coin flip). As Jensen's inequality is violated, counterfactual mugging isn't an ordinary decision-theoretic problem. What is it then?
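The two expected values can be checked with a couple of lines of arithmetic (utilities in dollars, following the $100 / $10,000 stakes in the quoted problem):

```python
# Commit to paying: on heads Omega predicts you'd pay and gives $10,000;
# on tails you hand over $100.
eu_commit = 0.5 * 10_000 + 0.5 * (-100)

# Decide after seeing the coin: on tails you refuse (locally optimal, +0),
# so on heads Omega predicts refusal and pays nothing.
eu_informed = 0.5 * 0 + 0.5 * 0

print(eu_commit, eu_informed)
```

The informed strategy nets $4,950 less in expectation, which is exactly the ostensible violation of Jensen's inequality discussed above.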

The most natural (one-shot) solution to this problem uses game theory, by letting you and your counterfactual self be the two players. The Nash equilibrium is “not pay”, so it does not optimize expected utility across both players (i.e., it's not Pareto optimal), but that's not exactly uncommon in game theory. Just as defection is rational in the Prisoner's dilemma, I see no problem with not paying in counterfactual mugging; your counterfactual self isn't you. (The counterfactual player does not make an action in counterfactual mugging, but he does in the counterfactual Prisoner's dilemma [LW · GW].)

Psy-kosh's non-anthropic problem [LW · GW] can also be analyzed through a game-theoretic lens to explain why coordinating has the highest payoff and is thus rational, provided it is an option. There is simply no reason to believe game-theoretic reasoning to be Pareto optimal, so it should be avoided if possible.

I haven't seen any detailed analyses of Psy-kosh's problem and similar "counterfactual" game-theoretic problems yet, and I don't believe they are fully solved. But I do believe counterfactual game-theory forms an interesting subset of unordinary decision problems. Developing a theory for counterfactual game theory could be worthwhile. Please tell me if you want to work on it (and like the approach of this post).

When the size of the action space affects U_a

Finally, we have paradoxical decision problems about precommitment. In transparent Newcomb, both boxes are transparent: the $1,000 box and the $1,000,000 box. Omega perfectly predicts your choice and puts $1,000,000 in the big box if he predicts one-boxing.

If you could remove the strategy two-box from the action set, then Omega would correctly predict one-box and put $1,000,000 in the box, so that U_one-box = $1,000,000. Since A = {one-box}, the decision problem is trivial.

Now consider the case when the action space isn't reduced, i.e., A = {one-box, two-box}. Since Omega predicts perfectly, he can't possibly place $1,000,000 in the box, as it would make it possible for you to two-box. (Mere possibility is enough. If you're given a choice, you're given a choice.) Thus we're left with two separate decision problems:

  Small: A = {one-box}, with U_one-box = $1,000,000.
  Big: A = {one-box, two-box}, with U_one-box = $0 and U_two-box = $1,000.
You can combine both problems into one:

Viewcomb: The set-up is the same as Newcomb, only now you can look inside the opaque box before making your decision. However, if you wish to make your decision straight away, without first looking in the box, you must pay a small fee. As before, the predictor predicted only whether you will one-box or two-box.

Here you're asked to choose between the small decision problem, yielding a guaranteed $1,000,000 minus the fee, and the big decision problem, yielding a guaranteed $1,000. In conclusion, you should make your decision straight away if the fee is smaller than $999,000.
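A sketch of the fee threshold, under this reading and the standard $1,000,000 / $1,000 payoffs (pay the fee and commit, or look first and end up in the big decision problem):

```python
# Viewcomb as a choice between two ordinary decision problems.
SMALL = 1_000_000  # decide straight away (pay the fee): guaranteed one-box payoff
BIG = 1_000        # look first: Omega leaves the big box empty, two-boxing nets $1,000

def decide(fee):
    """Pay the fee iff the small problem nets more than the big one."""
    return "pay fee" if SMALL - fee > BIG else "look first"

print(decide(500), decide(999_500))
```

The break-even point is a fee of $999,000, matching the conclusion in the text.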

A standard precommitment version of Viewcomb won't work. If you write a contract pledging to pay a penalty larger than $1,000 when two-boxing, it's always rational to one-box. But the (irrational) possibility of two-boxing is still there, and Omega, being a perfect predictor, cannot allow you to make that choice. Thus you're stuck in the large decision problem.

7 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2022-03-04T18:43:46.921Z · LW(p) · GW(p)

I guess I don't see what's so wrong with "you can't do all the actions in your action space," which just seems like another way of saying "this action space isn't the right formalization of the problem."

Replies from: JonasMoss
comment by JonasMoss · 2022-03-04T20:57:06.755Z · LW(p) · GW(p)

I disagree. Sometimes your entire payoffs also change when you change your action space (in the informal description of the problem). That is the point of the last example, where precommitment changes the possible payoffs rather than only restricting the action space.

comment by TLW · 2022-03-03T06:11:46.908Z · LW(p) · GW(p)

I remain confused about Newcomb-style problems in general, and counterfactual mugging in particular. They seem to typically be fairly trivially self-contradictory when you look at them from the lens of computability theory.

Omega, a perfect predictor, flips a coin. If it comes up tails Omega asks you for $100. If it comes up heads, Omega pays you $10,000 if it predicts that you would have paid if it had come up tails.

Either the agent must be running a decidable algorithm, or there is no such Omega satisfying the requirements[1].
In the former case, there is no contradiction[2].
In the latter case, the problem is a moot point.

Suppose you had an algorithm[3] for Omega. Then I feed your Omega an agent that does this:

  1. Run Omega on myself[4].
  2. If the prediction from 1 is that I would pay if the coinflip comes up tails:
    1. If the coinflip came up tails:
      1. Don't pay.
  3. Otherwise (that is, if the prediction from 1 is that I would not pay if the coinflip comes up tails):
    1. If the coinflip came up tails:
      1. Pay.

Unfortunately, it's fairly simple to show that Omega cannot avoid a contradiction here. (About the best that Omega can do is loop forever / fail to halt... which still doesn't fit the problem.)

This is a contradiction, so Omega cannot exist.

  1. ^

    Or Omega is super-Turing, in which case all bets are off.

  2. ^

    Jensen's inequality as used in this context only applies if the agent can do unbounded computation, for one. (Rough sketch of an example with finite computation: suppose I have an agent that does at most C operations. If I ask it for the best outcome over [1] (that is, one outcome with a value of 1), the answer is 1, fairly straightforwardly. If I instead ask it for the best outcome over C outcomes each with a value of 0 and one outcome with a value of 1, the agent has an expected value of no more than C/(C+1), as it doesn't have enough operations to scan all possible outcomes looking for the good one.)

  3. ^

    Strictly speaking, Omega is non-deterministic as stated, but you can recover determinism by passing the coinflip result into Omega as an argument.

  4. ^

    This is non-trivial, but doable. It requires a vaguely quine-like construct.

Replies from: dxu
comment by dxu · 2022-03-03T06:49:49.504Z · LW(p) · GW(p)

The usual formulations typically have Omega being "bigger" than your agent, such that Omega can predict your agent, but your agent cannot predict Omega. This is generally taken to be unproblematic because physical agents are always bounded by time + memory constraints, which makes them analogous to finite-state machines rather than full Turing machines. (If unbounded computation were permitted, on the other hand, then you would indeed need a super-Turing version of Omega to pull off the trick.)

Replies from: TLW
comment by TLW · 2022-03-03T06:53:01.464Z · LW(p) · GW(p)

...in which case much of game theory does not apply and the paradoxes, by and large, aren't actually paradoxical, no?

Replies from: JonasMoss
comment by JonasMoss · 2022-03-03T09:29:58.708Z · LW(p) · GW(p)

Paradoxical decision problems are paradoxical in the colloquial sense (such as Hilbert's hotel or Bertrand's paradox), not the literal sense (such as "this sentence is false"). Paradoxicality is in the eye of the beholder. Some people think Newcomb's problem is paradoxical, some don't. I agree with you and don't find it paradoxical.

Replies from: TLW
comment by TLW · 2022-03-04T01:00:55.930Z · LW(p) · GW(p)

In that case why are people spending so much effort on it[1]?

Ditto, why is there so much argumentation based around applying game theory to Newcomb's problem[2] (or variants) when much of game theory does not apply to it?

*****

Paradoxical decision problems are paradoxical in the colloquial sense (such as Hilbert's hotel or Bertrand's paradox), not the literal sense (such as "this sentence is false").

I think that there is a lot of effort being wasted due to not clearly distinguishing counterintuitive results and self-contradictory results in general, and this is a prime example.

  1. ^

    The LessWrong "Newcomb's Problem" tag has 41 entries with a combined total of over 1,000 comments, for instance.

  2. ^

    See also any argument[3] that says that X is invalid because it provides 'the wrong solution' to Newcomb's Problem.

  3. ^

    Such as "Evidential decision theories are inadequate rational decision theories. For either they provide wrong solutions to Newcomb's problem [...]", which is a direct quote from the overview of Newcomb's problem linked in the description of the tag 'Newcomb's Problem' on this site.