When wishful thinking works

post by AlexMennen · 2018-09-01T23:43:00.858Z · LW · GW · 1 comments

This idea is due to Scott Garrabrant.

Suppose you have propositions $\varphi_1, \dots, \varphi_n$, and you want to form beliefs about whether they are true; specifically, you want to form a joint probability distribution $P$ over the events $\varphi_1, \dots, \varphi_n$. But there’s a catch: these propositions might refer to the joint probability distribution you come up with. If $\varphi_1$ is the claim that $P(\varphi_1) < \frac{1}{2}$, then you have no way to assign probabilities in a well-calibrated way. But suppose these propositions depend continuously on the probabilities you assign to them. For instance, $\varphi_1$ could be defined so that its “true” probability is $1 - P(\varphi_1)$, where $P(\varphi_1)$ means the probability that you assigned to $\varphi_1$. Let $f$ be the function from the space of joint probability distributions over $\varphi_1, \dots, \varphi_n$ to itself that sends each probability distribution $P$ to the true probability distribution $f(P)$ that would result if you believed $P$. In this case, you can be well-calibrated by letting $P$ be a fixed point of $f$, that is, $P = f(P)$. By Brouwer’s fixed point theorem, there will always be a way to assign probabilities in a well-calibrated way.
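To make this concrete, here is a minimal numerical sketch of the single-proposition example above, where the true probability of $\varphi_1$ is $1 - P(\varphi_1)$. The function `true_prob` plays the role of $f$, and the damped iteration is just one convenient way to locate the fixed point; nothing in the argument depends on this particular solver.

```python
def true_prob(p):
    """f: the true probability of the proposition, given that you assign it probability p."""
    return 1.0 - p

def find_fixed_point(f, p=0.3, damping=0.5, tol=1e-12, max_iter=10_000):
    """Damped fixed-point iteration: p <- (1 - damping) * p + damping * f(p).
    (Undamped iteration p <- f(p) would oscillate forever for this particular f.)"""
    for _ in range(max_iter):
        p_next = (1 - damping) * p + damping * f(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

print(find_fixed_point(true_prob))  # ~0.5: assigning P(phi_1) = 1/2 is well-calibrated
```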

But $f$ could have multiple fixed points. Which one is right? You get to pick; whichever fixed point you decide to believe ends up being correct, since they are fixed points of $f$, the function determining the true probabilities from your beliefs. Cases in which there are multiple such fixed points are cases in which you actually can make something be true by believing it. So you may as well believe the fixed point according to which you have the highest expected utility.

As an example, suppose you’re suffering from an ailment that can be cured by placebo, and the placebo works even if you know it’s just a placebo, provided you believe that the placebo will work. When given a pill that you know is a placebo, you may as well believe that it will cure you, since then you’ll be right, and get better.
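A toy formalization of the placebo case (my own, not spelled out in the post): suppose that if you believe the placebo will cure you with probability $p$, it in fact cures you with probability $p$, so $f(p) = p$ and every belief is a fixed point. Picking the fixed point with the highest expected utility then means believing $p = 1$:

```python
import numpy as np

def f(p):
    # Assumed placebo model: the true cure probability equals your believed probability.
    return p

def expected_utility(p, u_cured=1.0, u_sick=0.0):
    # Expected utility computed under the true distribution f(p).
    q = f(p)
    return q * u_cured + (1 - q) * u_sick

candidates = np.linspace(0.0, 1.0, 101)
fixed_points = [p for p in candidates if abs(f(p) - p) < 1e-9]
best = max(fixed_points, key=expected_utility)
print(best)  # 1.0: believe the placebo will cure you, and it does
```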

Related to the question of what to believe is the question of what actions to take. The traditional answer is to take the action which has the highest expected utility. Another possible answer is to act the way that you believe you will act. If we do this, then $f$ will have lots of fixed points: for any probability distribution over actions we could take, if we believe that we will take actions according to those probabilities, then we will be correct. And picking the fixed point that maximizes expected utility recovers the original rule of picking the action that maximizes expected utility.
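As an illustration with made-up utilities, here is a minimal sketch of that recovery: every distribution over your own actions is a fixed point, and the fixed point with the highest expected utility is the point mass on the expected-utility-maximizing action.

```python
import numpy as np

utilities = np.array([3.0, 7.0, 5.0])   # hypothetical utility of each pure action

def expected_utility(q):
    # q is a belief about (equivalently, a distribution over) your own actions;
    # believing q makes q correct, so every q is a fixed point.
    return float(q @ utilities)

# A few candidate fixed points: the three point masses and the uniform mixture.
candidates = [np.eye(3)[i] for i in range(3)] + [np.full(3, 1.0 / 3.0)]
best = max(candidates, key=expected_utility)
print(best, expected_utility(best))  # point mass on action 1, expected utility 7.0
```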

A possible objection is to ask why we would restrict to fixed points, instead of just choosing to believe whatever probability distribution $P$ maximizes the expected utility of $f(P)$ (which we might expect to often, though not necessarily always, be a fixed point, since having accurate beliefs is useful). A possible answer to this objection is that choosing to believe a non-fixed point $P$ because of what you expect the consequences of choosing this belief to be isn’t possible; since you are choosing based on $f(P)$, you are implicitly acting as if $f(P)$ is your true beliefs, in which case the true probability distribution would be $f(f(P))$, and $f(P)$ having high expected utility would not be useful.

If $f$ is not required to be continuous, then we can still almost find a fixed point by taking the closure of the graph of $f$, and then taking the convex hull of each fiber. By Kakutani’s fixed point theorem, this multi-valued function has a fixed point. If the agent is only assumed to know its own utility function up to an error that is either infinitesimal (as in Definability of Truth in Probabilistic Logic) or small (as in Logical Induction), then adding a small random (unknown to the agent) error to a fixed point of the Kakutani closure of $f$ can give you a narrow probability distribution over probability distributions that are almost fixed by $f$. We can then pick, among these pseudo-fixed points, the one with the highest expected utility, as in the continuous case.
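To illustrate, consider the discontinuous liar-like example from the opening paragraph, where $\varphi_1$ is the claim $P(\varphi_1) < \frac{1}{2}$, so that $f(p) = 1$ when $p < \frac{1}{2}$ and $0$ otherwise, and $f$ itself has no fixed point. A rough numerical sketch of the Kakutani closure shows that its fiber at $p = \frac{1}{2}$ is the whole interval $[0, 1]$, which contains $\frac{1}{2}$, so $\frac{1}{2}$ is a fixed point of the multi-valued map:

```python
import numpy as np

def f(p):
    # The discontinuous example: phi_1 is the claim "P(phi_1) < 1/2".
    return 1.0 if p < 0.5 else 0.0

def closure_fiber(p, eps=1e-6):
    """Crude approximation of the convex hull of f's limit values near p."""
    values = [f(q) for q in (p - eps, p, p + eps) if 0.0 <= q <= 1.0]
    return min(values), max(values)

for p in np.linspace(0.0, 1.0, 5):
    lo, hi = closure_fiber(p)
    if lo - 1e-9 <= p <= hi + 1e-9:
        print(f"p = {p} is a fixed point of the closure; fiber = [{lo}, {hi}]")
# Only p = 0.5 qualifies: its fiber is [0.0, 1.0], which contains 1/2.
```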

This helps make sense of playing mixed-strategy Nash equilibria in games. In (non-game-theoretic) decision theory, it is often assumed that the outcome depends only on the action you actually take, and that you take whichever action leads to the highest expected utility. Under this framework, there is no reason you would want to randomize your action. But under the assumption that strategies are common knowledge, changes in your beliefs about your own actions will be reflected in other players’ beliefs about your actions, which influence their actions.
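For instance (my example, not the post’s), in matching pennies any deterministic belief about your own action gets exploited once the opponent best-responds to it; only the 50/50 belief, which is the mixed Nash equilibrium, avoids handing the opponent an edge:

```python
import numpy as np

# Matching pennies: you are the matcher. A[i, j] is your payoff when you play i
# and the opponent plays j; the opponent's payoff is -A[i, j].
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

def opponent_best_response(p):
    """The opponent knows your mixed strategy p = P(you play heads) and best-responds."""
    opp_payoffs = -(p * A[0] + (1 - p) * A[1])   # opponent's expected payoff per action
    return np.eye(2)[int(np.argmax(opp_payoffs))]

def your_expected_payoff(p):
    q = opponent_best_response(p)
    return float(np.array([p, 1 - p]) @ A @ q)

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(p, your_expected_payoff(p))
# Every p != 0.5 yields a negative expected payoff; p = 0.5 yields 0.
```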

To a certain extent, this can also help make sense of how to pick good Nash equilibria instead of bad ones. In a game in which one player is playing best response, and the other player knows this and is picking the best fixed point, the result will be the Nash equilibrium that is best for the latter player. If both players are picking the best fixed point, then it’s unclear exactly what happens, since you’d need to know how to evaluate the counterfactuals in which one player changes their strategy. But you’d at least expect to end up in a Pareto-optimal Nash equilibrium.
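A small sketch of the first case (the game and payoffs are my own choice): in a battle-of-the-sexes game with two pure Nash equilibria, if the column player best-responds to the row player’s commonly known (here, pure) strategy, the fixed points are exactly the two equilibria, and the row player picks the one that is best for them.

```python
import numpy as np
from itertools import product

# Battle of the sexes: two pure Nash equilibria, (0, 0) and (1, 1).
U_row = np.array([[2.0, 0.0],
                  [0.0, 1.0]])
U_col = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

def col_best_response(row_action):
    return int(np.argmax(U_col[row_action]))

# Fixed points: profiles in which the column player is best-responding to the
# row player's commonly known action (restricting to pure strategies for simplicity).
fixed_points = [(r, c) for r, c in product(range(2), range(2))
                if c == col_best_response(r)]
best = max(fixed_points, key=lambda rc: U_row[rc])
print(fixed_points, best)  # [(0, 0), (1, 1)]; the row player picks (0, 0)
```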

1 comment

comment by Stuart_Armstrong · 2018-09-05T10:39:29.568Z · LW(p) · GW(p)

Nice!

For picking good Nash equilibria, there's an issue as to whether the agents have access to a joint source of randomness or not: https://agentfoundations.org/item?id=523