A Decision Theory Can Be Rational or Computable, but Not Both

post by StrivingForLegibility · 2023-12-21T21:02:45.366Z · LW · GW · 4 comments

In classical game theory, rational agents always choose options that are optimal according to their preferences, even when making such a choice would require evaluating functions that are provably uncomputable. In other words, rationality implies hypercomputation.

We can use any rational agent as an oracle for any decision problem by asking it to choose between YES and NO, and giving the agent a higher payoff for choosing the correct answer.[1] If we pick the halting problem, a properly motivated rational agent will act as a halting oracle.[2] If we could look at the gears [LW · GW] of a rational agent, we'd be able to find some subsystem that was performing hypercomputation.
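
To make the reduction concrete, here's a hedged sketch. The `RationalAgent` interface, `halting_oracle` wrapper, and payoff function are all hypothetical names of mine, not anything from the post, and of course nothing computable can actually implement them.

```python
# A hypothetical sketch of the reduction above. A rational agent, by
# definition, picks whichever option maximizes its payoff -- so paying it
# for the correct answer to an undecidable question turns it into an
# oracle for that question.

def halts(program, program_input):
    """The true halting predicate -- undecidable; a placeholder here."""
    raise NotImplementedError("undecidable")

class RationalAgent:
    def choose(self, options, payoff):
        """Return the option in `options` that maximizes `payoff(option)`.

        No Turing machine can implement this for the payoff below;
        a "true" rational agent is assumed to manage it anyway.
        """
        raise NotImplementedError("requires hypercomputation")

def halting_oracle(agent, program, program_input):
    """Use a properly motivated rational agent as a halting oracle."""
    def payoff(answer):
        # Nature pays 1 for the correct answer and 0 otherwise. We never
        # need to evaluate this ourselves; the agent's rationality does
        # the work of knowing which answer earns the higher payoff.
        correct = "YES" if halts(program, program_input) else "NO"
        return 1 if answer == correct else 0

    return agent.choose(["YES", "NO"], payoff) == "YES"
```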

Any computable implementation of any decision theory will necessarily fail to choose rationally in some contexts. For any undecidable problem, it is provably impossible for any algorithm to choose the correct answer in all cases.
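
As a sketch of why (the names here are mine, purely for illustration): take the halting problem, and suppose some computable procedure claimed to choose the correct answer every time. The standard diagonalization constructs a case it must get wrong.

```python
# The classic diagonalization argument, sketched in Python. `decide` stands
# for any computable procedure that claims to answer "does program() halt?"
# correctly for every zero-argument program.

def make_troublemaker(decide):
    """Build a program that `decide` necessarily misjudges."""
    def troublemaker():
        if decide(troublemaker):
            while True:        # decider said "halts" -> loop forever
                pass
        return                 # decider said "doesn't halt" -> halt immediately
    return troublemaker

# Whatever `decide` outputs about make_troublemaker(decide), the opposite is
# true -- so every computable chooser gets at least one such case wrong.
```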

This raises the question: if a rational agent had to delegate their decision to a computer program, what sort of program would they choose?

  1. ^

    Or at least we could if rational agents weren't hypercomputational horrors from beyond spacetime. 

  2. ^

    Aligning the behavior of an infinitely intelligent alien creature, using externally supplied payoffs, is left as an exercise to the reader.

4 comments

comment by Cole Wyeth (Amyr) · 2023-12-22T04:58:17.076Z · LW(p) · GW(p)

The levels of computability of various notions of optimal decision making are discussed in Jan Leike's PhD thesis: https://jan.leike.name/publications/Nonparametric%20General%20Reinforcement%20Learning%20-%20Leike%202016.pdf

Replies from: StrivingForLegibility
comment by StrivingForLegibility · 2023-12-22T05:41:15.061Z · LW(p) · GW(p)

This is a much more nuanced take! At the beginning of Chapter 6, Jan proposes restricting our attention to agents which are limit computable:

Our agents are useless if they cannot be approximated in practice, i.e., by a regular Turing machine. Therefore we posit that any ideal for a ‘perfect agent’ needs to be limit computable (Δ⁰₂).

This seems like a very reasonable restriction! Any implementation needs to be computable, but it makes sense to look for theoretic ideals which can be approximated.
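
To make "limit computable" concrete, here's a toy sketch of my own (not from the thesis): a limit-computable function is one that can be written as the pointwise limit of computable approximations. The halting problem is the standard example — "does it halt within t steps?" is computable for each t, and converges to the true answer as t grows.

```python
# Toy illustration (my own, not from Leike's thesis): the halting problem is
# limit computable because the computable approximation "halts within t steps?"
# converges to the true answer as t -> infinity. "Programs" here are
# zero-argument generator functions, where each yield counts as one step.

def halts_within(program, steps):
    """Computable for every finite `steps`; converges to the halting predicate."""
    gen = program()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True        # the program finished within the budget
    return False               # no verdict yet -- it might still halt later

def looper():                  # never halts
    while True:
        yield

def terminator():              # halts (after yielding three times)
    for _ in range(3):
        yield

# The approximations stabilize on the true answer in the limit:
print([halts_within(terminator, t) for t in (1, 2, 3, 4, 5)])  # [False, False, False, True, True]
print([halts_within(looper, t) for t in (1, 10, 100)])         # [False, False, False]
```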

comment by RogerDearnaley (roger-d-1) · 2023-12-22T04:23:12.968Z · LW(p) · GW(p)

This rather reminds me of a discussion (which, alas, I cannot recall a reference for) of the idea that a system having "free will" is an illusion (or at least a heuristic viewpoint) induced by not having sufficient computational resources and data to fully model and predict the system's behavior. Something that looks to us like it has free will (including ourselves) might therefore appear entirely deterministic or probabilistic to an agent with much greater computational and data resources. If you can predict the behavior of a computationally bounded (and thus not fully rational) agent well enough to successfully Dutch-book them, because you have significantly more computational resources than they do, then they probably don't look to you as if they entirely have "free will".

Replies from: StrivingForLegibility
comment by StrivingForLegibility · 2023-12-22T04:47:59.917Z · LW(p) · GW(p)

Yes! I'm a fan of Yudkowsky's view [LW · GW] that the sensation of free will is the sensation of "couldness" among multiple actions. When it feels like I could do one thing or another, it feels like I have free will. When it feels like I could have chosen differently, it feels like I chose freely.

I suspect that an important ingredient of the One True Decision Theory is being shaped in such a way that other agents, modelling how you'll respond to different policies they might implement, find it in their interest [LW · GW] to implement policies which treat you fairly.