Reflective oracles and the procrastination paradox
post by jessicata (jessica.liu.taylor) · 2015-03-26T22:18:15.000Z · LW · GW · 4 comments
The procrastination paradox relates to the following problem:
- There are infinite time steps (one for each natural number).
- For each time step, there is an agent.
- Each agent may press the button or not.
- Each agent will get utility 1 if and only if it or a later agent presses the button.
The paradox is that the following reasoning process leads to the button never getting pressed:
- My rule (and the rule for all future agents) is that, if I can prove that the button will be pressed in the future, then I will not press the button, and otherwise I will. This rule appears to maximize utility.
- The next agent uses this rule, so the next agent will only fail to press the button if it can prove that the button will be pressed by some later agent.
- I trust the next agent's proof system.
- Therefore, whether or not I press the button, the next agent or an agent after that will press the button.
- Therefore, if I don't press the button, my utility is 1.
- Therefore, my rule says I will not press the button.
But the same reasoning can be used for every time step! No agent will press the button, and all will trust that some future agent will. The flaw here is that we constructed a sequence of logical systems (one for each agent), each of which considers the next one sound.
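To see the failure concretely, here is a minimal Python sketch (a toy model of my own, not the post's formalism) in which every agent's stand-in proof system claims, as in the fourth step above, that some later agent will press; every agent then defers, and the realized utility is 0 even though each agent computed utility 1.

```python
# Toy model of the rule above: each agent's stand-in "proof system" asserts
# that some later agent will press the button, so every agent defers.

def can_prove_future_press(step: int) -> bool:
    """Stand-in for the agent's proof system under the unsound trust
    assumption: it always claims a later agent will press."""
    return True

def agent_presses(step: int) -> bool:
    """The rule: press the button unless a future press is provable."""
    return not can_prove_future_press(step)

horizon = 1000  # inspect a finite prefix of the infinite sequence of agents
history = [agent_presses(t) for t in range(horizon)]

print(any(history))              # False: no agent in the prefix presses
print(1 if any(history) else 0)  # realized utility 0, though each agent expected 1
```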
We can formalize the procrastination paradox using reflective oracles instead of logic. Suppose each agent is a reflective CDT agent. Define a machine $B_i()$ to represent whether agent $i$ presses the button (for each natural number $i$), and define a machine $P_i()$ to represent whether agent $i$ or a later agent presses the button:

$$B_i() := \text{if } O(P_{i+1}, 1) = 0 \text{ then } 1 \text{ else anything}$$

$$P_i() := B_i() \vee \text{query}(P_{i+1})$$

The first line states that the agent must press the button if $O(P_{i+1}, 1) = 0$ (that is, if the oracle does not affirm that a later agent presses the button with probability 1), and may do anything otherwise. In the second line, $\vee$ is logical disjunction, and $\text{query}(M)$ is a way of sampling a bit with the same distribution as $M()$, as defined in this post:
So we call our oracle on the pair $(M, 0.5)$, and throw a fair coin.
- If the coin comes up heads and the oracle says "false" (the probability of $M()$ returning 1 is smaller than 0.5), we output a zero.
- If the coin lands heads and the oracle says "true", we output a one.
- If the coin lands tails and the oracle says "false", we call our oracle on $(M, 0.25)$ and repeat the process; if the coin lands tails and the oracle says "true", we call the oracle on $(M, 0.75)$ and repeat.
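For concreteness, here is a Python sketch of this sampling procedure. The oracle here is just a stub that answers threshold queries against a known probability; that is only an illustration of the interval-halving loop, since an actual reflective oracle is a fixed point over all machines rather than something handed a probability.

```python
import random

def make_oracle(true_probs):
    """Toy stand-in for a reflective oracle: answers the query (M, p) with 1
    exactly when the (known, fixed) probability that M() returns 1 exceeds p.
    A real reflective oracle is unconstrained when that probability equals p;
    this stub simply picks one consistent set of answers."""
    def oracle(machine, p):
        return 1 if true_probs[machine] > p else 0
    return oracle

def query(oracle, machine):
    """Sample a bit distributed according to the oracle's answers about
    `machine`, using the coin-flip / interval-halving procedure above."""
    low, high = 0.0, 1.0
    while True:
        mid = (low + high) / 2
        says_true = oracle(machine, mid) == 1
        if random.random() < 0.5:   # coin lands heads: commit to the answer
            return 1 if says_true else 0
        if says_true:               # tails and "true": move to the upper half
            low = mid
        else:                       # tails and "false": move to the lower half
            high = mid

oracle = make_oracle({"M": 0.7})
samples = [query(oracle, "M") for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 0.7
```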
Observe that an assignment of probabilities $p_i$ to each $P_i()$ (from which we can determine an oracle $O$ consistent with these probabilities) is consistent if and only if each $p_i = 1$. This is because if any $p_i$ were less than 1, then $p_{i+1}$ would also be less than 1 (since $p_i \geq p_{i+1}$), which implies $O(P_{i+1}, 1) = 0$, which implies $B_i() = 1$ and therefore $p_i = 1$, a contradiction.
Although a reflective oracle must assign probability 1 to $P_i()$ returning 1 for all $i$, we have no restrictions on the $B_i()$ given this, so it is consistent for the oracle to say that no agent presses the button, but that the button gets pressed eventually. Therefore, a sequence of reflective CDT agents reasoning in this fashion may choose to never press the button!
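As a sanity check, here is a toy Python sketch (the function names and the way the agent's free choice is modeled are my own) verifying that the assignment where every $P_i()$ is believed to return 1 with certainty, while no agent presses, is a fixed point of the definitions above, whereas claiming some $p_i < 1$ is not:

```python
# q_P(i) is the probability the oracle assigns to P_i() returning 1;
# press_when_free is what the agent does when the oracle affirms a future press.

def prob_B(i, q_P, press_when_free):
    """Probability that B_i() returns 1: forced to 1 unless the oracle says
    P_{i+1}() returns 1 with probability 1."""
    future_press_affirmed = q_P(i + 1) >= 1.0
    return press_when_free if future_press_affirmed else 1.0

def prob_P(i, q_P, press_when_free):
    """Probability that P_i() = B_i() or query(P_{i+1}) returns 1."""
    b = prob_B(i, q_P, press_when_free)
    later = q_P(i + 1)  # query(P_{i+1}) reproduces the oracle's belief
    return 1.0 - (1.0 - b) * (1.0 - later)

# Paradoxical assignment: every P_i is "certainly 1", yet no agent presses.
q_certain = lambda i: 1.0
for i in range(100):
    assert prob_B(i, q_certain, press_when_free=0.0) == 0.0  # nobody presses...
    assert prob_P(i, q_certain, press_when_free=0.0) == 1.0  # ...but each P_i comes out 1

# By contrast, claiming q_P(i) < 1 is inconsistent: the agent is then forced
# to press, which makes P_i() return 1 with probability 1.
q_low = lambda i: 0.0
print(prob_P(0, q_low, press_when_free=0.0))  # 1.0, contradicting the claimed 0.0
```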
As reflective oracles were derived from Paul's probabilistic logic,
we would expect this proof to resemble the proof that Paul's logic
fails the procrastination paradox.
The proofs are not exactly analogous (specifically, the proof for
Paul's logic does not use recursion to define the statement that the button is pressed
in the future), but they are similar. Perhaps if we can solve the problem
in the simpler case with reflective oracles, we can adapt the solution to
talk about Paul's logic. For example, Benja suggested that we could
restrict utility functions to be continuous functions of the actions
(in that sufficiently late actions can only have a small effect on the
resulting utility), and then prove an optimality result for reflective
oracles that depends on continuity.
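As a toy illustration of the continuity idea (my own example, not from the post): if utility is discounted, so that a press first occurring at time $t$ is worth $\gamma^t$, then sufficiently late actions have only a small effect on utility, and procrastinating forever is strictly suboptimal.

```python
# Toy discounted version of the button problem: utility is gamma**t if the
# button is first pressed at time t, and 0 if it is never pressed, so utility
# is a continuous function of the action sequence.

gamma = 0.99

def utility(first_press_time):
    return 0.0 if first_press_time is None else gamma ** first_press_time

print(utility(0))     # 1.0          -- press immediately
print(utility(100))   # about 0.366  -- press late
print(utility(None))  # 0.0          -- procrastinate forever
```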
4 comments
comment by jimrandomh · 2015-04-20T19:58:40.000Z · LW(p) · GW(p)
The procrastination paradox is isomorphic to well-founded recursion. In the reasoning, the fourth step, "whether or not I press the button, the next agent or an agent after that will press the button", is an invalid proof-step; it's shown that there is a chain of inductive steps ending at the conclusion, but not that that chain has a base case.
This can only happen when the relation between an agent and its successor is not well-founded. If there is any well-founded relation between agents and their successors - either because they're in a finite universe, or because the first agent picked a well-founded relation and built that in - then the button will eventually get pushed.
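A small Python sketch of the finite (well-founded) case described here, under the same toy modeling assumptions as the earlier sketch: with a last agent supplying a base case, the rule does get the button pressed.

```python
# Finite chain of agents: the last agent has no successor, so it cannot prove
# a future press and must press; every earlier agent can prove that the last
# agent presses, so it defers.

def provable_future_press(step: int, horizon: int) -> bool:
    """'A later agent presses' is provable exactly when a later agent exists,
    because the chain of reasoning bottoms out at the last agent."""
    return step < horizon - 1

def presses(step: int, horizon: int) -> bool:
    return not provable_future_press(step, horizon)

horizon = 10
history = [presses(t, horizon) for t in range(horizon)]
print(history)       # only the last agent presses
print(any(history))  # True: the button is pressed, so every agent gets utility 1
```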
comment by danieldewey · 2015-03-26T23:12:48.000Z · LW(p) · GW(p)
I don't (confidently) understand why the procrastination paradox indicates a problem to be solved. Could you clarify that for me, or point me to a clarification?
First off, it doesn't seem like this kind of infinite buck-passing could happen in real life; is there a real-life (finite?) setting where this type of procrastination leads to bad actions? Second, it seems to me that similar paradoxes often come up in other situations where agents have infinite time horizons and can wait as long as they want -- does the problem come from the infinity, or from something else?
The best explanation that I can give is "It's immediately obvious to a human, even in an infinite situation, that the only way to get the button pressed is to press it immediately. Therefore, we haven't captured human reasoning (about infinite situations), and we should capture that human reasoning in order to be confident about AI reasoning." This is AFAICT the explanation Nate gives in the Vingean Reflection paper. Is that how you would express the problem?
Replies from: jessica.liu.taylor
↑ comment by jessicata (jessica.liu.taylor) · 2015-03-31T02:59:10.000Z · LW(p) · GW(p)
It is definitely a problem with infinite buck-passing. It is probably possible to prove optimality if we have a continuous utility function (e.g. we're using discounting). I think we might actually want a continuous utility function, but maybe not. Is there any time t such that you would consider it almost as good for a wonderful human civilization to exist for t steps and then die, compared to existing indefinitely?
The way I would express the procrastination paradox is something like:
- There's the tiling agents problem: we want AIs to construct successors that they trust to make correct decisions.
- It would be desirable to have a system where an infinite sequence of AIs each trust the next one. If it worked, this would solve the tiling agents problem.
- But, if we have something like this, then it will be unsound: it will prove that the button will eventually get pressed, even though it will never actually get pressed.
We can construct things that do press the button, but they don't have the property of trusting successors that is desirable in some ways. Due to their handling of recursion, Paul's logic and reflective oracles are both candidates for solving the tiling agents problem; however, they both fail the procrastination paradox (when it's set up this way).
Replies from: danieldewey
↑ comment by danieldewey · 2015-03-27T23:24:25.000Z · LW(p) · GW(p)
Cool, thanks; sounds like I have about the same picture. One missing ingredient for me that was resolved by your answer, and by going back and looking at the papers again, was the distinction between consistency and soundness (on the natural numbers), which is not a distinction I think about often.
In case it's useful, I'll note that the procrastination paradox is hard for me to take seriously on an intuitive level, because some part of me thinks that requiring correct answers in infinite decision problems is unreasonable; so many reasoning systems fail on these problems, and infinite situations seem so unlikely, that they are hard for me to get worked up about. This isn't so much a comment on how important the problem actually is, but more about how much argumentation may be required to convince people like me that they're actually worth working on.