Questioning GLS-Coherence

post by abramdemski · 2016-06-18T18:49:54.000Z

Contents

  Are the updates naturalistic?
  Bayes-Endorsement ≠ Planning-Endorsement

At the end of my trolling post, I said that naturalistic logical updates stand or fall with the strength of the reflective consistency argument for them, and that this argument has not yet been sufficiently articulated. Indeed, the choice of GLS-coherence as a model of bounded rationality is highly questionable. While previous posts have discussed the motivation a fair amount, I was writing in the hope of providing a solution, so I added certain assumptions that I thought might be sufficient, if not necessary. The trolling theorem, among other concerns, casts doubt on those assumptions. Here, I describe where I think I was wrong.

Are the updates naturalistic?

My argument for using GLS-coherence was that the agent, planning ahead for the possibility that $\phi$ is proved, should be coherent with respect to the new propositional constraints which $\phi$ contains. GLS formalizes this as $\Box\phi \rightarrow \phi$, which allows us to conclude that a GLS-coherent prior is coherent with respect to $\phi$ when updated on $\Box\phi$. Coherence updating on $\Box\phi$ isn't really what my argument asked for, though: I asked for coherence conditioning on future-self having access to a proof of $\phi$. A GLS-coherent prior isn't good enough to fill in this inferential gap on its own. It can imagine future-self having access to a proof of $\phi$ without $\Box\phi$ being true.
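To spell out the gap as a minimal sketch (writing $P$ for the prior and $\mathrm{proof}_\phi$ for a hypothetical event in the world-model meaning "future-self holds a proof of $\phi$"): GLS-coherence gives

$$P(\Box\phi \rightarrow \phi) = 1, \qquad \text{hence} \qquad P(\phi \mid \Box\phi) = 1 \ \text{ whenever } P(\Box\phi) > 0,$$

but it places no constraint on $P(\phi \mid \mathrm{proof}_\phi)$, since $\mathrm{proof}_\phi$ is just another event in the world-model rather than the logical symbol $\Box\phi$; nothing stops the prior from assigning $P(\mathrm{proof}_\phi \wedge \neg\Box\phi) > 0$.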

Hence, my updates were not really "naturalistic"; they were not able to look at the imagined future world and extract $\Box\phi$. In order for the UDT to make plans which depend on whether $\Box\phi$ holds, it would have to give the plans direct access to the theorem prover in a non-naturalistic way (it would trust its own theorem prover in a special way which wouldn't automatically extend to other theorem provers in the environment).

I think this objection isn't totally fatal to GLS-coherence; it means there has to be special attention to the "pipe" which connects $\Box\phi$ with actually having proven $\phi$ in the overall decision theory. However, it does suggest that we might be able to accomplish the job more easily without GLS-coherence, using only this "pipe" between proof and belief.

Bayes-Endorsement ≠ Planning-Endorsement

The two-update problem is the observation that in systems of logical probability so far, there seems to be a non-Bayesian "second update" which adjusts beliefs based on proofs. (Let's call this the "logical update".) I've argued in previous posts that there is a strong connection between reflective consistency and Bayesian updates, which led me to attempt to pack a sufficiently good logical update into a Bayesian update on proofs. The trolling theorem shows, at best, severe difficulties in making this work. My proposal of GLS-coherence solves the two-update problem, but abandons essentially every other good property we've been aiming for in logical updates. I now see that this was based on faulty assumptions about the importance of the Bayesian update.

I've been failing to distinguish sufficiently well between two different notions of "endorsement". My argument for GLS-coherence, as stated in the previous section, implicitly relies on a connection between plan-endorsement and Bayes-endorsement. Plan-endorsement means estimating high utility for plans optimized according to future-self's beliefs. Bayes-endorsement means that future-self's beliefs match current-self's conditional probabilities on the evidence future-self would see to change its beliefs (in this case, proofs). Intuitively (for me at least), there seemed to be an almost trivial relationship between the two: if your conditional probabilities differ from future-self's beliefs on the same evidence, there should be some scenarios in which these belief differences cause you to endorse different plans, right? This connection is not as strong as it appears.

Let's get a bit more formal. Let $P$ be current-self's beliefs, and let $P_e$ be the possibly-not-Bayes update of $P$ on some evidence $e$. Also let $b_e$ be an event representing that beliefs have become the function $P_e$ as a result of the update.

Bayes-endorsement: $P(\phi \mid e) = P_e(\phi)$ for all sentences $\phi$.

Plan-endorsement: $\mathbb{E}_P[U \mid b_e, \pi^*] \geq \mathbb{E}_P[U \mid b_e, \pi]$ for every plan $\pi$, where $U$ is the utility function and $\pi^* \in \operatorname{argmax}_\pi \mathbb{E}_{P_e}[U \mid \pi]$ is a plan optimized according to future-self's beliefs.

These are somewhat difficult to compare, but the following implies plan-endorsement:

Strong plan-endorsement: $P(\phi \mid b_e) = P_e(\phi)$ for all sentences $\phi$.

This says that current-self would defer to future-self's probability estimates, if it knew what they were. This implies plan-endorsement, because if current-self imagines that future-self has a specific belief distribution, then it evaluates plans in that scenario with exactly that belief distribution; so, of course it agrees with future-self about which plans are good.
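In symbols (a quick sketch using the notation above, and glossing over how exactly UDT conditions on plans): strong plan-endorsement gives

$$\mathbb{E}_P[U \mid b_e, \pi] \;=\; \mathbb{E}_{P_e}[U \mid \pi] \quad \text{for every plan } \pi,$$

so whichever plan maximizes the right-hand side also maximizes the left-hand side, which is exactly plan-endorsement.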

But even strong plan-endorsement is weaker than Bayes-endorsement. Strong plan-endorsement implies Bayes-endorsement under the condition that $P(b_e \mid e) = 1$. If $P$ knows exactly how future-self reacts to evidence, and it trusts that reaction, then its conditional probabilities must embody that reaction. However, in situations of logical uncertainty such as those GLS-coherence was attempting to address, this need not be the case. Future-self's computations in response to observing proofs can be more complicated than current-self can predict.
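One way to make that step precise (a sketch, which additionally assumes the natural converse $P(e \mid b_e) = 1$, i.e. future-self only adopts $P_e$ after seeing $e$): under these conditions $e$ and $b_e$ coincide up to probability zero, so

$$P(\phi \mid e) \;=\; P(\phi \mid e, b_e) \;=\; P(\phi \mid b_e) \;=\; P_e(\phi),$$

where the last step is strong plan-endorsement; and that is Bayes-endorsement.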

As in the previous section, this makes the problem appear easier rather than harder. Bayes-endorsement seemed quite problematic, but it turns out we don't need it. Perhaps logically ignorant UDT can use a really simplistic prior which has almost no structure except strong plan-endorsement of a really good logical update.
