# Troll Bridge

post by abramdemski · 2019-08-23T18:36:39.584Z · score: 72 (41 votes) · LW · GW · 36 comments

## Contents

  Analogy to Smoking Lesion
  Probabilistic Version
  Conclusions


All of the results in this post, and most of the informal observations/interpretations, are due to Sam Eisenstat.

Troll Bridge is a decision problem which has been floating around for a while, but which has lacked a good introductory post. The original post gives the essential example, but it lacks the "troll bridge" story, which (1) makes it hard to understand, since it is just stated in mathematical abstraction, and (2) makes it difficult to find if you search for "troll bridge".

The basic idea is that you want to cross a bridge. However, there is a troll who will blow up the bridge with you on it, if (and only if) you cross it "for a dumb reason" — for example, due to unsound logic. You can get to where you want to go by a worse path (through the stream). This path is better than being blown up, though.

We apply a Löbian proof to show not only that you choose not to cross, but furthermore, that your counterfactual reasoning is confident that the bridge would have blown up if you had crossed. This is supposed to be a counterexample to various proposed notions of counterfactual, and to various proposed decision theories.

The pseudocode for the environment (more specifically, the utility gained from the environment) is as follows:
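(The original code block appears to be missing from this copy; the following Python sketch is a reconstruction from the description below, with function and argument names of my choosing.)

```python
def utility(agent_crosses: bool, pa_proves_inconsistency: bool) -> int:
    """Utility delivered by the Troll Bridge environment."""
    if agent_crosses and pa_proves_inconsistency:
        return -10  # the troll blows up the bridge: crossed "for a dumb reason"
    if agent_crosses:
        return 10   # safe crossing
    return 0        # take the worse path through the stream
```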

IE, if the agent crosses the bridge and is inconsistent, then U=-10. (□⊥ means "PA proves an inconsistency".) Otherwise, if the agent crosses the bridge, U=+10. If neither of these (IE, the agent does not cross the bridge), U=0.

The pseudocode for the agent could be as follows:
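(Again, the code block is missing here; this Python sketch reconstructs the idea from the description below. The proof search is abstracted into a predicate, and the action list, utility list, and tie-break order are my choices.)

```python
ACTIONS = ["not cross", "cross"]  # tie-break order: earlier entry wins
UTILITIES = [10, 0, -10]          # searched from best to worst

def agent(pa_proves):
    """pa_proves(a, u) -> True iff PA proves "A() = a implies U() = u".

    Search the "action implies utility" pairs from the highest utility down,
    and take the first action whose utility claim is provable."""
    for u in UTILITIES:
        for a in ACTIONS:
            if pa_proves(a, u):
                return a
    return "not cross"  # the "else" clause (this choice matters, as discussed later)
```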

This is a little more complicated, but the idea is supposed to be that you search for every "action implies utility" pair, and take the action for which you can prove the highest utility (with some tie-breaking procedure). Importantly, this is the kind of proof-based decision theory which eliminates spurious counterfactuals in 5-and-10 type problems [AF · GW]. It isn't that easy to trip up with Löbian proofs. (Historical/terminological note: This decision theory was initially called MUDT, and is still sometimes referred to in that way. However, I now often call it proof-based decision theory, because it isn't centrally a UDT. "Modal DT" (MDT) would be reasonable, but the modal operator involved is the "provability" operator, so "proof-based DT" seems more direct.)

Now, the proof:

• Reasoning within PA (ie, the logic of the agent):
• Suppose the agent crosses.
• Further suppose that the agent proves that crossing implies U=-10.
• Examining the source code of the agent: since we're assuming the agent crosses, either PA proved that crossing implies U=+10, or it proved that crossing implies U=0.
• So, either way, PA is inconsistent -- by way of 0=-10 or +10=-10.
• So the troll actually blows up the bridge, and really, U=-10.
• Therefore (popping out of the second assumption), if the agent proves that crossing implies U=-10, then in fact crossing implies U=-10.
• By Löb's theorem, crossing really implies U=-10.
• So (since we're still under the assumption that the agent crosses), U=-10.
• So (popping out of the assumption that the agent crosses), the agent crossing implies U=-10.
• Since we proved all of this in PA, the agent proves it, and proves no better utility in addition (unless PA is truly inconsistent). On the other hand, it will prove that not crossing gives it a safe U=0. So it will in fact not cross.
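Compressed into modal-logic notation (my summary, not Sam's original: C abbreviates "the agent crosses", and □ is PA-provability):

```latex
\begin{align*}
1.\quad & \vdash\; C \wedge \Box(C \to U{=}{-}10) \;\to\; \Box\bot
        && \text{(inspecting the code: } \Box(10{=}{-}10) \text{ or } \Box(0{=}{-}10)\text{)}\\
2.\quad & \vdash\; C \wedge \Box(C \to U{=}{-}10) \;\to\; U{=}{-}10
        && \text{(the troll blows up the bridge given } \Box\bot\text{)}\\
3.\quad & \vdash\; \Box(C \to U{=}{-}10) \;\to\; (C \to U{=}{-}10)
        && \text{(rearranging 2)}\\
4.\quad & \vdash\; \Box(C \to U{=}{-}10)
        && \text{(Löb's theorem applied to 3)}\\
5.\quad & \vdash\; C \to U{=}{-}10
        && \text{(modus ponens on 3 and 4)}
\end{align*}
```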

The paradoxical aspect of this example is not that the agent doesn't cross -- it makes sense that a proof-based agent can't cross a bridge whose safety is dependent on the agent's own logic being consistent, since proof-based agents can't know whether their logic is consistent. Rather, the point is that the agent's "counterfactual" reasoning looks crazy. (However, keep reading for a version of the argument where it does make the agent take the wrong action.) Arguably, the agent should be uncertain of what happens if it crosses the bridge, rather than certain that the bridge would blow up. Furthermore, the agent is reasoning as if it can control whether PA is consistent, which is arguably wrong.

In a comment [AF · GW], Stuart points out that this reasoning seems highly dependent on the code of the agent; the "else" clause could be different, and the argument falls apart. I think the argument keeps its force:

• On the one hand, it's still very concerning if the sensibility of the agent depends greatly on which action it performs in the "else" case.
• On the other hand, we can modify the troll's behavior to match the modified agent. The general rule is that the troll blows up the bridge if the agent would cross for a "dumb reason" -- the agent then concludes that the bridge would be blown up if it crossed. I can no longer complain that the agent reasons as if it were controlling the consistency of PA, but I can still complain that the agent thinks an action is bad because that action indicates its own insanity, due to a troublingly circular argument.

## Analogy to Smoking Lesion

One interpretation of this thought-experiment is that it shows proof-based decision theory to be essentially a version of EDT, in that it has EDT-like behavior for Smoking Lesion. The analogy to Smoking Lesion is relatively strong:

• An agent is at risk of having a significant internal issue. (In Smoking Lesion, it’s a medical issue. In Troll Bridge, it is logical inconsistency.)
• The internal issue would bias the agent toward a particular action. (In Smoking Lesion, the agent smokes. In Troll Bridge, an inconsistent agent crosses the bridge.)
• The internal issue also causes some imagined practical problem for the agent. (In Smoking Lesion, the lesion makes one more likely to get cancer. In Troll Bridge, the inconsistency would make the troll blow up the bridge.)
• There is a chain of reasoning which combines these facts to stop the agent from taking the action. (In Smoking Lesion, EDT refuses to smoke due to the correlation with cancer. In Troll Bridge, the proof-based agent refuses to cross the bridge because of a Löbian proof that crossing the bridge leads to disaster.)
• We intuitively find the conclusion nonsensical. (It seems the EDT agent should smoke; it seems the proof-based agent should not expect the bridge to explode.)

Indeed, the analogy to smoking lesion seems to strengthen the final point -- that the counterfactual reasoning is wrong.

## Probabilistic Version

For the purely logical version, I said that we can't fault the conclusion (because the agent can't prove that it is safe to cross the bridge) – only the reasoning is being critiqued. We might hope that an agent who uses some uncertainty, rather than reasoning with logic alone, would decide to cross the bridge.

More specifically, suppose that we make the risk of crossing extremely small: rather than -10 payoff for the case where the bridge is blown up, we get a very small negative payoff, -epsilon, which is barely worse than refusing to cross. This should make crossing the bridge worth the risk; very little to lose, and everything to gain.

Unfortunately, this is not how things work out.

Suppose that the agent uses a probability distribution which "respects logic" in the sense that it assigns probability zero to anything which is logically refutable. (This assumes logical omniscience. I'll drop that assumption later.)

The agent is as follows: if P(cross)=0, cross; if P(¬cross)=0, don't cross; else, take the action with highest expected utility, breaking ties by not crossing. The idea is that we are trying to take the highest-expected-utility action; however, we must handle cases where this isn't well-defined. Choosing to take an action when the probability of the action is zero guarantees that the agent can never prove its own action; it is a version of the chicken rule. Breaking ties by not crossing is arbitrary, but gives the simplest version of the example, because this choice reduces the number of cases we need to check.
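As a Python sketch (my rendering; the expected utilities are passed in rather than computed):

```python
def prob_agent(p_cross: float, eu_cross: float, eu_not_cross: float) -> str:
    """Chicken-rule agent from the text. p_cross is P(A = cross) under the
    agent's own logic-respecting distribution."""
    if p_cross == 0:   # chicken rule: P(cross) = 0, so cross
        return "cross"
    if p_cross == 1:   # P(not cross) = 0, so don't cross
        return "not cross"
    # otherwise maximize expected utility, breaking ties by not crossing
    return "cross" if eu_cross > eu_not_cross else "not cross"
```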

The troll is as follows: if the agent crosses the bridge due to the P(cross)=0 clause, blow up the bridge.

Reasoning in the logic of the agent:

• Suppose □(A=cross → U=-10).
• Further suppose that the agent crosses.
• Examining the agent, either crossing had higher expected utility, or P(cross)=0. But we assumed □(A=cross → U=-10), and the agent's distribution respects logic, so the expected utility of crossing is -10, lower than the 0 from not crossing; so it must be the latter. So the bridge gets blown up.
• Therefore (popping out of the second supposition), A=cross → U=-10.
• Therefore (popping out of the first supposition), □(A=cross → U=-10) → (A=cross → U=-10).

Thus, by Löb's theorem, we see that □(A=cross → U=-10).

Therefore, since the agent is certain of things which are provable, the agent assigns expected value -10 to crossing. Since the expected value of the other action will always be higher than this, and since we know the agent doesn't ever really enter into the P=0 conditions, the agent will choose not to cross.

Notice that this reasoning did not depend much on the values 10, 0, and -10. The utility of the bridge getting blown up could be -0.000001, and the agent still won't cross. It isn't weighing the risk; it's decided that the worst outcome is inevitable. In the case of proof-based agents, I said that the overall decision not to cross was understandable, since proof-based agents are unable to weigh the risks. A probabilistic agent, however, should intuitively be able to say "I don't know whether the bridge will get blown up, because it involves reasoning about properties of myself which I'm fundamentally uncertain about; but, the odds look decent." But that's not what happens: instead, it is sure that crossing is unfavorable, no matter what overall probability it assigns to P(A=cross)=0.

So, in this case we conclude that the Troll Bridge example results in a chicken-rule-based agent taking the wrong action overall. The agent shouldn't be sure that it would cross "for the right reason" (it should assign some probability to P(A=cross)=0, since it can't know that its own logic is consistent). However, intuitively, it should be able to assign some probability to this, and balance the risks. If the downside risk is U=-0.000001, and the probability it assigns to its logic being consistent is not similarly small, it should cross -- and in doing so, it would get +10.
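The risk-weighing the agent intuitively ought to perform is a one-liner (hypothetical numbers, matching the payoffs above):

```python
def eu_cross(p_insane: float, downside: float = -0.000001, upside: float = 10.0) -> float:
    # Expected utility of crossing for an agent that treats "I'd cross for a
    # dumb reason" as an ordinary uncertain event rather than a certainty.
    return p_insane * downside + (1 - p_insane) * upside

# Even granting a 50% chance of being the "dumb" crosser, crossing beats U=0.
assert eu_cross(0.5) > 0
```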

As mentioned for the proof-based agent, the agent's code is a bit arbitrary, and it is worth asking how important the details were. In particular, the default in the case of a tie was to not cross. What if the default in case of a tie were to cross?

We then modify the troll's algorithm to blow up the bridge if and only if P(A=cross)=0 or there is a tie. The proof then goes through in the same way.

Perhaps you think that the problem with the above version is that I assumed logical omniscience. It is unrealistic to suppose that agents have beliefs which perfectly respect logic. (Un)Fortunately, the argument doesn't really depend on this; it only requires that the agent respects proofs which it can see, and eventually sees the Löbian proof referenced. We can analyze this using logical inductors, with a version of LIDT which has a chicken rule. (You can use LIDT which plays epsilon-chicken, taking any action with probability less than epsilon; or, you can consider a version which takes actions which the deductive state directly disproves. Either case works.) We consider LIDT on a sequence of troll-bridge problems, and show that it eventually notices the Löbian proof and starts refusing to cross. This is even more frustrating than the previous example, because the agent can cross for a long time, apparently learning that crossing is safe and reliably gets +10 payoff. Then, one day, it suddenly sees the Löbian proof and stops crossing the bridge!

I leave that analysis as an exercise for the reader.

## Conclusions

All of the examples have depended on a version of the chicken rule. This leaves us with a fascinating catch-22:

• We need the chicken rule to avoid spurious proofs [AF · GW]. As a reminder: spurious proofs are cases where an agent would reject an action if it could prove that it would not take that action. These actions can then be rejected by an application of Löb's theorem. The chicken rule avoids this problem by ensuring that agents cannot know their own actions, since if they did then they'd take a different action from the one they know they'll take (and they know this, conditional on their logic being consistent).
• However, Troll Bridge shows that the chicken rule can lead to another kind of problematic Löbian proof.

So, we might take Troll Bridge to show that the chicken rule does not achieve its goal, and therefore reject the chicken rule. However, this conclusion is very severe. We cannot simply drop the chicken rule and open the gates to the (much more common!) spurious proofs. We would need an altogether different way of rejecting the spurious proofs; perhaps a full account of logical counterfactuals.

Furthermore, it is possible to come up with variants of Troll Bridge which counter some such proposals. In particular, Troll Bridge was originally invented to counter proof-length counterfactuals, which essentially generalize chicken rules, and therefore lead to the same Troll Bridge problems.

Another possible conclusion could be that Troll Bridge is simply too hard, and we need to accept that agents will be vulnerable to this kind of reasoning.

comment by Gurkenglas · 2019-08-23T19:39:29.537Z · score: 27 (19 votes) · LW · GW

Written slightly differently, the reasoning seems sane: Suppose I cross. I must have proven it's a good idea. Aka I proved that I'm consistent. Aka I'm inconsistent. Aka the bridge blows up. Better not cross.

comment by abramdemski · 2019-08-26T20:24:30.923Z · score: 12 (4 votes) · LW · GW

I agree with your English characterization, and I also agree that it isn't really obvious that the reasoning is pathological. However, I don't think it is so obviously sane, either.

• It seems like counterfactual reasoning about alternative actions should avoid going through "I'm obviously insane" in almost every case; possibly in every case. If you think about what would happen if you made a particular chess move, you need to divorce the consequences from any "I'm obviously insane in that scenario, so the rest of my moves in the game will be terrible" type reasoning. You CAN'T assess that making a move would be insane UNTIL you reason out the consequences w/o any presumption of insanity; otherwise, you might end up avoiding a move only because it looks insane (and it looks insane only because you avoid it, so you think you've gone mad if you take it). This principle seems potentially strong enough that you'd want to apply it to the Troll Bridge case as well, even though in Troll Bridge it won't actually help us make the right decision (it just suggests that expecting the bridge to blow up isn't a legit counterfactual).
• Also, counterfactuals which predict that the bridge blows up seem to be saying that the agent can control whether PA is consistent or inconsistent. That might be considered unrealistic.
comment by Gurkenglas · 2019-08-26T21:20:04.654Z · score: 4 (2 votes) · LW · GW

Troll Bridge is a rare case where agents that require proof to take action can prove they would be insane to take some action before they've thought through its consequences. Can you show how they could unwisely do this in chess, or some sort of Troll Chess?

I don't see how this agent seems to control his sanity. Does the agent who jumps off a roof iff he can (falsely) prove it wise choose whether he's insane by choosing whether he jumps?

comment by abramdemski · 2019-09-04T04:15:12.224Z · score: 11 (4 votes) · LW · GW
I don't see how this agent seems to control his sanity.

The agent in Troll Bridge thinks that it can make itself insane by crossing the bridge. (Maybe this doesn't answer your question?)

Troll Bridge is a rare case where agents that require proof to take action can prove they would be insane to take some action before they've thought through its consequences. Can you show how they could unwisely do this in chess, or some sort of Troll Chess?

I make no claim that this sort of case is common. Scenarios where it comes up and is relevant to X-risk might involve alien superintelligences trolling human-made AGI. But it isn't exactly high on my list of concerns. The question is more about whether particular theories of counterfactual are right. Troll Bridge might be "too hard" in some sense -- we may just have to give up on it. But, generally, these weird philosophical counterexamples are more about pointing out problems. Complex real-life situations are difficult to deal with (in terms of reasoning about what a particular theory of counterfactuals will actually do), so we check simple examples, even if they're outlandish, to get a better idea of what the counterfactuals are doing in general.

comment by Gurkenglas · 2019-09-04T12:05:30.123Z · score: 1 (1 votes) · LW · GW

Correct. I am trying to pin down exactly what you mean by an agent controlling a logical statement. To that end, I ask whether an agent that takes an action iff a statement is true controls the statement through choosing whether to take the action. ("The Killing Curse doesn't crack your soul. It just takes a cracked soul to cast.")

Perhaps we could equip logic with a "causation" preorder such that all tautologies are equivalent, causation implies implication, and whenever we define an agent, we equip its control circuits with causation. Then we could say that A doesn't cross the bridge because it's not insane. (I perhaps contentiously assume that insanity and proving sanity are causally equivalent.)

If we really wanted to, we could investigate the agent that only accepts utility proofs that don't go causally backwards. (Or rather, it requires that its action provably causes the utility.)

You claimed this reasoning is unwise in chess. Can you give a simple example illustrating this?

comment by abramdemski · 2019-10-02T20:35:44.375Z · score: 2 (1 votes) · LW · GW
Correct. I am trying to pin down exactly what you mean by an agent controlling a logical statement. To that end, I ask whether an agent that takes an action iff a statement is true controls the statement through choosing whether to take the action. ("The Killing Curse doesn't crack your soul. It just takes a cracked soul to cast.")

The point here is that the agent described is acting like EDT is supposed to -- it is checking whether its action implies X. If yes, it is acting as if it controls X in the sense that it is deciding which action to take using those implications. I'm not arguing at all that we should think "implies X" is causal, nor even that the agent has opinions on the matter; only that the agent seems to be doing something wrong, and one way of analyzing what it is doing wrong is to take a CDT stance and say "the agent is behaving as if it controls X" -- in the same way that CDT says to EDT "you are behaving as if correlation implies causation" even though EDT would not assent to this interpretation of its decision.

If we really wanted to, we could investigate the agent that only accepts utility proofs that don't go causally backwards. (Or rather, it requires that its action provably causes the utility.)
You claimed this reasoning is unwise in chess. Can you give a simple example illustrating this?

I think you have me the wrong way around; I was suggesting that certain causally-backwards reasoning would be unwise in chess, not the reverse. In particular, I was suggesting that we should not judge a move poor because we think the move is something only a poor player would do, but always the other way around. For example, suppose we have a prior on moves which suggests that moving a queen into danger is something only a poor player would do. Further suppose we are in a position to move our queen into danger in a way which forces checkmate in 4 moves. I'm saying that if we reason "I could move my queen into danger to open up a path to checkmate in 4. However, only poor players move their queen into danger. Poor players would not successfully navigate a checkmate-in-4. Therefore, if I move my queen into danger, I expect to make a mistake costing me the checkmate in 4. Therefore, I will not move my queen into danger." That's an example of the mistake I was pointing at.

Note: I do not personally endorse this as an argument for CDT! I am expressing these arguments because it is part of the significance of Troll Bridge. I think these arguments are the kinds of things one should grapple with if one is grappling with Troll Bridge. I have defended EDT from these kinds of critiques extensively elsewhere. My defenses do not work against Troll Bridge, but they do work against the chess example. But I'm not going into those defenses here because it would distract from the points relevant to Troll Bridge.

comment by Gurkenglas · 2019-10-03T12:20:16.458Z · score: 1 (1 votes) · LW · GW

If I'm a poor enough player that I merely have evidence, not proof, that the queen move mates in four, then the heuristic that queen sacrifices usually don't work out is fine and I might use it in real life. If I can prove that queen sacrifices don't work out, the reasoning is fine even for a proof-requiring agent. Can you give a chesslike game where some proof-requiring agent can prove from the rules and perhaps the player source codes that queen sacrifices don't work out, and therefore scores worse than some other agent would have? (Perhaps through mechanisms as in Troll bridge.)

comment by abramdemski · 2019-10-04T23:45:44.998Z · score: 2 (1 votes) · LW · GW

The heuristic can override mere evidence, agreed. The problem I'm pointing at isn't that the heuristic is fundamentally bad and shouldn't be used, but rather that it shouldn't circularly reinforce its own conclusion by counting a hypothesized move as differentially suggesting you're a bad player in the hypothetical where you make that move. Thinking that way seems contrary to the spirit of the hypothetical (whose purpose is to help evaluate the move). It's fine for the heuristic to suggest things are bad in that hypothetical (because you heuristically think the move is bad); it seems much more questionable to suppose that your subsequent moves will be worse in that hypothetical, particularly if that inference is a lynchpin of your overall negative assessment of the move.

What do you want out of the chess-like example? Is it enough for me to say the troll could be the other player, and the bridge could be a strategy which you want to employ? (The other player defeats the strategy if they think you did it for a dumb reason, and they let it work if they think you did it smartly, and they know you well, but you don't know whether they think you're dumb, but you do know that if you were being dumb then you would use the strategy.) This can be exactly Troll Bridge as stated in the post, but set in chess with player source code visible.

I'm guessing that's not what you want, but I'm not sure what you want.

comment by Gurkenglas · 2019-10-05T13:00:03.225Z · score: 1 (1 votes) · LW · GW

I started asking for a chess example because you implied that the reasoning in the top-level comment stops being sane in iterated games.

In a simple iteration of Troll Bridge, whether we're dumb is clear after the first time we cross the bridge. In a simple variation, the troll requires smartness even given past observations. In either case, the best worst-case utility bound requires never crossing the bridge, and A knows crossing blows A up. You seemed to expect more.

Suppose my chess skill varies by day. If my last few moves were dumb, I shouldn't rely on my skill today. I don't see why I shouldn't deduce this ahead of time and, until I know I'm smart today, be extra careful around moves that to dumb players look extra good and are extra bad.

More concretely: Suppose that an unknown weighting of three subroutines approval-votes on my move: Timmy likes moving big pieces, Johnny likes playing good chess, and Spike tries to win in this meta. Suppose we start with move A, B or C available. A and B lead to a Johnny gambit that Timmy would ruin. Johnny thinks "If I play alone, A and B lead to 80% win probability and C to 75%. I approve exactly A and B.". Timmy gives 0, 0.2 and 1 of his maximum vote to A, B and C. Spike wants the gambit to happen iff Spike and Johnny can outvote Timmy. Spike wants to vote for A and against B. How hard Spike votes for C trades off between his test's false positive and false negative rates. If B wins, ruin is likely. Spike's reasoning seems to require those hypothetical skill updates you don't like.

comment by abramdemski · 2019-10-05T21:09:29.968Z · score: 2 (1 votes) · LW · GW
I started asking for a chess example because you implied that the reasoning in the top-level comment stops being sane in iterated games.
In a simple iteration of Troll bridge, whether we're dumb is clear after the first time we cross the bridge.

Right, OK. I would say "sequential" rather than "iterated" -- my point was about making a weird assessment of your own future behavior, not what you can do if you face the same scenario repeatedly. IE: Troll Bridge might be seen as artificial in that the environment is explicitly designed to punish you if you're "dumb"; but, perhaps a sequential game can punish you more naturally by virtue of poor future choices.

Suppose my chess skill varies by day. If my last few moves were dumb, I shouldn't rely on my skill today. I don't see why I shouldn't deduce this ahead of time

Yep, I agree with this.

I concede the following points:

• If there is a mistake in the troll-bridge reasoning, predicting that your next actions are likely to be dumb conditional on a dumb-looking action is not an example of the mistake.
• Furthermore, that inference makes perfect sense, and if it is as analogous to the troll-bridge reasoning as I was previously suggesting, the troll-bridge reasoning makes sense.

However, I still assert the following:

• Predicting that your next actions are likely to be dumb conditional on a dumb-looking action doesn't make sense if the very reason why you think the action looks dumb is that the next actions are probably dumb if you take it.

IE, you don't have a prior heuristic judgement that a move is one which you make when you're dumb; rather, you've circularly concluded that the move would be dumb -- because it's likely to lead to a bad outcome -- because if you take that move your subsequent moves are likely to be bad -- because it is a dumb move.

I don't have a natural setup which would lead to this, but the point is that it's a crazy way to reason rather than a natural one.

The question, then, is whether the troll-bridge reasoning is analogous to this.

I think we should probably focus on the probabilistic case (recently added to the OP), rather than the proof-based agent. I could see myself deciding that the proof-based agent is more analogous to the sane case than the crazy one. But the probabilistic case seems completely wrong.

In the proof-based case, the question is: do we see the Löbian proof as "circular" in a bad way? It makes sense to conclude that you'd only cross the bridge when it is bad to do so, if you can see that proving it's a good idea is inconsistent. But does the proof that that's inconsistent "go through" that very inference? We know that the troll blows up the bridge if we're dumb, but that in itself doesn't constitute outside reason that crossing is dumb.

But I can see an argument that our "outside reason" is that we can't know that crossing is safe, and since we're a proof-based agent, would never take the risk unless we're being dumb.

However, this reasoning does not apply to the probabilistic agent. It can cross the bridge as a calculated risk. So its reasoning seems absolutely circular. There is no "prior reason" for it to think crossing is dumb; and, even if it did think it more likely dumb than not, it doesn't seem like it should be 100% certain of that. There should be some utilities for the three outcomes which preserve the preference ordering but which make the risk of crossing worthwhile.

comment by AlexMennen · 2019-09-15T16:34:59.969Z · score: 2 (1 votes) · LW · GW

I think the counterfactuals used by the agent are the correct counterfactuals for someone else to use while reasoning about the agent from the outside, but not the correct counterfactuals for the agent to use while deciding what to do. After all, knowing the agent's source code, if you see it start to cross the bridge, it is correct to infer that its reasoning is inconsistent, and you should expect to see the troll blow up the bridge. But while deciding what to do, the agent should be able to reason about purely causal effects of its counterfactual behavior, screening out other logical implications.

Also, counterfactuals which predict that the bridge blows up seem to be saying that the agent can control whether PA is consistent or inconsistent.

Disagree that that's what's happening. The link between the consistency of the reasoning system and the behavior of the agent is because the consistency of the reasoning system controls the agent's behavior, rather than the other way around. Since the agent is selecting outcomes based on their consequences, it does make sense to speak of the agent choosing actions to some extent, but I think speaking of logical implications of the agent's actions on the consistency of formal systems as "controlling" the consistency of the formal system seems like an inappropriate attribution of agency to me.

comment by abramdemski · 2019-10-02T20:42:46.367Z · score: 6 (3 votes) · LW · GW

I agree with everything you say here, but I read you as thinking you disagree with me.

I think the counterfactuals used by the agent are the correct counterfactuals for someone else to use while reasoning about the agent from the outside, but not the correct counterfactuals for the agent to use while deciding what to do.

Yeah, that's the problem I'm pointing at, right?

Disagree that that's what's happening. The link between the consistency of the reasoning system and the behavior of the agent is because the consistency of the reasoning system controls the agent's behavior, rather than the other way around. Since the agent is selecting outcomes based on their consequences, it does make sense to speak of the agent choosing actions to some extent, but I think speaking of logical implications of the agent's actions on the consistency of formal systems as "controlling" the consistency of the formal system seems like an inappropriate attribution of agency to me.

I think we just agree on that? As I responded [LW · GW] to another comment here:

The point here is that the agent described is acting like EDT is supposed to -- it is checking whether its action implies X. If yes, it is acting as if it controls X in the sense that it is deciding which action to take using those implications. I'm not arguing at all that we should think "implies X" is causal, nor even that the agent has opinions on the matter; only that the agent seems to be doing something wrong, and one way of analyzing what it is doing wrong is to take a CDT stance and say "the agent is behaving as if it controls X" -- in the same way that CDT says to EDT "you are behaving as if correlation implies causation" even though EDT would not assent to this interpretation of its decision.
comment by cousin_it · 2019-08-24T00:08:15.611Z · score: 3 (3 votes) · LW · GW

Agree and strong upvote. I never understood why Abram and others take troll bridge so seriously.

comment by reavowed · 2019-08-28T23:16:02.937Z · score: 7 (5 votes) · LW · GW

I'm having difficulty following the line of the proof beginning "so, either way, PA is inconsistent". We have □(A=cross → U=-10) and (in one case) □(A=cross → U=+10), which together imply that □(A=cross → ⊥), but I'm not immediately seeing how this leads to □⊥?

comment by reavowed · 2019-08-31T10:18:35.189Z · score: 5 (3 votes) · LW · GW

Ah, got there. From □(A()≠cross), we get specifically □(A()=cross → U=+10) and thus A()=cross. But we have □(A()=cross → U=+10) → A()=cross directly as a theorem (axiom?) about the behaviour of A(), and we can lift this to □(□(A()=cross → U=+10) → A()=cross), so also □(A()=cross) and thus □⊥.
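[Editorial note: for concreteness, here is one reconstruction of that derivation in a single chain. □ is PA-provability; "axiom 4" is □X → □□X; the "agent theorem" is the fact, provable from the agent's source, that a proof of A()=cross → U=+10 makes the agent cross, since +10 is the best available utility. This is a sketch, not verbatim from the thread.]

```latex
\begin{align*}
\Box(A{\neq}\mathrm{cross})
  &\Rightarrow \Box(A{=}\mathrm{cross} \to U{=}{+}10)
    && \text{(anything follows under the box)} \\
  &\Rightarrow \Box\Box(A{=}\mathrm{cross} \to U{=}{+}10)
    && \text{(axiom 4)} \\
  &\Rightarrow \Box(A{=}\mathrm{cross})
    && \text{(lifted agent theorem: } \Box\bigl(\Box(A{=}\mathrm{cross} \to U{=}{+}10) \to A{=}\mathrm{cross}\bigr)\text{)} \\
  &\Rightarrow \Box\bot
    && \text{(combined with } \Box(A{\neq}\mathrm{cross})\text{)}
\end{align*}
```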

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2019-08-24T07:15:37.741Z · score: 5 (6 votes) · LW · GW

Seems to me that if an agent with a reasonable heuristic for logical uncertainty came upon this problem, and was confident but not certain of its consistency, it would simply cross because expected utility would be above zero, which is a reason that doesn't betray an inconsistency. (Besides, if it survived it would have good 3rd party validation of its own consistency, which would probably be pretty useful.)

comment by abramdemski · 2019-08-26T20:30:32.546Z · score: 11 (4 votes) · LW · GW

I agree that "it seems that it should". I'll try and eventually edit the post to show why this is (at least) more difficult to achieve than it appears. The short version is that a proof is still a proof for a logically uncertain agent; so, if the Löbian proof did still work, then the agent would update to 100% believing it, eliminating its uncertainty; therefore, the proof still works (via its Löbian nature).

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2019-09-15T19:11:25.065Z · score: 3 (2 votes) · LW · GW

The proof doesn't work on a logically uncertain agent. The logic fails here:

Examining the source code of the agent, because we're assuming the agent crosses, either PA proved that crossing implies U=+10, or it proved that crossing implies U=0.

A logically uncertain agent does not need a proof of either of those things in order to cross, it simply needs a positive expectation of utility, for example a heuristic which says that there's a 99% chance crossing implies U=+10.
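As a toy version of this point (the 99% figure is a hypothetical heuristic credence, not something from the post):

```python
# A logically uncertain agent needs only positive expected utility, not a proof.
# Hypothetical credences: 99% that crossing yields U=+10, 1% that it yields U=-10.
p_safe = 0.99
eu_cross = p_safe * 10 + (1 - p_safe) * (-10)   # = 9.8
eu_not_cross = 0.0
action = "cross" if eu_cross > eu_not_cross else "not cross"
print(action, round(eu_cross, 2))   # cross 9.8
```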

Though you did say there's a version which still works for logical induction. Do you have a link to where I can see that version of the argument?

Edit: Now I think I see the logic. On the assumption that the agent crosses but also proves that crossing implies U=-10, the agent must have a contradiction somewhere, because the logical-uncertainty agents I'm aware of derive a contradiction upon proving crossing implies U=-10: they prove that they will not cross, and then immediately cross, in a maneuver meant to prevent exactly this kind of problem.

Wait, but proving that crossing implies U=-10 does not mean they prove they will not cross, exactly because they might still cross if they have a contradiction.

God, this stuff is confusing. I still don't think the logic holds, though.

comment by abramdemski · 2019-10-02T23:49:05.246Z · score: 10 (2 votes) · LW · GW

I've now edited the post to give the version which I claim works in the empirically uncertain case, and give more hints for how it still goes through in the fully logically uncertain case.

comment by Donald Hobson (donald-hobson) · 2019-08-24T07:03:11.356Z · score: 4 (3 votes) · LW · GW

Viewed from the outside, in the logical counterfactual where the agent crosses, PA can prove its own consistency, and so is inconsistent. There is a model of PA in which "PA proves False". Having counterfactualed away all the other models, these are the ones left. Logical counterfactualing on any statement that can't be proved or disproved by a theory should produce the same result as adding it as an axiom. Ie logical counterfactualing ZF on choice should produce ZFC.

The only unreasonableness here comes from the agent's worst-case-optimizing behaviour: this agent is excessively cautious. A logical-induction agent with PA as a deductive process will assign some probability P strictly between 0 and 1 to "PA is consistent". Depending on which version of logical induction you run, and how much you want to cross the bridge, crossing might be worth it. (The troll is still blowing up the bridge iff PA proves False.)

A logical counterfactual where you don't cross the bridge is basically a counterfactual world where your design of logical induction assigns lower prob to "PA is consistent". In this world it doesn't cross and gets zero.

The alternative is a logical factual where it expects +ve util.

So if we make the logical-induction agent like crossing enough, and not mind getting blown up much, it crosses the bridge. Let's reverse this: suppose the agent really doesn't want to be blown up.

In the counterfactual world where it crosses, logical induction assigns more probability to "PA is consistent". The expected-utility procedure has to use its real probability distribution, not ask the counterfactual agent for its expected utility.
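[Editorial note: to make the reversal concrete, a hedged sketch — the numbers and the simple expected-value rule are illustrative, not anything from logical induction itself — of how the decision flips as the blow-up penalty grows:]

```python
# p_consistent is the agent's credence that PA is consistent (so the bridge
# survives crossing); utilities are the payoffs from the problem statement,
# with the blow-up penalty adjustable.
def crosses(p_consistent: float, u_cross: float = 10.0, u_blowup: float = -10.0) -> bool:
    expected = p_consistent * u_cross + (1 - p_consistent) * u_blowup
    return expected > 0.0  # not crossing is worth exactly 0

print(crosses(0.95))                    # True: modest penalty, high credence
print(crosses(0.95, u_blowup=-1000.0))  # False: really doesn't want to be blown up
```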

I am not sure what happens after this; I think you still need to think about what you do in impossible worlds. Still working it out.

comment by abramdemski · 2019-10-02T23:47:17.758Z · score: 10 (2 votes) · LW · GW

I've now edited the post to address uncertainty more extensively.

comment by abramdemski · 2019-08-26T20:40:37.414Z · score: 4 (2 votes) · LW · GW

I don't totally disagree, but see my reply to Gurkenglas [AF · GW] as well as my reply to Andrew Sauer [AF · GW]. Uncertainty doesn't really save us, and the behavior isn't really due to the worst-case-minimizing behavior. It can end up doing the same thing even if getting blown up is only slightly worse than not crossing! I'll try to edit the post to add the argument wherein logical induction fails eventually (maybe not for a week, though). I'm much more inclined to say "Troll Bridge is too hard; we can't demand so much of our counterfactuals" than I am to say "the counterfactual is actually perfectly reasonable" or "the problem won't occur if we have reasonable uncertainty".

comment by AprilSR · 2019-08-24T03:03:52.488Z · score: 4 (3 votes) · LW · GW

"(It makes sense that) A proof-based agent can't cross a bridge whose safety is dependent on the agent's own logic being consistent, since proof-based agents can't know whether their logic is consistent."

If the agent crosses the bridge, then the agent knows itself to be consistent.

The agent cannot know whether it is consistent.

Therefore, crossing the bridge implies an inconsistency (the agent would know itself to be consistent, even though that's impossible).

The counterfactual reasoning seems quite reasonable to me.

comment by abramdemski · 2019-08-26T20:25:29.383Z · score: 4 (2 votes) · LW · GW

See my reply to Gurkenglas [AF · GW].

comment by ESRogs · 2019-08-24T00:19:38.513Z · score: 4 (4 votes) · LW · GW
there is a troll who will blow up the bridge with you on it, if you cross it "for a dumb reason"

Does this way of writing "if" mean the same thing as "iff", i.e. "if and only if"?

comment by abramdemski · 2019-08-26T20:26:46.360Z · score: 4 (2 votes) · LW · GW

No, but I probably should have said "iff" or "if and only if". I'll edit.

comment by Stuart_Armstrong · 2019-10-01T15:05:18.839Z · score: 2 (1 votes) · LW · GW

Interesting.

I have two issues with the reasoning as presented; the second one is more important.

First of all, I'm unsure about "Rather, the point is that the agent's "counterfactual" reasoning looks crazy." I think we don't know the agent's counterfactual reasoning. We know, by Löb's theorem, that "there exists a proof that (proof of L implies L)" implies "there exists a proof of L". It doesn't tell us what structure this proof of L has to take, right? Who knows what counterfactuals are being considered to make that proof? (I may be misunderstanding this).

Second of all, it seems that if we change the last line of the agent to [else, "cross"], the argument fails. Same if we insert [else if A()="cross" ⊢ U=-10, then output "cross"; else if A()="not cross" ⊢ U=-10, then output "not cross"] above the last line. In both cases, this is because U=-10 is now possible, given crossing. I'm suspicious when the argument seems to depend so much on the structure of the agent.

To develop that a bit, it seems the agent's algorithm as written implies "If I cross the bridge, I am consistent" (because U=-10 is not an option). If we modify the algorithm as I just suggested, then that's no longer the case; it can consider counterfactuals where it crosses the bridge and is inconsistent (or, at least, of unknown consistency). So, given that, the agent's counterfactual reasoning no longer seems so crazy, even if it's as claimed. That's because the agent's reasoning needs to deduce something from "If I cross the bridge, I am consistent" that it can't deduce without that. Given that statement, then being Löbian or similar seems quite natural, as those are some of the few ways of dealing with statements of that type.

comment by abramdemski · 2019-10-02T21:04:21.145Z · score: 2 (1 votes) · LW · GW
First of all, I'm unsure about "Rather, the point is that the agent's "counterfactual" reasoning looks crazy." I think we don't know the agent's counterfactual reasoning. We know, by Löb's theorem, that "there exists a proof that (proof of L implies L)" implies "there exists a proof of L". It doesn't tell us what structure this proof of L has to take, right? Who knows what counterfactuals are being considered to make that proof? (I may be misunderstanding this).

The agent as described is using provable consequences of actions to make decisions. So, it is using provable consequences as counterfactuals. At least, that's the sense in which I mean it -- forgive my terminology if this doesn't make sense to you. I could have said "the agent's conditional reasoning" or "the agent's consequentialist reasoning" etc.

Second of all, it seems that if we change the last line of the agent to [else, "cross"], the argument fails. Same if we insert [else if A()="cross" ⊢ U=-10, then output "cross"; else if A()="not cross" ⊢ U=-10, then output "not cross"] above the last line. In both cases, this is because U=-10 is now possible, given crossing. I'm suspicious when the argument seems to depend so much on the structure of the agent.

I think the benefit of the doubt does not go to the agent, here. If the agent's reasoning is sensible only under certain settings of the default action clause, then one would want a story about how to set the default action clause ahead of time so that we can ensure that the agent's reasoning is always sensible. But this seems impossible.

However, I furthermore think the example can be modified to make the agent's reasoning look silly in any case.

If the last line of the agent reads "cross" rather than "not cross", I think we can recover the argument by changing what the troll is doing. The general pattern is supposed to be: the troll blows up the bridge if we cross "for a dumb reason" -- where "dumb" is targeted at agents who do anything analogous to epsilon exploration or the chicken rule.

So, we modify the troll to blow up the bridge if PA is inconsistent OR if the agent reaches its "else" clause.

The agent can no longer be accused of reasoning as if it controlled PA. However, it still sees the bridge blowing up as a definite consequence of its crossing the bridge. This still doesn't seem sensible, because its very reason for believing this involves a circular supposition that it's provable -- much like my chess example [LW · GW] where an agent concludes that an action is poor due to a probabilistic belief that it would be a poor player if it took that sort of action.

[Note that I have not checked the proof for the proposed variant in detail, so, could be wrong.]

comment by Stuart_Armstrong · 2019-10-03T13:32:20.434Z · score: 2 (1 votes) · LW · GW

If the agent's reasoning is sensible only under certain settings of the default action clause

That was my first rewriting; the second is an instance of a more general algorithm. If we assume that both probabilities and utilities are discrete, all of the form q/n for some integer q, and bounded above and below by N, then the general algorithm goes something like this (with EU the expected utility, Actions the set of actions, and b some default action):

for q integer in N*n^2 to -N*n^2 (ordered from highest to lowest):
    for a in Actions:
        if A()=a ⊢ EU=q/n^2, then output a
output b   (default, reached only if no proof is found)


Then the Löbian proof fails. The agent will fail to prove any of those "if" implications until it proves A()="not cross" ⊢ EU=0; then it outputs "not cross", and the default action b is not relevant. Also not relevant here is the order in which a is sampled from Actions.
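[Editorial note: a toy simulation of this search, with a stubbed-out provability oracle standing in for PA — the real agent searches proofs, which a few lines of Python obviously can't do. Names and the stub are illustrative only.]

```python
# Toy model of the modified agent: scan expected utilities from highest to
# lowest, and output the first action a for which "A()=a -> EU=q/n^2" is
# (stub-)provable; fall back to the default only if nothing is ever proven.
N, n = 10, 1
actions = ["cross", "not cross"]

def provable(statement: str) -> bool:
    # Stub oracle: here, the only implication PA is assumed to settle is that
    # not crossing yields expected utility 0.
    return statement == 'A()="not cross" -> EU=0.0'

def agent(default: str = "cross") -> str:
    for q in range(N * n**2, -N * n**2 - 1, -1):        # highest EU first
        for a in actions:
            if provable(f'A()="{a}" -> EU={q / n**2}'):
                return a
    return default                                      # default b: never reached here

print(agent())   # not cross  (regardless of the default action)
```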

comment by abramdemski · 2019-10-04T23:25:44.865Z · score: 2 (1 votes) · LW · GW

I don't see why the proof fails here; it seems to go essentially as usual.

Reasoning in PA:

Suppose a=cross->u=-10 were provable.

Further suppose a=cross.

Note that we can see there's a proof that not crossing gets 0, so it must be that a better (or equal) value was found for crossing, which must have been +10 unless PA is inconsistent, since crossing implies that u is either +10 or -10. Since we already assumed crossing gets -10, this leads to trouble in the usual way, and the proof proceeds from there.

(Actually, I guess everything is a little messier since you haven't stipulated the search order for actions, so we have to examine more cases. Carrying out some more detailed reasoning: So (under our assumptions) we know PA must have proved (a=cross -> u=10). But we already supposed that it proves (a=cross -> u=-10). So PA must prove not(a=cross). But then it must prove prove(not(a=cross)), since PA has self-knowledge of proofs. Which means PA can't prove that it'll cross, if PA is to be consistent. But it knows that proving it doesn't take an action makes that action very appealing; so it knows it would cross, unless not crossing was equally appealing. But it can prove that not crossing nets zero. So the only way for not crossing to be equally appealing is for PA to also prove not crossing nets 10. For this to be consistent, PA has to prove that the agent doesn't not-cross. But now we have PA proving that the agent doesn't cross and also that it doesn't not cross! So PA must be inconsistent under our assumptions. The rest of the proof continues as usual.)

comment by Stuart_Armstrong · 2019-10-13T14:48:43.227Z · score: 2 (1 votes) · LW · GW

You are entirely correct; I don't know why I was confused.

However, looking at the proof again, it seems there might be a potential hole. You use Löb's theorem within an assumption sub-loop. This seems to assume that from "A ⊢ (□B → B)", we can deduce "A ⊢ B".

But this cannot be true in general! To see this, set A = (□B → B). Then (□B → B) ⊢ (□B → B), trivially; if, from that, we could deduce (□B → B) ⊢ B, we would have ⊢ (□B → B) → B for any B. But this statement, though it looks like Löb's theorem, is one that we cannot deduce in general (see Eliezer's "medium-hard problem" here [LW · GW]).

Can this hole be patched?

(note that if ⊢_A (□_A B → B), where ⊢_A is a PA proof that adds A as an extra axiom, then we can deduce ⊢_A B).

comment by Chris_Leong · 2019-09-01T13:29:56.879Z · score: 2 (1 votes) · LW · GW

I'm finding some of the text in the comic slightly hard to read.

comment by abramdemski · 2019-09-04T04:05:21.169Z · score: 4 (2 votes) · LW · GW

Yep, sorry. The illustrations were not actually originally meant for publication; they're from my personal notes. I did it this way (1) because the pictures are kind of nice, (2) because I was frustrated that no one had written a good summary post on Troll Bridge yet, (3) because I was in a hurry. Ideally I'll edit the images to be more suitable for the post, although adding the omitted content is a higher priority.

comment by Jiro · 2019-08-23T19:12:21.858Z · score: 1 (3 votes) · LW · GW

How can you (in general) conclude something by examining the source code of an agent, without potentially implicating the Halting Problem?

comment by Gurkenglas · 2019-08-23T23:27:37.065Z · score: 13 (5 votes) · LW · GW

Nothing stops the Halting problem being solved in particular instances. I can prove that some agent halts, and so can it. See FairBot in Robust Cooperation in the Prisoner's Dilemma.
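[Editorial note: a trivial illustration of this point, not from the thread — undecidability of halting is a statement about the general case, and for a restricted class of programs it can be decided outright.]

```python
# Halting is decidable for the restricted program family "while x > 0: x -= c"
# (x, c integers): the loop terminates iff x is already non-positive or c > 0.
def halts(x: int, c: int) -> bool:
    return x <= 0 or c > 0

print(halts(x=5, c=1))   # True: x counts down to 0
print(halts(x=5, c=0))   # False: x never changes
```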

comment by abramdemski · 2019-08-26T20:32:13.571Z · score: 4 (2 votes) · LW · GW

In this case, we have (by assumption) an output of the program, so we just look at the cases where the program gives that output.