Two Issues with Playing Chicken with the Universe
post by Chris_Leong · 2022-12-31T06:47:52.988Z · LW · GW · 4 comments
If you are unfamiliar with the 5-and-10 problem, please refer to the Action Counterfactuals section of this post on Embedded Agency [LW · GW]. I regret that I am unable to recommend a resource for learning about the concept of "Playing Chicken with the Universe". If any reader has recommendations for such a resource, I welcome their suggestions in the comments section below.
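Since I can't point to a good write-up, here is a minimal toy sketch of the rule as I understand it: a proof-based agent that, whenever it can prove it does not take some action, immediately takes that action. Everything below is my own illustrative construction (the names `provable` and `agent`, and the hard-coded theorem set, are not from any real implementation); a genuine agent would run an actual proof search.

```python
# Toy sketch of a proof-based agent with the chicken rule. `provable` stands
# in for a genuine proof search and simply consults a hard-coded set of
# derivable sentences -- that is the big simplifying assumption here.

KNOWN_THEOREMS = {
    "A() = 5 -> U() = 5",
    "A() = 10 -> U() = 10",
}

def provable(sentence: str) -> bool:
    """Stand-in for proof search over the agent's theory."""
    return sentence in KNOWN_THEOREMS

def agent() -> int:
    actions = [5, 10]
    # The chicken rule: if the agent can prove it does NOT take some action,
    # it takes that action. This blocks the *direct* spurious proof
    # "A() != 10", but not, as argued below, a proof routed through a
    # correlate of the choice.
    for a in actions:
        if provable(f"A() != {a}"):
            return a
    # Otherwise, take the action with the highest provable utility.
    best_action, best_utility = actions[0], -1
    for a in actions:
        for u in range(10, -1, -1):  # toy finite range of utilities
            if provable(f"A() = {a} -> U() = {u}"):
                if u > best_utility:
                    best_action, best_utility = a, u
                break
    return best_action

print(agent())  # prints 10 given the theorem set above
```

The point of the rule is that, if the agent's theory is sound, no proof of "A() != a" can exist, which rules out the classic spurious counterfactual. The two scenarios below suggest the protection is narrower than it looks.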
Consider a variant of the 5-and-10 problem where the agent visualises an image of the number it is going to select immediately before making its decision, and never entertains such an image at any other time. Further, imagine that we have access to the agent's brain scans and can use them to demonstrate that the agent will select the number 5. It is highly likely that we could similarly prove that the agent will imagine 5, without first proving that it will choose 5. We should also have enough information about the agent to show that, conditional on the agent choosing 10, it will first have imagined 10; so conditional on choosing 10, the agent imagines both 5 and 10. This contradiction yields a spurious counterfactual: it would allow us to prove that choosing 10 gives us whatever utility we want to prove. Playing chicken with the universe doesn't prevent this, because the agent never proves that it will or will not take a particular option; instead, it proves facts about correlates of its choice.
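To make the structure of the argument explicit, here is one way to formalise it (the notation is mine): write $I(n)$ for "the agent imagines $n$" and $C(n)$ for "the agent chooses $n$". The reasoner can then derive:

$$\begin{aligned}
&\vdash I(5) &&\text{(from the brain scan, without first proving } C(5)\text{)}\\
&\vdash C(10) \rightarrow I(10) &&\text{(the agent imagines whatever it is about to choose)}\\
&\vdash \neg\,(I(5) \land I(10)) &&\text{(only one image is ever entertained)}\\
&\vdash C(10) \rightarrow \bot, \ \text{and hence} \ \vdash C(10) \rightarrow (U = k) \ \text{for any } k.
\end{aligned}$$

On my reading, the chicken rule never fires here: it triggers only on an explicit proof of a sentence of the form $\neg C(10)$, and the proof search can reach the spurious conditional through the correlate $I$ without ever producing that sentence.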
A similar issue can be demonstrated using perfect predictors instead of making the agent imagine its choice. Imagine an agent that chooses 5, and suppose we have access to the agent's brain scans plus the technical details of how the predictor works. In at least some scenarios, we should be able to use this knowledge to prove that the predictor will predict the agent choosing 5, without first proving anything about what the agent will choose. Conditional on the agent choosing 10, we could show that it would be predicted to take 10, which again produces a contradiction: we would then expect the predictor to predict both 5 and 10. Once again, playing chicken with the universe doesn't seem to offer a resolution.
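The same derivation goes through with the predictor in place of the imagination step; writing $P(n)$ for "the predictor predicts $n$" (again my notation):

$$\begin{aligned}
&\vdash P(5) &&\text{(from the brain scans plus the predictor's internals)}\\
&\vdash C(10) \rightarrow P(10) &&\text{(a perfect predictor predicts whatever is chosen)}\\
&\vdash \neg\,(P(5) \land P(10)) &&\text{(the predictor makes exactly one prediction)}\\
&\vdash C(10) \rightarrow \bot, \ \text{so any utility is provable conditional on } C(10).
\end{aligned}$$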
Playing chicken with the universe is a hack. It attempts to resolve the paradox of imagining an agent that takes 5 and then conditioning on it taking 10 by sweeping the issue under the rug. Even though this patch works in some cases, it doesn't address the underlying problem: we haven't really defined how counterfactuals should be constructed, only identified one contradiction to avoid. I recommend trying to solve the hard core of the problem instead (and that is what much of my research focuses on).
4 comments
comment by Shmi (shminux) · 2022-12-31T10:21:46.564Z · LW(p) · GW(p)
conditional on the agent choosing 10, it will first have imagined 10; so conditional on choosing 10, the agent imagines both 5 and 10.
Can you explain this point a bit? I am missing how the setup is a game of chicken, and why the agent should imagine both 5 and 10, since you are conditionalizing on the agent selecting 5 in one case and 10 in the other. My inclination is to imagine two possible worlds, one where the agent imagines and chooses 5 and another where the agent imagines and chooses 10, not both at once. Only one of these possible worlds turns out to be actual. Someone modeling the agent in some way can predict that the agent will pick 5 after imagining 5 and will pick 10 after imagining 10. But it seems like you are saying more than that.
↑ comment by Chris_Leong · 2023-01-01T12:43:20.850Z · LW(p) · GW(p)
I asked my friend for a resource that explained the 5-and-10 problem well and he provided this link [LW · GW]. Unfortunately, I still don't have a good link for "Playing Chicken with the universe".
↑ comment by Chris_Leong · 2022-12-31T12:17:48.266Z · LW(p) · GW(p)
I’m discussing an agent that does in fact take 5, but which imagines taking 10 instead. There have been some discussions of decision theory using proof-based agents and of how they can run into spurious counterfactuals. If you’re confused, you can try searching this website's archives. I tried earlier today, but couldn’t find particularly good resources to recommend. I couldn’t find a good resource for playing chicken with the universe either.
(I may write a proper article at some point in the future to explain these concepts if I can’t find an article that explains them well)
↑ comment by Shmi (shminux) · 2022-12-31T21:49:50.295Z · LW(p) · GW(p)
I’m discussing an agent that does in fact take 5, but which imagines taking 10 instead.
Ah, I missed that. That seems like a mental quirk rather than anything fundamental. Then again, maybe you mean something else.