Perspective-Based Reasoning Could Absolve CDT
post by dadadarren · 2023-10-08T11:22:49.458Z
I have been advocating perspective-based reasoning (PBR) as the solution to anthropic paradoxes for the past few years. Here I would like to explore its implications for decision theory, most notably how it could absolve CDT of its alleged shortcomings.
The concept of PBR is simple: treat perspective and its accompanying indexicals as a fundamental part of basic logic, something that cannot be explained in other terms. For example, you know this physical person is you not because the person is logically significant or physically unique in some way, but because you can inherently tell it is the self: the subjective experience is immediately linked to it. There is no other rationale.
Anthropics
The defining feature of anthropic paradoxes is that their formulations use perspective-based indexicals without attaching them to specific physical observers (or observer-moments): the evidence of "my existence" in the Presumptuous Philosopher, "the birth rank of me/our generation" in the Doomsday Argument, and "the awakening today" in the Sleeping Beauty Problem are all examples of this.
People tend to approach these questions with a physicalist view from nowhere, disregarding perspective in their reasoning. Unable to treat the indexicals as primitive, they arbitrarily transcode them to mean something else [LW · GW], e.g. treating them as random samples. That is what leads to the paradoxes.
In contrast, these questions should be answered from the given perspective, with the indexicals treated as primitively understood. Self-existence is then presupposed prior knowledge, and the probability distribution of an indexical over some assumed reference class is ill-defined. PBR therefore rejects both the Doomsday Argument and the Presumptuous Philosopher.
A Common Decision-Based Argument
It goes like this: if every observer in my situation makes the same decision as I do, then the group is better off overall; therefore the probability that would lead me to such a decision is the right one. (What counts as "better", e.g. the average or the sum of rewards, differs from camp to camp.) It is often used to argue for the validity of a probability distribution of the self over a supposed reference class, e.g. "the probability that I am among the first 90% of human beings born into the world".
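To make the shape of the argument concrete, here is a minimal sketch of its betting version, with a hypothetical population size (nothing here is from any canonical formulation): every observer bets "I am among the first 90% born", and the argument reads the group-level win frequency as the indexical probability.

```python
# A toy illustration of the aggregate betting argument (hypothetical N).
# Every observer bets "I am among the first 90% born"; the argument
# equates the group-level win frequency with the indexical probability.

N = 1_000  # assumed total number of observers in the reference class

bets_won = sum(1 for birth_rank in range(1, N + 1) if birth_rank <= 0.9 * N)
print(bets_won / N)  # 0.9 -- a group frequency, which the argument
                     # reads as P("I am among the first 90%")
```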
Since the supposed probability concerns the indexical "I" specifically, the corresponding decision should be about straightforward self-interest, e.g. how to maximize my reward. However, there is no rational strategy for this objective [LW · GW]. Instead, arguments of this type appeal to the aggregate interest of a group defined by similar features. This removes the indexical component from the decision-making even while attempting to justify an indexical probability. Unsurprisingly, it generates answers that agree with the conventional non-perspective solutions, as both rest on the same mistake.
It is worth noting that such arguments use an acausal model: my decision affects neither the decisions nor the rewards of the other observers in the supposed group. The straightforward self-interest objective, by contrast, is causal: my decision affects my reward and is evaluated against an objective concerning myself only.
The Metaphysical Side
As much as I would like to avoid it, fully clarifying how PBR absolves CDT requires exploring the metaphysical side. PBR regards perspectives as fundamental, which means they cannot be "explained away" or transcended in reasoning. It is therefore opposed to view-from-nowhere physicalism and incompatible with LessWrong's favourite Many-Worlds Interpretation.
PBR does not regard science as a direct examination of the world's underlying truth [LW · GW]. Any examination has to come from the predefined perspective of someone or something. A physical theory is "objective" not because it faithfully reflects the world's underlying truth, but because it can be used from a wide range of perspectives while remaining accurate in each case.
PBR suggests a physical analysis has to be performed from a predefined perspective. Exactly which thing's perspective is taken is an exogenous input, i.e. "what thing is the self" is not a question physics answers. From this predefined point of view, the surrounding world can be described in terms of the interactions the self receives from its environment. PBR naturally leads to Copenhagen-ish interpretations, especially Relational Quantum Mechanics.
It should be noted that in the above framework, because analyses are based on interactions received from the environment, the "self" at the perspective center is not within the scope of analysis. In other words, physically studying something requires the perspective of another thing that interacts with it.
CDT's Bug Is Actually A Feature
The commonly alleged "flaw" is that CDT does not include the decision-maker in its model, so its analysis of the world is never complete. This would indeed be a serious problem under the conventional non-perspective, view-from-nowhere school of thought, which considers everything analyzable. Problems like Newcomb's and Parfit's Hitchhiker cause trouble for CDT precisely by including analyses of the decision-maker.
However, according to PBR, a world model that excludes the decision-maker is no problem at all. In fact, it is exactly what should be expected if the decision-maker is the self at the perspective center, i.e. if decisions are inherently first-person concepts. And there are good arguments for that.
Suppose you study a Plinko board. By analyzing its structure and function you can deduce the outcomes for given inputs, and you may even be able to predict the outcome with some accuracy. But in this situation it is absurd to say the Plinko board "decides" to put the ball somewhere; there is no "decision-making" on its part in such an analysis. We can change the subject from a simple board to more complex machines like computers or brains, yet the argument remains. Therefore it is highly plausible that "decision-making" is an internal, reflective concept limited to the self, to which such analysis does not apply.
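As a toy illustration of this kind of outside analysis (the board model and its parameters are hypothetical), the following sketch derives the outcome distribution purely from the board's structure; nothing in the model "decides":

```python
import random

# A toy Plinko model: at each of `rows` peg rows the ball deflects
# left or right with equal chance. The analyst derives the outcome
# distribution from structure alone; no "decision" appears anywhere.

def final_slot(rows: int, rng: random.Random) -> int:
    # The final slot is simply the number of rightward deflections.
    return sum(rng.random() < 0.5 for _ in range(rows))

rng = random.Random(0)
runs = [final_slot(10, rng) for _ in range(10_000)]
for slot in range(11):
    print(slot, runs.count(slot) / len(runs))  # approximates Binomial(10, 0.5)
```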
The Problems
The Newcomb problem involves examining you and predicting which box(es) you would take. By putting you within the scope of analysis, the question is not formulated from your perspective. Therefore there is no "decision" to be made on your part: the subject simply takes whatever box(es) it is supposed to take, much as the Plinko board puts the ball in whichever slot it is supposed to. Since there is no decision in the first place, the problem does not concern decision theories and cannot be used against CDT.
Alternatively, if the question is meant to be a decision problem, it should be presented from your perspective. It must not involve analyzing the decision-maker, and must thus treat the decision as unpredictable. Omega's accuracy could then be understood as a statistical phenomenon, e.g. 90% of people one-box and Omega always guesses one-boxing. In that case, CDT's recommendation of two-boxing is completely rational.
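A minimal sketch of that reading, assuming the usual payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box if Omega predicted one-boxing; these figures are illustrative, not from the post): since in this scenario Omega always guesses one-boxing, the opaque box is always full, and the contents are fixed independently of your choice, so two-boxing dominates.

```python
# Hedged sketch of the "statistical Omega" reading. Assumed payoffs:
# transparent box A holds $1,000; opaque box B holds $1,000,000 iff
# Omega predicted one-boxing. Since 90% of people one-box, Omega
# always predicts one-boxing here, so B is always full.

A, B = 1_000, 1_000_000

def payoff(choice: str, b_filled: bool) -> int:
    """Reward for a choice, with the box contents already fixed."""
    b = B if b_filled else 0
    return b if choice == "one-box" else A + b

b_filled = True  # Omega always guesses "one-box" in this scenario
print(payoff("one-box", b_filled))  # 1000000
print(payoff("two-box", b_filled))  # 1001000 -- two-boxing dominates
```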
Those are two different problems based on different perspectives. They neither contradict each other nor pose problems for CDT. Trouble arises only if we reason non-perspectively, insisting with a view from nowhere that the analysis and the decision be put into one framework.
Just as some anthropic problems concern different observers while others concern different moments of the same observer, Parfit's Hitchhiker involves chronological perspectives. When answering the driver's question of whether you would pay him later, you are not making a decision; you are trying to analyze and predict. The subject of that analysis is someone who would regard your current perspective as their past, and the analysis could involve simulating being in that later perspective and making a decision within it. Yet at that moment you are not deciding whether to pay him; you are presenting the result of your forecast.
Alternatively, if the driver has already driven you to town, then from the current perspective, deciding not to pay him would, per CDT, again be the rational decision. Mixing the two perspectives and pitting the analysis against the decision is what leads to the paradox.
On a final note, CDT's compatibility with precommitment is also to be expected under PBR. Precommitment moves the supposed "decision" from the part being analyzed/predicted to a part that is not. It thereby allows the decision and the analysis to be considered together from a given perspective. CDT then correctly concludes that, if precommitment is possible, deciding to one-box and to pay the driver is the better option.
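For concreteness, here is a sketch of the precommitment-stage calculation (the payoffs and the 90% accuracy figure are assumed for illustration): since the committed policy is the thing being analyzed, it causally precedes Omega's prediction, and CDT's expected values favor committing to one-box.

```python
# Hedged sketch of CDT evaluated at the precommitment stage.
# Assumed numbers: A = $1,000, B = $1,000,000, prediction accuracy p = 0.9.
# The committed policy causally precedes Omega's prediction.

A, B, p = 1_000, 1_000_000, 0.9

ev_commit_onebox = p * B + (1 - p) * 0        # correct prediction -> B full
ev_commit_twobox = p * A + (1 - p) * (A + B)  # correct prediction -> B empty

print(ev_commit_onebox)  # ~900,000
print(ev_commit_twobox)  # ~101,000 -- committing to one-box wins
```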
Comments
comment by Gunnar_Zarncke · 2023-10-09T07:03:48.455Z
As much as I like PBR, in this post I have the impression that treating the first-person perspective as axiomatic is taken too far, into an objective of its own. I think the goal should not be to give up on figuring out what the relations between different perspectives are, or what physical structures give rise to them.
reply by dadadarren · 2023-10-09T11:40:49.574Z
Understandable. As much as I firmly believe in my theory, I have to admit I have a hard time making it look convincing.
comment by Gunnar_Zarncke · 2023-10-09T06:59:11.307Z
Given how much the post treats the first-person perspective as axiomatic and argues metaphysically, I was surprised by the last paragraph:
Precommitment moves the supposed "decision" from the part being analyzed/predicted to a part that is not. It thereby allows the decision and the analysis to be considered together from a given perspective. CDT then correctly concludes that, if precommitment is possible, deciding to one-box and to pay the driver is the better option.
What would the procedure that arrives at this conclusion look like? It seems to take perspectives into account, but maybe not as decision points?
reply by dadadarren · 2023-10-09T11:37:25.768Z
The conflict arises when the self at the perspective center is making the decision but is also being analyzed. With CDT this leads to a self-referential paradox: I am making the decision (which, according to CDT, is based on agency and is unpredictable), yet there really is no decision, merely the generation of an output.
Precommitment sidesteps this by saying there is no decision at the point being analyzed; it essentially moves the decision to a different observer-moment, allowing the analysis to be taken into account in the decision. In Newcomb's problem, this is like asking not "what should I do when facing the two boxes?" but "what kind of machine/brain should I design so that it performs well in Newcomb experiments?"
reply by Gunnar_Zarncke · 2023-10-09T13:13:54.541Z
I think that is perhaps the point people can agree on: build a machine that performs well. That goes beyond building a decision procedure that performs well in many specific situations (each corresponding to an observer-moment) but not in a succession of them, or in situations that require its own analyzability. Building such a machine requires specifying what it optimizes over, which will potentially span very many observer-moments.