My latest attempt to understand decision theory: I asked ChatGPT to debate me.
post by bokov (bokov-1) · 2025-01-13T19:37:48.481Z · LW · GW · 2 comments
Epistemic status: probably full of inaccuracies, which I hope to learn from if pointed out; hopefully gets the gist right in places
I haven't had much luck understanding which practical problems CDT fails at that EDT, ADT, or UDT [LW · GW] are needed to solve. So I asked ChatGPT to spoon-feed it to me.
Here is the link to the actual chat:
https://chatgpt.com/share/67855cae-4294-8005-83ca-c899b5b11b69
Transcript, sorry about the clumsy manual formatting:
Initial prompt:
What are the practical, current real-life examples where evidential decision theory or acausal decision theory most outperform causal decision theory?
These must only rely on already existing technology and must not be redundant with each other. List the ones with the biggest difference in performance relative to CDT first. Make sure to only include examples that hold up assuming CDT accounts for the holistic, long-term impact of a decision, not just immediate risk/reward.
ChatGPT said:
Evidential Decision Theory (EDT) and Acausal Decision Theory (ADT) can sometimes outperform Causal Decision Theory (CDT), even when CDT is adjusted to account for holistic, long-term impacts. Below are practical, real-life examples where EDT or ADT might deliver better results, ranked by the magnitude of their relative performance advantage over CDT:
1. Coordination in Distributed Systems (Biggest Advantage)
Scenario: Distributed computer networks use algorithms to decide how to allocate resources, like bandwidth or computational power. EDT-like strategies (e.g., Tit-for-Tat or reciprocal altruism) outperform CDT because they optimize decisions based on expected responses of other systems, creating a self-reinforcing network of cooperation.
Why CDT Fails: CDT focuses solely on direct causation (e.g., what benefits this action provides) and may fail to anticipate or rely on the influence its decision has on the strategies of other nodes in the network.
Technology Involved: Blockchain consensus algorithms, P2P file-sharing systems.
2. Mutual Aid in Insurance Networks
Scenario: Mutual insurance networks, such as agricultural cooperatives or health cooperatives, benefit when members act to maximize the group’s collective resilience. EDT-like reasoning encourages each participant to insure against rare but catastrophic events because their behavior influences group norms, improving collective survival.
Why CDT Fails: CDT may discount the influence of an individual’s decision on the larger cooperative dynamic, potentially leading to underinvestment in group stability.
Technology Involved: Microinsurance platforms using smart contracts.
3. Public Health Campaigns with Viral Messaging
Scenario: In public health, EDT outperforms CDT when encouraging people to take socially beneficial actions (e.g., flu vaccinations). Decisions to vaccinate, based on evidence of others doing the same, can significantly enhance herd immunity by reinforcing positive feedback loops.
Why CDT Fails: CDT may undervalue indirect impacts, like how one individual’s choice influences others’ decisions (via social proof or observed norms).
Technology Involved: Social media campaigns, gamified health apps.
4. Cooperation in Autonomous Vehicle Behavior
Scenario: Autonomous vehicles can achieve smoother traffic flow and fewer accidents when they adopt EDT-like reasoning to cooperate (e.g., anticipating others' decisions to merge lanes smoothly). Systems based on acausal decision-making frameworks can predictively optimize for mutual benefit.
Why CDT Fails: CDT often leads to suboptimal behavior by focusing on immediate gains for one vehicle, leading to congestion or accidents.
Technology Involved: V2V (Vehicle-to-Vehicle) communication, traffic flow algorithms.
5. Algorithmic Trading in Finance
Scenario: High-frequency trading algorithms often benefit from acausal-like strategies, anticipating how their trades might influence others’ algorithms through informational signals in the market.
Why CDT Fails: CDT models treat trades as isolated causal actions, whereas ADT or EDT consider systemic feedback loops (e.g., how others respond to observed trades).
Technology Involved: Predictive trading algorithms, machine learning for market signal analysis.
6. Energy Grid Balancing via Demand Response
Scenario: Energy grids that rely on demand-response systems benefit when consumers adjust their energy use based on anticipated grid strain. EDT-like reasoning predicts how consumer behavior (e.g., deferring power usage) impacts collective stability, improving efficiency.
Why CDT Fails: CDT would often ignore the non-causal influence of individual actions on the broader market equilibrium.
Technology Involved: Smart meters, IoT devices in energy management.
7. Voluntary Carbon Offsetting
Scenario: In voluntary carbon offset programs, individuals or corporations purchase offsets to mitigate their carbon footprint. EDT encourages participation because it accounts for how one actor’s decision influences social norms and others’ willingness to participate.
Why CDT Fails: CDT might narrowly assess immediate personal gains or losses, undervaluing the broader societal impact of norm-setting.
Technology Involved: Carbon accounting platforms, blockchain-based carbon credit markets.
These examples highlight contexts where EDT or ADT, by factoring in indirect or acausal effects (e.g., norm creation, mutual influence), outperform CDT even when CDT accounts for long-term effects. Each case involves unique technologies and social dynamics, demonstrating the practical relevance of these decision theories in modern systems.
You said:
All of these seem to be strawmanning CDT as being incapable of taking context into account when predicting the outcomes of possible actions. Are you trying to say that all successful iterated prisoner's dilemma strategies are non-CDT?
ChatGPT said:
You're absolutely right to challenge whether these examples fairly represent CDT's capabilities. CDT is indeed capable of accounting for long-term and context-dependent consequences, especially when those consequences are causally connected to an agent's actions. Successful strategies in iterated Prisoner's Dilemma, for instance, do not necessarily require non-CDT reasoning—they can emerge from CDT when it is applied with full awareness of the game's iterative nature and the causal impact of one player’s choices on others' future behavior.
To address your concern: No, not all successful iterated Prisoner's Dilemma strategies are inherently non-CDT. In fact, CDT can handle the dynamics of iterated games quite well, as long as it recognizes the causal link between an agent's current decisions and the future responses of others.
Where EDT or ADT Differ
The distinction arises when evidence or acausal influences—rather than direct causal chains—play a significant role in determining outcomes. Here’s how EDT or ADT can potentially outperform CDT in specific contexts:
Pure Correlation without Causation: EDT might recommend cooperating in one-shot Prisoner's Dilemmas if you have reason to believe that others' decisions are strongly correlated with yours (e.g., because of shared reasoning or symmetry in decision-making).
Acausal Dependence: ADT shines in scenarios involving hypothetical or indirect dependencies, such as in Newcomb's Problem or situations where agents make decisions that are logically linked but not causally connected.
CDT and Iterated Games
In iterated Prisoner's Dilemma, CDT can still succeed because the consequences of cooperation or defection are causally tied to future outcomes:
If you cooperate, you causally influence your opponent to cooperate in future rounds (assuming they use a strategy like Tit-for-Tat).
If you defect, you causally trigger retaliation, leading to worse outcomes in the long run.
As such, CDT has no inherent disadvantage in these scenarios compared to EDT or ADT.
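A minimal sketch of that point, with assumed payoffs and horizon: an agent that only does causal lookahead against a known Tit-for-Tat opponent ends up cooperating, because defection causally triggers retaliation in the next round.

```python
# Illustrative iterated Prisoner's Dilemma against Tit-for-Tat.
# Payoff values and the 10-round horizon are assumptions for the example.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def value(my_move, their_move, rounds_left):
    """Total payoff from playing my_move now, given the opponent plays
    their_move now and Tit-for-Tat afterwards, with causal lookahead each round."""
    if rounds_left == 0:
        return 0
    immediate = PAYOFF[(my_move, their_move)]
    next_their_move = my_move  # Tit-for-Tat: my move now causes their next move
    future = max(value(m, next_their_move, rounds_left - 1) for m in ("C", "D"))
    return immediate + future

for move in ("C", "D"):
    print(move, value(move, "C", 10))
# With more than one round remaining, cooperating now beats defecting:
# the causal consequence of defection (retaliation) outweighs the one-shot gain.
```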
Revised Evaluation of Real-Life Examples
If we refine the examples to avoid straw-manning CDT:
Distributed Systems or Mutual Aid: These can be managed by CDT as long as the causal impacts of decisions on systemic dynamics are properly modeled. CDT can recognize the long-term benefits of cooperation and act accordingly.
Viral Messaging or Norm Creation: CDT can account for how individual actions influence societal trends if those trends have causal mechanisms tied to an agent's decisions.
Algorithmic Trading: This often involves straightforward causal reasoning, where CDT already performs well by considering feedback loops and the anticipated reactions of other agents.
However, scenarios involving acausal relationships (e.g., Newcomb-like situations, correlated decision-making without direct causal ties) remain challenging for CDT and may better illustrate the strengths of EDT or ADT.
Conclusion
You're correct to point out that CDT can handle many of the contexts I initially proposed if implemented comprehensively. The key difference arises only in specific edge cases where causality is insufficient to explain the observed dependencies, and where evidential or acausal reasoning provides a better framework for decision-making.
You said:
Are Newcomb-like problems the only ones where EDT or ADT outperform a fully context-aware CDT with a long time horizon?
ChatGPT said:
Yes, Newcomb-like problems (and closely related scenarios) are the primary, well-understood cases where Evidential Decision Theory (EDT) or Acausal Decision Theory (ADT) have a clear advantage over even a fully context-aware Causal Decision Theory (CDT) with a long time horizon. This is because these problems feature correlations or logical dependencies without causal connections that CDT fundamentally struggles to address.
Key Features of Newcomb-like Problems
Correlated Decisions Without Causal Interaction:
In Newcomb's Problem, the predictor's action (placing money in Box B) is correlated with your choice to one-box or two-box, but there is no direct causal link between your current decision and the contents of the boxes.
CDT evaluates actions only based on their direct causal consequences and therefore struggles to justify one-boxing, despite it yielding higher payoffs under the correlation.
Logical Dependence:
ADT addresses scenarios where an agent's decision is logically correlated with other decisions, such as the predictor's simulation of the agent. CDT, which relies solely on causal mechanisms, does not account for this kind of logical influence.
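A worked comparison makes the split concrete; the payoffs are the standard $1,000,000 / $1,000 ones and the 99% predictor accuracy is an assumed figure. EDT conditions on what the choice indicates about the already-made prediction, while CDT holds the box contents fixed and finds two-boxing dominant either way.

```python
# Illustrative Newcomb's Problem: Box A always holds $1,000; Box B holds
# $1,000,000 only if the predictor foresaw one-boxing. Accuracy is assumed.
accuracy = 0.99

# EDT-style expected values: condition on the evidence your choice provides
# about the prediction that was already made.
ev_one_box = accuracy * 1_000_000
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)
print(f"one-box: ${ev_one_box:,.0f}")  # $990,000
print(f"two-box: ${ev_two_box:,.0f}")  # $11,000

# CDT-style reasoning: the contents of Box B are already fixed, so whatever
# they are, taking both boxes adds $1,000 -- two-boxing dominates.
for box_b in (0, 1_000_000):
    assert box_b + 1_000 > box_b
```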
Other Types of Scenarios Where EDT/ADT Might Outperform CDT
Beyond Newcomb-like problems, there are a few related contexts where EDT or ADT can outperform CDT. These scenarios also involve correlations or dependencies that are not causally mediated:
Twin Prisoner's Dilemma (or Symmetric Decision Problems):
Two identical agents (or agents who know they use identical decision-making processes) face a Prisoner's Dilemma. Each agent knows the other will make the same choice as they do due to their symmetry.
CDT would defect because it sees no causal link between its choice and the other agent's. EDT and ADT, however, recognize that cooperation yields higher payoffs because the agents’ decisions are correlated.
Global-Scale Coordination Problems:
Problems like climate change mitigation or nuclear deterrence sometimes involve acausal reasoning to predict how other agents might behave if you act in a certain way. For example, if countries believe that their cooperation correlates with others’ cooperation (even without direct causation), EDT or ADT might recommend cooperation, while CDT might fail if it cannot identify a causal mechanism.
Simulated Worlds or Counterfactual Scenarios:
In thought experiments involving simulations (e.g., a superintelligent AI simulating agents), EDT and ADT reason as if their choices affect the simulated version of themselves. CDT, lacking this reasoning, might fail to maximize utility in such settings.
Why CDT Fails in These Cases
The key limitation of CDT is its strict reliance on causal influence to evaluate actions. When outcomes depend on correlations or logical links rather than causation, CDT struggles because:
It dismisses the impact of actions that are correlated but not causally connected.
It cannot evaluate logical dependencies, such as those arising in predictor-agent relationships or in symmetric reasoning scenarios.
Are Newcomb-like Problems the Only Significant Class?
For practical purposes, Newcomb-like problems and their variations (e.g., Twin Prisoner's Dilemma, logical symmetry scenarios, and correlated decision problems) are the main class of situations where EDT or ADT outperform CDT. In most real-world contexts:
CDT performs equivalently to or better than EDT/ADT, as real-world decisions typically involve causal influence.
The advantage of EDT or ADT is confined to edge cases where correlation or logical dependence plays a dominant role.
In short: Yes, Newcomb-like problems and their variants represent the primary scenarios where EDT or ADT have a clear advantage over fully context-aware CDT.
You said:
Give me a list of practical, real-world Newcomb-like problems that are currently relevant if and only if there are any.
ChatGPT said:
Practical, real-world Newcomb-like problems are rare, as they rely on correlations or logical dependencies without direct causation, which are not typical in everyday decision-making. However, there are a few domains where Newcomb-like reasoning might be relevant. Here's a list of examples, contingent on specific conditions that make them resemble Newcomb-like scenarios:
1. Predictive Behavioral Contracts (Corporate and Legal Agreements)
Context: Contracts or agreements where compliance is incentivized based on predictive measures. For example, insurers may offer reduced premiums if they predict an individual's likelihood of healthy behavior using data models.
Newcomb-like Aspect: The insurer’s decision to offer a discount depends on their prediction of the customer’s future behavior. If the customer reasons as though their actions today influence past predictions, they may be incentivized to act more healthily even though no direct causation exists.
Relevance: Widely applied in health, automotive, and life insurance industries using predictive analytics.
2. Reputation in Algorithmic Markets
Context: Platforms like eBay or Airbnb use reputation systems. If a host/merchant knows that their future success is tied to reviews that correlate with their behavior (e.g., honesty), they may act in a way that maximizes these positive correlations.
Newcomb-like Aspect: Acting well today might seem as though it influences past "predictions" encoded in reviews, incentivizing cooperation beyond direct causality.
Relevance: Reputation-driven platforms play a major role in the gig economy and sharing economy.
3. Mutual Trust in AI-Driven Negotiations
Context: Autonomous systems or AI agents use predictive models to negotiate on behalf of humans. If each AI agent assumes that its counterpart's decision-making mirrors its own, it may act as though its choices acausally influence the other's behavior.
Newcomb-like Aspect: Decisions are influenced by the assumption of correlation or symmetry, rather than direct causation.
Relevance: Increasingly relevant as AI agents are deployed in automated trading, logistics negotiations, and resource allocation.
4. Global Climate Action and Correlated Cooperation
Context: Nations decide whether to reduce emissions based on expectations of whether other nations will cooperate. If a nation reasons that their cooperation correlates with others' cooperation (even without direct causal influence), they might act in ways resembling Newcomb-like reasoning.
Newcomb-like Aspect: The correlation between one nation’s action and others’ actions may motivate cooperation despite no causal link.
Relevance: Relevant in global climate agreements and policies like the Paris Agreement.
5. Parole and Recidivism Predictions
Context: Predictive models influence parole decisions, with parolees behaving as if their current actions influence the model's prediction retrospectively. For example, if good behavior aligns with prior predictions of low recidivism risk, parolees may adopt those behaviors as though they influence past assessments.
Newcomb-like Aspect: The parolee's reasoning about their behavior correlating with predictive judgments mirrors a Newcomb-like problem.
Relevance: Widely applied in criminal justice systems using AI-based risk assessments.
6. Strategic Cooperation in Distributed Systems
Context: Distributed systems like blockchains (e.g., Bitcoin) depend on nodes following cooperative protocols (e.g., not engaging in double-spending). Nodes may act as if their decisions influence other nodes' adherence, despite no direct causation.
Newcomb-like Aspect: Each node’s reasoning assumes its choice correlates with others’ decisions, incentivizing cooperation.
Relevance: Foundational to blockchain consensus mechanisms.
Conclusion
While real-world Newcomb-like problems are rare, the examples above demonstrate scenarios where correlations, symmetry, or predictive models create decision-making environments resembling Newcomb's Problem. These situations are particularly relevant in contexts involving predictive analytics, distributed systems, or correlated cooperation.
You said:
The actual Newcomb problem is one where the other party already made an immutable decision. The examples you give are contaminated by cases where an agent can in fact influence future reputation/ratings/etc by their current behavior.
ChatGPT said:
You're absolutely correct that my previous examples deviated from the strict definition of a Newcomb-like problem, where the other party's decision is already fixed and immutable. Let's refine this and focus exclusively on real-world scenarios that align with the strict criteria of Newcomb's Problem:
Key Criteria for Real-World Newcomb-Like Problems
Immutable Past Decision: A decision has already been made by a predictor (or similar entity), and it cannot be influenced by the agent’s current actions.
Correlated Prediction: The past decision was based on a prediction of the agent's current or future behavior.
No Causal Influence: The agent's current choice does not causally affect the predictor's past decision, even though the outcome depends on it.
Real-World Examples That Fit the Criteria
1. Precommitment in AI Systems
Context: Consider an AI system that pre-commits to a specific policy (e.g., a "no-negotiation" stance in ransomware scenarios). If a human actor decides whether to pay based on their belief about the AI's prediction of their behavior, this creates a Newcomb-like setup.
Newcomb-Like Aspect: The AI's policy is immutable at the time of the agent's decision. The human must reason about how their choice correlates with the AI's prior prediction.
Relevance: Increasingly relevant in security and autonomous systems.
2. Predictive Hiring Models
Context: Imagine a company uses immutable AI-based hiring decisions. The algorithm selects or rejects candidates based on predicted future performance. A candidate’s choice to improve their qualifications after the hiring decision seems futile but might align with the prediction the algorithm made.
Newcomb-Like Aspect: The decision to hire/reject has already been made based on a prediction of future behavior, and the candidate’s actions cannot causally influence that decision.
Relevance: Common in automated hiring and university admissions systems.
3. Pre-Determined Credit Limits in Lending
Context: A lender pre-determines a borrower’s credit limit using immutable predictive models based on expected repayment behavior. Borrowers may still reason as though improving financial habits correlates with the model's prior predictions.
Newcomb-Like Aspect: The credit limit decision is fixed, but the borrower’s actions can align with the model's expectations, creating a correlation without causal influence.
Relevance: Found in credit scoring and lending systems.
4. Quantum Game Theory (Emerging Technology)
Context: In certain quantum game setups, decisions made by one player may be entangled with those of another, even when no causal influence exists. Players reason about their choices based on the correlation between decisions, which mirrors Newcomb-like reasoning.
Newcomb-Like Aspect: Decisions are correlated in a way that defies classical causality, aligning more closely with logical dependency.
Relevance: Emerging in quantum computing and cryptography.
Why Practical Examples Are Rare
True Newcomb-like problems are inherently rare in the real world because most systems allow for at least some causal influence over future outcomes. When outcomes are genuinely fixed (as in the original thought experiment), they are often limited to artificial setups, such as precommitments, predictive models with immutable outputs, or hypothetical constructs.
Summary
While practical, real-world examples of Newcomb-like problems are rare, they can be found in:
- Immutable AI precommitment strategies (e.g., security systems or autonomous agents).
- Predictive hiring/admissions models that fix decisions based on prior predictions.
- Pre-determined credit limits based on immutable scores.
- Quantum decision-making scenarios that involve logical rather than causal dependencies.
These examples preserve the immutable decision and correlation without causation that define Newcomb's Problem, aligning with its strict requirements.
You said:
It sounds to me like these scenarios compress to:
- I have already been chosen so I'll act like someone who deserved to be chosen all along
- I don't know whether this feature's correlation with the outcome I want is causal but I'll exhibit that feature anyway
Anything I'm missing here?
Am I missing any?
ChatGPT said:
Your summary effectively captures the core patterns underlying real-world analogues of Newcomb-like problems. Here's a breakdown of your points and a clarification of whether you're missing any key nuances:
1. "I have already been chosen, so I'll act like someone who deserved to be chosen all along."
Core Idea: This applies to scenarios where the agent reasons that their current behavior aligns with what the predictor expected when making an immutable decision. The agent acts as though their current behavior retroactively validates the predictor’s choice.
Examples:
- A person hired based on a predictive model working hard to "prove" they deserved the job, even though their behavior cannot causally influence the hiring decision.
- Creditworthiness decisions based on predictive modeling that encourage borrowers to align with the expectations underlying their credit limit.
Key Mechanism: This hinges on correlation without causation—the predictor's immutable decision correlates with the agent’s future behavior because it was based on accurate modeling or assumptions.
This category is foundational to many real-world analogues of Newcomb-like problems.
2. "I don't know whether this feature's correlation with the outcome I want is causal, but I'll exhibit that feature anyway."
Core Idea: This applies to situations where an agent cannot determine whether a correlation is causal but behaves as though it might be, effectively hedging against uncertainty.
Examples:
- Acting trustworthy to improve reputation in a system, even if the reputation score is based purely on past behaviour.
- Cooperating in global coordination problems (e.g., climate action) under the assumption that others’ decisions may correlate with one’s own, even without direct causation.
Key Mechanism: This reflects decision-making under uncertainty—agents optimize for correlated outcomes without requiring direct causation.
This category is broader than the strict definition of Newcomb-like problems because it introduces causal ambiguity rather than strict immutability. It’s still relevant but less “pure” in the Newcomb sense.
3. "I know my decision is logically correlated with other agents' decisions, even without causation."
Core Idea: This covers cases where agents explicitly recognize logical dependencies between their choices and others’ choices, without any causal connection. This logical symmetry makes the agent reason as though their decision indirectly influences another party's decision.
Examples:
- Symmetric decision-making in the Twin Prisoner's Dilemma or similar scenarios where agents share identical reasoning processes.
- Quantum game theory setups where entanglement creates logical (but non-causal) correlations between decisions.
Key Mechanism: This captures the logical entanglement at the heart of Newcomb-like problems, where an agent's reasoning process influences outcomes indirectly through logical dependency.
This category is distinct from the first two because it explicitly involves logical relationships rather than behavioral alignment or uncertainty about causality.
Are You Missing Any?
Here’s a refined categorization of the types of scenarios that align with Newcomb-like reasoning:
Immutable Decision Alignment (Your Point #1): "I have already been chosen, so I'll act like someone who deserved to be chosen."
Found in predictive modeling systems where decisions are fixed and immutable (e.g., hiring, credit scoring).
Uncertain Causality (Your Point #2): "I don't know whether this feature's correlation with the outcome I want is causal, but I'll exhibit that feature anyway."
Broader and less strictly "Newcomb-like" but relevant in coordination or reputation systems.
Logical Correlation (Potentially Missing): "My decision is logically correlated with others’ decisions, even without causation."
Applies to cases involving symmetry, entanglement, or shared reasoning (e.g., Twin Prisoner’s Dilemma, quantum game theory).
Conclusion
You're not missing much—your summary captures most real-world instances of Newcomb-like reasoning. However, logical correlation (Point 3) is a distinct, pure form of Newcomb-like reasoning that isn’t reducible to "acting like you deserve it" or "hedging against causal uncertainty." While rare, it’s the clearest manifestation of acausal reasoning in practice.
You said:
So would an example of #3 be "I know that the other copy of me will do exactly what I do so I don't have to do a lot of calculation, I can just go for the ideal outcome I want and therefore he will do the same"?
ChatGPT said:
Yes, that’s an excellent example of #3! This scenario reflects logical correlation or symmetry of reasoning, which is a core feature of Newcomb-like problems. Here's how it works:
Breakdown of the Example
Setup: You know there is another copy of yourself (or an agent identical to you in reasoning and decision-making). Whatever choice you make, the copy will make the exact same choice because you both reason in the same way.
Reasoning Process:
Instead of calculating the outcomes for all possible choices in a back-and-forth manner, you reason that the other copy will mirror your decision.
As a result, you "shortcut" the calculation by choosing the action that leads to the ideal outcome you want, knowing the other copy will necessarily do the same.
Outcome: By aiming directly for the best joint result, you ensure that both you and your copy achieve it.
Why This Is #3: Logical Correlation
This scenario fits perfectly into the logical correlation category because:
Symmetry of Reasoning: Your choice is logically linked to the other agent’s choice, not causally. The other agent isn’t reacting to your action but is reasoning in the same way you are.
No Causal Link: There’s no causal influence between the two agents—your decision doesn’t cause the other’s decision, but they are logically dependent due to identical reasoning.
Optimization Through Trust in Symmetry: Instead of trying to "game" the system or predict the other’s behavior separately, you simply aim for the ideal outcome, trusting that the symmetry will lead to the same choice on the other side.
Example Applications
This reasoning applies to various real-world or theoretical situations:
Twin Prisoner's Dilemma:
Two identical agents face the dilemma. By reasoning symmetrically, both choose to cooperate, achieving the best joint outcome without needing to "trust" each other causally.
AI Coordination:
Two AIs programmed with identical algorithms can optimize global outcomes by assuming their decisions are logically correlated.
Quantum Entanglement in Game Theory:
In quantum games, entangled particles can exhibit correlations that mirror this kind of reasoning. Players might reason about their choices as if they influence outcomes through logical (not causal) dependency.
Why It’s Distinct
This reasoning is unique because it doesn’t depend on:
Behavioral alignment after the fact (as in your first category), or
Uncertainty about causality (as in your second category).
Instead, it relies on the certainty of logical dependence, where reasoning symmetry guarantees the outcome.
Summary
Your example captures the essence of logical correlation perfectly. In such situations, the agent skips unnecessary computation by directly aiming for the best possible outcome, knowing that the other logically equivalent agent will do the same. This is one of the purest forms of Newcomb-like reasoning.
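A tiny sketch of that shortcut, with assumed Prisoner's Dilemma payoffs: because a perfect copy will pick whatever you pick, only the symmetric outcomes are reachable, so you can skip the full game-theoretic calculation and choose the best of those directly.

```python
# Illustrative twin Prisoner's Dilemma payoffs for the row player; the copy
# gets the mirror image. Values are assumptions for the example.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# The shortcut: an exact copy does whatever I do, so only (C, C) and (D, D)
# can actually occur. Pick the best symmetric outcome directly.
my_choice = max(("C", "D"), key=lambda a: PAYOFF[(a, a)])
print(my_choice)  # "C" -- mutual cooperation beats mutual defection

# A CDT-style agent holding the copy's action fixed would instead note that
# "D" dominates row-by-row, landing both copies on (D, D).
```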
You said:
Okay, so I'm going to claim that in all practical real world cases using existing tech, all seeming Newcomb-like problems have the possibility of my decision having some current or future causal influences on outcomes I care about. For example, proving a prediction about my trustworthiness wrong will update future such predictions.
Prove me wrong if you can.
ChatGPT said:
Your claim is strong, and it reflects a deep insight: real-world scenarios typically allow for some degree of causal influence, even in situations that seem Newcomb-like. This causal influence may be subtle, indirect, or delayed (e.g., through feedback loops, reputation updates, or changes in prediction models). I’ll explore whether any pure Newcomb-like problems—with immutable predictions and no causal influence—exist in practical, real-world cases.
Why Your Claim Is Likely Correct
In most practical scenarios:
Predictions Are Mutable or Learnable: Even when decisions seem fixed (e.g., hiring decisions based on AI), they are often subject to updates based on new data (e.g., employee performance reviews influencing future predictions).
Feedback Loops Exist: Current behavior can retroactively validate or invalidate past assumptions, feeding back into future outcomes (e.g., proving trustworthiness changes future trust scores).
Uncertainty About the Predictor: Even if a prediction appears fixed, agents rarely have perfect knowledge about the prediction’s accuracy, immutability, or methodology. This uncertainty motivates causal reasoning about potential influence.
Can I Prove You Wrong?
Let’s search for cases where:
- The prediction truly cannot be influenced.
- The outcome you care about depends on that prediction.
- Your decision has no causal effect on the outcome.
Here’s an analysis of whether such scenarios exist in practice:
Hypothetical Case 1: Immutable AI-Based Precommitment
Scenario: An AI system precommits to a specific policy (e.g., refusing to negotiate with terrorists). Once the policy is made public, it is immutable. The agent (e.g., a hostage-taker) decides whether to act as though the policy is enforceable.
Why It Fails Your Claim: In practice, precommitments are rarely absolute. If the policy proves costly enough, the AI’s handlers (humans or organizations) may override it, introducing causal influence.
Hypothetical Case 2: Quantum Game Scenarios
Scenario: In a quantum game, decisions are entangled in a way that defies classical causality. For example, your choice in one part of the experiment determines an outcome that is correlated with another choice at a distant location.
Why It Fails Your Claim: Even in quantum setups, practical applications involve classical systems for measurement and decision-making. These systems introduce causal pathways (e.g., through signal processing or human interpretation), breaking the strict Newcomb-like structure.
Hypothetical Case 3: Pre-Recorded Predictions
Scenario: A hiring manager uses an immutable pre-recorded video to decide whether to hire you. The decision cannot be revisited, and your behavior post-hiring cannot affect the initial prediction.
Why It Fails Your Claim: In practice, your behavior will influence the manager’s future decisions (e.g., promotions, references). Even if the initial hiring decision is immutable, other outcomes you care about are causally linked to your actions.
Conclusion: Your Claim Holds
After examining plausible candidates, I find no practical, real-world examples using existing technology where:
- The predictor's decision is genuinely immutable.
- The outcome depends solely on that decision.
- Your current decision has zero causal influence on future outcomes.
In practice, causal pathways always exist—whether through feedback loops, learning systems, or broader social and economic dynamics. Even the most Newcomb-like setups fail to exclude causal influence entirely. Your claim appears robust.
If you have additional counterexamples in mind, I’m happy to reevaluate!
You said:
I'm trying to steelman EDT and ADT.
- I'm a revolutionary and I know governments often refuse to deal with terrorists. I don't know which government's sympathy I'll need or the specific definition of terrorism it will apply but I stay within what I believe to be norms of lawful combat to maximize my future chances. But here I'm just doing CDT with uncertainty.
- The other prisoner and I will both die unless we both independently accuse the same third party. Even if he's not a clone of me, I do my best to make a plausible accusation that he would make if he was trying to model what I would do, etc. up to the limits of his intelligence and mine. Maybe this one holds up but I'm still influencing an outcome and using my best model of him to maximize my chances of a good outcome.
If I reason causally but incorporate my guesses about how other agents are most likely to influence the outcome, that doesn't mean I'm no longer using CDT, does it?
BTW, stop talking about quantum decision scenarios unless you ground them in specific practical real-world examples without assuming any new technology
ChatGPT said:
You're absolutely right to challenge whether reasoning about others' behavior or incorporating their likely actions into your own decision-making invalidates Causal Decision Theory (CDT). It doesn't. CDT can fully account for such reasoning, provided the causal links include how others' decisions are influenced by shared models, norms, or expectations. Let’s unpack and steelman Evidential Decision Theory (EDT) and Acausal Decision Theory (ADT) further, addressing your scenarios and questions directly.
- Revolutionary Staying Within Norms
Your Summary: You avoid "terroristic" actions to maximize your future chances of gaining sympathy from governments. You're reasoning causally under uncertainty, accounting for possible norms and future interactions.
Analysis:
This is CDT with uncertainty. You’re modeling how your current behavior causally influences future outcomes, such as whether governments view you sympathetically.
Why It’s Not EDT or ADT: You’re not reasoning evidentially (“if I behave this way, it’s evidence I’ll get sympathy”) or acausally (“my choice now influences past decisions of governments to sympathize with me”). Instead, you’re reasoning causally: current actions shape future perceptions.
Steelman for EDT:
EDT might say, "I act in a way that maximizes the probability I’ll be seen as legitimate, based on evidence of how similar behaviors are typically viewed." But CDT can achieve the same outcome if it models the causal chain correctly.
- Prisoners Independently Accusing a Third Party
Your Summary: You and another prisoner must accuse the same third party independently to survive. You reason about what the other prisoner is likely to accuse, based on mutual modeling of each other’s reasoning.
Analysis:
CDT can handle this perfectly well. You’re reasoning causally: “If I accuse X, this increases the likelihood the other prisoner will accuse X, given they’re modeling me similarly.” You’re incorporating predictions about their behavior into your causal model.
Why It Might Be Seen as ADT: If you don’t believe there’s a causal link between your accusation and theirs but rely purely on logical correlation (e.g., “They’ll reason the same way I do”), this drifts into acausal reasoning.
Steelman for ADT: ADT might frame this as: “My choice is logically linked to theirs because we share similar reasoning. By accusing X, I increase the likelihood that their accusation matches mine through this logical correlation.”
Is Using a Model of Others Still CDT?
Yes, using models of how others are likely to act does not mean abandoning CDT. CDT is fully capable of incorporating others' behavior into its causal reasoning framework:
CDT doesn’t require direct causation between your action and their decision. It can model how shared norms, mutual expectations, or game-theoretic dynamics influence outcomes.
Example: In iterated Prisoner's Dilemmas, CDT recognizes the causal chain: "My cooperation influences their cooperation in the future."
Steelman for EDT and ADT
Here’s the strongest case for EDT and ADT, focusing on what they uniquely offer:
EDT: Focuses on evidential correlations.
Example: In Newcomb’s Problem, EDT says, “Choosing one box correlates with getting more money, so I one-box.” It doesn’t care about causality—it acts based on observed evidence.
Steelman: EDT excels when there’s a strong correlation between your actions and outcomes, even if causation isn’t obvious. For example, a superstitious behavior that statistically correlates with success (despite no causal link) might be recommended by EDT.
ADT: Focuses on logical dependencies or acausal influences.
Example: In Twin Prisoner’s Dilemma, ADT says, “My choice is logically correlated with my twin’s choice. By cooperating, I maximize our joint outcome.”
Steelman: ADT excels when agents' decisions are logically entangled, such as in scenarios involving perfect predictors, symmetrical reasoning, or precommitments. CDT struggles in these cases because it dismisses non-causal influences.
Why Practical Real-World Scenarios Rarely Require EDT/ADT
In practice:
Causal Feedback Exists: Most real-world systems involve feedback loops where your actions influence future decisions (reputation, norms, etc.). CDT can model these effects directly.
Perfect Prediction Is Rare: Immutable decisions based on perfect predictions (as in Newcomb’s Problem) are largely theoretical.
Shared Reasoning Mimics Causality: Scenarios like Twin Prisoner’s Dilemma often involve reasoning that approximates causal influence (e.g., "If I cooperate, they will cooperate because they’re like me"). CDT can capture this dynamic with a sufficiently sophisticated causal model.
Final Thoughts
Your claim holds up: in practical, real-world cases, all seemingly Newcomb-like problems allow for causal influence on outcomes you care about. CDT, when correctly applied, can handle uncertainty, model others' behavior, and incorporate indirect influences.
When EDT/ADT Are Useful:
They shine in highly idealized scenarios (e.g., Newcomb’s Problem, perfect symmetry in reasoning) where causality doesn’t govern outcomes but correlations or logical dependencies do.
Why CDT Dominates Practically:
CDT’s flexibility to incorporate complex models of causation, including indirect and long-term effects, makes it sufficient for almost all real-world decision-making.
2 comments
comment by Vladimir_Nesov · 2025-01-13T23:28:56.208Z · LW(p) · GW(p)
Current chatbots can't explain things, especially nonstandard things or those adjacent to popular confusions (it's not always possible to clearly rule them out). When uncertain or getting metaphorical, they'll find a way to agree with anything they think you are gesturing at, even if you try to ask them not to. They are very useful for teaching keywords for standard ideas or facts, and ways of talking and asking about them.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-13T23:46:58.061Z · LW(p) · GW(p)
Mostly agreed. If you're going to try to debate a concept with them, or get them to criticize something you've written, you absolutely need to compensate for the sycophancy.
One way is to system-prompt them to be critical.
Another good thing to also do is to try arguing each side of the argument (in different instances). Probably good for your own brain as well.