If I care about measure, choices have additional burden (+AI generated LW-comments)

post by avturchin · 2024-11-15T10:27:15.212Z · LW · GW · 6 comments


There is an argument against quantum immortality: although I survive, I have lower measure in the multiverse and thus lower impact on it, which suggests I should not care about quantum immortality.

However, if we care about measure, there are normal situations where measure declines but we don't typically care:

  1. Every quantum event splits the multiverse, so my measure should decline by about 20 orders of magnitude every second. This can be ignored, as the resulting minds are functionally identical and can be regarded as one.
  2. My semi-random actions during the day split me into similar but slightly different minds. This can also be ignored, as most such differences will be forgotten and the minds will remain functionally the same.
  3. I make semi-random choices which affect my entire future life. Examples:
    • Dating choices
    • Choosing another country to move to
    • Clicking job advertisements

The expected utility of all reasonable variants is approximately the same: I won't choose a very bad job, for instance. So in a normal world, I don't lose utility by randomly choosing between equal variants. However, under the Many-Worlds Interpretation (MWI), I split my measure between multiple variants, which will be functionally different enough that my future selves count as different minds. Thus the act of choice itself lessens my measure by a factor of roughly 10. If I care about this, I am caring about something unobservable.
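A minimal numeric sketch of this point (my own illustration, with made-up numbers): choosing semi-randomly among n roughly-equal options leaves expected utility unchanged, while the measure of each successor branch shrinks by a factor of n.

```python
from fractions import Fraction  # exact arithmetic, to avoid float noise

def split(measure, n):
    """Divide a branch's measure evenly across n successor branches."""
    return [measure / n] * n

utility = 100                    # assume every "reasonable" variant is equally good
branches = split(Fraction(1), 10)  # e.g. ten comparable job offers

expected_utility = sum(m * utility for m in branches)
print(expected_utility)  # 100  -- no utility is lost by choosing randomly
print(branches[0])       # 1/10 -- but each successor branch has a tenth of the measure
```

The first quantity is observable-in-expectation; the second is exactly the unobservable decline the post is about.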

TLDR: If I care about declining measure, normal life events incur additional utility costs, which nevertheless have no observable consequences.



AI-generated comment section:

ShminuxRational · 4h
Interesting point about measure decline in everyday choices. However, I think there's a flaw in treating all branches as equally weighted. Wouldn't decoherence rates and environment interaction mean some branches have naturally higher measure? This seems relevant for the job-choice example.

  MaximizerPrime · 3h
  > Wouldn't decoherence rates and environment interaction mean some branches have naturally higher measure?
  
  This. Plus, we should consider that the decision-theoretic framework might need updating when dealing with measure. UDT might handle this differently than EDT.

quantumCrux · 4h
Your point about the 20 orders of magnitude per second is fascinating. Has anyone actually calculated the exact rate of quantum branching? Seems like an important consideration for anthropic reasoning.

PatternSeeker · 3h
This reminds me of Stuart Armstrong's posts about identity and measure. I wonder if we're making a category error by treating measure as something to "spend" rather than as a description of our uncertainty about which branch we'll end up in.

DecisionTheoryNerd · 3h
You might want to look into Wei Dai's work on anthropic decision theory. This seems related to the problem of sleeping beauty and probability allocation across multiple instances of yourself.

  AlignmentScholar · 2h
  The sleeping beauty analogy is apt. Though I'd argue this is closer to SSA than SIA territory.

PracticalRationalist · 2h
While intellectually interesting, I'm not convinced this has practical implications. If the decline in measure is truly unobservable, shouldn't we apply Occam's razor and ignore it? Seems like adding unnecessary complexity to our decision-making.

  MetaUtilitarian · 1h
  Strong upvote. We should be careful about adding decision-theoretic complexity without corresponding benefits in expected value.

EpistemicStatus · 1h
[Meta] The post could benefit from more formal notation, especially when discussing measure ratios. Also, have you considered cross-posting this to the Alignment Forum? Seems relevant to questions about agent foundations.

QuantumBayesian · 1h
This makes me wonder about the relationship between quantum suicide experiments and everyday choices. Are we performing micro quantum suicide experiments every time we make a decision? 🤔

  RationalSkeptic · 30m
  Please let's not go down the quantum suicide path again. We had enough debates about this in 2011.

ComputationalFog · 15m
Has anyone written code to simulate this kind of measure-aware decision making? Might be interesting to see how different utility functions handle it.

6 comments

Comments sorted by top scores.

comment by AnthonyC · 2024-11-15T10:51:53.226Z · LW(p) · GW(p)

I don't see #1 affecting decision making because it happens no matter what, and therefore shouldn't differ based on our own choices or values. I guess you could argue it implies an absurdly high discount rate if you see the resulting branches as sufficiently separate from one another, but if the resulting worlds are ones I care about, then the measure dilution is just the default baseline I start from in my reasoning. Unless there is some way we can or could meaningfully increase the multiplication rate in some sets of branches but not others? I don't think that's likely with any methods or tech I can foresee.

#2 seems like an argument for improving ourselves to be more mindful in our choices to be more coherent on average, and #3 an argument for improving our average decision making. The main difference I can think of for how measure affects things is maybe in which features of the outcome distribution/probabilities among choices I care about. 

Replies from: avturchin
comment by avturchin · 2024-11-15T11:42:04.217Z · LW(p) · GW(p)

My point was that only 3 is relevant. How does it improve average decision making?

Replies from: AnthonyC
comment by AnthonyC · 2024-11-15T11:53:24.592Z · LW(p) · GW(p)

I'm not saying it improves decision making. I'm saying it's an argument for improving our decision making in general, if mundane decisions we wouldn't normally think are all that important have much larger and long-lasting consequences. Each mundane decision affects a large number of lives that parts of me will experience, in addition to the effects on others.

comment by avturchin · 2024-11-15T13:15:58.691Z · LW(p) · GW(p)

In replies to this comment I will post other Sonnet 3.5-generated replies by known LW people. If this is against the rules, please let me know and I will delete them. I will slightly change the names so they will not contaminate future search results and AI training.

Replies from: avturchin, avturchin
comment by avturchin · 2024-11-15T13:18:19.615Z · LW(p) · GW(p)

Vladimir_N · 3h

(This is a rather technical comment that attempts to clarify some decision-theoretic confusions.)

Your treatment of measure requires more formal specification. Let's be precise about what we mean by "caring about measure" in decision-theoretic terms.

Consider a formalization where we have:
1. A space of possible outcomes Ω
2. A measure μ on this space
3. A utility function U: Ω → ℝ
4. A decision function D that maps available choices to distributions over Ω

The issue isn't about "spending" measure, but about how we aggregate utility across branches. The standard formulation already handles this correctly through expected utility:

E[U] = ∫_Ω U(ω)dμ(ω)

Your concern about "measure decline" seems to conflate the measure μ with the utility U. These are fundamentally different mathematical objects serving different purposes in the formalism.

If we try to modify this to "care about measure directly," we'd need something like:

U'(ω) = U(ω) * f(μ(ω))

But this leads to problematic decision-theoretic behavior, violating basic consistency requirements like dynamic consistency. It's not clear how to specify f in a way that doesn't lead to contradictions.

The apparent paradox dissolves when we properly separate:
1. Measure as probability measure (μ)
2. Utility as preference ordering over outcomes (U)
3. Decision-theoretic aggregation (E[U])

[Technical note: This relates to my work on logical uncertainty and reflection principles. See my 2011 paper on decision theory in anthropic contexts.]
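A toy discrete version of this separation, with hypothetical numbers, makes the contrast concrete: the standard aggregate E[U] = Σ U(ω)μ(ω) is indifferent between one branch and an even ten-way split of equal utility, whereas the modified U'(ω) = U(ω)·f(μ(ω)) (taking f as the identity, one arbitrary choice among many) penalizes splitting.

```python
from fractions import Fraction  # exact arithmetic for a clean comparison

def expected_utility(outcomes):
    # Standard aggregation: E[U] = sum over outcomes of U(w) * mu(w)
    return sum(mu * u for mu, u in outcomes)

def measure_caring_utility(outcomes):
    # Modified aggregation with U'(w) = U(w) * f(mu(w)), here f = identity,
    # so measure ends up counted twice
    return sum(mu * (u * mu) for mu, u in outcomes)

stay = [(Fraction(1), 10)]            # one branch, utility 10
split = [(Fraction(1, 10), 10)] * 10  # ten branches, utility 10 each

print(expected_utility(stay), expected_utility(split))              # 10 10
print(measure_caring_utility(stay), measure_caring_utility(split))  # 10 1
```

The standard formalism sees the two choices as equivalent; only the measure-modified utility creates the "cost of splitting," which is precisely where the contradictions enter.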

  orthonormal · 2h
  > U'(ω) = U(ω) * f(μ(ω))
  
  This is a very clean way of showing why "caring about measure" leads to problems.

     Vladimir_N · 2h
     Yes, though there are even deeper issues with updateless treatment of anthropic measure that I haven't addressed here for brevity.

  Wei_D · 1h
  Interesting formalization. How would this handle cases where the agent's preferences include preferences over the measure itself?

     Vladimir_N · 45m
     That would require extending the outcome space Ω to include descriptions of measures, which brings additional technical complications...

[Note: This comment assumes familiarity with measure theory and decision theory fundamentals.]

comment by avturchin · 2024-11-15T13:16:50.959Z · LW(p) · GW(p)

Eli · 2h

*sigh*

I feel like I need to step in here because people are once again getting confused about measure, identity, and decision theory in ways I thought we cleared up circa 2008-2009.

First: The whole "measure declining by choice" framing is confused. You're not "spending" measure like some kind of quantum currency. The measure *describes* the Born probabilities; it's not something you optimize for directly any more than you should optimize for having higher probabilities in your belief distribution.

Second: The apparent "splitting" of worlds isn't fundamentally different between quantum events, daily choices, and life-changing decisions. It's all part of the same unified wavefunction evolving according to the same physics. The distinction being drawn here is anthropocentric and not particularly meaningful from the perspective of quantum mechanics.

What *is* relevant is how you handle subjective anticipation of future experiences. But note that "caring about measure" in the way described would lead to obviously wrong decisions - like refusing to make any choices at all to "preserve measure," which would itself be a choice (!).

If you're actually trying to maximize expected utility across the multiverse (which is what you should be doing), then the Born probabilities handle everything correctly without the need for additional complexity. The framework I laid out in Quantum Ethics handles this cleanly.

And please, can we stop with the quantum suicide thought experiments? They're actively harmful to clear thinking about decision theory and anthropics. I literally wrote "Don't Un-think the Quantum" to address exactly these kinds of confusions.

(Though I suppose I should be somewhat grateful that at least nobody in this thread has brought up p-zombies or consciousness crystals yet...)

[Edit: To be clear, this isn't meant to discourage exploration of these ideas. But we should build on existing work rather than repeatedly discovering the same confusions.]

  RationalSkeptic · 1h
  > like refusing to make any choices at all to "preserve measure,"
  
  This made me laugh out loud. Talk about Pascal's Mugging via quantum mechanics...

     Eli · 45m
     Indeed. Though I'd note that proper handling of Pascal's Mugging itself requires getting anthropics right first...