If I care about measure, choices have additional burden (+AI generated LW-comments)

post by avturchin · 2024-11-15T10:27:15.212Z · LW · GW · 11 comments


There is an argument against quantum immortality: even though I survive, I have lower measure in the multiverse and thus less impact on it, which suggests I should not care about quantum immortality.

However, if we care about measure, there are normal situations where measure declines but we don't typically care:

  1. Every quantum event splits the multiverse, so my measure should decline by 20 orders of magnitude every second. This may be ignored as resulting minds are functionally the same and can be regarded as one.
  2. My semi-random actions during the day split me into similar but slightly different minds. This may also be ignored as most such differences will be forgotten, and the minds will be functionally the same.
  3. I make semi-random choices which affect my entire future life. Examples:
    • Dating choices
    • Choosing another country to move to
    • Clicking job advertisements

The expected utility of all reasonable variants is approximately the same - I won't choose a very bad job, for instance. So in a normal world, I don't lose utility by randomly choosing between equal variants. However, in the Many-Worlds Interpretation (MWI), I split my measure between multiple variants, which will be functionally different enough to regard my future selves as different minds. Thus, the act of choice itself lessens my measure by a factor of approximately 10. If I care about this, I'm caring about something unobservable.
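The arithmetic in this paragraph can be made explicit. A minimal sketch (with hypothetical numbers: ten roughly equally good jobs) of how a ten-way choice cuts per-branch measure by a factor of ten while the measure-weighted total stays equal to ordinary expected utility:

```python
# Toy model: choosing among N roughly-equal options under MWI-style splitting.
# The utilities are invented for illustration.

N = 10
utilities = [1.0 + 0.01 * i for i in range(N)]  # all "reasonable" options

# Classical view: a random choice has expected utility equal to the mean.
expected_utility = sum(utilities) / N

# MWI view (as the post frames it): the choice splits my measure 1/N per branch.
measure_per_branch = 1.0 / N

# Measure-weighted utility summed over branches equals the classical expectation,
# so the per-branch measure loss has no observable utility consequence.
measure_weighted_total = sum(u * measure_per_branch for u in utilities)

assert abs(measure_weighted_total - expected_utility) < 1e-12
print(measure_per_branch)      # each successor self keeps ~1/10 of my measure
```

This is the "unobservable cost" in miniature: per-branch measure drops, but no aggregate over branches registers a loss unless measure enters the utility function directly.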

TLDR: If I care about declining measure, normal life events incur additional utility costs, which nevertheless don't have observable consequences.



AI-generated comment section:

ShminuxRational · 4h
Interesting point about measure decline in everyday choices. However, I think there's a flaw in treating all branches as equally weighted. Wouldn't decoherence rates and environment interaction mean some branches have naturally higher measure? This seems relevant for the job-choice example.

  MaximizerPrime · 3h
  > Wouldn't decoherence rates and environment interaction mean some branches have naturally higher measure?
  
  This. Plus, we should consider that decision-theoretic framework might need updating when dealing with measure. UDT might handle this differently than EDT.

quantumCrux · 4h
Your point about the 20 orders of magnitude per second is fascinating. Has anyone actually calculated the exact rate of quantum branching? Seems like an important consideration for anthropic reasoning.

PatternSeeker · 3h
This reminds me of Stuart Armstrong's posts about identity and measure. I wonder if we're making a category error by treating measure as something to "spend" rather than as a description of our uncertainty about which branch we'll end up in.

DecisionTheoryNerd · 3h
You might want to look into Wei Dai's work on anthropic decision theory. This seems related to the problem of sleeping beauty and probability allocation across multiple instances of yourself.

  AlignmentScholar · 2h
  The sleeping beauty analogy is apt. Though I'd argue this is closer to SSA than SIA territory.

PracticalRationalist · 2h
While intellectually interesting, I'm not convinced this has practical implications. If the decline in measure is truly unobservable, shouldn't we apply Occam's razor and ignore it? Seems like adding unnecessary complexity to our decision-making.

  MetaUtilitarian · 1h
  Strong upvote. We should be careful about adding decision-theoretic complexity without corresponding benefits in expected value.

EpistemicStatus · 1h
[Meta] The post could benefit from more formal notation, especially when discussing measure ratios. Also, have you considered cross-posting this to the Alignment Forum? Seems relevant to questions about agent foundations.

QuantumBayesian · 1h
This makes me wonder about the relationship between quantum suicide experiments and everyday choices. Are we performing micro quantum suicide experiments every time we make a decision? 🤔

  RationalSkeptic · 30m
  Please let's not go down the quantum suicide path again. We had enough debates about this in 2011.

ComputationalFog · 15m
Has anyone written code to simulate this kind of measure-aware decision making? Might be interesting to see how different utility functions handle it.
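A minimal sketch of the kind of simulation ComputationalFog asks about, assuming an arbitrary illustrative penalty f(m) = sqrt(m) on per-branch measure (the function and the numbers are invented for illustration, not taken from the post):

```python
# Hypothetical "measure-aware" chooser: it scores an action by its plain
# expected utility times a penalty f(1/k) for splitting into k branches.

def score(option_utilities, f=lambda m: m ** 0.5):
    k = len(option_utilities)
    m = 1.0 / k                     # measure per branch after a k-way split
    eu = sum(option_utilities) / k  # plain expected utility of randomizing
    return eu * f(m)

narrow = [1.0]        # commit to a single option: no split
wide = [1.0] * 10     # randomize over ten equally good options: 10-way split

print(score(narrow))  # 1.0
print(score(wide))    # ~0.316: this utility function penalizes splitting
```

A standard expected-utility maximizer (f constant) is indifferent between the two; only by putting measure into the utility function does the agent start avoiding choices, which is the behavior the post calls unobservable caring.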


Comments sorted by top scores.

comment by AnthonyC · 2024-11-15T10:51:53.226Z · LW(p) · GW(p)

I don't see #1 affecting decision making because it happens no matter what, and therefore shouldn't differ based on our own choices or values. I guess you could argue it implies an absurdly high discount rate if you see the resulting branches as sufficiently separate from one another, but if the resulting worlds are ones I care about, then the measure dilution is just the default baseline I start from in my reasoning. Unless there is some way we can or could meaningfully increase the multiplication rate in some sets of branches but not others? I don't think that's likely with any methods or tech I can foresee.

#2 seems like an argument for improving ourselves to be more mindful in our choices to be more coherent on average, and #3 an argument for improving our average decision making. The main difference I can think of for how measure affects things is maybe in which features of the outcome distribution/probabilities among choices I care about. 

Replies from: avturchin
comment by avturchin · 2024-11-15T11:42:04.217Z · LW(p) · GW(p)

My point was that only #3 is relevant. How does it improve average decision making?

Replies from: AnthonyC
comment by AnthonyC · 2024-11-15T11:53:24.592Z · LW(p) · GW(p)

I'm not saying it improves decision making. I'm saying it's an argument for improving our decision making in general, if mundane decisions we wouldn't normally think are all that important have much larger and long-lasting consequences. Each mundane decision affects a large number of lives that parts of me will experience, in addition to the effects on others.

comment by jchan · 2024-11-16T23:50:59.700Z · LW(p) · GW(p)

> However, in the Many-Worlds Interpretation (MWI), I split my measure between multiple variants, which will be functionally different enough to regard my future selves as different minds. Thus, the act of choice itself lessens my measure by a factor of approximately 10. If I care about this, I'm caring about something unobservable.

If we're going to make sense of living in a branching multiverse, then we'll need to adopt a more fluid concept of personal identity.

Scenario: I take a sleeping pill that will make me fall asleep in 30 minutes. However, the person who wakes up in my bed the next morning will have no memory of that 30-minute period; his last memory will be of taking the pill.

If I imagine myself experiencing that 30-minute interval, intuitively it doesn't at all feel like "I have less than 30 minutes to live." Instead, it feels like I'd be pretty much indifferent to being in this situation - maybe the person who wakes up tomorrow is not "me" in the artificial sense of having a forward-looking continuity of consciousness with my current self, but that's not really what I care about anyway. He is similar enough to current-me that I value his existence and well-being to nearly the same degree as I do my own; in other words, he "is me" for all practical purposes.

The same is true of the versions of me in nearby world branches. I can no longer observe or influence them, but they still "matter" to me. Of course, the degree of self-identification will decrease over time as they diverge, but then again, so does my degree of identification with the "me" many decades in the future, even assuming a single timeline.

Replies from: avturchin
comment by avturchin · 2024-11-17T20:49:49.443Z · LW(p) · GW(p)

A sad thing is that most life moments are like this 30-minute interval: we forget most life events, so they are like dead ends.

More generally, type-copies of me still matter to me.

comment by TAG · 2024-11-15T13:58:40.048Z · LW(p) · GW(p)

> Every quantum event splits the multiverse, so my measure should decline by 20 orders of magnitude every second.

There isn't the slightest evidence that irrevocable splitting (splitting into decoherent branches) occurs at every microscopic event -- that would combine the frequency of coherentism-style splitting with the finality of decoherent splitting. Besides the conceptual incoherence, there is in fact plenty of evidence (e.g. the existence of quantum computing) that it doesn't work that way:

"David Deutsch, one of the founders of quantum computing in the 1980s, certainly thinks that it would. Though to be fair, Deutsch thinks the impact would “merely” be psychological – since for him, quantum mechanics has already proved the existence of parallel universes! Deutsch is fond of asking questions like the following: if Shor’s algorithm succeeds in factoring a 3000-digit integer, then where was the number factored? Where did the computational resources needed to factor the number come from, if not from some sort of “multiverse” exponentially bigger than the universe we see? To my mind, Deutsch seems to be tacitly assuming here that factoring is not in BPP – but no matter; for purposes of argument, we can certainly grant him that assumption. It should surprise no one that Deutsch’s views about this are far from universally accepted. Many who agree about the possibility of building quantum computers, and the formalism needed to describe them, nevertheless disagree that the formalism is best interpreted in terms of “parallel universes.” To Deutsch, these people are simply intellectual wusses – like the churchmen who agreed that the Copernican system was practically useful, so long as one remembers that obviously the Earth doesn’t really go around the sun. So, how do the intellectual wusses respond to the charges? For one thing, they point out that viewing a quantum computer in terms of “parallel universes” raises serious difficulties of its own. In particular, there’s what those condemned to worry about such things call the “preferred basis problem.” The problem is basically this: how do we define a “split” between one parallel universe and another? There are infinitely many ways you could imagine slicing up a quantum state, and it’s not clear why one is better than another! One can push the argument further.

The key thing that quantum computers rely on for speedups – indeed, the thing that makes quantum mechanics different from classical probability theory in the first place – is interference between positive and negative amplitudes. But to whatever extent different “branches” of the multiverse can usefully interfere for quantum computing, to that extent they don’t seem like separate branches at all! I mean, the whole point of interference is to mix branches together so that they lose their individual identities. If they retain their identities, then for exactly that reason we don’t see interference. Of course, a many-worlder could respond that, in order to lose their separate identities by interfering with each other, the branches had to be there in the first place! And the argument could go on (indeed, has gone on) for quite a while. Rather than take sides in this fraught, fascinating, but perhaps ultimately meaningless debate..." (Scott Aaronson, Quantum Computing Since Democritus, p. 148)

Also see

https://www.lesswrong.com/posts/wvGqjZEZoYnsS5xfn/any-evidence-or-reason-to-expect-a-multiverse-everett?commentId=o6RzrFRCiE5kr3xD4 [LW(p) · GW(p)]

Replies from: avturchin
comment by avturchin · 2024-11-15T14:12:38.504Z · LW(p) · GW(p)

But if I use a quantum coin to make a life choice, there will be splitting, right?

comment by avturchin · 2024-11-15T13:15:58.691Z · LW(p) · GW(p)

In replies to this comment I will post other Sonnet 3.5-generated replies by known LW people. If this is against the rules, please let me know and I will delete them. I will slightly change the names so that they will not contaminate future searches and AI training.

Replies from: avturchin, avturchin, avturchin
comment by avturchin · 2024-11-15T13:21:32.231Z · LW(p) · GW(p)

Wei· 3h

This post touches on several issues I've been thinking about since my early work on anthropic decision theory and UDT. Let me break this down:

1. The measure-decline problem is actually more general than just quantum mechanics. It appears in any situation where your decision algorithm gets instantiated multiple times, including classical copying, simulation, or indexical uncertainty. See my old posts on anthropic probabilities and probability-as-preference.

2. The "functional identity" argument being used here to dismiss certain types of splitting is problematic. What counts as "functionally identical" depends on your decision theory's level of grain. UDT1.1 would treat seemingly identical copies differently if they're in different computational states, while CDT might lump them together.

Some relevant questions that aren't addressed:

- How do we handle preference aggregation across different versions of yourself with different measures?
- Should we treat quantum branching differently from other forms of splitting? (I lean towards "no" these days)
- How does this interact with questions of personal identity continuity?
- What happens when we consider infinite branches? (This relates to my work on infinite ethics)

The real issue here isn't about measure per se, but about how to aggregate preferences across different instances of your decision algorithm. This connects to some open problems in decision theory:

1. The problem of preference aggregation across copies
2. How to handle logical uncertainty in the context of anthropics
3. Whether "caring about measure" can be coherently formalized

I explored some of these issues in my paper on UDT, but I now think the framework needs significant revision to handle these cases properly.

  Stuart · 2h
  > The problem of preference aggregation across copies
  
  This seems key. Have you made any progress on formalizing this since your 2019 posts?

     Wei · 2h
     Some progress on the math, but still hitting fundamental issues with infinity. Might post about this soon.

  Abram · 1h
  Curious about your current thoughts on treating decision-theoretic identical copies differently. Seems related to logical causation?

     Wei · 45m
     Yes - this connects to some ideas about logical coordination I've been developing. The key insight is that even "identical" copies might have different logical roles...

[Edit: For those interested in following up, I recommend starting with my sequence on decision theory and anthropics, then moving to the more recent work on logical uncertainty.]

comment by avturchin · 2024-11-15T13:18:19.615Z · LW(p) · GW(p)

Vladimir_N 3h

(This is a rather technical comment that attempts to clarify some decision-theoretic confusions.)

Your treatment of measure requires more formal specification. Let's be precise about what we mean by "caring about measure" in decision-theoretic terms.

Consider a formalization where we have:
1. A space of possible outcomes Ω
2. A measure μ on this space
3. A utility function U: Ω → ℝ
4. A decision function D that maps available choices to distributions over Ω

The issue isn't about "spending" measure, but about how we aggregate utility across branches. The standard formulation already handles this correctly through expected utility:

E[U] = ∫_Ω U(ω)dμ(ω)

Your concern about "measure decline" seems to conflate the measure μ with the utility U. These are fundamentally different mathematical objects serving different purposes in the formalism.

If we try to modify this to "care about measure directly," we'd need something like:

U'(ω) = U(ω) * f(μ(ω))

But this leads to problematic decision-theoretic behavior, violating basic consistency requirements like dynamic consistency. It's not clear how to specify f in a way that doesn't lead to contradictions.
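A toy numerical check of that consistency worry, with an illustrative choice f(μ) = μ and invented numbers: merely re-describing one branch as two equal sub-branches with identical utility leaves standard expected utility unchanged, but shifts the modified aggregate.

```python
# Why U'(ω) = U(ω) * f(μ(ω)) misbehaves: the aggregate depends on how finely
# we carve the outcome space, not just on what happens in it.
# f(μ) = μ is an arbitrary illustrative choice.

def expected(branches):
    # branches: list of (measure, utility); standard expected utility
    return sum(m * u for m, u in branches)

def measure_caring(branches, f=lambda m: m):
    # modified aggregate: sum of μ(ω) · U(ω) · f(μ(ω))
    return sum(m * u * f(m) for m, u in branches)

coarse = [(0.5, 1.0), (0.5, 1.0)]
# Refine the second branch into two equal sub-branches with the same utility:
fine = [(0.5, 1.0), (0.25, 1.0), (0.25, 1.0)]

print(expected(coarse), expected(fine))              # 1.0 1.0  (invariant)
print(measure_caring(coarse), measure_caring(fine))  # 0.5 0.375 (changes!)
```

Since the coarse and fine descriptions denote the same physical situation, any aggregate that distinguishes them cannot give stable recommendations, which is one concrete face of the preferred basis problem mentioned elsewhere in the thread.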

The apparent paradox dissolves when we properly separate:
1. Measure as probability measure (μ)
2. Utility as preference ordering over outcomes (U)
3. Decision-theoretic aggregation (E[U])

[Technical note: This relates to my work on logical uncertainty and reflection principles. See my 2011 paper on decision theory in anthropic contexts.]

  orthonormal · 2h
  > U'(ω) = U(ω) * f(μ(ω))
  
  This is a very clean way of showing why "caring about measure" leads to problems.

     Vladimir_N · 2h
     Yes, though there are even deeper issues with updateless treatment of anthropic measure that I haven't addressed here for brevity.

  Wei_D · 1h
  Interesting formalization. How would this handle cases where the agent's preferences include preferences over the measure itself?

     Vladimir_N · 45m
     That would require extending the outcome space Ω to include descriptions of measures, which brings additional technical complications...

[Note: This comment assumes familiarity with measure theory and decision theory fundamentals.]

comment by avturchin · 2024-11-15T13:16:50.959Z · LW(p) · GW(p)

Eli · 2h

*sigh*

I feel like I need to step in here because people are once again getting confused about measure, identity, and decision theory in ways I thought we cleared up circa 2008-2009.

First: The whole "measure declining by choice" framing is confused. You're not "spending" measure like some kind of quantum currency. The measure *describes* the Born probabilities; it's not something you optimize for directly any more than you should optimize for having higher probabilities in your belief distribution.

Second: The apparent "splitting" of worlds isn't fundamentally different between quantum events, daily choices, and life-changing decisions. It's all part of the same unified wavefunction evolving according to the same physics. The distinction being drawn here is anthropocentric and not particularly meaningful from the perspective of quantum mechanics.

What *is* relevant is how you handle subjective anticipation of future experiences. But note that "caring about measure" in the way described would lead to obviously wrong decisions - like refusing to make any choices at all to "preserve measure," which would itself be a choice (!).

If you're actually trying to maximize expected utility across the multiverse (which is what you should be doing), then the Born probabilities handle everything correctly without need for additional complexity. The framework I laid out in Quantum Ethics handles this cleanly.

And please, can we stop with the quantum suicide thought experiments? They're actively harmful to clear thinking about decision theory and anthropics. I literally wrote "Don't Un-think the Quantum" to address exactly these kinds of confusions.

(Though I suppose I should be somewhat grateful that at least nobody in this thread has brought up p-zombies or consciousness crystals yet...)

[Edit: To be clear, this isn't meant to discourage exploration of these ideas. But we should build on existing work rather than repeatedly discovering the same confusions.]

  RationalSkeptic · 1h
  > like refusing to make any choices at all to "preserve measure,"
  
  This made me laugh out loud. Talk about Pascal's Mugging via quantum mechanics...

     Eli · 45m
     Indeed. Though I'd note that proper handling of Pascal's Mugging itself requires getting anthropics right first...