the quantum amplitude argument against ethics deduplication
post by Tamsin Leake (carado-1) · 2023-03-12T13:02:31.876Z · LW · GW · 16 comments
This is a link post for https://carado.moe/quantum-amplitude-deduplication.html
in experience/moral patient deduplication and ethics, i explore the question of whether running the same computation of a moral patient twice counts as double, ethically. in all claw, no world [LW · GW] i draw up a view of the cosmos based on time steps in the universal machine which suggests that duplicated computations do count as double, because they occupy twice the amount of time-steps in the universal program.
in this post i make another argument, based on preferring one view over another of the (probably correct [LW · GW]) many-worlds interpretation [? · GW] of quantum mechanics.
when coming across the concept of many-worlds, i think people most generally assume the view on the left, where new timelines are being created. i think the view on the right, where a constant amount of "reality fluid" or "reality juice" is being split into different timelines, is more correct and makes more sense: we wouldn't expect the amount of "stuff existing" to keep exponentially growing over time. i believe it also maps to the notion of quantum amplitude.
(where at a given time, A is the amplitude of a particular timeline and ΣA is the sum of amplitudes across all timelines)
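(one way to write that bookkeeping down, treating "amplitude" loosely as reality fluid the way the figure does rather than as strict born-rule amplitudes; this is my own sketch, not something from the original figure:)

```latex
% split-fluid ("right-side") picture: a branching event divides a timeline's
% amplitude among its children, and the total stays constant:
A \;\to\; A_1 + A_2, \qquad \Sigma A = \sum_i A_i = \text{constant over time}
% new-timelines ("left-side") picture: each branch is a full new copy,
% so the total grows with every branching event:
\Sigma A \;\to\; 2\,\Sigma A
```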
i think the way to view this that makes sense, if one is thinking in terms of discrete computation, is that the universe starts out "computing" the same thing in all of many "threads", and then as timelines branch fractions of these threads start diverging.
this also explains what goes on inside a quantum computer: within the quantum circuit, rather than a bunch of "new" universes being temporarily created and then re-merged, it's merely the case that different computation threads are temporarily computing something different instead of the same thing.
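(a minimal numpy sketch of that picture, as a toy model of my own rather than anything from the post's sources: a hadamard gate splits the threads' shared state in two, and a second hadamard re-merges them by interference, leaving no lasting branches.)

```python
# toy model: a quantum computation as threads that temporarily diverge, then re-merge.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # hadamard gate

psi = np.array([1.0, 0.0])  # all threads computing the same thing: state |0>
split = H @ psi             # threads diverge: equal amplitude on |0> and |1>
merged = H @ split          # threads re-merge: interference cancels the |1> component

print(split)   # roughly [0.707, 0.707]
print(merged)  # roughly [1.0, 0.0]: back to one shared computation, nothing permanently branched
```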
(this entails that "entropy control" cannot work, at least not unless some weird "solomonoff deism" simulation hypothesis, where redundant computation gets optimized away, happens to hold.)
if P≠BQP and the universal program is classical, then it's weird that we inhabit a quantum world [LW · GW] — we should be too far inside the universal computation.
if P=BQP or the universal program is quantum, then it makes sense to live in a quantum universe, but:
- if we adopt the left-side view (more total fluid), then we should observe being at the "end of time" where there's maximally many timelines — exponentially much of our anthropic juice should be at the maximum quantum entropy, perhaps as boltzmann brains observing anomalously chaotic worlds. and we don't observe that!
- if we adopt the right-side view (fluid gets split), then we get back "regular" anthropics, and everything is normal again: our anthropic juice remains roughly the same as we pass branching events/macro-scale decoherence.
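(a toy version of this bookkeeping in code; the arithmetic is mine and the model is deliberately crude:)

```python
# toy bookkeeping: total "reality fluid" after n branching events under each view.

def total_fluid_left(n_branchings: int) -> float:
    """left-side view: every branching event creates a whole new timeline."""
    return 2.0 ** n_branchings  # total grows exponentially

def total_fluid_right(n_branchings: int) -> float:
    """right-side view: a fixed amount of fluid gets split among timelines."""
    return 1.0  # total is conserved no matter how much branching happens

for n in (1, 10, 100):
    print(n, total_fluid_left(n), total_fluid_right(n))
# under the left view, weight at branching-depth 100 utterly dwarfs weight at depth 10,
# so we should expect to find ourselves at maximum quantum entropy; and we don't.
```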
(one view that completely circumvents all of this is if P≠BQP and the cosmos is, ultimately, implemented classically, but we still only inhabit quantum worlds — perhaps classical worlds simply don't exist, or the cosmos is really just our big bang and nothing else. in that case, it could be that the classical program taking exponentially long to compute us, exponentially far in, approximately compensates for the time-step distribution favoring earlier us's, possibly by an exponential factor. that'd be really strange, and it feels like we'd be too far, but i guess it's possible.)
anyways, what this suggests is that, in the simplest model, the universe is running many computation threads which are originally computing the same thing, and then some fraction of them diverge sometimes — either to re-merge in local situations like quantum computers or the double-slit experiment, or to decohere the rest of the world and more "permanently" split it.
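(a minimal sketch of that simplest model, as a hypothetical toy with made-up labels rather than a real physics simulation: a fixed pool of threads starts out identical, and a branching event sends a fraction of them onto a diverging computation.)

```python
# toy model: the cosmos as a fixed pool of computation threads.
import random

N_THREADS = 1_000_000
threads = ["shared history"] * N_THREADS  # every thread computing the same thing

def branch(threads, fraction, stay_label, diverge_label):
    """a branching event: roughly `fraction` of the threads diverge."""
    return [diverge_label if random.random() < fraction else stay_label
            for _ in threads]

# a macro-scale decoherence event splitting off 10% of the threads for good
threads = branch(threads, 0.10, "cat lives", "cat dies")

# the fraction of threads on a branch plays the role of that branch's measure
measure = threads.count("cat dies") / N_THREADS
print(f"measure of the 'cat dies' branch: {measure:.3f}")  # about 0.10
```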
but more importantly, this suggests that:
- with regards to intrinsic value (rather than eg caring about diversity), duplicating the computation of moral-patient-experience does count as more moral-patient-experience. in deduplication ethics, I≈M≈Q≈V.
- if we could do it, resimulating the earth in order to bring back everyone has an unethical cost: we'd be rerunning all of history's suffering.
- predictablizing ethics deduplication would be a significant change.
- with regards to quantum immortality: we mustn't count on it. the fact that we're strongly duplicated now makes us-now count for a lot more, so losing 99% of our quantum amplitude to AI doom [LW · GW] would be very bad: we would actually lose existence juice. on the upside, this also applies to S-risks: it genuinely helps that their amplitude is small.
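(stating the arithmetic of that last point explicitly, under the split-fluid view; this is my own rendering rather than a quote:)

```latex
% if a branching event sends 99% of our current amplitude to doom, then
A_{\text{surviving us}} \;=\; 0.01 \times A_{\text{us-now}}
% i.e. 99% of the existence juice is genuinely gone, rather than "us" simply
% continuing with full weight in whichever branch happens to survive.
```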
16 comments
comment by Shmi (shminux) · 2023-03-12T19:54:05.037Z · LW(p) · GW(p)
Note that if you subscribe to MWI, the whole thing is completely deterministic, and so you can't decide to pour different amounts of this "existence juice" into different "branches" by making smarter decisions about AI research. The outcome was predetermined at the time the universe was created. All you do is LARP until the reality reveals itself.
↑ comment by Tamsin Leake (carado-1) · 2023-03-12T20:31:48.770Z · LW(p) · GW(p)
i don't think determinism is incompatible with making decisions, just like nondeterminism doesn't mean my decisions are "up to randomness"; from my perspective, i can either choose to do action A or action B, and from my perspective i actually get to steer the world towards what those actions lead to.
put another way, i'm a compatibilist; i implement embedded agency [? · GW].
put another way, yes i LARP, and this is a world that gets steered towards the values of agents who LARP, so yay.
↑ comment by Shmi (shminux) · 2023-03-12T21:21:37.043Z · LW(p) · GW(p)
this is a world that gets steered towards the values of agents who LARP
That's the part that makes no sense to me. (Neither does compatibilism, to be honest, which to me has little to do with embedded agency.) Seems like the causality error you point to is in the wrong direction: your LARPing and the outcomes have a common cause, but there is no "if we do this, the world ends up like that" in the "territory". Anyway, seems like a tired old debate, probably not worth it.
↑ comment by Viliam · 2023-03-13T19:08:25.042Z · LW(p) · GW(p)
The outcome was predetermined at the time the universe was created. All you do is LARP until the reality reveals itself.
The same, modulo a few coinflips, is true for the collapse interpretations.
↑ comment by Shmi (shminux) · 2023-03-14T00:14:58.735Z · LW(p) · GW(p)
yeah, not arguing, but people tend to think about probabilistic evolution as "not set in stone" and potentially influenced by our actions. There is no out like that for the completely deterministic world.
↑ comment by metachirality · 2023-05-26T16:34:51.906Z · LW(p) · GW(p)
I don't think this matters all that much. In Newcomb's problem, even though your decision is predetermined, you should still want to act as if you can affect the past, specifically Omega's prediction.
↑ comment by Shmi (shminux) · 2023-05-26T16:58:26.877Z · LW(p) · GW(p)
There is no "ought" or "should" in a deterministic world of perfect predictors. There is only "is". You are an algorithm and Omega knows how you will act. Your inner world is an artifact that gives you an illusion of decision making. The division is simple: one-boxers win, two-boxers lose, the thought process that leads to the action is irrelevant.
↑ comment by metachirality · 2023-05-26T17:12:03.839Z · LW(p) · GW(p)
One-boxers win because they reasoned in their head that one-boxers win because of updateless decision theory or something so they "should" be a one-boxer. The decision is predetermined but the reasoning acts like it has a choice in the matter (and people who act like they have a choice in the matter win.) What carado is saying is that people who act like they can move around the realityfluid tend to win more, just like how people who act like they have a choice in Newcomb's problem and one-box in Newcomb's problem win even though they don't have a choice in the matter.
↑ comment by Shmi (shminux) · 2023-05-26T17:36:46.497Z · LW(p) · GW(p)
None of this is relevant. I don't like the "realityfluid" metaphor, either. You win because you like the number 1 more than number 2, or because you cannot count past 1, or because you have a fancy updateless model of the world, or because you have a completely wrong model of the world which nonetheless makes you one-box. You don't need to "act like you have a choice" at all.
↑ comment by metachirality · 2023-05-26T20:42:27.205Z · LW(p) · GW(p)
The difference between an expected utility maximizer using updateless decision theory and an entity who likes the number 1 more than the number 2, or who cannot count past 1, or who has a completely wrong model of the world which nonetheless makes it one-box is that the expected utility maximizer using updateless decision theory wins in scenarios outside of Newcomb's problem where you may have to choose $2 instead of $1, or have to count amounts of objects larger than 1, or have to believe true things. Similarly, an entity that "acts like they have a choice" generalizes well to other scenarios whereas these other possible entities don't.
↑ comment by Shmi (shminux) · 2023-05-27T00:44:16.718Z · LW(p) · GW(p)
Yes, agents whose inner model is counting possible worlds, assigning probabilities and calculating expected utility can be successful in a wider variety of situations than someone who always picks 1. No, thinking like "an entity that "acts like they have a choice"" does not generalize well, since "acting like you have choice" leads you to CDT and two-boxing.
comment by Dagon · 2023-03-12T18:21:58.177Z · LW(p) · GW(p)
with regards to intrinsic value (rather than eg caring about diversity)
Do you have a reference (or sketch of an argument) for why there's any such thing as "intrinsic value", and why diversity or variety wouldn't be part of it?
↑ comment by Tamsin Leake (carado-1) · 2023-03-12T20:27:58.334Z · LW(p) · GW(p)
what i mean here is "with regards to how much moral-patienthood we attribute to things in it (eg for if they're suffering), rather than secondary stuff we might care about like how much diversity we gain from those worlds".
comment by TAG · 2023-03-31T16:06:09.541Z · LW(p) · GW(p)
with regards to intrinsic value (rather than eg caring about diversity), duplicating the computation of moral-patient-experience does count as more moral-patient-experience. in deduplication ethics, I≈M≈Q≈V.
The physics doesn't determine the intrinsic value of anything, and what you are calling intrinsic value is not intrinsic.
The physics also doesn't tell you how measure -- what you are calling "reality juice" -- relates to consciousness. Having levels of consciousness that decrease with decreasing measure is convenient for some ethical theories, but you shouldn't believe things because they are convenient. Also, it's a kind of zombie argument.
There are broadly two areas where MWI has ethical implications. One is over the fact that MW means low probability events have to happen every time -- as opposed to single universe physics, where they usually don't. The other is over whether they are discounted in moral significance for being low in quantum mechanical measure or probability.
It can be argued that probability calculations come out the same under different interpretations of QM, but ethics is different. The difference stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds.
You can have objective information about observations, and if your probability calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows physics to be less wrong.
You can have subjective information about your own mental states, and if your personal calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows personal decision theory to be less wrong.
Altruistic ethics is different. You don't have either kind of direct evidence, because you are concerned with other people's subjective sensations, not objective evidence, or your own subjectivity. Questions about ethics are downstream of questions about qualia, and qualia are subjective, and because they are subjective, there is no reason to expect them to behave like third person observations. We have to infer that someone else is suffering, and how much, using background assumptions. For instance, I assume that if you hit your thumb with a hammer, it hurts you like it hurts me when I hit my thumb with a hammer.
One can have a set of ethical axioms saying that I should avoid causing death and suffering to others, but to apply them under many worlds assumptions, I need to be able to calculate how much death and suffering my choices cause in relation to the measure. Which means I need to know whether the measure or probability of a world makes a difference to the intensity of subjective experience... including the option of "not at all", and I need to know whether the deaths of ten people in a one tenth measure world count as ten deaths or one death.
Suppose they are discounted.
If people in low measure worlds experience their suffering fully, then a 1% chance of creating a hell-world would be equivalent in suffering to a 100% chance. But if people in low measure worlds are like philosophical zombies, with little or no phenomenal consciousness, so that their sensations are faint or nonexistent, the moral hazard is much lower.
A similar, but slightly less obvious argument applies to causing death. Causing the "death" of a complete zombie is presumably as morally culpable as causing the death of a character in a video game...which, by common consent, is no problem at all. So... causing the death of a 50% zombie would be only half as bad as killing a real person...maybe.
There is an alternative way of cashing out quantum mechanical measure due to David Deutsch. He supposes that you can have exact duplicates of worlds, which form sets of identical worlds, and which have no measure of their own. Each set contains worlds different from those of the other sets, making the sets unique, but each world within a set is identical to the others. Counting the worlds in an equivalence set gives you the equivalent of measure.
Under this interpretation, you should ethically discount low-measure worlds (world sets) because there are fewer people in them.
The approach where suffering and moral worth are discounted according to measure has the convenient property that it works like probability theory, so that you don't have to take an interpretation of QM into account. But, if you are arguing to the conclusion that the interpretation of QM is ethically neutral, you can't inject the same claim as an assumption, without incurring circularity.
What's needed is a way of deciding the issue of how measure relates to reality, consciousness and ethical value that is not question begging. We are not going to get a rigourous theory based on physics alone, because physics does not explicitly deal with reality, consciousness or ethical value. But there are some hints. One important consideration is that under many-worlds theory, all worlds must have measures summing to 1, so any individual world, including ours, has measure less than 1. But our world and our consciousness seem fully real to us; if there is nothing special about our world, then the inhabitants of other worlds presumably feel the same. And that is just the premise that leads to the conclusion that many-worlds theory does impact ethics.
comment by Anomalous (ward-anomalous) · 2023-03-30T18:53:26.730Z · LW(p) · GW(p)
I don't buy the anthropic interpretation for the same reason I don't buy quantum immortality or grabby aliens, so I'm still weakly leaning towards thinking that decoherence matters. Weirdly I haven't seen this dilemma discussed before, and I've not brought it up because I think it's infohazardous--for the same reasons you point out in the post. I also tried to think of ways to exploit this for moral gain two years ago! So I'm happy to see I'm not the only one (e.g., you mention entropy control).
I was going to ask a question, but I went looking instead. Here's my new take:
After some thinking, I now have a very shoddy informal mental model in my head, and I'm calling it the "Quantum Shuffle Theory" ⋆★°. If (de)coherence is based on a local measure of similarity between states, then you'd likely see spontaneous recoherence in regions that line up. It could reach equilibrium if the rate of recoherence is a function of decoherence (at some saturation point for decohered configurations).
Additionally, the model (as it works in my head at least) would predict that reality fluid isn't both ergodic and infinite, because otherwise we'd observe maxed-out effects of interference literally everywhere all the time (although maybe that's what we observe, idk). Or, at least, it puts some limits on how the multiverse could be infinite.
Some cherry-picked support I found for this idea:
"Next, we show that the evolution of the pure-basis states reveals an interesting phenomenon as the system, after decoherence, evolves toward the equilibrium: the spontaneous recoherence of quantum states. ... This phenomenon reveals that the reservoir only shuffle the original information carried out by the initial state of the system instead of erasing it. ... Therefore, spontaneous recoherence is not a property associated only with coherent-state superpositions." (2010)
Also, Bose-Einstein condensates provide some evidence for the shuffling interpretation, but my qm is weak enough that idk whether bog standard MWI predicts it too.
"a Bose–Einstein condensate (BEC) is a state of matter that is typically formed when a gas of bosons at very low densities is cooled to temperatures very close to absolute zero (−273.15 °C or −459.67 °F). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which point microscopic quantum mechanical phenomena, particularly wavefunction interference, become apparent macroscopically."