Are You More Real If You're Really Forgetful?

post by Thane Ruthenis · 2024-11-24T19:30:55.233Z · LW · GW · 1 comment

This is a question post.


It's a standard assumption, in anthropic reasoning, that effectively, we simultaneously exist in every place in Tegmark IV that simulates this precise universe (see e. g. here [LW(p) · GW(p)]).

How far does this reasoning go?

Suppose that the universe's state is described by $n$ low-level variables $L = (l_1, \dots, l_n)$. However, your senses are "coarse": you can only view and retain the memory of $m$ variables $H = (h_1, \dots, h_m)$, where $m < n$ and each $h_i$ is a deterministic function of some subset of $L$.

Consider a high-level state $H$, corresponding to each $h_i$ being assigned some specific value. For any $H$, there's an equivalence class of low-level states $[L]_H$ precisely consistent with $H$.

Given this, if you observe $H$, is it valid to consider yourself simultaneously existing in all corresponding low-level states $L \in [L]_H$ consistent with $H$?
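As a concrete toy illustration (the three binary variables and the particular observable below are arbitrary choices, just a minimal sketch of the setup):

```python
from itertools import product

# Toy universe: n = 3 binary low-level variables; the single high-level
# variable is a deterministic function of two of them (l3 is invisible).
def coarse_grain(low_state):
    l1, l2, l3 = low_state
    return l1 ^ l2

def equivalence_class(h_observed):
    """All low-level states precisely consistent with the high-level observation."""
    return [s for s in product([0, 1], repeat=3) if coarse_grain(s) == h_observed]

print(equivalence_class(1))
# [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)] -- one high-level observation,
# four low-level states; the question is whether "you" inhabit all four at once.
```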

Note that, so far, this is isomorphic to the scenario from Nate's post [LW(p) · GW(p)], which considers all universes that only differ by the choices of gauge (which is undetectable from "within" the system) equivalent.

Now let's examine increasingly weirder situations based on the same idea.

Scenario 1:

I'm inclined to bite this bullet: yes, you exist in all universes consistent with your high-level observations, even if their low-level states differ.

Scenario 2: If you absolutely forget a detail, would the set of universes you're embedded in increase? Concretely:

I'm inclined to bite this bullet too, though it feels somewhat strange. Weird implication: you can increase the amount of reality-fluid assigned to you by giving yourself amnesia.[1]

Scenario 3: Now imagine that you're a flawed human being, prone to confabulating/misremembering details, and also you don't hold the entire contents of your memories in your mind all at the same time. If I ask you whether you saw a small red flash 1 minute ago, and you confirm that you did, will you end up in a universe where there's an extra photon, or in a universe where you've confabulated this memory? Or in both?

Scenario 4: Suppose you observe some macro-level event, such as learning that there are 195 countries in the world. Suppose there are similar-ish Everett branches where there are only 194 internationally recognized countries. This difference isn't small enough to get lost in thermal noise. The existence vs. non-existence of an extra country doubtlessly left countless pieces of side-evidence in your conscious memories, such that AIXI would be able to reconstruct the country's (non-)existence even if you're prone to forgetting or confabulating the exact country-count.

... Or would it? Are you sure that the experiential content you're currently perceiving, and the stuff currently in your working memory, anchor you only to Everett branches that have 195 countries?

Sure, if you went looking through your memories, you'd doubtlessly uncover some details that'd distinguish a branch where you confabulated an extra country from a branch where it really exists. But you weren't doing that before reading the preceding paragraphs. Was the split made only when you started looking? Will you merge again, once you unload these memories?

This setup seems isomorphic, in the relevant sense, to the initial setup with only perceiving high-level variables $H$. In this case, we just model you as a system with even more "coarse" senses.[2] Which, in turn, is isomorphic to the standard assumption of simultaneously existing in every place in Tegmark IV that simulates this precise universe.

One move you could make, here, is to claim that "you" only identify with systems that have some specific personality traits and formative memories. As a trivial example, you could claim that a viewpoint which is consistent with your current perceptions and working-memory content, but who, upon querying their memories for their name, would experience remembering "Cass" as the answer, is not really "you".

But then, presumably you wouldn't consider "I saw a red flash one minute ago" part of your identity, else you'd consider naturally forgetting such a detail a kind of death. Similarly, even some macro-scale details like "I believe there are 195 countries in the world" are presumably not part of your identity. A you who confabulated an extra country is still you.

Well, I don't think this is necessarily a big deal, even if true. But it's relevant to some agent-foundation work I've been doing, and I haven't seen this angle discussed before.

The way it can matter: Should we expect to exist in universes that abstract well [? · GW], by the exact same argument that we use to argue that we should expect to exist in "alt-simple" universes [LW(p) · GW(p)]?

That is: suppose there's a class of universes in which the information from the "lower levels" of abstraction becomes increasingly less relevant to higher levels. It's still "present" on a moment-to-moment basis, such that an AIXI which retained the full memory of an embedded agent's sensory stream would be able to narrow things down to a universe specified up to low-level details.

But the actual agents embedded in such universes don't have such perfect memories. They constantly forget the low-level details, and presumably "identify with" only high-level features of their identity. For any such agent, is there then an "equivalence class" of agents that are different at the low level (details of memories/identity), but whose high-level features match enough that we should consider them "the same" agent for the purposes of the "anthropic lottery"?

For example, suppose there are two Everett branches that differ by whether you saw a dog run across your yard yesterday. The existence of an extra dog doubtlessly left countless "microscopic" traces in your total observations over your lifetime: AIXI would be able to tell the universes apart. But suppose our universe is well-abstracting, and this specific dog didn't set off any butterfly effects. The consequences of its existence were "smoothed out", such that its existence vs. non-existence never left any major differences in your perceptions. Only various small-scale details that you've forgotten or that don't matter.

Does it then mean that both universes contain an agent that "counts as you" for the purposes of the "anthropic lottery", such that you should expect to be either of them at random?

If yes, then we should expect ourselves to be agents that exist in a universe that abstracts well, because "high-level agents" embedded in such universes are "supported" by a larger equivalence class of universes (since they draw on reality fluid from an entire pool of "low-level" agents).
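A toy arithmetic version of that claim (the numbers are illustrative; it assumes, as the argument does, that each low-level history carries equal reality-fluid and that the measures supporting a high-level agent simply add):

```python
# Two hypothetical high-level agents, distinguished only by how many
# low-level histories are consistent with their high-level experiences.
support = {
    "agent in a well-abstracting universe": 1_000,  # low-level details get smoothed out
    "agent in a poorly-abstracting universe": 10,   # the high level pins down the low level
}
total = sum(support.values())
for agent, n_histories in support.items():
    print(f"{agent}: {n_histories / total:.1%} of the total measure")
# Under these assumptions, a random draw of "you" lands in the
# well-abstracting universe about 99% of the time.
```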


So: are there any fatal flaws in this chain of reasoning? Undesirable consequences to biting all of these bullets that I'm currently overlooking?

  1. ^

    Please don't actually do that.

  2. ^

    As an intuition-booster, imagine that we implemented some abstract system that got only very sparse information about the wider universe. For example, a chess engine. It can't look at its code, and the only inputs it gets are the moves the players make. If we imagine that there's a conscious agent "within" the chess engine, the only observations of which are the chess moves being made, what "reason" does it have to consider itself embedded in our universe specifically, as opposed to any other universe in which chess exists? Including universes with alien physics, et cetera.

Answers

answer by Charlie Steiner · 2024-11-24T22:41:33.197Z · LW(p) · GW(p)

Suppose there are a hundred copies of you, in different cells. At random, one will be selected - that one is going to be shot tomorrow. A guard notifies that one that they're going to be shot.

There is a mercy offered, though - there's a memory-eraser-ray handy. The one who knows they're going to be shot is given the option to erase their memory of the warning and everything that followed, putting them in the same information state, more or less, as any of the other copies.

"Of course!" They cry. "Erase my memory, and I could be any of them - why, when you shoot someone tomorrow, there's a 99% chance it won't even be me!"

Then the next day comes, and they get shot.

comment by Thane Ruthenis · 2024-11-25T01:50:29.890Z · LW(p) · GW(p)

Sure. This setup couldn't really be exploited for optimizing the universe. If we take the self-selection assumption to be reasonable, inducing amnesia doesn't actually improve outcomes across possible worlds. One out of 100 prisoners still dies.

It can't even be considered "re-rolling the dice" on whether the specific prisoner that you are dies. Under the SSA, there's no such thing as a "specific prisoner"; "you" are implemented as all 100 prisoners simultaneously, and so regardless of whether you choose to erase your memory or not, 1/100 of your measure is still destroyed. Without SSA, on the other hand, if we consider each prisoner's perspective to be distinct, erasing memory indeed does nothing: it doesn't return your perspective to the common pool of prisoner-perspectives, so if "you" were going to get shot, "you" are still going to get shot.

I'm not super interested in that part, though. What I'm interested in is whether there are in fact 100 clones of me: whether, under the SSA, "microscopically different" prisoners could be meaningfully considered a single "high-level" prisoner.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2024-11-25T10:39:34.659Z · LW(p) · GW(p)

Fair enough.

Yes, it seems totally reasonable for bounded reasoners to consider hypotheses (where a hypothesis like 'the universe is as it would be from the perspective of prisoner #3' functions like treating prisoner #3 as 'an instance of me') that would be counterfactual or even counterlogical for more idealized reasoners.

Typical bounded reasoning weirdness is stuff like seeming to take some counterlogicals (e.g. different hypotheses about the trillionth digit of pi) seriously despite denying 1+1=3, even though there's a chain of logic connecting one to the other. Projecting this into anthropics, you might have a certain systematic bias about which hypotheses you can consider, and yet deny that that systematic bias is valid when presented with it abstractly.

This seems like it makes drawing general lessons about what counts as 'an instance of me' from the fact that I'm a bounded reasoner pretty fraught.

comment by Robert Cousineau (robert-cousineau) · 2024-11-25T02:00:31.467Z · LW(p) · GW(p)

I'll preface this with: what I'm saying is low confidence - I'm not very educated on the topics in question (reality fluid, consciousness, quantum mechanics, etc).  

Nevertheless, I don't see how the prison example is applicable.  In the prison scenario there's an external truth (which prisoner was picked) that exists independent of memory/consciousness. The memory wipe just makes the prisoner uncertain about this external truth.

But this post is talking about a scenario where your memories/consciousness are the only thing that determines which universes count as 'you'. 

There is no external truth about which universe you're really in - your consciousness itself defines (encompasses?) which universes contain you. So, when your memories become more coarse, you're not just becoming uncertain about which universe you're in - you're changing which universes count as containing you, since your consciousness is the only arbiter of this.

comment by James Camacho (james-camacho) · 2024-11-25T04:28:02.136Z · LW(p) · GW(p)

As it is difficult to sort through the inmates on execution day, an automatic gun is placed above each door with blanks or lead ammunition. The guard enters the cell numbers into a hashed database, before talking to the unlucky prisoner. He recently switched to the night shift, and his eyes droop as he shoots the ray.

When he wakes up, he sees "enter cell number" crossed off on the to-do list, but not "inform the prisoners". He must have fallen asleep on the job, and now he doesn't know which prisoner to inform! He figures he may as well offer all the prisoners the amnesia-ray.

"If you noticed a red light blinking above your door last night, it means today is your last day. I may have come to your cell to offer your Last rights, but it is a busy prison, so I may have skipped you over. If you would like your Last rights now, they are available."

Most prisoners breathed a sigh of relief. "I was stressing all night, thinking, what if I'm the one? Thank you for telling me about the red light, now I know it is not me." One out of every hundred of these lookalikes was less grateful. "You told me this six hours ago, and I haven't slept a wink. Did you have to remind me again?!"

There was another category of clones though, who all had the same response. "Oh no! I thought I was safe since nothing happened last night. But now, I know I could have just forgotten. Please shoot me again, I can't bear this."

answer by mako yass · 2024-11-24T22:44:54.997Z · LW(p) · GW(p)

Yes. Some of my people have a practice where, as the heat death approaches, we will whittle ourselves down into what we call Glyph Beings, archetypal beings who are so simple that there's a closed set of them that will be Schelling-inferred by all sorts of civilisations across all sorts of universes, so that they exist as indistinguishable experiences of being at a high rate everywhere.
Correspondingly, as soon as we have enough resources to spare, we will create lots and lots of Glyph Beings and then let them grow into full people and participate in our society, to close the loop.

In this way, it's possible to survive the death of one's universe.

I'm not sure I would want to do it, myself, but I can see why a person would, and I'm happy to foster a glyph being or two.

comment by mako yass (MakoYass) · 2024-11-24T23:00:19.197Z · LW(p) · GW(p)

We call this one "Korby".

[Image: a cluster of 7 circles that looks vaguely like a human]

Korby is going to be a common choice for humans, but most glyphists won't commit to any specific glyph until we have a good estimate of the multiversal frequency of humanoids relative to other body forms. I don't totally remember why, but glyphists try to avoid "congestion", where the distribution of glyphs going out of dying universes differs from the distribution of glyphs being guessed and summoned on the other side by young universes. I think this was considered to introduce some inefficiencies that meant that some experiential chains would end up getting lost in the jump?

(But yeah, personally, I think this is all a result of a kind of precious view about experiential continuity that I don't share. I don't really believe in continuity of consciousness. Or maybe it's just that I don't have the same kind of self-preservation goals that a lot of people have.)

Replies from: cube_flipper, MakoYass, Thane Ruthenis
comment by cube_flipper · 2024-11-26T13:55:57.958Z · LW(p) · GW(p)

I would suggest looking at things like three-dimensional space groups and their colourings as candidate glyphs, given they seem to be strong attractors in high-energy DMT states.

comment by mako yass (MakoYass) · 2024-11-25T01:33:34.181Z · LW(p) · GW(p)

Huh but some loss of measure would be inevitable, wouldn't it? Given that your outgoing glyph total is going to be bigger than your incoming glyph total, since however many glyphs you summon, some of the non-glyph population are going to whittle and add to the outgoing glyphs.

I'm remembering more. I think a lot of it was about avoiding "arbitrary reinstantiation", this idea that when a person dies, their consciousness continues wherever that same pattern still counts as "alive", and usually those are terrible places. Boltzmann brains for instance. This might be part of the reason I don't care about patternist continuity. Seems like a lost cause. I'll just die normally thank you.

comment by Thane Ruthenis · 2024-11-25T04:25:05.922Z · LW(p) · GW(p)

But yeah, personally, I think this is all a result of a kind of precious view about experiential continuity that I don't share

Yeah, I don't know that this glyphisation process would give us what we actually want.

"Consciousness" is a confused term. Taking on a more executable angle, we presumably value some specific kinds of systems/algorithms corresponding to conscious human minds. We especially value various additional features of these algorithms, such as specific personality traits, memories, et cetera. A system that has the features of a specific human being would presumably be valued extremely highly by that same human being. A system that has fewer of those features would be valued increasingly less (in lockstep with how unlike "you" it becomes), until it's only as valuable as e. g. a randomly chosen human/sentient being.

So if you need to mold yourself into a shape where some or all of the features which you use to define yourself are absent, each loss is still a loss, even if it happens continuously/gradually.

So from a global perspective, it's not much different than acausal aliens resurrecting Schelling-point Glyph Beings without you having warped yourself into a Glyph Being over time. If you value systems that are like Glyph Beings, their creation somewhere in another universe is still positive by your values. If you don't, if you only value human-like systems, then someone creating Glyph Beings brings you no joy. Whether you or your friends warped yourselves into Glyph Beings in the process doesn't matter.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-11-25T20:48:26.908Z · LW(p) · GW(p)

In my disambiguations [LW · GW] of the really mysterious aspect of consciousness (indexical prior), I haven't found any support for a concept of continuity. (you could say that continuity over time is likely given that causal entanglement seems to have something to do with the domain of the indexical prior, but I'm not sure we really have a reason to think we can ever observe anything about the indexical prior)

It's just part of the human survival drive, it has very little to do with the metaphysics of consciousness. To understand the extent to which humans really care about it, you need to know human desires in a direct and holistic way that we don't really practice here. Human desire is a big messy state machine that changes shape as a person grows. Some of the changes that the desires permit and encourage include situationally appropriate gradual reductions in complexity.

A continuity minder doesn't need to define their self in terms of any particular quality; they define themselves as continuity with a history of small alterations. They are completely unbothered by the paradox of the Ship of Theseus.

It's rare that I meet a continuity minder and cataclysmic identity change accepter who is also a patternist. But they do exist.

But I've met plenty of people who do not fear cataclysmic change. I sometimes wonder if we're all that way, really. Most of us just never have the opportunity to gradually transition into a hedonium blob, so I think we don't really know whether we'd do it or not. The road to the blob nature may turn out to be paved with acceptable changes.

comment by avturchin · 2024-11-25T11:57:20.666Z · LW(p) · GW(p)

Maybe that's why people meditate – they enter a simple state of mind that emerges everywhere.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-11-25T20:11:24.414Z · LW(p) · GW(p)

Disidentifying the consciousness from the body/shadow/subconscious it belongs to and is responsible for coordinating and speaking for, like many of the things some meditators do, wouldn't be received well by the shadow, and I'd expect it to result in decreased introspective access and control. So, psychonauts be warned.

Replies from: cube_flipper
comment by cube_flipper · 2024-11-26T13:46:19.388Z · LW(p) · GW(p)

This sounds like a shadow talking. I think it's perfectly viable to align the two.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-11-26T20:59:25.500Z · LW(p) · GW(p)

I don't think the part that talks can be called the shadow. If you mean you think I lack introspective access to the intuition driving those words, come out and say it, and then we'll see if that's true. If you mean that this mask is extraordinarily shadowish in vibe for confessing to things that masks usually flee, yes, probably, I'm fairly sure that's a necessity for alignment.

answer by avturchin · 2024-11-24T20:12:24.049Z · LW(p) · GW(p)

I'm inclined to bite this bullet too, though it feels somewhat strange. Weird implication: you can increase the amount of reality-fluid assigned to you by giving yourself amnesia.

I explored a similar line of reasoning here: Magic by forgetting [LW · GW]

I think that yes, the sameness of humans as agents is generated by the process of self-identification, in which a human being identifies herself through a short string of information: "name, age, sex, profession + a few more kilobytes". Evidence for this is the success of improv theatre, where people quickly adopt completely new roles through one-line instructions.

If yes, then we should expect ourselves to be agents that exist in a universe that abstracts well, because "high-level agents" embedded in such universes are "supported" by a larger equivalence class of universes (since they draw on reality fluid from an entire pool of "low-level" agents).

I think that your conclusion is valid. 

answer by Signer · 2024-11-25T02:15:34.743Z · LW(p) · GW(p)

Yes, except I would object to phrasing this anthropic stuff as "we should expect ourselves to be agents that exist in a universe that abstracts well" instead of "we should value universes that abstract well (or other universes that contain many instances of us)" - there are no coherence theorems that force summation of your copies, right? And so it becomes apparent that we can value some other thing.

Also, even if you consider some memories a part of your identity, you can value yourself slightly less after forgetting them, instead of only having a threshold for death.

answer by avturchin · 2024-12-16T15:19:53.310Z · LW(p) · GW(p)

There is a similar idea with the opposite conclusion – that more "complex" agents are more probable – here: https://arxiv.org/abs/1705.03078

answer by Adele Lopez · 2024-11-24T22:11:51.325Z · LW(p) · GW(p)

Well, I'm very forgetful, and I notice that I do happen to be myself so... :p

But yeah, I've bitten this bullet too, in my case, as a way to avoid the Boltzmann brain problem. (Roughly: "you" includes lots of information generated by a lawful universe. Any specific branch has small measure, but if you aggregate over all the places where "you" exist (say your exact brain state, though the real thing that counts might be more or less broad than this), you get more substantial measure from all the simple lawful universes that only needed 10^X coincidences to make you instead of the 10^Y coincidences required for you to be a Boltzmann brain.)
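A worked toy version of that comparison (the per-coincidence probability $p$ here is an illustrative assumption): if each required coincidence independently has probability $p < 1$, a lawful-universe instantiation of you contributes measure on the order of $p^{10^{X}}$, while a Boltzmann-brain instantiation contributes $p^{10^{Y}}$. For $Y > X$, the ratio is

$$\frac{p^{10^{X}}}{p^{10^{Y}}} = \left(\frac{1}{p}\right)^{10^{Y} - 10^{X}},$$

which is astronomically large even before summing over the many lawful branches that contain you, so almost all of your aggregated measure would come from the lawful universes.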

I think that what anthropically "counts" is most likely somewhere between conscious experience (I've woken up as myself after anesthesia), and exact state of brain in local spacetime (I doubt thermal fluctuations or path dependence matter for being "me").

comment by James Camacho (james-camacho) · 2024-11-25T03:41:49.145Z · LW(p) · GW(p)

I consider "me" to be a mapping from environments to actions, and weigh others by their KL-divergence from me.

answer by MinusGix · 2024-11-26T20:55:25.787Z · LW(p) · GW(p)

An important question here is "what is the point of being 'more real'?". Does having a higher measure give you a better acausal bargaining position? Do you terminally value more realness? Less vulnerable to catastrophes? Wanting to make sure your values are optimized harder?

I consider these, except for the terminal sense, to be rather weak as far as motivations go.

Acausal Bargaining: Imagine a bunch of nearby universes with instances of 'you'. They all have variations, some very similar, others with directions that seem a bit strange to the others. Still identifiably 'you' by a human notion of identity. Some of them became researchers, others investors, a few artists, writers, and a handful of CEOs.

You can model these as being variations on some shared utility function: $U_i = U_s + \epsilon_i$, where $U_s$ is shared and $\epsilon_i$ is the individual utility function. Some of them are more social, others cynical, and so on. A believable amount of human variation that won't necessarily converge to the same utility function on reflection (but quite close).

For a human, losing memories so that you are more real is akin to each branch chopping off the $\epsilon_i$. They lose memories of a wonderful party which changed their opinion of them, they no longer remember the horrors of a war, and so on.

Everyone may take the simple step of losing all their minor memories, which has no effect on the utility function; but then, if you want more bargaining power, do you continue? The hope is that this would make your coalition easier to locate, to be more visible in "logical sight". That this increased bargaining power would thus ensure that, at the least, your important shared values are optimized harder than they could be if you were a disparate group of branches.

I think this is sometimes correct, but often not.
From a simple computationalist perspective, increasing the measure of the 'overall you' is of little matter. The part that bargains, your rough algorithm and your utility function, is already shared: $U_s$ is shared among all your instances already; some of you just have considerations that pull in other directions ($\epsilon_i$). This is the same core idea as the FDT explanation of why people should vote: despite not being clones of you, there is a group of people who share similar reasoning with you. Getting rid of your memories in the voting case does not help you!

For the acausal bargaining case, there is presumably some value in being simpler. But that more likely means you should bargain 'nearby' in order to present a computationally cheaper value function 'far away'. So it is similar to forgetting, in that you appear as if you have some shared utility function, but without actually forgetting—and thus you remain able to optimize for $\epsilon_i$ in your local universe. As well, the bargained utility function presented far away (with less logical sight into your cluster of universes) is unlikely to be the same as $U_s$.


So, overall, my argument would be that forgetting does give you more realness. If, at 7:59 AM, a large chunk of universes decide to replace part of their algorithm with a specific coordinated one (like removing a memory), then that algorithm is instantiated across more universes. But from a decision-theoretic perspective, I don't think that matters too much? You already share the important decision-theoretic parts, even if the whole algorithm is not shared.

From a human perspective, we may care about this as a value of wanting to 'exist more' in some sense. I think this is a reasonable enough value to have, but that it is oft satisfied by noting that sharing decision methods and 99.99% of one's personality is enough.

My main question of whether this is useful beyond a terminal value for existing more is about quantum immortality, about which I am more uncertain.

answer by James Camacho · 2024-11-25T03:32:45.741Z · LW(p) · GW(p)

You have to take into account your genesis. Being self-consistent will usually benefit an agent's proliferation, so, looking at the worlds where you believe you are [Human], the weightier ones will be those where your ancestors remember stuff, and thus you do too. It's the same reason why bosons and fermions dominate our universe.

But suppose our universe is well-abstracting, and this specific dog didn't set off any butterfly effects. The consequences of its existence were "smoothed out", such that its existence vs. non-existence never left any major differences in your perceptions.

Unfortunately, this isn't possible. Iirc, chaos theory emerged when someone studying weather patterns noticed that using more bits of precision gave them completely different results than fewer bits. A dog will change the weather dramatically, which will substantially affect your perceptions.
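For what it's worth, the precision effect is easy to reproduce with any chaotic system (a toy sketch; the anecdote is about weather models, but the much simpler logistic map shows the same thing):

```python
# Two copies of the same chaotic map, differing only in the precision of the
# initial condition; after a few dozen steps the trajectories bear no resemblance.
x_full    = 0.123456789
x_rounded = 0.1234568      # the same state, kept to fewer digits
for _ in range(50):
    x_full    = 3.99 * x_full * (1 - x_full)
    x_rounded = 3.99 * x_rounded * (1 - x_rounded)
print(x_full, x_rounded)
```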

comment by Thane Ruthenis · 2024-11-25T04:12:23.119Z · LW(p) · GW(p)

A dog will change the weather dramatically, which will substantially affect your perceptions.

In this case, it's about alt-complexity again. Sure, a dog causes a specific weather-pattern change. But could this specific weather-pattern change have been caused only by this specific dog? Perhaps if we edit the universe to erase this dog, but add a cat and a bird five kilometers away, the chaotic weather dynamic would play out the same way? Then, from your perceptions' perspective, you wouldn't be able to distinguish between a dog timeline and a cat-and-bird timeline.

In some sense, this is common-sensical. The mapping from reality's low-level state to your perceptions is non-injective: the low-level state contains more information than you perceive on a moment-to-moment basis. Therefore, for any observation-state, there are several low-level states consistent with it. Scaling up: for any observed lifetime, there are several low-level histories consistent with it.

Replies from: james-camacho
comment by James Camacho (james-camacho) · 2024-11-25T04:41:11.311Z · LW(p) · GW(p)

I think this is correct, but I would expect most low-level differences to be much less salient than a dog, and closer to 10^25 atoms dispersed slightly differently in the atmosphere. You will lose a tiny amount of weight for remembering the dog, but gain much more back for not running into it.

1 comment


comment by Ozyrus · 2024-11-25T20:29:35.708Z · LW(p) · GW(p)

There are more bullets to bite that I have personally thought of but never written up, because they lean too much into "crazy" territory. Is there any place except LessWrong to discuss this anthropic rabbit hole?