Magic by forgetting
post by avturchin · 2024-04-24T14:32:20.753Z · LW · GW · 37 comments
Epistemic status: this post is more suitable for LW as it was 10 years ago.
Thought experiment with curing a disease by forgetting
Imagine I have a bad but rare disease X. I may try to escape it in the following way:
1. I enter a blank state of mind and forget that I have X.
2. Now, in some sense, I merge with a very large number of my (semi)copies in parallel worlds who do the same. I will be in the same state of mind as my other copies; some of them have disease X, but most don't.
3. Now I can use the self-sampling assumption for observer-moments (Strong SSA) and think that I am randomly selected from all these exactly identical observer-moments.
4. Based on this, the chance that my next observer-moment after the blank state will have disease X is small and equal to the statistical probability of having disease X. Let's say there are 1000 times more copies of me who do not have disease X. Therefore, after I return from the meditation, there will be only about a 0.001 chance that I have disease X, as the next state is randomly selected from all those that can logically follow from the current one. Thus, I will almost surely be cured! (A toy simulation of this sampling step is sketched below.)
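A minimal sketch of this sampling step, assuming Strong SSA over the identical blank-state copies and the illustrative 1000:1 ratio used above:

```python
import random

# Toy model of steps 3-4: once in the blank state, "which copy am I?" is
# treated as a uniform draw over all copies currently in that identical state.
# The 1000:1 healthy-to-diseased ratio is the illustrative number from above.
def chance_of_waking_up_diseased(n_healthy=1000, n_diseased=1, trials=100_000):
    copies = ["diseased"] * n_diseased + ["healthy"] * n_healthy
    hits = sum(random.choice(copies) == "diseased" for _ in range(trials))
    return hits / trials

print(chance_of_waking_up_diseased())  # ~0.001 (1/1001 in the limit)
```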
There are several caveats to such a line of reasoning
1. Obviously, I must forget not only the disease but also the fact that I was trying to forget something. I have to forget that I tried to forget X and even that I used meditation as a magic tool. Therefore, after waking up, I will not know whether it worked. Also, it will only work if people who are not ill also often enter the blank state of mind without trying to forget anything (and thereby accept the risk of getting something bad). Meditation is in some sense such a blank state of mind, and many people meditate just for relaxation or enlightenment.
2. The state-based, not path-based, theory of identity must be valid: not continuity of consciousness, but "I am randomly selected from all minds in the same state." Note that path-dependent identity has its own paradoxes: two copies can have different "weights" depending on how they were created, while having the same measure. For example, if two copies of me are created while I sleep and one of them is then copied again, there will be 3 copies in the same world in the morning; but if we calculate the chances of being each of them based on paths, they will be ½, ¼ and ¼. (A short sketch of this calculation appears after this list.) Path-based identity also claims that a copy of me sent by a teletransporter is not me, because it has a different path. Path-based identity is also used for the identity of objects of art, under the name of provenance.
3. Also, MWI or some other form of multiverse must be true.
4. There is a 0.001 chance that someone who did not have the disease will get it. But he can repeat the procedure.
5. One can try to change other observables this way: age, height. Small changes will work better, as they are easier to forget.
6. The deeper the meditation (which here means a blank state of mind without any further qualification, like contact with atman or jhanas; depth is measured only by closeness to the pure blank state without any traces), the more minds throughout the universe are in the same state of consciousness. This means that I can somehow jump into those minds, as if through a wormhole.
7. This contradicts all popular theories of magic where a person concentrates on what she wants. Here you need to forget.
8. The bigger the problem, the more difficult it is to forget.
9. There can’t be observable evidence that magic-by-forgetting actually works.
10. A bad infohazardous consequence: the things you love can disappear forever as soon as you stop looking at them. There was a LW post about this fear in 2015 https://www.lesswrong.com/posts/is7ieoWyiyYRc7eXL/the-consequences-of-dust-theory
11. Magic by forgetting is a necessary consequence of dust theory (but not vice versa: magic by forgetting can be valid even in no-dust-theory worlds). One way to solve this is to accept that there is nothing in the world except chains of mathematical Boltzmann-brain observer-moments, as Mueller did in his article "Law without Law". In that case, we can suggest that more stable chains gain an advantage, and such stability also implies stronger interconnections between observer-moments (more traces of past moments in the current moment) and less magic by forgetting. But glitches could still be observable in such a model.
12. An interesting analogy is with Bostrom's hybrid model of Sleeping Beauty. In it, as I understand it, the observer, upon getting new evidence, should update her reference class to the set of all minds who got the same evidence.
13. Yes, I tried to implement this, but I don’t know if it works.
14. Can I validate magic-by-forgetting if I precommit to use it every time I have a bad problem? Will I eventually have fewer bad problems on average (without knowing which bad problems I escaped)?
15. Small drift of reality. Even if I keep all important things constantly in mind, there is a margin of error in the details. Within this margin, two slightly different things can look the same. As time passes, such small errors may accumulate and reality will change. In a normal world, this is unobservable. In a dust world, it can be observed and will look like the Mandela effect: a strange discrepancy between memory and facts, or generally between any two long-disconnected information channels.
16. If you are an effective altruist, magic by forgetting doesn’t matter to you.
17. If you practice magic-by-forgetting 1000 times, it returns to thermodynamic equilibrium, and your chances of getting rid of bad things become equal to your chances of acquiring them.
18. If you have a rare but valuable property, this is dangerous for you – you may lose it.
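A short sketch of the path-weight calculation from caveat 2 (one split during sleep, then one of the two copies is copied again), contrasting the two counting rules:

```python
from fractions import Fraction

# The original splits into A and B; then B is copied again into B1 and B2,
# so there are three copies in the morning.

# Path-based rule: each split divides the parent's weight equally.
path_weights = {"A": Fraction(1, 2),
                "B1": Fraction(1, 2) * Fraction(1, 2),
                "B2": Fraction(1, 2) * Fraction(1, 2)}

# State-based rule: three copies in indistinguishable situations, equal weight.
state_weights = {name: Fraction(1, 3) for name in ("A", "B1", "B2")}

for name in ("A", "B1", "B2"):
    print(name, path_weights[name], state_weights[name])
# A 1/2 1/3
# B1 1/4 1/3
# B2 1/4 1/3
```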
Why should minds in similar states merge?
They do not merge physically (if dust theory is false), but they merge logically: if there are three different minds with different names A, B and C, and each of them enters a blank state and forgets its name, each mind can assign a 1/3 chance to its name being A, based on the self-sampling assumption (SIA does not make a significant difference here, as there are no merely possible minds in this experiment).
To strengthen the point, imagine that the minds actually merged, say as uploads written into the same memory block (or whatever way of mind-merging you can imagine). Notice that such merging does not require any actual information exchange between the copies, as they all have the same information. There is no causal process connecting the copies. So merging into one place plays only a symbolic role, and being in the same state in different locations is the same as being merged in one place.
The point here is not just indexical uncertainty, but that the three minds which are in the same state should be treated as the same mind (from an internal perspective): the same mind, located in three different places. Any argument against it assumes some path-dependent identity or external perspective.
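A tiny sketch of the A/B/C example, assuming the blank-state credences are assigned by uniform self-sampling over the three minds:

```python
from fractions import Fraction

# Minds A, B and C enter the same blank state and forget their names.
# Each computes its credences over "what is my name?" from the same (blank)
# information, so every blank-state mind ends up with the identical
# distribution -- the sense in which they "merge logically".
minds = ("A", "B", "C")
blank_state_credences = {
    original: {name: Fraction(1, len(minds)) for name in minds}
    for original in minds
}
assert all(dist == blank_state_credences["A"]
           for dist in blank_state_credences.values())
print(blank_state_credences["A"])  # each name gets credence 1/3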
Theoretical price
While it is easy to dismiss the idea of magic-by-forgetting as absurd, dismissing it has a theoretical price: either the strong self-sampling assumption is false, and/or path-based identity is true.
37 comments
Comments sorted by top scores.
comment by justinpombrio · 2024-04-24T21:34:36.406Z · LW(p) · GW(p)
There is a 0.001 chance that someone who did not have the disease will get it. But he can repeat the procedure.
No, that doesn't work. It invalidates the implicit assumption you're making that the probability that a person chooses to "forget" is independent of whether they have the disease. Ultimately, you're "mixing" the various people who "forgot", and a "mixing" procedure can't change the proportion of people who have the disease.
When you take this into account, the conclusion becomes rather mundane. Some copies of you can gain the disease, while a proportional number of copies can lose it. (You might think you could get some respite by repeatedly trading off "who" has the disease, but the forgetting procedure ensures that no copy ever feels respite, as that would require remembering having the disease.)
↑ comment by avturchin · 2024-04-25T18:29:02.156Z · LW(p) · GW(p)
The "repeating" will not be repeating from the internal point of view of the person, as he has completely erased the memories of the first attempt. So he will do it as if for the first time.
↑ comment by justinpombrio · 2024-04-28T03:54:04.021Z · LW(p) · GW(p)
My point still stands. Try drawing out a specific finite set of worlds and computing the probabilities. (I don't think anything changes when the set of worlds becomes infinite, but the math becomes much harder to get right.)
↑ comment by avturchin · 2024-04-28T10:05:40.488Z · LW(p) · GW(p)
The trick is to use the already existing practice of meditation (or sleeping) and connect to it. Most people who go to sleep do not do it to use magic by forgetting, but it is natural to forget something during sleep. Thus, the fact that I wake up from sleep does not provide any evidence about me having the disease.
But it is in a sense parasitic behavior, and if everyone uses magic by forgetting every time they go to sleep, there will be almost no gain. Except that one can "exchange" one bad thing for another, but will not remember the exchange.
↑ comment by justinpombrio · 2024-04-28T13:41:12.169Z · LW(p) · GW(p)
Not "almost no gain". My point is that it can be quantified, and it is exactly zero expected gain under all circumstances. You can verify this by drawing out any finite set of worlds containing "meditators", and computing the expected number of disease losses minus disease gains as:
num(people with disease) * P(person with disease meditates) * P(person with disease who meditates loses the disease)
- num(people without disease) * P(person without disease meditates) * P(person without disease who meditates gains the disease)
My point is that this number is always exactly zero. If you doubt this, you should try to construct a counterexample with a finite number of worlds.
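A minimal sketch of that bookkeeping, assuming the "mixing" works by uniform sampling among the meditators in the blank state (the counts and meditation probabilities below are arbitrary placeholders):

```python
def expected_net_loss(n_diseased, n_healthy, p_med_diseased, p_med_healthy):
    """Expected (disease losses - disease gains) among meditators."""
    med_d = n_diseased * p_med_diseased   # diseased meditators
    med_h = n_healthy * p_med_healthy     # healthy meditators
    total = med_d + med_h
    if total == 0:
        return 0.0
    # Each diseased meditator wakes up healthy with probability med_h/total;
    # each healthy meditator wakes up diseased with probability med_d/total.
    losses = med_d * med_h / total
    gains = med_h * med_d / total
    return losses - gains

print(expected_net_loss(1, 999, 0.5, 0.5))   # 0.0
print(expected_net_loss(7, 123, 0.9, 0.1))   # 0.0 -- zero in every case
```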
↑ comment by avturchin · 2024-04-29T14:00:18.283Z · LW(p) · GW(p)
I think I understand what you are saying - the expected utility of the whole procedure is zero.
For example, imagine that there are 3 copies and only one has the disease. All meditate. After the procedure, the copy with the disease has a 2/3 chance of being cured. Each of the two copies without the disease gets a 1/3 chance of having the disease, which in sum gives 2/3. In that case, the total utility of being cured equals the total disutility of getting the disease, and the whole procedure is neutral.
However, if I already know that I have the disease, and I am not altruistic toward my copies, isn't playing such a game a winning move for me?
↑ comment by justinpombrio · 2024-04-29T15:33:33.220Z · LW(p) · GW(p)
However, if I already know that I have the disease, and I am not altruistic toward my copies, isn't playing such a game a winning move for me?
Correct. But if you don't have the disease, you're probably also not altruistic to your copies, so you would choose not to participate. Leaving the copies of you with the disease isolated and unable to "trade".
↑ comment by avturchin · 2024-04-29T16:07:55.917Z · LW(p) · GW(p)
Yes, it only works if other copies are meditating for some other reason. For example, they sleep or meditate for enlightenment. And they are exploited in this situation.
↑ comment by justinpombrio · 2024-04-30T17:02:34.987Z · LW(p) · GW(p)
Exactly.
↑ comment by RamblinDash · 2024-04-29T14:19:39.155Z · LW(p) · GW(p)
In this scenario, why are the non-disease-having copies participating? They are not in a state of ignorance, they know they don't have the disease.
↑ comment by avturchin · 2024-04-29T16:05:38.072Z · LW(p) · GW(p)
I assume that meditation happens naturally, like sleep.
↑ comment by RamblinDash · 2024-04-30T15:49:22.374Z · LW(p) · GW(p)
But don't the non-diseased copies need not just to meditate in general, but to do some special kind of meditation in which they forget the affirmative evidence they have that they don't have the disease?
↑ comment by avturchin · 2024-04-30T20:32:52.254Z · LW(p) · GW(p)
Non-diseased copies do not need to change their meditation routine in this model, assuming that they naturally forget their disease status during meditation.
↑ comment by RamblinDash · 2024-05-01T00:47:13.278Z · LW(p) · GW(p)
I am not a meditator, so maybe you have me beat, but it's not immediately clear why you would assume this.
comment by No77e (no77e-noi) · 2024-04-24T20:54:13.994Z · LW(p) · GW(p)
Even if you manage to truly forget about the disease, there must exist a mind "somewhere in the universe" that is exactly the same as yours except without knowledge of the disease. This seems quite unlikely to me, because your having the disease has interacted causally with the rest of your mind a lot by the time you decide to erase its memory. What you'd really need to do is undo all the consequences of these interactions, which seems a lot harder to do. You'd also need to transform your mind into another one that you somehow know is present "somewhere in the multiverse", which seems really hard to know.
↑ comment by Alen (alen-2) · 2024-04-24T21:19:56.534Z · LW(p) · GW(p)
The multiverse might be very big. Perhaps if you're mad enough, having the disease will bring you to a state of mind that a version with no disease also has. That's why wizards have to be mad to use magic.
comment by Ape in the coat · 2024-04-28T17:01:36.883Z · LW(p) · GW(p)
Universal guide to magic via anthropics:
- Be not randomly sampled from a set
- Assume that you are randomly sampled from the set anyway
- Arrive to an absurd conclusion
- Magic!
Either the strong self-sampling assumption is false
Of course it is false. What are the reasons to even suspect that it might be true?
and/or path-based identity is true.
Note that path-dependent identity has its own paradoxes: two copies can have different "weights" depending on how they were created, while having the same measure. For example, if two copies of me are created while I sleep and one of them is then copied again, there will be 3 copies in the same world in the morning; but if we calculate the chances of being each of them based on paths, they will be ½, ¼ and ¼.
This actually sounds about right. What's paradoxical here?
↑ comment by simon · 2024-11-26T17:54:12.756Z · LW(p) · GW(p)
This actually sounds about right. What's paradoxical here?
Not that it's necessarily inconsistent, but in my view it does seem to be pointing out an important problem with the assumptions (hence indeed a paradox if you accept those false assumptions):
(ignore this part, it is just a rehash of the path dependence paradigm. It is here to show that I am not complaining about the math, but about its relation to reality):
Imagine you are going to be split (once). It is factually the case that there are going to be two people with memories, etc., consistent with having been you, and without any important differences to distinguish them. If you insist on coming up with some probability number for "waking up" as one particular one of them, obviously it has to be ½.
And then, if one of those copies subsequently splits, if you insist on assigning a probability number for those further copies, then from the perspective of that parent copy, the further copies also have to be ½ each.
And then if you take these probability numbers seriously and insist on them all being consistent then obviously from the perspective of the original the probability numbers for the final numbers have to be ½ and ¼ and ¼. As you say "this actually sounds about right".
What's paradoxical here is that in the scenario provided we have the following facts:
- you have 3 identical copies all formed from the original
- all 3 copies have an equal footing going forward
and yet, the path-based identity paradigm is trying to assign different weights to these copies, based on some technical details of what happened to create them. The intuition that this is absurd is pointing at the fact that these technical details aren't what most people probably would care about, except if they insist on treating these probability numbers as real things and trying to make them follow consistent rules.
Ultimately "these three copies will each experience being a continuation of me" is an actual fact about the world, but statements like "'I' will experience being copy A (as opposed to B or C)" are not pointing to an actual fact about the world. Thus assigning a probability number to such a statement is a mental convenience that should not be taken seriously. The moment such numbers stop being convenient, like assigning different weights to copies you are actually indifferent between, they should be discarded. (and optionally you could make up new numbers that match what you actually care about instrumentally. Or just not think of it in those terms).
↑ comment by avturchin · 2024-11-26T19:07:47.757Z · LW(p) · GW(p)
Did I understand you right that you are arguing against path-dependent identity here?
"'I' will experience being copy A (as opposed to B or C)" are not pointing to an actual fact about the world. Thus assigning a probability number to such a statement is a mental convenience that should not be taken seriously
Copies might be the same after copying, but the room numbers in which they appear are different, and thus they can make bets on room numbers.
↑ comment by simon · 2024-11-26T19:18:37.482Z · LW(p) · GW(p)
The issue, to me, is not whether they are distinguishable.
The issues are:
- is there any relevant-to-my-values difference that would cause me to weight them differently? (answer: no)
and:
- does this statement make any sense as pointing to an actual fact about the world: "'I' will experience being copy A (as opposed to B or C)" (answer: no)
Imagine the statement: in world 1, "I" will wake up as copy A. in world 2 "I" will wake up as copy B. How are world 1 and world 2 actually different?
Answer: they aren't different. It's just that in world 1, I drew a box around the future copy A and said that this is what will count as "me", and in world 2, I drew a box around copy B and said that this is what will count as "me". This is a distinction that exists only in the map, not in the territory.
comment by Dagon · 2024-04-24T21:10:50.575Z · LW(p) · GW(p)
Is your mind causally disconnected from the actual universe? That's the only way I can understand the merging of minds that share some similarities (but are absolutely not identical across universes that aren't themselves identical). Your forgetting may make two possible minds superficially the same, but they're simply not identical.
I don't know why you think path-based configuration of brain state would be false. That may not be "identity" for all purposes - there may be purposes for which it doesn't suffice or is too restrictive, but it's probably good for this case.
↑ comment by avturchin · 2024-04-25T16:19:18.954Z · LW(p) · GW(p)
Presumably in deep meditation people become disconnected from reality.
↑ comment by Dagon · 2024-04-25T16:48:49.236Z · LW(p) · GW(p)
In deep meditation people become disconnected from reality
Only metaphorically, not really disconnected. In truth, in deep meditation, the conscious attention is not focused on physical perceptions, but that mind is still contained in and part of the same reality.
This may be the primary crux of my disagreement with the post. People are part of reality, not just connected to it. Dualism is false, there is no non-physical part of being. The thing that has experiences, thoughts, and qualia is a bounded segment of the universe, not a thing separate or separable from it.
comment by Donald Hobson (donald-hobson) · 2024-04-24T18:47:56.377Z · LW(p) · GW(p)
Who knows what "meditation" is really doing under the hood.
Let's set up a clearer example.
Suppose you are an uploaded mind, running on a damaged robot body.
You write a script that deletes your mind, running a bunch of nul-ops before rebooting a fresh blank baby mind with no knowledge of the world.
You run the script, and then you die. That's it. The computer running nul-ops "merges" with all the other computers running nul-ops. If the baby mind learns enough to answer the question before checking whether its hardware is broken, then it considers itself to have a small probability of the hardware being broken. And then it learns the bad news.
Basically, I think forgetting like that without just deleting your mind isn't something that really happens. I also feel like, when arbitrary mind modifications are on the table, "what will I experience in the future" returns Undefined.
Toy example. Imagine creating loads of near-copies of yourself, with various changes to memories and personality. Which copy do you expect to wake up as? Equally likely to be any of them? Well just make some of the changes larger and larger until some of the changes delete your mind entirely and replace it with something else.
Because the way you have set it up, it sounds like it would be possible to move your thread of subjective experience into any arbitrary program.
↑ comment by avturchin · 2024-04-24T19:51:11.843Z · LW(p) · GW(p)
In the case of the broken robot, we need two conditions for magic by forgetting:
- there are 100 robots, only one is broken, and all of them are type-copies of each other;
- each robot naturally enters a blank state of mind at some moment, like sleep or a reboot.
In that case, after a robot enters the blank state of mind, it has equal chances of being any of the robots, and this dilutes its chance of having the damaged body after awakening.
For your toy example – to a first approximation, I expect to wake up as any copy which can recognize itself as avturchin (a self-recognition identity criterion).
↑ comment by Donald Hobson (donald-hobson) · 2024-04-24T22:10:46.076Z · LW(p) · GW(p)
The point is, if all the robots are a true blank state, then none of them is you. Because your entire personality has just been forgotten.
↑ comment by avturchin · 2024-04-25T16:07:32.706Z · LW(p) · GW(p)
I can forget one particular thing but preserve most of my self-identification information.
↑ comment by Donald Hobson (donald-hobson) · 2024-04-26T13:31:58.007Z · LW(p) · GW(p)
True. But for that you need there to exist another mind almost identical to yours except for that one thing.
In the question "how much of my memories can I delete while retaining my thread of subjective experience?" I don't expect there to be an objective answer.
comment by simon · 2024-11-25T00:34:30.425Z · LW(p) · GW(p)
Here are some things one might care about:
1. what happens to your physical body
2. the access to working physical bodies of cognitive algorithms, across all possible universes, that are within some reference class containing the cognitive algorithm implemented by your physical body
3. ... etc, etc...
4. what happens to the physical body selected by the following process:
   a. start with your physical body
   b. go forward to some later time selected by the cognitive algorithm implemented by your physical body, allowing (or causing) the knowledge possessed by the cognitive algorithm implemented by your physical body to change in the interim
   c. at that later time, randomly sample from all the physical bodies, among all universes, that implement cognitive algorithms having the same knowledge as the cognitive algorithm implemented by your physical body at that later time
   d. (optionally) return to step b, but with the physical body whose changes of cognitive algorithm are tracked and whose decisions are used being the new physical body selected in step c
   e. stop whenever the cognitive algorithm implemented by the physical body selected in some step decides to stop.
For 1, 2, and I expect for the vast majority of possibilities for 3, your procedure will not work. It will work for 4, which is apparently what you care about.
Terminal values are arbitrary, so that's entirely valid. However, 4 is not something that seems, to me, like a particularly privileged or "rational" thing to care about.
↑ comment by avturchin · 2024-11-25T11:18:58.076Z · LW(p) · GW(p)
It will work only if I care about my observations, something like EDT.
↑ comment by simon · 2024-11-25T16:01:51.428Z · LW(p) · GW(p)
I now care about my observations!
My observations are as follows:
At the current moment "I" am the cognitive algorithm implemented by my physical body that is typing this response.
Ten minutes from now "I" will be the cognitive algorithm of a green tentacled alien from beyond the cosmological horizon.
You will find that there is nothing contradictory about this definition of what "I" am. What "I" observe 10 minutes from now will be fully compatible with this definition. Indeed, 10 minutes from now, "I" will be the green tentacled alien. I will have no memories of being in my current body, of course, but that's to be expected. The cognitive algorithm implemented by my current body at that time will remember being "me", but that doesn't count; those are someone else's observations.
Edit: to be clear, the point made above (by the guy who is now a green tentacled alien beyond the cosmological horizon, and whose former body and cognitive algorithm is continuous with mine) is not a complaint about the precise details of your definition of what "you" are. What he was trying to point at is whether personal identity is a real thing that exists in the world at all, and how absurd your apparent definition of "you" looks to someone - like me - who doesn't think that personal identity is a real thing.
↑ comment by simon · 2024-11-25T15:37:28.780Z · LW(p) · GW(p)
"Your observations"????
By "your observations", do you mean the observations obtained by the chain of cognitive algorithms, altering over time and switching between different bodies, that the process in 4 is dealing with? Because that does not seem to me to be a particularly privileged or "rational" set of observations to care about.
comment by ABlue · 2024-04-25T01:40:24.089Z · LW(p) · GW(p)
Is this an independent reinvention of the law of attraction? There doesn't seem to be anything special about "stop having a disease by forgetting about it" compared to the general "be in a universe by adopting a mental state compatible with that universe." That said, becoming completely convinced I'm a billionaire seems more psychologically involved than forgetting I have some disease, and the ratio of universes where I'm a billionaire versus I've deluded myself into thinking I'm a billionaire seems less favorable as well.
Anyway, this doesn't seem like a good solution, since for every "me" that gets into a better universe, another just gets booted into the worse one. As far as the interests of the whole cohort go, it'd be a waste of effort.
↑ comment by avturchin · 2024-04-25T16:15:47.805Z · LW(p) · GW(p)
The number of poor people is much larger than the number of billionaires, so in most cases you will fail to wake up as a billionaire. But sometimes it will work, and it is similar to the law of attraction. The formulation via forgetting is more beautiful, though: you forget that you are poor.
UPDATE: Actually, the difference from the law of attraction is that after applying the law of attraction, a person still remembers that he has used it. In magic by forgetting, the fact of its use must be completely forgotten.