Posts

Debates how to defeat aging: Aubrey de Grey vs. Peter Fedichev. 2024-05-27T10:25:49.706Z
Magic by forgetting 2024-04-24T14:32:20.753Z
Strengthening the Argument for Intrinsic AI Safety: The S-Curves Perspective 2023-08-07T13:13:42.635Z
The Sharp Right Turn: sudden deceptive alignment as a convergent goal 2023-06-06T09:59:57.396Z
Another formalization attempt: Central Argument That AGI Presents a Global Catastrophic Risk 2023-05-12T13:22:27.141Z
Running many AI variants to find correct goal generalization 2023-04-04T14:16:34.422Z
AI-kills-everyone scenarios require robotic infrastructure, but not necessarily nanotech 2023-04-03T12:45:01.324Z
The AI Shutdown Problem Solution through Commitment to Archiving and Periodic Restoration 2023-03-30T13:17:58.519Z
Long-term memory for LLM via self-replicating prompt 2023-03-10T10:28:31.226Z
Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence? 2022-12-29T13:37:45.130Z
A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming 2022-09-11T10:25:40.707Z
The table of different sampling assumptions in anthropics 2022-06-29T10:41:18.872Z
Another plausible scenario of AI risk: AI builds military infrastructure while collaborating with humans, defects later. 2022-06-10T17:24:19.444Z
Untypical SIA 2022-06-08T14:23:44.468Z
Russian x-risks newsletter May 2022 + short history of "methodologists" 2022-06-05T11:50:31.185Z
Grabby Animals: Observation-selection effects favor the hypothesis that UAP are animals which consist of the “field-matter”: 2022-05-27T09:27:36.370Z
The Future of Nuclear War 2022-05-21T07:52:34.257Z
The doomsday argument is normal 2022-04-03T15:17:41.066Z
Russian x-risk newsletter March 2022 update 2022-04-01T13:26:49.500Z
I left Russia on March 8 2022-03-10T20:05:59.650Z
Russian x-risks newsletter winter 21-22, war risks update. 2022-02-20T18:58:20.189Z
SIA becomes SSA in the multiverse 2022-02-01T11:31:33.453Z
Plan B in AI Safety approach 2022-01-13T12:03:40.223Z
Each reference class has its own end 2022-01-02T15:59:17.758Z
Universal counterargument against “badness of death” is wrong 2021-12-18T16:02:00.043Z
Russian x-risks newsletter fall 2021 2021-12-03T13:06:56.164Z
Kriorus update: full bodies patients were moved to the new location in Tver 2021-11-26T21:08:47.804Z
Conflict in Kriorus becomes hot today, updated, update 2 2021-09-07T21:40:29.346Z
Russian x-risks newsletter summer 2021 2021-09-05T08:23:11.818Z
A map: "Global Catastrophic Risks of Scientific Experiments" 2021-08-07T15:35:33.774Z
Russian x-risks newsletter spring 21 2021-06-01T12:10:32.694Z
Grabby aliens and Zoo hypothesis 2021-03-04T13:03:17.277Z
Russian x-risks newsletter winter 2020-2021: free vaccines for foreigners, bird flu outbreak, one more nuclear near-miss in the past and one now, new AGI institute. 2021-03-01T16:35:11.662Z
[RXN#7] Russian x-risks newsletter fall 2020 2020-12-05T16:28:51.421Z
Russian x-risks newsletter Summer 2020 2020-09-01T14:06:30.196Z
If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z
Rationalist prepper thread 2020-01-28T13:42:05.628Z
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z

Comments

Comment by avturchin on Shortform · 2024-07-15T12:52:14.671Z · LW · GW

The chances of being injured in the head but not brain-damaged are rather small, I think less than 10 per cent. So in 90 per cent of the branches where shots were fired in the direction of his head, he is seriously injured or dead.
However, climbing onto the roof without a Secret Service reaction was also a very unlikely event, with maybe only a 10 per cent chance of success.

Combining the two, I get a 9 per cent chance of him being dead or seriously injured yesterday.
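
(As a back-of-the-envelope check of combining my two estimates above: 0.1 probability that the shots happen at all × 0.9 probability of death or serious injury given shots at his head ≈ 0.09, i.e. about 9 per cent.)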

Comment by avturchin on What Other Lines of Work are Safe from AI Automation? · 2024-07-11T13:35:22.120Z · LW · GW

1. Everything related to parenting and childcare. Most parents may not want a robot to babysit their children.

2. Art history and museums. There is a lot of physical work and non-text knowledge involved, and demand may remain. This includes art restoration (until clouds of nanobots can do it).

Comment by avturchin on When is a mind me? · 2024-07-10T13:49:50.322Z · LW · GW

If we constantly and very quickly replace a mind with its copies, the mind may not have subjective experiences. Why do I think that?

Subjective experience appears only when a mind moves from state A.1 to state A.2. That is, between A.1 (I see an apple) electric signals move through circuits, and at moment A.2 I say "I see an apple!" The subjective experience of the apple's color happens after A.1 but before A.2.

A mind frozen in A.1 will not have subjective experience.

Now, if I replace this process with a series of snapshots of brain-states, there will be no intermediate calculations between A.1 and A.2 which produce the subjective experience of the apple, and we get something like a philosophical zombie.

Obviously, we would need to know what mind-state A.2 will be without performing the needed calculations; otherwise the calculations themselves would have the experience. But if A.2 is simple, like just saying "I see an apple", we can guess A.2 without running all the internal processes.

It may seem like a minor issue in the Mars teleporter thought experiment, as in the worst case only a small fraction of a second of consciousness disappears for the copy. But if we replace a mind a million times a second, we get a p-zombie.

Or we need to take some illusionist position about qualia: they do not exist at all, so they are not epiphenomena of calculations.

We can escape the microscopic blackout during Mars teleportation if the copying is performed after A.2 has finished but before mind-state A.3 has started; however, this is not how the brain works, as its processes are asynchronous. We don't know how such a microscopic blackout would affect the qualia that follow it; we don't have a theory of qualia. Maybe after a microscopic blackout we would get a different set of basic qualia, with, say, red and green changing places. In that case, the copy would be subjectively different from me.

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T22:43:39.859Z · LW · GW

Actually, it looks from this like FNIC favors simpler ways of abiogenesis, as there will be more planets with life and more chances for me to appear.

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T21:29:45.072Z · LW · GW

Abiogenesis seems to depend on the random synthesis of a strand of RNA about 100 pieces long that is capable of self-replication. The chance of this on any given planet is something like 10^-50.

Interstellar panspermia has far fewer unknowns, and we know that most of its ingredients are already in place: Martian meteorites, interstellar comets. It may have something like a 0.01 initial probability.

Non-observation of aliens may be explained by the fact that either a) p(intelligence|life) is very small, or b) we are the first of many nearby siblings and will meet them soon (local grabby aliens).

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T17:46:29.781Z · LW · GW

My reasoning is the following:


1. My experience will be the same on planets with and without panspermia, as panspermia is basically invisible for now.

2. If the Universe is very large and slightly diverse, there are regions where panspermia is possible and regions where it is not, without any consequences visible to us.

3. (Assumption) Abiogenesis is difficult, but potentially habitable planets are very numerous.

4. In the regions with panspermia, life will be disseminated from the initial Eden to millions of habitable planets in the galaxy.

5. For every planet with life in the non-panspermia region there will be a million planets with life in the panspermia region.

6. As there are no observable differences between the regions, for any exact copy of me in a non-panspermia region there will be a million copies of me in panspermia regions. (Maybe I misunderstand FNIC, but this is how I apply it.) The rough ratio is spelled out below.
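
If one simply weights regions by the number of my exact copies, as in step 6 (illustrative even prior, and the 1:1,000,000 ratio from step 5): posterior odds (panspermia : non-panspermia) = prior odds × (copies in panspermia regions / copies in non-panspermia regions) ≈ 1 × 10^6, so almost all of my probability mass ends up in panspermia regions.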


What do you think?
 

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T17:35:25.978Z · LW · GW

The case for panspermia seems simpler than Sleeping Beauty, as it doesn't involve possible worlds. Imagine that there are two regions of the Universe: in one panspermia is possible, and in the other it is not. The one where it is possible has 100 times more habitable planets per volume. This suggests that we are more likely to be in the region in which panspermia is happening.
 

> On the other hand, if you simply tried to approximate median human age by doing that estimate at every year, then your results would be pretty bad. Most of the estimates would be very off.

Most of the estimates will not be very off. 90 per cent of them will give the correct order of magnitude. 
 

Comment by avturchin on Should you refuse this bet in Technicolor Sleeping Beauty? · 2024-07-09T12:18:26.352Z · LW · GW

I think this is a toy problem about how to cooperate with one's own copies, especially in cases of acausal cooperation in which different copies have to act differently - similar to "Death in Damascus". A random generator is a good way to diverge, but a process that flips naturally is better.

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T11:11:56.156Z · LW · GW

What you are saying reminds me of FNIC - full non-indexical conditioning. It was discussed on LW. It still may have some anthropic effects - like making panspermia more likely - but it may be used against DA.

However, most facts about my personality are irrelevant to my thought process about anthropics. There was a good term, "people in my epistemic situation", used in the SIA thread here on LW.

Another example: I can use my age as a random sample of human ages and predict the median human life expectancy based on it. This works despite my ages following one another sequentially.

For the three persons in the room, I assume that some amnesia drug removed the knowledge of who exactly I am.

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-09T09:37:29.821Z · LW · GW

For the Doomsday argument we should use only competent observers who can at least think about the complex math of the Doomsday argument. In other words, I am randomly selected from the set of competent observers.

However, my parents were not members of this class. They were clever, but never looked in that direction. So your argument about parents doesn't work for DA. Also, the first competent observers appeared around 1970, so the Doomsday is within the next few decades.

Also, I am still not getting the main idea of your argument. For example, suppose there are three people in a room: my grandmother, my mother, and I. I know only that I am one of them - how does that change the probability that I am the grandmother?

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-06T12:27:26.014Z · LW · GW

SIA doesn't compensate SSA for the Doomsday argument. SIA predicts an infinite number of civilizations in the universe, but says nothing about the median duration of existence of a typical civilization.

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-06T12:18:12.306Z · LW · GW

The main argument against the idea that "I can't think of myself as a random sample" is that it can be experimentally tested.

For example, I can use the month of my birth to get a reasonable estimate of the number of months in a year: I was born in September (month 9), so it is reasonable to give 50 per cent credence to there being at most 18 months in a year, which is close to 12.
Similarly, I can use the distance from the equator of the place where I was born to estimate the total diameter of the Earth.

You may respond that in this example there is no accumulation of successive observers. But I can estimate the median human life expectancy just from my age.

I just devised and performed a new experiment: I want to learn the number of hours in a day using just one sample. I look at my clock and it is now 15:14. This gives 50 per cent credence that the total number of hours in a day is at most 30, which is reasonably close to 24.
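
A minimal simulation sketch of this "double your single sample" estimator (my own code; it assumes the observation point is uniform over the true range, and the function name is just illustrative):

```python
import random

def doubling_estimator_stats(n_true, trials=100_000):
    """Draw one point uniformly from (0, n_true) and estimate n_true as twice that point.
    Report (a) how often the true value is at most the estimate (~50 per cent),
    and (b) how often the estimate is within a factor of 10 of the true value."""
    covered = within_order = 0
    for _ in range(trials):
        x = random.uniform(0, n_true)   # e.g. my birth month, or the current hour
        estimate = 2 * x                # single-sample "median" estimate
        covered += n_true <= estimate
        within_order += n_true / 10 <= estimate <= 10 * n_true
    return covered / trials, within_order / trials

print(doubling_estimator_stats(12))   # months in a year
print(doubling_estimator_stats(24))   # hours in a day
```

Under this toy model the doubled sample covers the true value about half the time and is within a factor of ten of it about 95 per cent of the time.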

Comment by avturchin on Lessons from Failed Attempts to Model Sleeping Beauty Problem · 2024-07-06T11:25:28.697Z · LW · GW

Thanks for your detailed answer. I think that for a halfer the most interesting part is when SB learns that it is Monday. This provides her with some information about the result of the current toss, namely that it is 1/4 for Tails. Do you address this problem in the next posts?

Comment by avturchin on Lessons from Failed Attempts to Model Sleeping Beauty Problem · 2024-07-05T18:25:25.461Z · LW · GW

One interesting observation I have is that "probability of the coin" is an ambiguous term:

  1. It can denote a frequentist property of the coin in general, which has a tendency to give Heads with frequency 0.5.
  2. It can denote the probability that a given toss of the coin produces Heads, which again is typically 0.5.

    However, there is a difference, and the SB problem highlights it. I can have partial information about a given toss, and thus my estimate of the probability for that toss may differ from the general probability; there is nothing surprising about that.

    For example, you toss the coin and speak with one random person if Heads, but with two if Tails. In that case, if approached by you, I can expect a 2/3 probability that your coin landed Tails, even before you tell me how it fell.
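
    A quick Bayesian sketch of that last example (N is just an illustrative pool of people you might approach): P(I am approached | Heads) = 1/N and P(I am approached | Tails) = 2/N, so P(Tails | I am approached) = (1/2 · 2/N) / (1/2 · 1/N + 1/2 · 2/N) = 2/3.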

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-05T16:13:23.494Z · LW · GW

In Laplace's sunrise problem the question is: what are the chances that the Sun will rise again after it has risen on the previous 5000 days? Let's reframe the problem: what are the chances that a catastrophe will not happen in year 5001, given that it didn't happen in the previous 5000 years? Laplace gives the chances of no catastrophe as 1 - 1/(5000 + 2). The "+2" appears because we are speaking about discrete events and the next year is the 5001st.

Simplifying, Laplace gives roughly a 1/5000 chance of catastrophe in the next year after 5000 years without one.

If we take Gott's equation for the Doomsday argument, it also gives a probability of catastrophe of about 1/5000 for the situation where I have survived 5000 years without a catastrophe BUT am randomly selected from that period. Laplace and Gott arrived at basically the same equation using different methods.

I do not see Laplace's problem as problematic; it is another version of the Doomsday argument, and both are correct. But it shows us that "random sampling" is not a necessary condition for having a Doomsday argument.
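
To spell out the two standard formulas being compared here (my summary, not derived in the comment): Laplace's rule of succession gives P(no catastrophe in year n+1 | n catastrophe-free years) = (n+1)/(n+2), so P(catastrophe) = 1/(n+2) ≈ 1/n. In Gott's version, if the observed catastrophe-free age t is treated as a uniformly random point within the total span T (with the usual vague prior over T), then P(the span ends within the next Δt | it has already lasted t) = Δt/(t+Δt) ≈ Δt/t, which for t = 5000 years and Δt = 1 year is again about 1/5000.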

Comment by avturchin on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-05T09:50:59.070Z · LW · GW

In Laplace's sunrise problem you are not assuming that you are randomly sampled in time, but the prediction is basically the same as in the Doomsday argument - if applied to the same period of time.

Typically the sunrise problem is applied only to the next day, while DA is applied to a period several times longer than the whole history of qualified observers; the sunrise problem is also discrete. But if we apply them to the same period of time, they give the same probability.

Comment by avturchin on avturchin's Shortform · 2024-06-27T09:59:55.045Z · LW · GW

I was not dreaming. I was observing my hypnagogic images, which is not the same as dreaming; and when the streams merged, I became completely awake.

However, now that I know what it is, I can observe a similar thing again. The recipe is the following:
1. Do two different, unrelated things which require conscious attention but happen in different modalities, e.g. audio and visual.
2. Widen your attention and observe that you just had two narrower streams of attention.

The closest thing in everyday life is "driver amnesia" - the situation where a car driver splits attention between driving and conversation.

Comment by avturchin on avturchin's Shortform · 2024-06-26T20:15:47.236Z · LW · GW

I had an interesting experience a long time ago. In a near-sleep state my consciousness split into two streams: one was some hypnagogic images, and the other was some hypnagogic music.

They were not related to each other, and each somehow had its own observer.

A moment later something woke me up a bit, the streams seamlessly merged, and I was able to observe that a moment before I had had two independent streams of consciousness.

Conclusions:

1. A human can have more than one consciousness at a time.

2. It actually happens all the time but we don't care.

3. Merging of consciousnesses is easy. Moreover, binding and merging are actually the same process, similar to summation.

There is no center of consciousness - no homunculus, electron, or whatever.

I may have other conscious processes in the brain which just do not merge with the current stream of consciousness.

Qualia remain the same and are preserved in each of the streams of consciousness.

Comment by avturchin on silentbob's Shortform · 2024-06-26T16:21:27.740Z · LW · GW

I heard that there are no local minima in high-dimensional spaces, because there will almost always be paths toward the global minimum.
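
One related intuition pump (my own sketch, not from the comment above): if the Hessian at a random critical point is modeled as a random symmetric matrix, the chance that all of its eigenvalues are positive - i.e. that the point is a genuine local minimum with no escape directions - collapses quickly as the dimension grows.

```python
import numpy as np

def fraction_positive_definite(dim, trials=2000, seed=0):
    """Fraction of random symmetric matrices (toy 'Hessians') whose eigenvalues
    are all positive, i.e. that would correspond to a true local minimum."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        h = (a + a.T) / 2                      # symmetrize
        if np.all(np.linalg.eigvalsh(h) > 0):  # every curvature direction positive?
            hits += 1
    return hits / trials

for d in (1, 2, 4, 8, 16):
    print(d, fraction_positive_definite(d))
```

The fraction starts around 0.5 in dimension 1 and drops toward zero within a handful of dimensions, which is one way to see why high-dimensional critical points are almost always saddles rather than local minima.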

Comment by avturchin on Weak AGIs Kill Us First · 2024-06-17T12:36:58.063Z · LW · GW

Yes, to kill everyone an AI needs to reach something like IQ 200 - maybe that is enough to construct a virus.

That is nowhere near superintelligence. Superintelligence is overkill.

Comment by avturchin on Probably Not a Ghost Story · 2024-06-13T12:39:32.395Z · LW · GW

Can your partner trick you using the second switch? Do you have voice-controlled lamps?

Comment by avturchin on Reflective consistency, randomized decisions, and the dangers of unrealistic thought experiments · 2024-06-07T12:32:12.223Z · LW · GW

I think that people reason that if everyone constantly defects, we will get a less trustworthy society, where life is dangerous and complex projects are impossible.

Comment by avturchin on Reflective consistency, randomized decisions, and the dangers of unrealistic thought experiments · 2024-06-07T11:39:54.132Z · LW · GW

Reflexive inconsistency can manifest even in simpler decision theory experiments, such as the prisoner's dilemma. Before being imprisoned, anyone would agree that cooperation is the best course of action. However, once in jail, an individual may doubt whether the other party will cooperate and consequently choose to defect.

In the Newcomb paradox, reflexive inconsistency becomes even more central, as it is the very essence of the problem. At one moment, I sincerely commit to a particular choice, but in the next moment, I sincerely defect from that commitment. A similar phenomenon occurs in the Hitchhiker problem, where an individual makes a promise in one context but is tempted to break it when the context changes. 

Comment by avturchin on avturchin's Shortform · 2024-05-22T11:16:16.467Z · LW · GW

Anthropic did the opposite thing: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-04T12:55:07.800Z · LW · GW

Also, radars are good at pinpointing exact coordinates in space and time. Optical recognition may have delays or difficulty in measuring distance. Even a one-second delay makes its information useless for supersonic aircraft.

Comment by avturchin on Were there any ancient rationalists? · 2024-05-03T21:25:59.931Z · LW · GW

Maybe the most surprising answer is Paul Valéry. He was a great poet, but during one night in 1892 he decided to spend the rest of his life solving the mystery of intelligence, and he wrote extensive notebooks about it. https://collecties.kb.nl/en/koopman-collection/1951-1960/cahiers

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-03T12:05:18.394Z · LW · GW

It only works if the aircraft flies above your territory - and similar systems are used for drone detection now. Actually, they use people's eyes, smartphones, and instant messaging. But during the recent attack on Iran, a single F-35 flew over Iraq and fired a missile from something like 200 km away at a target in Iran.

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-02T20:56:52.208Z · LW · GW

Have you ever seen any plane that far away? I have only seen planes above me (at 10 km), and they are almost like dots.

The difference between optics and radar is that with optics you need to know where to look, while radar has constant 360-degree perception.

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-02T13:06:39.289Z · LW · GW

They likely use them in places where no air defence is present, and still from some distance, using JDAMs.

I think I missed the main thing about stealth: such aircraft are stealthy to radar at distances like 100 km but visible to radar at distances like 10 km (arbitrary numbers). Optical observation at distances of 100 km, however, is impossible (it needs large telescopes, and you need to know where to look). Also, the optical density of the atmosphere starts playing a role, as does the curvature of the Earth.

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-02T11:05:29.678Z · LW · GW

Tactical support aircraft like the A-10 are not stealthy and can be used only if air defence is suppressed.

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-02T11:03:25.447Z · LW · GW

Flying very low, like 10-30 meters above the ground at night, will protect even against MANPADS: the aircraft will pass over you within a few seconds.
I recommend an interesting blog, https://xxtomcooperxx.substack.com/p/its-the-range-stupid-part-1, which discusses air defence and the current war in detail.

Comment by avturchin on Can stealth aircraft be detected optically? · 2024-05-02T08:43:17.638Z · LW · GW

That is why they prefer to fly strike missions during moonless nights. Also, they can fly very low or very high, which makes optical observation difficult.

Comment by avturchin on Magic by forgetting · 2024-04-30T20:32:52.254Z · LW · GW

Non-disease copies do not need to make any changes to their meditation routine in this model, assuming that they naturally forget their disease status during meditation.

Comment by avturchin on avturchin's Shortform · 2024-04-30T10:11:18.517Z · LW · GW

It failed my favorite test: draw a world map in text art. 

Comment by avturchin on avturchin's Shortform · 2024-04-30T10:10:32.393Z · LW · GW

It claims to have a knowledge cutoff of Nov 2023, but it failed to tell what happened on October 7 and hallucinated.

Comment by avturchin on avturchin's Shortform · 2024-04-29T21:36:18.793Z · LW · GW

Yes, they could now make a much better version - and I hope they will do it internally. But deleting the public version is a bad precedent, and it would be better to make all personal sideloads open-source.

Comment by avturchin on avturchin's Shortform · 2024-04-29T18:44:09.831Z · LW · GW

ChatGPT 4.5 is in preview at https://chat.lmsys.org/ under the name gpt-2.

It calls itself ChatGPT 2.0 in a text art drawing https://twitter.com/turchin/status/1785015421688799492 

Comment by avturchin on Magic by forgetting · 2024-04-29T16:07:55.917Z · LW · GW

Yes, it only works if the other copies are meditating for some other reason - for example, they sleep or meditate for enlightenment. And they are being exploited in this situation.

Comment by avturchin on Magic by forgetting · 2024-04-29T16:05:38.072Z · LW · GW

I assume that meditation happens naturally, like sleep. 

Comment by avturchin on Magic by forgetting · 2024-04-29T14:00:18.283Z · LW · GW

I think I understand what you are saying: the expected utility of the whole procedure is zero.

For example, imagine that there are 3 copies and only one has the disease. All meditate. After the procedure, the copy with the disease will have a 2/3 chance of being cured. Each of the two copies without the disease gets a 1/3 chance of having the disease, which in sum gives 2/3 of total disutility. In that case the total utility of being cured equals the total disutility of getting the disease, and the whole procedure is neutral.
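
Rough bookkeeping of those numbers: expected cures = 1 sick copy × 2/3 = 2/3; expected newly "acquired" disease = 2 healthy copies × 1/3 = 2/3; the two cancel, so ex ante the procedure is neutral.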

However, if I already know that I have the disease, and I am not altruistic toward my copies, isn't playing such a game a winning move for me?

Comment by avturchin on Magic by forgetting · 2024-04-28T10:05:40.488Z · LW · GW

The trick is to use an already existing practice of meditation (or sleeping) and connect to it. Most people who go to sleep do not do it in order to use magic by forgetting, but it is natural to forget something during sleep. Thus, the fact that I wake up from sleep does not provide any evidence about my having the disease.

But it is in a sense parasitic behavior, and if everyone uses magic by forgetting every time they go to sleep, there will be almost no gain - except that one can "exchange" one bad thing for another, but will not remember the exchange.

Comment by avturchin on Self-Play By Analogy · 2024-04-26T11:58:09.694Z · LW · GW

Self-playing Adversarial Language Game Enhances LLM Reasoning

https://arxiv.org/abs/2404.10642

Comment by avturchin on LLMs seem (relatively) safe · 2024-04-26T11:56:08.024Z · LW · GW

LLMs can now also self-play in adversarial word games, and it increases their performance: https://arxiv.org/abs/2404.10642

Comment by avturchin on avturchin's Shortform · 2024-04-25T19:28:08.721Z · LW · GW

Roman Mazurenko is dead again. The first resurrected person, Roman lived as a chatbot (2016-2024) created from his conversations with his fiancée. You could even download him as an app.

But not any more. His fiancée has married again, and her startup http://Replika.ai pivoted from resurrection help to AI girlfriends and psychological consulting.

It looks like they quietly removed the Roman Mazurenko app from public access. It is a particular pity that his digital twin lived a shorter life than his biological original, who died at 32 - especially now, when we have much more powerful instruments for creating semi-uploads based on LLMs with a large prompt window.

Comment by avturchin on Magic by forgetting · 2024-04-25T18:29:02.156Z · LW · GW

The "repeating" will not be repeating from internal point of view of a person, as he has completely erased the memories of the first attempt. So he will do it as if it is first time. 

Comment by avturchin on Magic by forgetting · 2024-04-25T17:29:29.899Z · LW · GW

Yes, here we can define magic as the "ability to manipulate one's reference class". And special minds may be much better adapted to it.

Comment by avturchin on Magic by forgetting · 2024-04-25T16:19:18.954Z · LW · GW

Presumably in deep meditation people become disconnected from reality.

Comment by avturchin on Magic by forgetting · 2024-04-25T16:17:33.710Z · LW · GW

Yes, it is easy to forget something if it has not become a part of your personality. So a new bad thing is easier to forget.

Comment by avturchin on Magic by forgetting · 2024-04-25T16:15:47.805Z · LW · GW

The number of poor people is much larger than the number of billionaires, so in most cases you will fail to wake up as a billionaire. But sometimes it will work, and this is similar to the law of attraction. The formulation via forgetting is more beautiful, though: you forget that you are poor.

UPDATE: actually, the difference from the law of attraction is that after applying the law of attraction, a person still remembers that he has used the law. In magic by forgetting, the fact of its use must be completely forgotten.

Comment by avturchin on Magic by forgetting · 2024-04-25T16:07:32.706Z · LW · GW

I can forget one particular thing but preserve most of my self-identification information.