Posts

Plan B in AI Safety approach 2022-01-13T12:03:40.223Z
Each reference class has its own end 2022-01-02T15:59:17.758Z
Universal counterargument against “badness of death” is wrong 2021-12-18T16:02:00.043Z
Russian x-risks newsletter fall 2021 2021-12-03T13:06:56.164Z
Kriorus update: full bodies patients were moved to the new location in Tver 2021-11-26T21:08:47.804Z
Conflict in Kriorus becomes hot today, updated, update 2 2021-09-07T21:40:29.346Z
Russian x-risks newsletter summer 2021 2021-09-05T08:23:11.818Z
A map: "Global Catastrophic Risks of Scientific Experiments" 2021-08-07T15:35:33.774Z
Russian x-risks newsletter spring 21 2021-06-01T12:10:32.694Z
Grabby aliens and Zoo hypothesis 2021-03-04T13:03:17.277Z
Russian x-risks newsletter winter 2020-2021: free vaccines for foreigners, bird flu outbreak, one more nuclear near-miss in the past and one now, new AGI institute. 2021-03-01T16:35:11.662Z
[RXN#7] Russian x-risks newsletter fall 2020 2020-12-05T16:28:51.421Z
Russian x-risks newsletter Summer 2020 2020-09-01T14:06:30.196Z
If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z
Rationalist prepper thread 2020-01-28T13:42:05.628Z
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z
avturchin's Shortform 2019-08-13T17:15:26.435Z
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z

Comments

Comment by avturchin on NFTs, Coin Collecting, and Expensive Paintings · 2022-01-25T12:17:12.557Z · LW · GW

The work's copyright has expired, but not the photograph's copyright. In other words, if you have the physical original, you can control who makes copies and how.

https://www.quora.com/Are-famous-art-pieces-like-the-Mona-Lisa-copyrighted

Comment by avturchin on NFTs, Coin Collecting, and Expensive Paintings · 2022-01-24T19:34:33.079Z · LW · GW

I heard about a case where an artist sold the copyright to all his works to another person (including works owned by other people, such as previous buyers). In principle, this means the rights holder could forbid all other owners from ever exhibiting, or even looking at, their own works. In the actual case I am speaking about, though, the dispute was about the right to make t-shirt prints.

I have also seen purchase agreements which explicitly state that the artwork comes with all rights.

Comment by avturchin on NFTs, Coin Collecting, and Expensive Paintings · 2022-01-24T11:40:50.168Z · LW · GW

Owning an original painting comes with the legal right to make copies of it; e.g., you can legally sell t-shirts with it. Owning a copy, in most cases, does not give one the right to reproduce it.

Another way to profit from an original is to open a museum and sell tickets (or get prestige). 

Comment by avturchin on Omicron Post #15 · 2022-01-19T15:27:59.927Z · LW · GW

It looks like there is a second peak now in Denmark, maybe because of BA.2: 36,474 cases today.

Comment by avturchin on Plan B in AI Safety approach · 2022-01-19T13:52:38.965Z · LW · GW

I think there is a common fallacy in which superintelligent AI risks are perceived as grey goo risks.

The main difference is that an AI thinks strategically over very long time horizons and takes even small probabilities into account.

If an AI is going to create as many paperclips as possible, then what it cares about is its chances of colonising the whole universe and even surviving the end of the universe. These chances are negligibly affected by the number of atoms on Earth, but depend strongly on the AI's chances of eventually meeting alien AIs. Those aliens may have different value systems, and some of them will be friendly to their creators. Such future AIs will not be happy to learn that the Paperclipper destroyed humans, and will not agree to make more paperclips for it. Bostrom explored similar ideas in "Hail Mary and Value Porosity".

TL;DR: it is instrumentally rational to preserve humans, as they could be traded with alien AIs; human atoms have very little instrumental value.

Comment by avturchin on Possible Dangers of the Unrestricted Value Learners · 2022-01-19T13:37:23.855Z · LW · GW

As I said in another comment: learning human values from, say, a fixed corpus of texts is a good start, but it doesn't solve the chicken-and-egg problem: we start by running a non-aligned AI which is learning human values, yet we want the first AI to be aligned already. One possible obstacle: the non-aligned AI could run away before it has finished learning human values from the texts.

Comment by avturchin on Possible Dangers of the Unrestricted Value Learners · 2022-01-19T13:35:57.297Z · LW · GW

Learning human values from, say, a fixed corpus of texts is a good start, but it doesn't solve the chicken-and-egg problem: we start by running a non-aligned AI which is learning human values, yet we want the first AI to be aligned already.

One possible obstacle: the non-aligned AI could run away before it has finished learning human values from the texts.

The chicken-and-egg problem could presumably be solved by some iteration-and-distillation approach: first we give a very rough model of human values (or rules) to a limited AI, and later we increase its power and its access to real humans. But this suffers from all the difficulties of iteration-and-distillation, such as unexpected jumps in capabilities. A toy sketch of such a loop is below.
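
A minimal sketch of what such an iteration-and-distillation loop could look like; every function, number, and threshold here is a hypothetical placeholder for illustration, not a real method or API.

```python
# Toy sketch of an iteration-and-distillation loop for value learning.
# All helpers and thresholds are invented placeholders.

def refine_values(value_model, capability):
    # Placeholder: the limited AI updates its value model from texts/feedback.
    return value_model + [f"lesson_learned_at_capability_{capability}"]

def alignment_score(value_model, capability):
    # Placeholder: an external check of how aligned the current system looks.
    # Models an unexpected capability jump past level 8.
    return 0.995 if capability <= 8 else 0.5

def iterated_value_learning(rough_values, max_rounds=6, safety_threshold=0.99):
    value_model, capability = list(rough_values), 1  # start rough and weak
    for _ in range(max_rounds):
        value_model = refine_values(value_model, capability)
        if alignment_score(value_model, capability) < safety_threshold:
            break  # stop scaling: alignment is no longer verified
        capability *= 2  # distil into a more capable successor with wider access
    return value_model, capability

print(iterated_value_learning(["rough rule: do not harm humans"]))
```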

Comment by avturchin on Plan B in AI Safety approach · 2022-01-18T14:06:43.479Z · LW · GW

Some of the risks are "instrumental risks", like the use of human atoms, and others are "final goal risks", like covering the universe with smiley faces. If the final goal is something like smiley faces, the AI can still preserve some humans for instrumental purposes, such as researching the types of smiles or trading with aliens.

If some humans are preserved instrumentally, they could live better lives than we do now and even be more numerous, so it is not an extinction risk. Most humans alive today are instrumental to states and corporations, but still get some reward.

Comment by avturchin on Alex Ray's Shortform · 2022-01-16T19:32:16.724Z · LW · GW

You probably don't need a 100-year bunker if you prepare only for a biocatastrophe, as most pandemics have a shorter timescale (AIDS being an exception).

Also, it may be better not to build anything but to use already existing structures. E.g., there are coal mines on Spitsbergen which could be used for underground storage.

Comment by avturchin on Alex Ray's Shortform · 2022-01-16T18:05:44.476Z · LW · GW

What about using remote islands as bio-bunkers? Some of them are not reachable by aviation (no airfield), so they seem better protected, and they already have populated science stations. An example is the Kerguelen Islands. The main risk there is bird flu delivered by birds, or some stray ship.

Comment by avturchin on Plan B in AI Safety approach · 2022-01-15T13:19:43.066Z · LW · GW

There is only a short period of time when humans are a threat and thus need to be exterminated: after the AI is already capable of causing human extinction, but before it reaches the level of superintelligent omnipotence.

A superintelligent AI could prevent the creation of other AIs by surveillance via some form of nanotech. So if the AI has mastered nanotech, it doesn't need to exterminate humans for its own safety; only an AI without nanotech might need to. But how? It could create a biological virus, which is simpler than nanotech, but such a young AI still depends on human-built infrastructure, like electricity, so exterminating humans before mastering nanotech is not a good idea.

I am not trying to show that AI is innately safe here; I just want to point out that the extermination of humans is not a convergent goal for AI. There are still many ways an AI could go wrong and kill us all.

Comment by avturchin on Plan B in AI Safety approach · 2022-01-14T11:09:26.647Z · LW · GW

Thanks, it looks like they died during copy-pasting. 

Comment by avturchin on avturchin's Shortform · 2022-01-12T11:26:14.514Z · LW · GW

Who "we" ? :) 

Saying a "king" I just illustrated the difference between interesting character who are more likely to be simulated in a game or in a research simulation, and "qualified observer" selected by anthropics. But these two sets clearly intersects, especially of we live in a game about "saving the world". 

Comment by avturchin on avturchin's Shortform · 2022-01-11T12:01:52.601Z · LW · GW

Anthropics implies that I should be special, in the sense of being a "qualified observer" capable of thinking about anthropics. Simulation also requires that I be special, in the sense of finding myself living in interesting times. These kinds of specialness are similar, but not identical: the simulation kind requires that I be a "king" in some sense, while the anthropic kind is satisfied if I merely understand anthropics.

I am not a very special person (as of now); therefore, the anthropic kind of specialness seems more likely than the simulation kind.

Comment by avturchin on avturchin's Shortform · 2022-01-08T10:17:54.520Z · LW · GW

Yes, people often mention the Baader–Meinhof phenomenon as evidence that we live in a "matrix". But it can be explained naturally.

Comment by avturchin on avturchin's Shortform · 2022-01-07T11:55:06.899Z · LW · GW

Observable consequences of simulation:

1. Larger chances of miracles or hacks

2. Larger chances of the simulation being turned off, or of a global catastrophe

3. I am more likely to play a special role or to live in interesting times

4. A possibility of afterlife.

Comment by avturchin on Signaling isn't about signaling, it's about Goodhart · 2022-01-07T10:14:04.412Z · LW · GW

What is rational behaviour for a rich person is signalling for a poor one. Imagine that a rich person chooses the best car for his needs and it is, say, a 50K car. A poor person who wants to look rich will buy the same car on lease.

Comment by avturchin on We need a theory of anthropic measure binding · 2022-01-05T11:49:58.035Z · LW · GW

Note that the part of your reply about entropy relates to the plot of a fictional novel. However, the plot has some merit, and a similar idea of anthropic miracles was later explored by Bostrom in "Adam and Eve, UN++".

Comment by avturchin on Charlie Steiner's Shortform · 2022-01-04T10:25:05.790Z · LW · GW

These are possible worlds where you can blackmail the blackmailer with the fact that you know he engaged in blackmail.

Comment by avturchin on avturchin's Shortform · 2022-01-03T13:33:41.391Z · LW · GW

New B.1.640.2 variant in France. More deadly than Delta: 952 cases, of which 315 are on ventilators.

https://www.thailandmedical.news/news/breaking-updates-on-new-b-1-640-2-variant-spreading-in-southern-france-number-of-cases-growing-and-variant-now-detected-in-united-kingdom-as-well

 

https://flutrackers.com/forum/forum/europe-aj/europe-covid-19-sept-13-2020-may-31-2021/933598-southern-france-reports-of-new-variant-with-46-mutations

Comment by avturchin on Each reference class has its own end · 2022-01-03T11:50:04.204Z · LW · GW

They can generate different dates, but they still use the same mental model which doesn't depend on the date.

It looks like I am in the second generation of anthropic reasoners (I started reading about it in 2006), but the interesting thing is that the second generation is much more numerous than the first, thanks to the Internet and LW. So it is not surprising to find oneself in the second generation rather than the first. But why am I not in the third generation?

Comment by avturchin on Each reference class has its own end · 2022-01-03T10:33:36.846Z · LW · GW

If they keep generating new generations, I should not be in the first generation.

Comment by avturchin on Each reference class has its own end · 2022-01-03T10:28:01.408Z · LW · GW

Agreed. The doom is the end of the reference class, not a bang. And if the SSA-based DA were universally refuted, so that no one ever even tried to think in that direction, then that would be the end of this type of thinking. I looked at Google Scholar and found that the number of articles about the DA peaked around the 2000s and is now declining, which suggests that interest in the problem is waning.

However, if we exist for a very long time, there will be a few observers every millennium who still like SSA, and over billions of years there should be many of them – more than the SSA-believers living now. In that case, I am still more likely to find myself in the remote future, not now – and as I am not there, I am surprised. Thus the DA still predicts a bang even if we assume it will be refuted.

Comment by avturchin on Each reference class has its own end · 2022-01-03T10:20:11.639Z · LW · GW

I actually meant "you infer or choose reference classes based on what you want to predict", but my point of interest was a specific application of the problem, namely the Doomsday argument.

Comment by avturchin on Each reference class has its own end · 2022-01-02T21:51:25.368Z · LW · GW

I mean by "middle" a large region of rooms which are not on borders, like between 20 and 80, not exactly room 50. Should clarify it in the post.

Comment by avturchin on Each reference class has its own end · 2022-01-02T18:10:46.754Z · LW · GW

I probably should have clarified that in the case of rows, I count as "middle" everything which is not on the border, that is, not the first or last row.

This caveat actually applies to the situation of anthropic fine-tuning of the universe by many parameters. The number of not-perfectly-fine-tuned universes is much larger than the set of perfectly fine-tuned ones. This implies a lower concentration of civilizations in space compared with an "optimal universe", which seems to be a solution to the Fermi paradox.

Thanks for the link.

Comment by avturchin on We need a theory of anthropic measure binding · 2022-01-02T15:30:29.668Z · LW · GW

I think that most BBs have low energy in absolute terms, that is, in joules. 

While the total energy and measure of BBs may be very large, there are several penalties which favour real minds in anthropics (a toy combination of them is sketched after the list):

  1. Complexity. A real mind capable of thinking about anthropics is rather complex, while most BBs are much simpler; by "much" I mean a double exponential in the brain size. 
  2. Content. Even a complex BB is as likely to think about any random thing as about anthropics, which gives a 10-100 order-of-magnitude penalty. 
  3. Energy. A human mind consumes, say, 1 watt for computation, but a BB will consume 10-30 orders of magnitude less. Here I assume that measure is proportional to the energy of the computation.
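
A minimal, purely illustrative combination of the three penalties on a log10 scale; every number is an assumption picked from the ranges above, not a measurement.

```python
# Toy combination of the three anthropic penalties against Boltzmann brains,
# expressed as orders of magnitude (log10). All values are illustrative assumptions.

complexity_penalty = 10**11  # proxy for "double exponential in brain size"; dominant term
content_penalty = 50         # within the 10-100 range for thinking about anthropics at all
energy_penalty = 20          # within the 10-30 range for energy of computation

total = complexity_penalty + content_penalty + energy_penalty
print(f"A BB observer is disfavoured by roughly 10^{total} relative to a real mind")
```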

 

Side note: there is an interesting novel about how the universe tries to return to its normal state of highest entropy by creating unexpected miracles on Earth which stop progress. https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)

Comment by avturchin on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T19:07:17.012Z · LW · GW

There is a correlation between several types of reported data and the real situation. Raw data is not very reliable if we don't account for biases of all kinds. For example, there is a known difference between reported deaths and excess mortality; the latter is often 2-3 times larger.

Anyway, I am disturbed by your words about the inability to report adverse effects of vaccines. It should not be this way.

Comment by avturchin on A non-magical explanation of Jeffrey Epstein · 2021-12-31T12:31:57.190Z · LW · GW
  1. Being dead is the best guarantee of never saying anything, even under torture or in a deal with the police.
  2. The real threat behind the blackmail may have been future torture somewhere in prison.
  3. The message could have been delivered even before he went to jail, as some kind of agreement between members of the group. It could even have been "delivered" acausally, as a reasonable expectation. There could be other channels, like a secret message via a client's attorney.
Comment by avturchin on I found a wild explanation for two big anomalies in metaphysics then became very doubtful of it · 2021-12-31T12:03:03.210Z · LW · GW

I don't see any problem with panpsychism, if we assume that we should count only observers who can think about anthropics. For example, if you are reading this now, you are not asleep and not a cat.

In that case, only two alternatives seem possible:

  1. Bite the bullet. We are a typical civilization and all civilizations decline. There are no supercivilizations, simulations, etc. So there is nothing surprising about being us, and the doom is soon. 
  2. We are part of a simulation, and the supercivilization tries to ensure that past simulations have high measure, e.g. to win a measure war against evil AI and prevent s-risks.
Comment by avturchin on We need a theory of anthropic measure binding · 2021-12-31T11:13:43.716Z · LW · GW

I came to the idea of energy as a measure of computation from exploring Ebborian brains: two-dimensional beings which have thickness in the third dimension. They can be sliced horizontally, creating copies.

The biggest part of a computer's mass could be removed without affecting the current computations, such as load-bearing structures and auxiliary circuits. They may be helpful for later computations, but only the current ones matter for observer-counting.

This also neatly solves the Boltzmann brains problem: by definition they have very low computational energy, so they are very improbable.

And this helps explain the thermodynamics problem you mentioned. The chaotic movement of electrons could be seen as the sum of many different computations. However, each individual computation has very little energy, and hence very little "observer weight" if it is conscious.

I haven't read the post you linked yet and will comment on it later.

Comment by avturchin on A non-magical explanation of Jeffrey Epstein · 2021-12-31T11:03:19.825Z · LW · GW

He could have been pressed into suicide by blackmail: if someone credibly says that they will kill him and his children (or whatever he cares about) unless he commits suicide.

In that case it is still a type of murder. 

Comment by avturchin on We need a theory of anthropic measure binding · 2021-12-30T12:58:16.074Z · LW · GW

I could also suggest an energy-used-for-computation measure. First, assume that there is a minimal possible energy of computation, a "Planck computation", performed by the most theoretically efficient computer. Each brain could then be counterfactually sliced into a sum of such minimal computations. We could then calculate which brain has more slices and conclude that I am more likely to be in that brain.

This measure is more plausible than a mass- or volume-based measure, as mass may include computationally inert parts of the brain, like bone. A toy version of the slice-counting is sketched below.
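
A minimal sketch of the slice-counting idea; the minimal "Planck computation" energy and the brain powers below are arbitrary illustrative assumptions, not physical constants.

```python
# Toy "energy slices" measure: observer weight ~ number of minimal-energy
# computations a process can be counterfactually sliced into.

E_MIN = 1e-21  # joules per minimal computation (illustrative placeholder)

def observer_weight(power_watts: float, duration_s: float = 1.0) -> float:
    """Number of minimal computational slices performed in the interval."""
    return power_watts * duration_s / E_MIN

human = observer_weight(20.0)       # ~20 W human brain (assumed)
boltzmann = observer_weight(1e-25)  # fleeting low-energy fluctuation (assumed)

print(f"Relative measure, human : BB = {human / boltzmann:.1e}")
```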

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-27T12:23:24.710Z · LW · GW

Generics will soon be available.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-23T16:35:22.075Z · LW · GW

If aging is defeated by 2030 (say, by superintelligent AI), then surviving even in a poor state is reasonable.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-23T12:04:29.755Z · LW · GW

One of the reasons for this type of conclusion is thinking in "far mode", as Robin Hanson suggested. If we speak, say, about the prospect that "your grandmother will die tomorrow", we start to think in near mode, and we don't like death in near mode. But every death will eventually be in near mode.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-22T23:37:45.854Z · LW · GW

The fact that people are not interested in immortalist stuff is one of the greatest mysteries. It is the dark matter of transhumanism. Fewer people have signed up for cryonics than have been eaten by birds under Zoroastrian burial rites.

One way to explain it is Thanatos, a built-in death drive: human apoptosis. But on the personal level humans try to survive.

Or fear of revolting against God's will. Religious people are fine with immortality in the afterlife, and they are not afraid that paradise will be overpopulated or boring. Is the key difference the idea of God? But we have superintelligence as its substitute.

Or, if we try to rationalise their argument another way, they say: it is impossible to stretch one parameter of the system to infinity while the other parameters stay finite. For example, if we get an infinite lifespan but the total amount of fun stays the same, the fun will be spread so thinly over eternity that there will be no fun at all in any given moment. The same goes for resources. This argument is at least reasonable, but it can be objected to.

Comment by avturchin on [deleted post] 2021-12-22T17:24:02.697Z

It could also evolve to overcome protective equipment by becoming more stable in the air or on surfaces.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-22T16:07:07.584Z · LW · GW

All these questions are not really about boredom or overpopulation – they are something like a defence against a new idea, or against the fear of death. It's like Stockholm syndrome, where the victim takes the side of the terrorist.

If people were really afraid of overpopulation, they would want to ban sex first.

You are right: they feel that something is amiss. The idea of immortality without an image of paradise is really boring. Becoming immortal without becoming a god and without living in a galaxy-sized paradise is wrong, and they feel it.

Comment by avturchin on Experiences raising children in shared housing · 2021-12-22T15:07:09.744Z · LW · GW

One problem that may arise is different sleep schedules for the children of two co-living families. If one child is an early bird, he or she will start making noise from early morning and actively try to wake the other children and adults.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-22T12:19:16.236Z · LW · GW

Good point that the argument works the other way if we do not postulate "death is bad" as a moral axiom and instead try to derive the badness of death from other values.

"Some people don't place intrinsic value in life."

Actually, who? Most examples I can imagine – samurai, Buddhists, drug addicts, or suicidal people – still place some value on not dying, but it is overwhelmed by another value.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-21T11:46:15.971Z · LW · GW

There was an interesting article recently: Weinberg, R. (2021). Ultimate Meaning: We Don’t Have It, We Can’t Get It, and We Should Be Very, Very Sad. Controversial Ideas, 1(1), 0–0.

She tried to prove that life itself is meaningless, as meanings are a property of projects and life is only a container of projects. I don't agree with her: all my projects can't expire simultaneously. But more importantly, I feel my life is more than just a container; I feel that my existence is a value in itself for me.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-20T23:21:54.287Z · LW · GW

Obviously we want to honor both preferences, but we just don't know how. However, it seems to me that solving suffering, as the qualia of pain, is technologically simpler: just add electrodes to the right brain centre to turn it off whenever the pain rises above an acceptable threshold. Death is a more complex problem, but the main difference is that death is irreversible.

From a personal utilitarian view, any amount of suffering could be compensated by a future eternal paradise.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-20T17:13:46.740Z · LW · GW

"Not all humans have that preference at all times in their life - I've known a few who chose to die (including some who I understood and supported the choice), and MANY who didn't choose to suffer more in order to live longer. Your induction is invalid."

My point is that if someone chose death, this doesn't mean he doesn't have the preference "do not die". It means he has two preferences, "not die" and "not suffer", and the suffering was so strong that he chose to ditch the "not die" preference and chose death to stop the suffering. However, if he had other ways to end the suffering, he would choose them instead.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-20T12:31:23.547Z · LW · GW

Even if you don't agree with the counterargument, you can see that people use this construction all the time: "death is needed because some other bad thing would happen in a world without death". They just give different names to the "other thing": eternal boredom, overpopulation, stagnation (as Musk recently said), lack of meaning in life, etc. But it doesn't address the badness of death per se.

Humans have a strong preference against personal-death-today. Any reasonable preference-extracting procedure will learn this; e.g., the worst punishment is the death penalty, not castration or tongue-cutting.

By induction, it can be shown that if a person doesn't want to die today (and also doesn't want to change his preferences about this), he will not want it tomorrow, or on any other day in the future. So death is personally bad.

If a person is altruistic, he would not want to act against the preferences of other people. So he would not want other people's deaths, except in some trolley-problem-like situations. Thus fighting death is a universal altruistic goal.

"I can get behind any movement to defer or remove the effects of aging, but my value drive there is to increase total maximally-valuable experience-hours, not to privilege any existing individual over a potential one."

This means you are not a supporter of preference utilitarianism. But real people would likely resist the implementation of pure hedonic utilitarianism, in which individual life doesn't matter and someone could be killed because he consumes too many resources that could otherwise support several less resource-hungry possible people. This would result in war and a lot of suffering. Thus, from a consequentialist point of view, hedonic utilitarianism would produce fewer hedons than other moral theories.

Comment by avturchin on Universal counterargument against “badness of death” is wrong · 2021-12-19T10:32:07.756Z · LW · GW

If one says, "death is bad, but it makes preventing dictatorships more likely", he still says that death is bad. But he also says that preventing dictators has higher utility than avoiding death, so he doesn't say that death is "absolutely" bad.

But now it is a technical problem: how to make a world where both conditions are satisfied.

Comment by avturchin on Magna Alta Doctrina · 2021-12-16T12:43:24.613Z · LW · GW

Could school education be an example of deep factoring? For example, the skill of writing is used later in math and all other classes, and math knowledge is later used in physics and chemistry classes.

Comment by avturchin on Omicron Post #6 · 2021-12-14T19:48:50.362Z · LW · GW

Excess mortality is growing in South Africa; for the last week of November it was 1000 above trend. https://www.samrc.ac.za/sites/default/files/files/2021-12-08/weekly4Dec2021.pdf

Comment by avturchin on A fate worse than death? · 2021-12-14T18:29:08.773Z · LW · GW

If you create all possible minds, you will resurrect any given mind. Creating all minds is simple in an Everettian universe: you just need a random file generator. A toy illustration is below.
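
A minimal illustration of why a complete enumeration of files necessarily contains any given mind's description, shown here over tiny bit strings; the real claim concerns astronomically longer descriptions, and "target" is just a stand-in.

```python
from itertools import product

# Toy illustration: enumerating every bit string up to length n
# necessarily produces any particular target string.

def all_files(max_len: int):
    """Yield every binary string of length 1..max_len."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

target = "1011"  # stand-in for the description of one particular mind
print(target in set(all_files(4)))  # True: the enumeration contains it
```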

Comment by avturchin on Nuclear war anthropics · 2021-12-13T10:43:39.956Z · LW · GW

Nuclear war doesn't need to produce human extinction to cause anthropic effects. In a world with such a war there would (presumably) be fewer universities and fewer people interested in anthropics, as most such centres would be destroyed during the nuclear exchange and people would be busier with survival.

Also, a global internet is less likely to exist after a large-scale nuclear war, which means less exchange of ideas and lower chances for things like LessWrong to exist – or a smaller number of people participating in them.

I guesstimate that in a world after a nuclear exchange with a billion deaths there would be 10 times fewer people interested in anthropics.

Thus it is not very surprising to find oneself in a world where a large-scale nuclear war never happened, but this is not evidence that nuclear war causes extinction.