Posts

Russian x-risks newsletter spring 21 2021-06-01T12:10:32.694Z
Grabby aliens and Zoo hypothesis 2021-03-04T13:03:17.277Z
Russian x-risks newsletter winter 2020-2021: free vaccines for foreigners, bird flu outbreak, one more nuclear near-miss in the past and one now, new AGI institute. 2021-03-01T16:35:11.662Z
[RXN#7] Russian x-risks newsletter fall 2020 2020-12-05T16:28:51.421Z
Russian x-risks newsletter Summer 2020 2020-09-01T14:06:30.196Z
If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z
Rationalist prepper thread 2020-01-28T13:42:05.628Z
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z
avturchin's Shortform 2019-08-13T17:15:26.435Z
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z
Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse 2018-09-27T10:09:56.182Z
Quantum theory cannot consistently describe the use of itself 2018-09-20T22:04:29.812Z
[Paper]: Islands as refuges for surviving global catastrophes 2018-09-13T14:04:49.679Z
Beauty bias: "Lost in Math" by Sabine Hossenfelder 2018-09-05T22:19:20.609Z
Resurrection of the dead via multiverse-wide acausual cooperation 2018-09-03T11:21:32.315Z
[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI 2018-08-28T21:32:16.717Z
Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence 2018-07-25T17:12:32.442Z
[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent) 2018-07-14T09:46:44.968Z

Comments

Comment by avturchin on Core Pathways of Aging · 2021-06-05T11:53:27.547Z · LW · GW

In which types of cells does most of the transposon damage happen? In stem cells? Other types of cells are recycled quickly. The same question arises about ROS.

Also, how does your theory explain the difference in life expectancy between species?

Comment by avturchin on Russian x-risks newsletter spring 21 · 2021-06-03T13:02:04.447Z · LW · GW

A person who works on another vaccine told me that Sputnik (and other similar vector-based vaccines) generates something like 2000 random antibodies, and there is a chance that some of them will turn autoimmune and cause, say, encephalitis. Other types of vaccines generate antibodies not to the whole vector but only to the spike protein – around 30 different ones – so there is less chance of an autoimmune reaction.

But most people do not know these considerations. However, they have observed how the government manipulated data during elections and the Olympic games and are sure that it will lie again; or they believe in the "Bill Gates chip".

Comment by avturchin on Is driving worth the risk? · 2021-05-11T22:13:52.403Z · LW · GW

BTW, my personal choice is Uber Black: I don't have a car, and I delegate driving to a specially trained person. It is relatively cheap in my area. Every time I take Comfort instead, I regret it, as I have near-miss accidents.

I knew two or three people who died in accidents: all of them were "reckless pedestrians". This supports your point about the ability of pedestrians to manage risks.

I can't find a link to statistics on accidents by car type.

Comment by avturchin on Is driving worth the risk? · 2021-05-11T11:40:54.076Z · LW · GW

Around half of the deaths from car accidents are pedestrians (maybe less in the US). By choosing not to drive, you increase your walking time and your chances of being hit by someone else's car.

Other means of transport, like cycling or buses, are also risky.

Staying home is even more dangerous, as there are risks of depression and being overweight.

Finally, some cars are two to three orders of magnitude safer than others if we look at the number of reported deaths per billion km driven. I once saw that the Toyota Prius had 1 death per 1 billion km, but the Kia Rio had 1 per 10 million km. Also, there are special racing cars which are reinforced from inside and can roll safely.
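
A quick sanity check of the "orders of magnitude" claim, using the two per-distance figures above (which I cite from memory, so they are only illustrative):

```python
# Compare fatality rates per billion km for two car models.
# The specific figures are remembered, not verified.
rates = {
    "Toyota Prius": 1 / 1e9,  # deaths per km (~1 per billion km)
    "Kia Rio":      1 / 1e7,  # deaths per km (~1 per 10 million km)
}

# Normalize to deaths per billion km and take the ratio.
per_billion = {car: r * 1e9 for car, r in rates.items()}
ratio = per_billion["Kia Rio"] / per_billion["Toyota Prius"]
print(per_billion)
print(f"ratio: {ratio:.0f}x")  # prints "ratio: 100x" - two orders of magnitude
```

So these particular remembered numbers give a 100x difference, i.e. two orders of magnitude.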

Wearing a helmet inside a car is also useful.

Waiting a few years for a self-driving Tesla Cybertruck may be an option.

Comment by avturchin on Simulation theology: practical aspect. · 2021-05-07T12:49:07.887Z · LW · GW

Ok. But what if there are other, more effective methods of starting to believe in things which are known to be false? For example, hypnosis is effective for some.

Comment by avturchin on Simulation theology: practical aspect. · 2021-05-06T13:06:51.863Z · LW · GW

A placebo could work because it has some evolutionary fitness, like the ability to stop pain when activity is needed.

Benevolent simulators could create an upper limit on subjectively perceived pain, like turning off qualia while the screaming continues. This would be unobservable scientifically.

Comment by avturchin on Simulation theology: practical aspect. · 2021-05-05T12:22:50.055Z · LW · GW

The ability to change the probability of future events in favor of your wishes is not proof of a simulation, because there is a non-simulation alternative in which it is also possible.

Imagine that natural selection in the quantum multiverse worked in the direction of helping those beings survive which are capable of influencing probabilities in a favorable way. Even the slightest ability to affect probabilities would give an enormous increase of measure, so anthropics favors you being in such a world, and such an anthropic shift may be even stronger than the shift in the simulation direction.

In that case, it is perfectly reasonable to expect that your wishes (in your subjective timeline) will have a higher probability of being fulfilled. A technological example of such a probability shift was discussed in The Anthropic Trilemma by EY.

Comment by avturchin on What weird beliefs do you have? · 2021-05-03T23:39:30.480Z · LW · GW

The Doomsday argument and quantum immortality are both true, and this means that I will be the only survivor of a global catastrophe. Moreover, it will be in a simulation.

Both DA and QI could be tested in other fields. The DA was tested by Gott to predict things besides the end of the world. QI is the anthropic principle applied to the future.
Aranyosi claimed that the DA and the Simulation argument cancel each other, but actually they support each other: I live (or will live, because of QI) in a simulation which simulates a doomsday event with one survivor.

Comment by avturchin on Your Dog is Even Smarter Than You Think · 2021-05-01T09:32:03.612Z · LW · GW

So we could create another intelligent species on Earth by combining selection and designed culture. Any risks?

Comment by avturchin on "Who I am" is an axiom. · 2021-04-28T14:42:38.625Z · LW · GW

We could test the Doomsday argument on other things, as Gott tested it on Broadway shows. For example, I can predict with high confidence that your birthday is not January 1. The same is true for my own birthday, which is randomly selected from all the dates of the year. So despite my "who I am" axiom, my external properties are distributed randomly.
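
Both kinds of prediction can be made numerical. A toy sketch (my illustration, not from Gott's paper): the birthday claim is just sampling from a uniform distribution, and Gott's Broadway-show rule says that if something has existed for time t, its future duration lies between t/39 and 39t with 95% confidence:

```python
# 1. Birthday prediction: a randomly selected birthday is
# January 1 with probability only 1/365 (ignoring leap years).
p_not_jan1 = 364 / 365
print(f"P(birthday is not Jan 1) = {p_not_jan1:.3f}")  # ~0.997

# 2. Gott's delta-t argument: observed at a random moment of its
# lifetime, a phenomenon's future duration is between t/f and f*t,
# where f = (1 + c) / (1 - c) for confidence level c (f = 39 at 95%).
def gott_interval(t_past, confidence=0.95):
    f = (1 + confidence) / (1 - confidence)
    return t_past / f, t_past * f

# e.g. a Broadway show that has already run for 10 months
low, high = gott_interval(t_past=10)
print(f"future run: between {low:.2f} and {high:.0f} months (95%)")
```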

Comment by avturchin on "Who I am" is an axiom. · 2021-04-26T11:14:44.781Z · LW · GW

All that we know about x-risks tells us that the Doomsday argument should be true, or at least very probable. So we can't use its apparent falsity as an argument that some form of anthropic reasoning is false.

Comment by avturchin on Is there anything that can stop AGI development in the near term? · 2021-04-24T13:10:33.677Z · LW · GW

Not just any nuclear war will work. If it is a nuclear exchange between the major superpowers, but many well-developed countries survive, it will not slow down AI significantly. Such a war may even accelerate AI, as AI will be seen as a new protective weapon.

Only an anti-AI nuclear war which targets chip fabs, data centers, electricity sources and think tanks + the global economy in general may be "effective" in halting AI development. I don't endorse it, as the extinction probability here is close to AI's extinction probability.

Comment by avturchin on Is there anything that can stop AGI development in the near term? · 2021-04-24T13:03:07.577Z · LW · GW

A Narrow AI Nanny? A system of global control based on narrow AI, used to prevent the creation of AGI and other bad things.

Comment by avturchin on The Case for Extreme Vaccine Effectiveness · 2021-04-18T12:26:06.460Z · LW · GW

It is underreporting. There was an analysis of reported deaths, and the picture is grim: it looks like Russia has several times more deaths than was reported, and maybe Russia is the world leader. Excess mortality was 338,000 deaths in 2020 (for a population of 140 million). Almost all my friends have had covid.

Comment by avturchin on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-15T12:06:21.168Z · LW · GW

Glitches may appear if the simulators use very simple world-modelling systems, like 2D surface modelling instead of 3D space modelling, or simple neural nets that generate realistic images, like our GANs.

Comment by avturchin on The Case for Extreme Vaccine Effectiveness · 2021-04-14T10:28:07.705Z · LW · GW

I now have covid, after being vaccinated 3 months ago with the Russian Sputnik-V vaccine. For now, it is mild: one day of 38 C, 3 days of 37.5 C, only an upper respiratory tract infection, no cough. I lost my sense of smell, but it is slowly returning. Oxygen is at my normal level.

Comment by avturchin on What if AGI is near? · 2021-04-14T08:43:34.367Z · LW · GW

Yes, it is a reversed doomsday argument: it is unlikely that the end is nigh.

Comment by avturchin on On Falsifying the Simulation Hypothesis (or Embracing its Predictions) · 2021-04-12T10:59:15.284Z · LW · GW

I agree that simpler simulations are more probable. As a result, the cheapest, one-observer-centered simulations are the most numerous. But the cheapest simulations will have the highest probability of glitches. Thus the main observable property of living in a simulation is a higher probability of observing miracles.

I wrote about it here: "Simulation Typology and Termination Risks" and "Glitch in the Matrix: Urban Legend or Evidence of the Simulation?"

Comment by avturchin on How do you reconcile the lack of choice that children have in being born? · 2021-04-07T11:58:00.356Z · LW · GW

Everything exists in the multiverse, so there is no choice: a child will be born anyway.

Comment by avturchin on Could billions spacially disconnected "Boltzmann neurons" give rise to consciousness? · 2021-03-31T20:53:04.109Z · LW · GW

Here you have neurons and the order of their connections. This order is a graph and could be described as one long number. Mind states appear as the brain moves from one state to another, and here it would be a transition from one number to another.

This is exactly what Muller wrote in the article you linked: you need just numbers, and a law based on Kolmogorov complexity which connects them, to create the illusion of a stream of consciousness. Neurons are not needed at all.

Comment by avturchin on Why Selective Breeding is a Bad Way to do Genetic Engineering · 2021-03-06T13:44:23.797Z · LW · GW

One doesn't need to limit breeding ability to get genetic selection: just move those you are not interested in out into the wider world. For example, I want to breed dogs that are good at living in the mountains. I put a group of dogs on a mountain. As generations pass, the dogs which are good at living in the mountains will remain there and even move higher, and those who are not good at it will move to live on the plains. Those who live on the plains may even have a higher number of children.

Comment by avturchin on Grabby aliens and Zoo hypothesis · 2021-03-05T13:01:20.670Z · LW · GW

If we are inside the colonisation volume, its changes will be isotropic and will not look strange to us. For example, if aliens completely eliminated stars of class X because they are the best source of energy, we would not observe this, as there would be no X-class stars in any direction.

Comment by avturchin on Russian x-risks newsletter winter 2020-2021: free vaccines for foreigners, bird flu outbreak, one more nuclear near-miss in the past and one now, new AGI institute. · 2021-03-01T20:24:47.179Z · LW · GW

Both answers are correct: manufacturing is slow, and a lot of people are against vaccination. More about it in this article in Russian.

Population trust is low and many people refuse vaccination. Only around 5 per cent of people in Moscow have been vaccinated, despite the easy availability of the vaccine for everybody there. Also, there is a situation typical for Russia since Soviet times, when Moscow is oversupplied and the regions are undersupplied. The Ural region has run out of vaccine.

In my circle, 80 per cent of people have had covid, but only a few friends have been vaccinated (around 5-10 per cent, based on the share of people who attended my wedding last year and later vaccinated). There are two explanations, according to them: either they already had covid, have antibodies, and don't see a reason to take the risks of vaccination, or they are "too old" and afraid of side effects. Interestingly, the Russian vaccine was initially approved only for people below 60. Also, unfortunately, Putin didn't take the shot, and this didn't help belief in the vaccine.

Two more Russian vaccines are in the late stages of approval, and some people are waiting for them, as they are based on a more conventional technology: not viral vectors, but inactivated coronavirus.

Comment by avturchin on Bootstrapped Alignment · 2021-02-27T21:44:45.346Z · LW · GW

 as a weak alignment techniques we might use to bootstrap strong alignment.

Yes, it also reminded me of Christiano's approach of amplification and distillation.

Comment by avturchin on Mathematical Models of Progress? · 2021-02-18T12:44:08.409Z · LW · GW

Interesting hyperbolic model here: https://www.researchgate.net/publication/325664983_The_21_st_Century_Singularity_and_its_Big_History_Implications_A_re-analysis 

Comment by avturchin on [deleted post] 2021-02-12T09:18:36.449Z

Yes. I am working on an article about this idea. It is especially true in the case of climate change: runaway global warming is much more probable because of survivorship bias.

Comment by avturchin on Nuclear war is unlikely to cause human extinction · 2021-02-10T19:23:28.214Z · LW · GW

The idea of the Doomsday weapon as envisioned by Kahn is that it will be activated automatically and can't be turned off by survivors – and this is a well-known fact for all players.

Comment by avturchin on OpenAI: "Scaling Laws for Transfer", Hernandez et al. · 2021-02-05T09:43:20.258Z · LW · GW

gwern on reddit: 

"The most immediate implication of this would be that you now have a scaling law for transfer learning, and so you can predict how large a general-purpose model you need in order to obtain the necessary performance on a given low-n dataset. So if you have some economic use-cases in mind where you only have, say, _n_=1000 datapoints, you can use this to estimate what scale model is necessary to make that viable. My first impression is that this power law looks quite favorable*, and so this is a serious shot across the bow to any old-style AI startups or businesses which thought "our moat is our data, no one has the millions of datapoints necessary (because everyone knows DL is so sample-inefficient) to compete with us". The curves here indicate that just training a large as possible model on broad datasets is going to absolutely smash anyone trying to hand-curate finetuning datasets, especially for the small datasets people worry most about...

* eg the text->python example: the text is basically just random Internet text (same as GPT-3), Common Crawl etc, nothing special, not particularly programming-related, and the python is Github Python code; nevertheless, the transfer learning is very impressive: a 10x model size increase in the pretrained 'text' model is worth 100x more Github data!"
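
The kind of prediction gwern describes – estimating what model scale makes a small-data use case viable – amounts to fitting a power law in log-log space and inverting it. A sketch with entirely synthetic numbers (these are not the paper's coefficients):

```python
import numpy as np

# Hypothetical fine-tuning losses measured at several model sizes,
# on a fixed small (n=1000) dataset. All values are invented.
model_sizes = np.array([1e7, 1e8, 1e9, 1e10])  # parameters
losses      = np.array([3.2, 2.5, 2.0, 1.6])   # fine-tuned loss

# Fit loss ~ a * N**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(model_sizes), np.log(losses), 1)
a = np.exp(log_a)  # b comes out negative: bigger models, lower loss

def params_needed(target_loss):
    """Invert the fitted law: N = (target_loss / a)**(1/b)."""
    return (target_loss / a) ** (1 / b)

print(f"fitted exponent b = {b:.3f}")
print(f"params needed for loss 1.2: {params_needed(1.2):.2e}")
```

The same inversion is what lets you trade off pretraining scale against hand-curated fine-tuning data in the way the quote describes.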

Comment by avturchin on AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy · 2021-01-22T11:43:32.868Z · LW · GW

The PhilPapers archive sends recommendations of similar articles.

Comment by avturchin on #3: Choosing a cryonics provider · 2021-01-20T17:29:21.232Z · LW · GW

Some notes on Kriorus. 

It allows "signing up after death": that is, a relative may try to sign up an already deceased person. Many people were cryopreserved this way, after their relatives started googling following the death of a person (or a pet).

Last year Kriorus had an internal conflict, but the attempt to change the management seems to have failed.

Comment by avturchin on Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse · 2021-01-20T16:58:58.621Z · LW · GW

My concern is that fusing experiences may lead to a loss of individuality. We could fuse all minds into one simple eternal bliss, but that is not far from death.

One solution is a fusion which does not destroy personal identity. Here I assume that "personal identity" is a set of observer-moments which mutually recognise each other as the same person.

Comment by avturchin on Some thoughts on risks from narrow, non-agentic AI · 2021-01-19T12:55:22.539Z · LW · GW

Also, narrow AI may be used for the production of dangerous weapons, e.g. quick generation of the code of a biological virus which will be able to exterminate humanity.

Comment by avturchin on What is going on in the world? · 2021-01-17T16:31:44.544Z · LW · GW

A few other narratives:

If reactor-grade plutonium can be used to make nuclear weapons, there is enough material in the world to make a million nukes, and it is dispersed among many actors.

Only an arctic methane eruption matters, as it could trigger runaway global warming.

Only peak oil matters, and in the next 10 years we will see shortages of it and other raw materials.

Only coronavirus mutations matter, as they could become more deadly.

Only reports about UFOs matter, as they imply that our world model is significantly wrong.

Comment by avturchin on Grey Goo Requires AI · 2021-01-15T11:57:03.098Z · LW · GW

And it could be made via some modification of E. coli or another simple bacterium, like adding the ability to fix nitrogen. Something similar almost happened already during the Azolla event.

Comment by avturchin on #2: Neurocryopreservation vs whole-body preservation · 2021-01-13T13:41:08.939Z · LW · GW

A strong argument for brain-only preservation is that by law (in Russia) only the skeleton is a body, while the brain is only a tissue sample, so there are fewer possible problems if the police ask about the legal basis. I did brain-only preservation for my mother, and they returned the upper part of the skull, so she looked as if nothing had happened; she had a full Christian service in an open casket with many people attending – and nobody knows that she is cryopreserved, so a possible conflict was avoided.

Comment by avturchin on Overall numbers won't show the English strain coming · 2021-01-02T19:44:39.290Z · LW · GW

Ireland has also exploded. https://www.worldometers.info/coronavirus/country/ireland/

I would watch NY and CA.

Comment by avturchin on Overall numbers won't show the English strain coming · 2021-01-02T17:28:23.405Z · LW · GW

In some states the spread of the British variant will become obvious earlier, and it could be observed as a double peak on the charts.

The situation in the UK could also be informative. There are 57K cases today in the UK.

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T22:44:23.520Z · LW · GW

That is true for therapies which work on the damage (SENS). But if we see aging as a process which creates the damage, then it is reasonable to stop it at an early age.

Also, I've seen a recent article, "Longevity‐related molecular pathways are subject to midlife “switch” in humans", which implies that many interventions should happen early in life.

Thanks for a great post!

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T21:10:52.024Z · LW · GW

It is safe enough to be sold OTC, and there is some research connecting it with life-extension effects. The real problem is that we don't have human tests of its effects on longevity, despite its widespread use. The first study of this kind will be TAME, which will explore the life-extension properties of metformin. There are several reasons why such studies are difficult to perform. Firstly, they are costly, and known safe substances are non-patentable. Secondly, they need to be very long, and long human studies are especially costly.

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T20:59:05.807Z · LW · GW

Unfortunately, it seems that most interventions work before aging has actually developed, so we need to give them to younger people, at least before 50.

Comment by avturchin on AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy · 2021-01-01T17:35:38.144Z · LW · GW

There is an article which covers similar topics, but only the abstract is available:

African Reasons Why Artificial Intelligence Should Not Maximize Utility

https://philpapers.org/rec/METARW?ref=mail

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T17:24:16.635Z · LW · GW

There is a problem with most anti-aging interventions: the long expected duration of human trials, as results and the absence of side effects will become obvious only decades after the start of such trials. Without trials, the FDA will never approve such therapies.

However, there is a way to increase the speed of trials by using biomarkers of aging – or by testing interventions already known to be safe, like vitamin D. But biomarkers need to be calibrated, and safe interventions provide only small effects on aging. Thus it looks like some way to accelerate trials is needed if we want a radical solution to aging by 2030. What could it be?

Comment by avturchin on avturchin's Shortform · 2020-12-26T17:18:34.180Z · LW · GW

Glitch in the Matrix: Urban Legend or Evidence of the Simulation? The article is here: https://philpapers.org/rec/TURGIT
In the last decade, an urban legend about “glitches in the matrix” has become popular. As it is typical for urban legends, there is no evidence for most such stories, and the phenomenon could be explained as resulting from hoaxes, creepypasta, coincidence, and different forms of cognitive bias. In addition, the folk understanding of probability does not bear much resemblance to actual probability distributions, resulting in the illusion of improbable events, like the “birthday paradox”. Moreover, many such stories, even if they were true, could not be considered evidence of glitches in a linear-time computer simulation, as the reported “glitches” often assume non-linearity of time and space—like premonitions or changes to the past. Different types of simulations assume different types of glitches; for example, dreams are often very glitchy. Here, we explore the theoretical conditions necessary for such glitches to occur and then create a typology of so-called “GITM” reports. One interesting hypothetical subtype is “viruses in the matrix”, that is, self-replicating units which consume computational resources in a manner similar to transposons in the genome, biological and computer viruses, and memes.
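
The birthday paradox mentioned in the abstract is a good example of how folk probability misleads: the chance of a shared birthday in a group crosses 50% at only 23 people. A quick check:

```python
# Probability that at least two of n people share a birthday,
# assuming 365 equally likely birthdays (ignoring leap years).
def shared_birthday_prob(n):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

print(f"n=23: {shared_birthday_prob(23):.3f}")  # ~0.507
print(f"n=50: {shared_birthday_prob(50):.3f}")  # ~0.970
```

So an "improbable coincidence" reported in a GITM story may be nothing of the sort.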

 

Comment by avturchin on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T11:19:10.726Z · LW · GW

But for the flu virus, reassortment (the more correct word here) happens from time to time, when two viruses infect the same cell and exchange genes.

Comment by avturchin on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T20:30:23.962Z · LW · GW

I have seen claims that the origin of the coronavirus could be explained via recombination, but I would like to learn more about it.

Comment by avturchin on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T20:22:34.343Z · LW · GW

In South Africa infections grew almost 10 times in a month. https://www.worldometers.info/coronavirus/country/south-africa/

There is also quick growth in the Czech Republic and the Netherlands. It looks like the new strains are already there. Also, what worries me is what happens when these new strains from different places recombine.

Comment by avturchin on New SARS-CoV-2 variant · 2020-12-21T11:50:30.824Z · LW · GW

It looks like not only the share of infections by the new variant, but also the total number of infections is rising. The UK had a record 35k infections yesterday. The Netherlands had a spike in infections from 5k to 14k during December. Thus, even if this variant is not deadlier per se, it will put more pressure on the medical system and will turn out deadlier in the end.

Comment by avturchin on Homogeneity vs. heterogeneity in AI takeoff scenarios · 2020-12-16T20:18:41.871Z · LW · GW

If we ran two non-communicating copies of the same AI, could that be helpful in detecting failures?

Comment by avturchin on avturchin's Shortform · 2020-12-16T12:39:54.203Z · LW · GW

"Back to the Future: Curing Past Suffering and S-Risks via Indexical Uncertainty"

I uploaded the draft of my article about curing past sufferings.

Abstract:

The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for idiosyncratic reasons (s-risks), are a significant moral problem. Such events either have already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of personal identity, and thus a copy equals the original, then by creating many copies of the next observer-moment of a person in pain, in which she stops suffering, we could create indexical uncertainty about her future location and thus effectively steal her consciousness from her initial location and immediately relieve her sufferings. However, to accomplish this for people who have already died, we need to perform this operation for all possible people, which requires enormous amounts of computation. Such computation could be performed by a future benevolent AI of galactic scale. Many such AIs could cooperate acausally by distributing parts of the work between them via quantum randomness. To ensure their success, they need to outnumber all possible evil AIs by orders of magnitude, and thus they need to convert most of the available matter into computronium in all universes where they exist and to cooperate acausally across the whole multiverse. Another option for curing past suffering is the use of wormhole time travel to send a nanobot into the past, which will, after a period of secret replication, collect data about people and secretly upload them when their suffering becomes unbearable. https://philpapers.org/rec/TURBTT

Comment by avturchin on SIA fears (expected) infinity · 2020-12-02T14:32:12.189Z · LW · GW

It seems to me that if we have an infinite population which includes all possible observers, then SIA merges with SSA. For example, in the Presumptuous Philosopher it would mean that there are two regions of the multiverse, one with a trillion observers and another with a trillion trillion, and it would not be surprising to be located in the larger one.

SIA in the PP becomes absurd only for a finite universe (with no other universes), where only one of the two regions exists. But the absurdity is in the definition: it is absurd to think that the universe could be provably finite, as there would have to be some force above the universe which limits its size.
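
The SIA odds in the Presumptuous Philosopher can be made explicit. A toy calculation with the trillion vs trillion-trillion numbers above (my illustration of the standard observer-weighting, nothing more):

```python
from fractions import Fraction

# SIA: weight each hypothesis by its number of observers.
# T1: the region contains a trillion observers.
# T2: the region contains a trillion trillion observers.
prior = {"T1": Fraction(1, 2), "T2": Fraction(1, 2)}
observers = {"T1": 10**12, "T2": 10**24}

weights = {h: prior[h] * observers[h] for h in prior}
total = sum(weights.values())
posterior = {h: weights[h] / total for h in prior}

# If both regions actually exist side by side (the infinite-population
# case), the same ratio is just the chance of being located in the
# larger region - which is why SIA and SSA then give the same answer.
print(float(posterior["T2"]))  # ~1.0, i.e. odds of 10^12 : 1 for T2
```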