[RXN#7] Russian x-risks newsletter fall 2020 2020-12-05T16:28:51.421Z
Russian x-risks newsletter Summer 2020 2020-09-01T14:06:30.196Z
If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z
Rationalist prepper thread 2020-01-28T13:42:05.628Z
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z
avturchin's Shortform 2019-08-13T17:15:26.435Z
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z
Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse 2018-09-27T10:09:56.182Z
Quantum theory cannot consistently describe the use of itself 2018-09-20T22:04:29.812Z
[Paper]: Islands as refuges for surviving global catastrophes 2018-09-13T14:04:49.679Z
Beauty bias: "Lost in Math" by Sabine Hossenfelder 2018-09-05T22:19:20.609Z
Resurrection of the dead via multiverse-wide acausual cooperation 2018-09-03T11:21:32.315Z
[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI 2018-08-28T21:32:16.717Z
Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence 2018-07-25T17:12:32.442Z
[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent) 2018-07-14T09:46:44.968Z
“Cheating Death in Damascus” Solution to the Fermi Paradox 2018-06-30T12:00:58.502Z
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks 2018-06-23T13:31:13.641Z
[Paper]: Classification of global catastrophic risks connected with artificial intelligence 2018-05-06T06:42:02.030Z


Comment by avturchin on AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy · 2021-01-22T11:43:32.868Z · LW · GW

Philpaper archive sends recommendations of similar articles. 

Comment by avturchin on #3: Choosing a cryonics provider · 2021-01-20T17:29:21.232Z · LW · GW

Some notes on Kriorus. 

It allows "sign-up after death": that is, a relative may try to sign up an already deceased person. Many people were cryopreserved this way, when their relatives started googling cryonics after the death of a person (or a pet).

Last year Kriorus had an internal conflict, but the attempt to change the management seems to have failed.

Comment by avturchin on Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse · 2021-01-20T16:58:58.621Z · LW · GW

My concern is that fusing experiences may lead to a loss of individuality. We could fuse all minds into one simple eternal bliss, but that is not far from death.

One solution is a fusion which does not destroy personal identity. Here I assume that "personal identity" is a set of observer-moments which mutually recognise each other as the same person.

Comment by avturchin on Some thoughts on risks from narrow, non-agentic AI · 2021-01-19T12:55:22.539Z · LW · GW

Also, narrow AI may be used for the production of dangerous weapons, e.g. quick generation of the code of a biological virus which would be able to exterminate humanity.

Comment by avturchin on What is going on in the world? · 2021-01-17T16:31:44.544Z · LW · GW

A few other narratives:

If reactor-grade plutonium could be used to make nuclear weapons, there is enough material in the world to make a million nukes, and it is dispersed among many actors.

Only Arctic methane eruption matters, as it could trigger runaway global warming.

Only peak oil matters, and in the next 10 years we will see shortages of oil and other raw materials.

Only coronavirus mutations matter, as they could become more deadly.

Only reports about UFOs matter, as they imply that our world model is significantly wrong.

Comment by avturchin on Grey Goo Requires AI · 2021-01-15T11:57:03.098Z · LW · GW

And it could be made via some modification of E. coli or another simple bacterium, like adding the ability to fix nitrogen. Something similar almost happened during the Azolla event.

Comment by avturchin on #2: Neurocryopreservation vs whole-body preservation · 2021-01-13T13:41:08.939Z · LW · GW

A strong argument for brain-only preservation is that by law (in Russia) only a skeleton counts as a body, while the brain is only a tissue sample, so there are fewer potential problems if the police ask about the legal basis. I did brain-only preservation for my mother; they returned the upper part of the skull, and she looked as if nothing had happened. She had a full Christian service in an open casket with many people attending – nobody knew that she was cryopreserved, and a possible conflict was avoided.

Comment by avturchin on Overall numbers won't show the English strain coming · 2021-01-02T19:44:39.290Z · LW · GW

Ireland also exploded.

I would watch NY and CA.

Comment by avturchin on Overall numbers won't show the English strain coming · 2021-01-02T17:28:23.405Z · LW · GW

In some states, the spread of the British variant will be obvious earlier, and it could show up as a double peak on the charts.

The situation in the UK could also be informative. There are 57K cases in the UK today.

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T22:44:23.520Z · LW · GW

That is true for therapies which work on damage (SENS). But if we see aging as a process which creates the damage, then it is reasonable to stop it at an early age.

Also, I've seen a recent article "Longevity‐related molecular pathways are subject to midlife “switch” in humans" which implies that many interventions should happen early in life.

Thanks for a great post!

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T21:10:52.024Z · LW · GW

It is safe enough to be sold OTC, and there is some research connecting it with life-extension effects. The real problem is that we don't have human tests of its effects on longevity, despite its widespread use. The first study like this will be TAME, which will explore the life-extension properties of metformin. There are several reasons why such studies are difficult to perform. Firstly, they are costly, but known-safe substances are non-patentable. Secondly, they need to be very long, and long human studies are especially costly.

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T20:59:05.807Z · LW · GW

Unfortunately, it seems that most interventions work best before aging has actually developed, so we need to give them to younger people, at least those under 50.

Comment by avturchin on AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy · 2021-01-01T17:35:38.144Z · LW · GW

There is an article which covers similar topics, but only the abstract is available:

African Reasons Why Artificial Intelligence Should Not Maximize Utility

Comment by avturchin on Anti-Aging: State of the Art · 2021-01-01T17:24:16.635Z · LW · GW

There is a problem with most anti-aging interventions: the long expected duration of human trials, as results and the absence of side effects will become obvious only decades after the start of such trials. Without trials, the FDA will never approve such therapies.

However, there is a way to increase the speed of trials: using biomarkers of aging, or testing interventions already known to be safe, like vitamin D. But biomarkers need to be calibrated, and safe interventions provide only small effects on aging. Thus, it looks like some way to accelerate trials is needed if we want a radical solution to aging by 2030. What could it be?

Comment by avturchin on avturchin's Shortform · 2020-12-26T17:18:34.180Z · LW · GW

Glitch in the Matrix: Urban Legend or Evidence of the Simulation? The article is here:
In the last decade, an urban legend about “glitches in the matrix” has become popular. As it is typical for urban legends, there is no evidence for most such stories, and the phenomenon could be explained as resulting from hoaxes, creepypasta, coincidence, and different forms of cognitive bias. In addition, the folk understanding of probability does not bear much resemblance to actual probability distributions, resulting in the illusion of improbable events, like the “birthday paradox”. Moreover, many such stories, even if they were true, could not be considered evidence of glitches in a linear-time computer simulation, as the reported “glitches” often assume non-linearity of time and space—like premonitions or changes to the past. Different types of simulations assume different types of glitches; for example, dreams are often very glitchy. Here, we explore the theoretical conditions necessary for such glitches to occur and then create a typology of so-called “GITM” reports. One interesting hypothetical subtype is “viruses in the matrix”, that is, self-replicating units which consume computational resources in a manner similar to transposons in the genome, biological and computer viruses, and memes.


Comment by avturchin on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T11:19:10.726Z · LW · GW

But for the flu virus, reassortment (the more correct word here) happens from time to time, when two viruses infect the same cell and exchange genes.

Comment by avturchin on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T20:30:23.962Z · LW · GW

I have seen claims that the origin of the coronavirus could be explained via recombination, but I would like to learn more about it.

Comment by avturchin on Covid 12/24: We’re F***ed, It’s Over · 2020-12-24T20:22:34.343Z · LW · GW

In South Africa, infections grew almost 10-fold in a month.

There is also quick growth in the Czech Republic and the Netherlands. It looks like the new strains are already there. Also, what worries me is what happens when these new strains from different places recombine.

Comment by avturchin on New SARS-CoV-2 variant · 2020-12-21T11:50:30.824Z · LW · GW

It looks like not only the share of infections by the new variant but also the total number of infections is rising. The UK had a record 35k infections yesterday. The Netherlands saw a spike from 5k to 14k during December. Thus, even if this variant is not deadlier per se, it will put more pressure on the medical system and will turn out deadlier in the end.

Comment by avturchin on Homogeneity vs. heterogeneity in AI takeoff scenarios · 2020-12-16T20:18:41.871Z · LW · GW

If we run two non-communicating copies of the same AI, could it be helpful in detecting failures? 

Comment by avturchin on avturchin's Shortform · 2020-12-16T12:39:54.203Z · LW · GW

"Back to the Future: Curing Past Suffering and S-Risks via Indexical Uncertainty"

I uploaded the draft of my article about curing past sufferings.


The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of personal identity and thus a copy equals the original, then by creating many copies of the next observer-moment of a person in pain, in which they stop suffering, we could create indexical uncertainty about their future location and thus effectively steal their consciousness from its initial location and immediately relieve their suffering. However, to accomplish this for people who have already died, we need to perform this operation for all possible people, which requires enormous amounts of computation. Such computation could be performed by a future benevolent AI of galactic scale. Many such AIs could cooperate acausally by distributing parts of the work between them via quantum randomness. To ensure their success, they need to outnumber all possible evil AIs by orders of magnitude, and thus they need to convert most of the available matter into computronium in all universes where they exist and cooperate acausally across the whole multiverse. Another option for curing past suffering is the use of wormhole time travel to send a nanobot into the past, which will, after a period of secret replication, collect data about people and secretly upload them when their suffering becomes unbearable.

Comment by avturchin on SIA fears (expected) infinity · 2020-12-02T14:32:12.189Z · LW · GW

It seems to me that if we have an infinite population which includes all possible observers, then SIA merges with SSA. For example, in the Presumptuous Philosopher, it would mean that there are two regions of the multiverse – one with a trillion observers and another with a trillion trillion – and it would not be surprising to be located in the larger one.

SIA in the Presumptuous Philosopher becomes absurd only for a finite universe (with no other universes), where only one of the two regions exists. But the absurdity is in the definition: it is absurd to think that the universe could be provably finite, as there would have to be some force above the universe which limits its size.
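The merging can be illustrated numerically. Below is a minimal sketch of the Presumptuous Philosopher under SIA, using the observer counts from the comment (the 50/50 prior is an assumption for illustration):

```python
# Presumptuous Philosopher: two theories with equal prior probability,
# differing only in the number of observers they predict.
n1 = 1e12        # theory T1: a trillion observers
n2 = 1e24        # theory T2: a trillion trillion observers
prior_t1 = 0.5
prior_t2 = 0.5

# SIA weights each hypothesis by its number of observers.
w1 = prior_t1 * n1
w2 = prior_t2 * n2
p_t2 = w2 / (w1 + w2)

# If both regions coexist in one big multiverse, the same ratio is just
# the unsurprising fraction of observers located in the larger region.
print(p_t2)
```

The same arithmetic reads as "absurd confidence in T2" when the regions are exclusive hypotheses, and as a mundane population fraction when both regions actually exist.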

Comment by avturchin on Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare · 2020-11-28T15:53:08.256Z · LW · GW

We may not actually kill them, but we prevent their appearance in the future by colonising their planets millions of years before a civilisation would have a chance to appear on them. This is Berezin's idea.

Comment by avturchin on Troy Macedon's Shortform · 2020-11-28T09:58:31.089Z · LW · GW

A possible solution: immortals will be interested in having dreams about their youth, so you are in some kind of simulation.

BTW, this is a variant of the Doomsday argument applied to personal life.

Comment by avturchin on Measure's Shortform · 2020-11-27T17:02:31.314Z · LW · GW

Maybe trance?

Comment by avturchin on Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare · 2020-11-25T20:35:55.201Z · LW · GW

I also have the following question: which order of transitions would imply that Earth is not rare? One answer: if the time until the oceans evaporate were much longer – say not 1 but 4 billion years – but this doesn't account for the timing of the 4 main transitions.

But imagine that each transition typically happens at a rate of 1 per billion years. In that case, having 4 transitions in 4 billion years seems pretty normal.

If we assume that the typical time for each transition is 10 billion years, then the current age of the Earth would be abnormal, but this is not informative, as we got exactly what we assumed.
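To make the "pretty normal" intuition concrete, here is a minimal sketch treating each transition as an independent Poisson event. This is a simplification (the paper models the transitions as sequential), and the rates are the illustrative ones from the comment:

```python
import math

def prob_at_least_k(rate, t, k):
    """P(N >= k) for a Poisson process with `rate` events per Gyr over t Gyr."""
    lam = rate * t
    p_fewer = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1 - p_fewer

# At 1 transition per billion years, 4 transitions in 4 Gyr is unremarkable:
p_easy = prob_at_least_k(rate=1.0, t=4.0, k=4)   # ~0.57

# At 1 per 10 billion years, seeing 4 of them in 4 Gyr is very lucky -
# the kind of luck anthropic selection would demand:
p_hard = prob_at_least_k(rate=0.1, t=4.0, k=4)

print(round(p_easy, 2), round(p_hard, 5))
```

So whether the observed timing is "normal" depends entirely on the assumed rate, which is exactly the circularity noted above.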

Comment by avturchin on Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare · 2020-11-25T15:22:21.537Z · LW · GW

Berezin suggested that if the first civilisation kills all other civilisations, then we are that first civilisation.

Also, if panspermia is true, the ages of civilisations will be similar, and several civilisations could become "first" almost simultaneously in one galaxy, which creates interesting colonisation dynamics.

Comment by avturchin on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T14:45:00.062Z · LW · GW

Facebook is optimised for generating likes and short dopamine bursts, and could evolve into a wireheading machine and/or a social-manipulation tool. Imagine a global distributed wireheading machine which could affect election results in ways that give it even more power.

The same goes for Bitcoin. It rewards people monetarily for creating some useless infrastructure. It is not intelligent, but it could buy someone's intelligence to solve its task of growth.

Comment by avturchin on Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare · 2020-11-24T14:11:55.176Z · LW · GW

SIA also implies that we likely live in a world with interstellar panspermia and many habitable planets in our galaxy, as I explored here. In that case, the difficulty of abiogenesis is not a big problem, as there will be many planets seeded with life from one source.

Moreover, SIA and SSA seem to converge in a very, very large universe where all possible observers exist: in it, I will find myself in the region where most observers exist, and this – with some caveats – will be a region with a high concentration of observers.

Comment by avturchin on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T11:23:33.160Z · LW · GW

One more idea in this direction is an AI service which is so good that we can't stop using it, but which becomes a Moloch once it grows large enough. Possible examples:

  • Facebook
  • Bitcoin
  • Market and money-making in general.

Future possible examples: 

  • An AI which finds a perfect win-win solution for any conflict, so that people are interested in coming to it with any problem, but which gains more and more power after solving each problem. (A real-world example is Google search.)
  • AI-powered wireheading, like virtual games combined with brain stimulation. 

Comment by avturchin on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T11:14:37.280Z · LW · GW

Yes. His brain healed itself.

Comment by avturchin on Melatonin: Much More Than You Wanted To Know · 2020-11-23T23:43:25.080Z · LW · GW

The science is solid for low-dose naltrexone therapy, which is used to up-regulate opiate receptors. The idea is to use small doses of an antagonist to make the existing receptors more sensitive after some period of time. The same principle could be applied to other depressants, including melatonin, which starts to stimulate because of withdrawal effects.

Comment by avturchin on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-20T21:52:49.164Z · LW · GW

I can confirm the point about drugs. A friend of mine tried LSD and became absolutely suggestible, even to things he knew were false – for a few months. For example, a person near him said that he was allergic to strawberries. My friend immediately had a full-blown panic attack that he was also allergic to strawberries, despite knowing that he was not allergic to them.

Comment by avturchin on Should we postpone AGI until we reach safety? · 2020-11-18T18:08:23.930Z · LW · GW

One possible way to postpone AGI is for one party to reach world domination using powerful narrow AI and then use this narrow AI to implement strict control over AGI development. What could such narrow AI be? Some combination of AI-powered drones and Palantir-like targeting, plus effective control of minds via memes and social-network surveillance. It doesn't sound nice and/or sexy, but we are halfway there.

An alternative way of preventing AGI is a nuclear war against chip manufacturers and AI labs, and it is obviously worse. To clarify, I am against it; I mention it only as a bad alternative.

Comment by avturchin on Raemon's Shortform · 2020-11-18T10:47:09.796Z · LW · GW

Some journals, like Futures, require 5 short phrases as highlights summarising the key ideas, in addition to the abstract. See e.g. here:




"The stable climate of the Holocene made agriculture and civilization possible. The unstable Pleistocene climate made it impossible before then.

Human societies after agriculture were characterized by overshoot and collapse. Climate change frequently drove these collapses.

Business-as-usual estimates indicate that the climate will warm by 3–4 °C by 2100 and by as much as 8–10 °C after that.

Future climate change will return planet Earth to the unstable climatic conditions of the Pleistocene and agriculture will be impossible.

Human society will once again be characterized by hunting and gathering."

Comment by avturchin on [deleted post] 2020-11-16T18:22:46.971Z

In that view, identity is very fragile, and cryopreservation could damage it. Thus there is no risk of s-risks via cryonics.

A weaker argument: if an evil AI is interested in torturing real people, it may be satisfied with the billions who are alive, and the additional cost of developing resurrection technology only to torture cryopatients may be too high. It would be cheaper to create new people with the same effort.

Comment by avturchin on [deleted post] 2020-11-16T15:38:11.815Z

Refusing to sign up for cryonics would not help much in that case. An evil AI could reconstruct a personality based on its digital footprint, or even reconstruct all possible minds using a quantum random mind generator.

Comment by avturchin on On Arguments for God · 2020-11-14T16:36:20.076Z · LW · GW

Better examples of authorless simulation are Boltzmann brains or dust theory. 

Comment by avturchin on On Arguments for God · 2020-11-14T13:35:42.589Z · LW · GW

This vast difference is only philosophical; there is no practical difference: both (if they exist) are able to create miracles, establish rules, and promise paradise, immortality, or hell after death. The only difference concerns the ontology of the Universe: the real God has existed forever, while the simulation's creator evolved from dead matter. But this difference doesn't create any observables.

Comment by avturchin on On Arguments for God · 2020-11-14T11:55:59.320Z · LW · GW

If we are in a simulation, it has a creator (or creators) who is almost god-like. But the simulation hypothesis is more popular in rationalist circles than the idea of God, which looks like a contradiction: P(simulation) = P(a creator of the simulation exists).

Comment by avturchin on ozziegooen's Shortform · 2020-11-13T22:08:22.075Z · LW · GW

Yes. But the head also ages and could develop terminal diseases: cancer, stroke, Alzheimer's. Given the steep nature of the Gompertz law, the life expectancy of even a perfect head in a jar (of an old man) will be less than 10 years (I guess). So it is not immortality, but a good way to wait for better life-extension technologies.
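A back-of-the-envelope check on that guess: with the Gompertz law's exponentially rising hazard, remaining life expectancy at 80 stays short no matter how healthy the rest of the body is. The parameters below are illustrative assumptions (mortality doubling every ~8 years), not fitted actuarial data:

```python
import math

# Illustrative Gompertz parameters (assumed, not fitted):
A = 5e-5              # baseline hazard per year
b = math.log(2) / 8   # mortality doubles every ~8 years

def survival(t):
    """Gompertz survival: S(t) = exp(-(A/b) * (exp(b*t) - 1))."""
    return math.exp(-(A / b) * (math.exp(b * t) - 1))

def remaining_life_expectancy(age, dt=0.01, horizon=120.0):
    """E[T - age | T > age] = integral of S(age+t)/S(age) dt (trapezoid rule)."""
    s0 = survival(age)
    total, t = 0.0, 0.0
    while t < horizon:
        total += 0.5 * (survival(age + t) + survival(age + t + dt)) / s0 * dt
        t += dt
    return total

# Even a "perfect head in a jar" at 80 keeps its 80-year-old hazard curve:
print(round(remaining_life_expectancy(80), 1))   # on the order of a decade
```

Under these assumed parameters the result lands near the 10-year guess; real parameters differ, but the exponential hazard makes the qualitative conclusion robust.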

Comment by avturchin on Nuclear war is unlikely to cause human extinction · 2020-11-08T16:32:25.770Z · LW · GW

There are two scientifically proven ways to make nukes cheaper: proliferation via laser isotope separation, and the use of reactor-grade plutonium for nukes.

The use of nukes to create an artificial nuclear winter via nuclear explosions in the taiga has also been discussed (can't find the link now).

Comment by avturchin on Nuclear war is unlikely to cause human extinction · 2020-11-08T15:14:05.613Z · LW · GW

Thanks for your reply. One thing in play here is that Doomsday and geophysical weapons are the last resort of the weaker side. If the stronger side has effective anti-missile tech and/or first-strike capability, then having nuclear missiles becomes useless. This is the situation for Russia now, and the reason why they are building Poseidon.

Given this mindset, a country like North Korea may invest in a Doomsday weapon, but not a Western country. Russia and China could also do it.

Comment by avturchin on Nuclear war is unlikely to cause human extinction · 2020-11-07T22:55:45.225Z · LW · GW

A bomb of gigaton scale could be useful if one wants to disperse a large amount of radioactivity over the whole surface of the planet: lifting a large amount of exhaust into the upper atmosphere helps radioactive elements spread over the entire surface of the Earth. This is what the doomsday bomb envisioned by Kahn requires. Such a bomb is the ultimate defence weapon: no one will dare attack a country that has one.

Also, the Russian Poseidon nuclear torpedo was said to be equipped with a 100 Mt warhead, intended to create a tsunami.

Comment by avturchin on How can I bet on short timelines? · 2020-11-07T20:48:44.478Z · LW · GW

A way to bet on shorter timelines is to take an instant payment now and return 10x in 2035, using the money for short-timeline research in the meantime. For example, someone who believes in long timelines gives you 100 USD now, and you return 1000 USD in 2035.
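For scale, the 100 USD → 1000 USD structure implies a fairly ordinary interest rate once annualised (the 15-year horizon assumes the bet is struck in 2020):

```python
# Implied annual rate of a 10x payoff over 15 years (2020 -> 2035).
multiple = 10.0
years = 15
annual_rate = multiple ** (1 / years) - 1
print(f"{annual_rate:.1%}")   # roughly 16-17% per year
```

A long-timelines believer who expects market-like returns elsewhere is therefore giving up only a modest premium, which is what makes the bet plausible for both sides.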

Comment by avturchin on Nuclear war is unlikely to cause human extinction · 2020-11-07T12:18:23.779Z · LW · GW

I think this analysis is based on the idea that a nuclear war will be a "conventional nuclear war", as envisioned in the middle of the 20th century: that is, a nuclear exchange between two superpowers via nuclear missiles. However, an unconventional nuclear war is also possible because of changes in strategy and/or technology.

The main technological changes:

  1. Very large and salted weapons. Teller worked on a 10-gigaton bomb. Kahn wrote about a stationary, very large nuclear weapon covered with cobalt and intended to spread radioactivity around the world – a doomsday weapon – as a means of universal defence. Russia has now created a nuclear torpedo with a cobalt-salted warhead (Poseidon). Nukes could also be used against nuclear power stations, which would release a very large amount of radioactivity into the air. To evaporate a nuclear power plant, a 1 Mt bomb is needed.
  2. Very cheap weapons. If cold fusion works, home-made fusion bombs could become a possibility. If any terrorist can create them, there will be many more nuclear explosions in the case of global guerrilla war.
  3. Unconventional use of nuclear weapons to affect weather. There is an often-discussed idea of using nukes to trigger a supervolcano, e.g. Yellowstone – and US adversaries may try to target the caldera with multiple warheads.

Strategy changes:

  1. Nuclear blackmail via Doomsday weapons.
  2. Nuclear guerrilla warfare – many small states using nukes often.
  3. Geophysical attacks – attempts to cause natural disasters via nuclear weapons: supervolcanoes, deflection of asteroids toward Earth, forest fires.

Comment by avturchin on Multiple Worlds, One Universal Wave Function · 2020-11-05T12:16:32.265Z · LW · GW

The wave function is described using imaginary numbers. If we are "taking the wave function seriously as a physical entity", does that mean the imaginary part has a physical meaning? For example, if a cat has amplitude (0; 1), does it mean that the real part of the cat doesn't exist, but the imaginary part is full of life?

Comment by avturchin on Automated intelligence is not AI · 2020-11-02T11:04:40.243Z · LW · GW

One more example is the "multiplication table".

Comment by avturchin on Containing the AI... Inside a Simulated Reality · 2020-10-31T17:54:33.686Z · LW · GW

The best counterargument here was presented by EY: a superintelligent AI will easily recognise and crack the simulation from the inside. See "That Alien Message".

In my view, it may be useful to instill in the AI uncertainty about whether it is in a simulation testing its behaviour. Rolf Nelson suggested doing this by making a public precommitment to create many such simulations before any AI is created. However, this could work only as our last line of defence, after everything else (alignment, control systems, boxing) fails.

Comment by avturchin on Top Time Travel Interventions? · 2020-10-27T15:52:24.531Z · LW · GW

I will wait a little until nanotech and advanced AI appear. After that I will send one nanobot to the beginning of the universe. It will secretly replicate and cover the entire visible universe with its secret copies. It will go inside every person's brain and upload that person at the moment of death. It will also turn off pain during intense suffering. Thus I will solve the problems of past suffering and the resurrection of the dead.