Russian x-risks newsletter Summer 2020 2020-09-01T14:06:30.196Z · score: 22 (10 votes)
If AI is based on GPT, how to ensure its safety? 2020-06-18T20:33:50.774Z · score: 20 (6 votes)
Russian x-risks newsletter spring 2020 2020-06-04T14:27:40.459Z · score: 16 (7 votes)
UAP and Global Catastrophic Risks 2020-04-28T13:07:21.698Z · score: 7 (4 votes)
The attack rate estimation is more important than CFR 2020-04-01T16:23:12.674Z · score: 10 (4 votes)
Russian x-risks newsletter March 2020 – coronavirus update 2020-03-27T18:06:49.763Z · score: 11 (4 votes)
[Petition] We Call for Open Anonymized Medical Data on COVID-19 and Aging-Related Risk Factors 2020-03-23T21:44:34.072Z · score: 6 (1 votes)
Virus As A Power Optimisation Process: The Problem Of Next Wave 2020-03-22T20:35:49.306Z · score: 6 (4 votes)
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics 2020-03-18T12:44:42.756Z · score: 76 (33 votes)
Reasons why coronavirus mortality of young adults may be underestimated. 2020-03-15T16:34:29.641Z · score: 32 (18 votes)
Possible worst outcomes of the coronavirus epidemic 2020-03-14T16:26:58.346Z · score: 20 (14 votes)
More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them 2020-03-13T13:35:05.189Z · score: 56 (22 votes)
Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia 2020-03-10T13:12:54.991Z · score: 11 (4 votes)
Russian x-risks newsletter winter 2019-2020. 2020-03-01T12:50:25.162Z · score: 9 (6 votes)
Rationalist prepper thread 2020-01-28T13:42:05.628Z · score: 21 (8 votes)
Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z · score: 22 (9 votes)
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z · score: 41 (21 votes)
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z · score: 20 (5 votes)
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z · score: 8 (3 votes)
avturchin's Shortform 2019-08-13T17:15:26.435Z · score: 6 (1 votes)
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z · score: 9 (4 votes)
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z · score: -3 (12 votes)
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z · score: 8 (2 votes)
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z · score: 32 (12 votes)
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z · score: 9 (4 votes)
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z · score: 23 (8 votes)
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z · score: 22 (8 votes)
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z · score: 6 (2 votes)
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z · score: 6 (2 votes)
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z · score: 14 (8 votes)
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z · score: 7 (8 votes)
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z · score: 43 (12 votes)
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z · score: 9 (7 votes)
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z · score: 32 (10 votes)
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z · score: 10 (8 votes)
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z · score: 4 (2 votes)
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z · score: 12 (5 votes)
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z · score: 14 (8 votes)
Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse 2018-09-27T10:09:56.182Z · score: 7 (4 votes)
Quantum theory cannot consistently describe the use of itself 2018-09-20T22:04:29.812Z · score: 8 (7 votes)
[Paper]: Islands as refuges for surviving global catastrophes 2018-09-13T14:04:49.679Z · score: 12 (6 votes)
Beauty bias: "Lost in Math" by Sabine Hossenfelder 2018-09-05T22:19:20.609Z · score: 9 (3 votes)
Resurrection of the dead via multiverse-wide acausual cooperation 2018-09-03T11:21:32.315Z · score: 22 (12 votes)
[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI 2018-08-28T21:32:16.717Z · score: 13 (8 votes)
Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence 2018-07-25T17:12:32.442Z · score: 13 (5 votes)
[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent) 2018-07-14T09:46:44.968Z · score: 11 (4 votes)
“Cheating Death in Damascus” Solution to the Fermi Paradox 2018-06-30T12:00:58.502Z · score: 13 (8 votes)
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks 2018-06-23T13:31:13.641Z · score: -1 (5 votes)
[Paper]: Classification of global catastrophic risks connected with artificial intelligence 2018-05-06T06:42:02.030Z · score: 4 (1 votes)
Levels of AI Self-Improvement 2018-04-29T11:45:42.425Z · score: 19 (7 votes)


Comment by avturchin on “Unsupervised” translation as an (intent) alignment problem · 2020-09-30T12:07:44.833Z · score: 3 (2 votes) · LW · GW

Maybe we can ask GPT to output an English–Klingon dictionary?

Comment by avturchin on On Destroying the World · 2020-09-28T19:07:16.732Z · score: 4 (5 votes) · LW · GW

I have read about one possible case of a false nuclear alarm which involved something like a phishing attack. I'm not sure whether it was real, and I can't find the story now – it could be real or could be creepypasta. Below is what I remember:

In the 1950s, nuclear-tipped US cruise missiles were stationed at several locations in Okinawa. One location received a launch command that was obviously false (for several reasons): the procedure was incorrect. They recognised it as false and decided to wait for clarification. But another location nearby took the command as legitimate and started preparing a launch. Armed personnel had to be sent to stop them from launching the cruise missile, and some kind of standoff happened, but nobody was killed. During this, a clarification arrived that there was no launch order. No information about who sent the false command was ever provided, and everybody signed an NDA.

Comment by avturchin on Vanessa Kosoy's Shortform · 2020-09-26T17:14:48.960Z · score: 2 (1 votes) · LW · GW

Another way to describe the same (or a similar) plateau: we could think of GPT-n as a GLUT with interpolation between prerecorded answers: it can produce intelligent output similar to what humans created in the past and what is present in its training dataset – but not above the human intelligence level, as there are no superintelligent examples in the dataset.

Comment by avturchin on Dach's Shortform · 2020-09-25T10:32:15.762Z · score: 2 (1 votes) · LW · GW

Future superintelligences could steal minds to cure "past sufferings", to prevent s-risks, and to resurrect all the dead. This is actually a good thing, but for the resurrection of the dead they would have to run the whole world simulation once again for the last few thousand years. In that case it would look almost like a normal world.

Comment by avturchin on avturchin's Shortform · 2020-09-25T10:25:23.096Z · score: 2 (1 votes) · LW · GW

Quantum immortality of the second type. The classical theory of QI is based on the idea that all possible futures of a given observer exist because of MWI, and thus there will always be a future where he does not die in the next moment, even in the most dangerous situations (e.g. Russian roulette).

QI of the second type makes similar claims, but about the past. In MWI, the same observer could appear via different past histories.

The main claim of QI-2: for any given observer, there is a past history in which the current dangerous situation is not really dangerous. For example, a person has a deadly car accident. But there is another similar observer who is dreaming about the same accident, or who is having a much less severe accident but hallucinates that it is really bad. Interestingly, QI-2 could be reported: a person could say, "I have a memory of a really bad accident, but it turned out to be nothing. Maybe I died in a parallel world." There are a lot of such reports on Reddit.

Comment by avturchin on Where is human level on text prediction? (GPTs task) · 2020-09-20T17:22:18.874Z · score: 2 (1 votes) · LW · GW

Agreed. Superhuman levels are unlikely to be achieved simultaneously in different domains, even for a universal system. For example, some model could be universal and superhuman in math, but not superhuman in, say, reading emotions. Bad for alignment.

Comment by avturchin on Where is human level on text prediction? (GPTs task) · 2020-09-20T13:00:48.571Z · score: 2 (1 votes) · LW · GW

Why does it lengthen your timelines?

Comment by avturchin on Draft report on AI timelines · 2020-09-19T11:20:00.455Z · score: 14 (6 votes) · LW · GW

If we use the median AI timing, we will be 50 per cent dead before that moment. Maybe a different measure would be more useful, like the 10th-percentile date for TAI, before which our protective measures should be prepared?

Also, this model contradicts the naive model of GPT growth, in which the number of parameters has been growing by two orders of magnitude a year for the last couple of years; if this trend continues, it could reach the human level of 100 trillion parameters in 2 years.
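A sketch of that naive extrapolation (starting from GPT-3's 175 billion parameters in 2020; the 100x-per-year growth rate is the comment's assumption, not a measured trend):

```python
# Naive extrapolation of GPT parameter counts, assuming growth of two
# orders of magnitude per year from GPT-3's 175 billion parameters.
START_YEAR = 2020
START_PARAMS = 175e9          # GPT-3
GROWTH_PER_YEAR = 100         # two orders of magnitude per year
HUMAN_LEVEL = 100e12          # ~100 trillion, the synapse-count analogy

year, params = START_YEAR, START_PARAMS
while params < HUMAN_LEVEL:
    year += 1
    params *= GROWTH_PER_YEAR

print(year, f"{params:.2e}")  # 2022 1.75e+15
```

Under these assumptions the 100-trillion threshold is crossed in 2022, i.e. two years out, matching the comment's figure.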

Comment by avturchin on Mati_Roy's Shortform · 2020-09-18T12:59:27.818Z · score: 4 (2 votes) · LW · GW

Interestingly, an hour in childhood is subjectively equal to somewhere between a day and a week in adulthood, according to a recent poll I conducted. As a result, the middle of a human life in terms of subjective experience is somewhere in the teenage years.

Also, the experiences of an adult are duller and more similar to each other.

Tim Urban tweeted recently: "Was just talking to my 94-year-old grandmother and I was saying something about how it would be cool if I could be 94 one day, a really long time from now. And she cut me off and said “it’s tomorrow.” The "years go faster as you age" phenomenon is my least favorite phenomenon."

Comment by avturchin on My computational framework for the brain · 2020-09-16T22:02:05.902Z · score: 4 (2 votes) · LW · GW

I reread the post and have some more questions:

  • Where are "human values" in this model? If we give this model to an AI which wants to learn human values and has full access to the human brain, where should it search for human values?
  • If the cortical algorithm is replaced with GPT-N in some model of the human mind, will the whole system still work?

Comment by avturchin on My computational framework for the brain · 2020-09-15T17:41:53.809Z · score: 5 (3 votes) · LW · GW

Thanks. I think a plausible explanation of dreaming is the generation of virtual training environments in which an agent trains to behave in edge cases on which it would be too costly to train in real life or in real-world games. That is why the generic form of dreams is the nightmare: e.g. a lion attacks me, or I am on stage and forget my speech.

From a "technical" point of view, dream generation seems rather simple: if the brain has a world-model generation engine, it could generate predictions without any inputs, and this would look like a dream.

Comment by avturchin on My computational framework for the brain · 2020-09-15T14:02:57.055Z · score: 7 (4 votes) · LW · GW

I have several questions:

Where are qualia and consciousness in this model?

Does this model address the difference between the two hemispheres?

What about long-term memory? Is it part of the neocortex?

How does this model explain the phenomenon of night dreams?

Comment by avturchin on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-14T14:11:36.514Z · score: 8 (6 votes) · LW · GW

For Marx, capitalism was Moloch, and communism was a solution.

For the Unabomber, the method to stop Moloch was the destruction of complex technological society and, with it, of all complex coordination problems.

Comment by avturchin on ESRogs's Shortform · 2020-09-14T11:11:24.687Z · score: 4 (2 votes) · LW · GW

Predictive world model?

Comment by avturchin on The Anthropic Trilemma · 2020-09-13T18:44:39.948Z · score: 2 (1 votes) · LW · GW

Maybe I am too late to comment here and it is already covered in the collapsed comments, but it looks like it is possible to do this experiment in real life.

Imagine that instead of copying, I use waking up. If I win, I will be woken up 3 times, informed that I won, and given a drug which makes me forget the awakening. If I lose, I will be woken only once and informed that I lost. Now I have a 3-to-1 ratio of observer-moments in which I am informed about winning.

In such a setup it is exactly the Sleeping Beauty problem, with all its pros and cons, which I will not try to explore here.
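A quick Monte Carlo sketch of the 3-to-1 observer-moment ratio (a toy model I'm adding; the fair-coin win/lose odds are my assumption):

```python
import random

# Toy model of the awakening experiment: a fair coin decides win/lose;
# a win produces 3 "informed of winning" awakenings, a loss produces 1.
random.seed(0)
win_moments = lose_moments = 0
for _ in range(100_000):
    if random.random() < 0.5:   # win
        win_moments += 3
    else:                       # lose
        lose_moments += 1

# Among all observer-moments, about 3/4 carry the news of winning.
print(win_moments / (win_moments + lose_moments))
```

With a fair coin, roughly 75% of all awakenings are "winning" moments, which is the 3-to-1 ratio the comment describes.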

Comment by avturchin on avturchin's Shortform · 2020-09-11T17:18:36.602Z · score: 2 (1 votes) · LW · GW

EY suggested (if I remember correctly) that the MWI interpretation of quantum mechanics is true, as it is the simplest explanation. There are around a hundred other, more complex interpretations of QM. Thus, in his interpretation, P(MWI) is greater than the sum of the probabilities of all other interpretations.

Comment by avturchin on avturchin's Shortform · 2020-09-11T17:14:29.219Z · score: 2 (1 votes) · LW · GW

It means that P(one of them is true) is greater than P(the simplest explanation is true).

Comment by avturchin on avturchin's Shortform · 2020-09-11T12:54:49.100Z · score: 4 (2 votes) · LW · GW

Two types of Occam's razor:

1) The simplest explanation is by far the most probable, so the distribution of probabilities over hypotheses looks like: 0.75, 0.12, 0.04 ... if the hypotheses are ordered from simplest to most complex.

2) The simplest explanation is only slightly more probable than the others, so the distribution looks like: 0.09, 0.07, 0.06, 0.05 ...

The interesting feature of the second type is that the simplest explanation is more likely to be wrong than right (its probability is less than 0.5).

The two types apply in different situations. If the simplest hypothesis is significantly simpler than the others, we are in the first case; if all hypotheses are complex, the second. The first situation fits inherently simple models, e.g. laws of physics or games. The second is more typical of complex real-life situations.
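A numerical illustration of the two regimes (the concrete distributions below are illustrative assumptions that extend the figures in the comment):

```python
# Type 1: probability falls off sharply, so the simplest hypothesis
# dominates (p > 0.5) and is more likely right than wrong.
type1 = [0.75, 0.12, 0.04, 0.03, 0.02, 0.02, 0.01, 0.01]

# Type 2: an almost-flat distribution over many complex hypotheses;
# the simplest is still the single most probable, yet likely wrong.
type2 = [0.09, 0.07, 0.06, 0.05] + [0.73 / 20] * 20

for name, dist in [("type 1", type1), ("type 2", type2)]:
    print(name, "P(simplest):", dist[0],
          "P(simplest is wrong):", round(1 - dist[0], 2))
```

In the first list the leading hypothesis carries 0.75 of the mass; in the second it carries only 0.09, so betting on it means being wrong 91% of the time even though it is the single best guess.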

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-10T19:04:59.624Z · score: 2 (1 votes) · LW · GW

It looks like you think that modal realism is false and that everything possible doesn't exist. What is the argument which convinced you of this?

Comment by avturchin on Donald Hobson's Shortform · 2020-09-10T19:00:26.459Z · score: 2 (1 votes) · LW · GW

We can run a test on computer viruses. What is the probability that random code will be a self-replicating program? A probability of 1 in 10^50 is not that extraordinary – it is just the probability of roughly 150 bits of code being in the right places.
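As a sanity check of that figure (my addition, not part of the original comment): 150 exactly specified bits correspond to a chance of 2^-150 ≈ 10^-45, the same ballpark as one in 10^50:

```python
from math import log10

BITS = 150
# Probability that 150 specific bits all come out right by chance:
p = 2.0 ** -BITS
print(f"2^-{BITS} = 10^{log10(p):.1f}")  # 2^-150 = 10^-45.2
```

So "around 150 bits" and "1 in 10^50" agree to within a few orders of magnitude; matching 10^-50 exactly would take about 166 bits.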

Comment by avturchin on Luna First, But Not To Live There · 2020-09-09T13:30:28.904Z · score: 3 (2 votes) · LW · GW

There are more than 1 million households in the US with $10 million net worth, and they could afford such travel without damaging their wealth. Not all will go, but maybe 10 thousand a year will. This gives $1 billion a year for tickets, and they would spend at least the same amount on the Moon. So it is a $2-billion-a-year tourist economy. Not much.
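The implied arithmetic, as a sketch (the ~$100k ticket price is my back-calculation from the $1 billion figure; it is not stated in the comment):

```python
tourists_per_year = 10_000
ticket_price = 100_000                # back-calculated: 10k tickets -> $1B
ticket_revenue = tourists_per_year * ticket_price
on_moon_spending = ticket_revenue     # "at least the same amount on the Moon"
total = ticket_revenue + on_moon_spending
print(f"${total / 1e9:.0f}B per year")
```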

Comment by avturchin on Donald Hobson's Shortform · 2020-09-09T13:17:51.162Z · score: 2 (1 votes) · LW · GW

We can estimate the a priori probability that some sequence will work at all by taking a random working protein and comparing it with all other possible strings of the same length. I think this probability will be very small.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-09T13:13:02.961Z · score: 2 (1 votes) · LW · GW

Any reasoning is based on some assumptions, and that is not a problem. We may list these assumptions and convert them into constraints of the model (with some probabilities).

Ok, let's try to prove the opposite.

Firstly, Kant in the "Critique of Pure Reason" explored these questions of the universe's infinity in space and time and found that both propositions (finite and infinite) could be equally well proved, from which he concluded that the question can't be solved and lies beyond human knowledge. However, Kant suggested in the margins one more proof of modal realism (I cite from memory): "If a thing is possible in all aspects, there is no difference between it and a real thing."

The strongest argument against the existence of everything is the non-randomness of our experiences. If I am randomly selected from all possible minds, my observations should be very chaotic, as most random minds are just random. There are several counter-arguments here: chains of observer-moments converging to less random minds; different measures for different minds; the selection process for self-aware minds acting as a source of anti-randomness; the possibility that we in fact are random but can't observe it; or that the internal structure of an observer is something like a convolutional neural net in which randomness is concentrated at the inputs and "order" at the output. I will not elaborate on these arguments here, as it would take very long.

Another line of reasoning is connected with the idea of actuality. In it, only me-now is real, and everything else is merely possible. This line of reasoning is plausible, but it is even more weird than modal realism.

Then again, there is the idea of a (Christian) God which creates only a few worlds. Improbable.

During last year's EA Forum, the following proof of the finiteness of the universe was suggested:

"1) Finite means that available compute in the quantum theoretic sense in our future light cone is finite.

2) The Bekenstein bound says the information in a region is bounded proportional to area.

3) The universe's expansion is accelerating, so there is a finite region of space that determines our future light cone.

4) Quantum mechanics is reversible, so the information of our future light cone is finite.

5) Only finite compute can be done given a finite information bound without cycling."

But it is applicable only to our universe, not to other universes.

Comment by avturchin on Anthropic effects imply that we are more likely to live in the universe with interstellar panspermia · 2020-09-09T12:36:27.985Z · score: 2 (1 votes) · LW · GW

Here we are speaking about the minimum length of a self-replicating string. If the first such string is 100 bits and the next is 110, the second one will be about 1000 times less probable under random generation of strings. Thus longer strings can be ignored.
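The factor of 1000 comes from the 10 extra bits: each extra bit halves the chance of random assembly, so:

```python
# A 110-bit string is 2^(110-100) = 2^10 times less likely to be
# assembled by chance than a 100-bit one.
factor = 2 ** (110 - 100)
print(factor)  # 1024, i.e. roughly 1000
```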

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-08T19:17:13.363Z · score: 3 (2 votes) · LW · GW

They could form chains, like in dust theory or its mathematical formalism here:

Comment by avturchin on Donald Hobson's Shortform · 2020-09-08T19:01:45.303Z · score: 4 (2 votes) · LW · GW

Ok, I will try to explain the analogy:

There are two views of the problem of abiogenesis of life on Earth:

a) Our universe is just a simple generator of random RNA strings across billions of billions of planets, and it randomly generated the self-replication-capable string which was at the beginning of life. The minimum length of such a string is 40-100 bits. It has been estimated that 10^80 Hubble volumes are needed for such random generation.

b) Our universe is adapted to generate strings which are more capable of self-replication. This was discussed in the comments to this post.

This looks similar to what you described: (a) is the situation of a universe of low Kolmogorov complexity, which just brute-forces life; (b) is a universe with a higher Kolmogorov complexity of physical laws, which is however more effective at generating self-replicating strings. The Kolmogorov complexity of such a string is very high.

Comment by avturchin on Luna First, But Not To Live There · 2020-09-08T18:31:18.226Z · score: 2 (1 votes) · LW · GW

If Musk's Starship works, it will lower the price of Moon tourism. If a launch costs 1.5 million USD and there are 100 people on board, that means a $15k ticket per person.
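The ticket arithmetic as a one-line sketch (the $1.5M launch cost and 100 passengers are the comment's assumptions, not official figures):

```python
launch_cost_usd = 1_500_000   # assumed cost of one Starship launch
passengers = 100              # assumed passengers per flight
ticket = launch_cost_usd / passengers
print(ticket)  # 15000.0
```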

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-08T16:14:16.434Z · score: 2 (1 votes) · LW · GW

Actually, I didn't assume realism about time, but the language we use works this way. Popping into existence may relate to Boltzmann brains, which don't have time.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-08T16:10:19.945Z · score: 2 (1 votes) · LW · GW

Ok, I suggested three independent lines of reasoning which imply that everything possible exists (physical theories; self-sampling logic similar to the presumptuous philosopher; and the idea that if the Big Bang happened once, it should also happen uncountably many times).

Also, if only a limited number of things existed, there would have to be an ontological force which prevents more of them from popping into existence – and given that we exist, we know that such popping is possible. The only thing which could limit the number of appearances is God. Bingo, we just got a new proof of God's existence!

But jokes aside, we obviously can't factually prove the existence of everything, as it is unobservable, but we can use logical uncertainty to estimate the probability of the claim. It is much more probable that everything possible exists, as there are three independent ways to argue for it; and if we assume the opposite, we have to invent some "limiting force" similar to God, which has a low a priori probability.

Based on this, my confidence in "everything possible exists" is 80-90 per cent.

Comment by avturchin on Donald Hobson's Shortform · 2020-09-08T12:28:01.674Z · score: 2 (1 votes) · LW · GW

This looks like the problem of abiogenesis, which boils down to the problem of the creation of the first RNA string capable of self-replication, estimated to be at least 100 base pairs.

Comment by avturchin on Luna First, But Not To Live There · 2020-09-08T10:15:31.706Z · score: 1 (2 votes) · LW · GW

Could tourism become an economic engine of space colonisation? For example, the Moon has 6 times lower gravity, which allows new types of sport. Older people could jump again there!

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-07T17:50:00.358Z · score: 2 (1 votes) · LW · GW

There is also a metaphysical argument, which doesn't depend on any empirical data and so is less likely to be wrong. It may be more difficult to explain, but I will try.

I call the argument "the unboundedness of nothingness". It goes as follows:

1. The world as we see it appeared from nothing via some unknown process.

2. "Nothing" doesn't have any properties by definition, so it doesn't have a counter of the worlds which have appeared from it.

3. Thus if it created one world, it will produce infinitely many of them, because its ability to create worlds can't be exhausted or stopped.

Or, in other words, if everything-that-exists has finite size and its growth is limited by some force, there is a contradiction, as such a force would not be a part of everything-that-exists. Thus such a force doesn't exist.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-07T13:19:03.579Z · score: 2 (1 votes) · LW · GW

There are several not mutually exclusive, plausible theories which imply the existence of everything.

If the universe for whatever reason is infinite, then everything possible exists. If MWI is true, everything possible exists. If Boltzmann brains are possible, again, all possible observers exist. If Tegmark's mathematical universe is possible, again, all possible observers exist.

Moreover, the fact that I exist at all implies a very large number of attempts to create an observer, including something like 10^500 universes with different physical laws, which itself implies the existence of some unlimited source of attempts to create different things.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-06T22:25:29.057Z · score: 2 (1 votes) · LW · GW

Ok, I updated my world model and now I think that:

There is no randomness in the metaphysical sense: everything possible exists.

However, there is a relation inside an observer which looks like randomness: for any thought "this is a dog", there are a billion possible different observations of different dogs. In some sense it looks like there are a billion observers in the reference class of dog-seers. This relation between macro-interpretations and their micro-variants is similar to entropy; it is numerical and can be treated as probability for practical purposes.

Comment by avturchin on Tofly's Shortform · 2020-09-06T14:43:55.985Z · score: 2 (1 votes) · LW · GW

I once wrote about levels of AI self-improvement and came to a similar conclusion: any more advanced version of such an AI will require more and more extensive testing to ensure its stability and alignment, and the complexity of the testing task will grow very quickly, thus slowing down any intelligence explosion. This, however, doesn't preclude the creation of Dangerous AI (capable of solving the task of human extinction while being only slightly superhuman in some domains).

Comment by avturchin on Daniel Kokotajlo's Shortform · 2020-09-04T12:05:30.887Z · score: 6 (3 votes) · LW · GW

It is known that bird brains are much more mass-effective than mammalian ones.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-04T12:02:56.307Z · score: 2 (1 votes) · LW · GW

Practically, it means a difference in the expected probabilities of future observations. What is your opinion on these questions?

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-03T19:58:30.460Z · score: 2 (1 votes) · LW · GW

Ok, let's look at a real-world example suggested by Bostrom: "drivers in the next lane are going faster". It is true from the observer's point of view but not true from God's view.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T18:53:21.940Z · score: 2 (1 votes) · LW · GW
the probability that a randomly chosen observer is simulated rather than the probability that "I" am simulated.

But if a randomly chosen observer is simulated, and I am a randomly chosen observer, shouldn't I be simulated?

Another way to reason here – in a situation where we can't make a rational choice – is the "meta-doomsday argument" which I discussed before: I assume that both alternatives have equal probabilities, based on logical uncertainty about self-location beliefs. E.g. it gives 5/12 for Sleeping Beauty.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T17:24:34.560Z · score: 2 (1 votes) · LW · GW

Your intuition seems reasonable, but what about situations where I have to make a choice based on self-location beliefs?

Comment by avturchin on Russian x-risks newsletter Summer 2020 · 2020-09-02T17:14:09.109Z · score: 3 (2 votes) · LW · GW

A friend of mine works for a Sberbank-related company, but not RussianSuperGLUE as far as I know.

Why does this name concern you?

The two biggest AI companies in Russia are Yandex and Sberbank. Sberbank's CEO is a friend of Putin and has probably explained something about superintelligence to him. Yandex is more about its search engine and self-driving cars.

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T01:02:55.354Z · score: 2 (1 votes) · LW · GW

If such a self-location probability view is invalid, should I always use only the God's-eye view?

Comment by avturchin on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-01T15:17:59.594Z · score: 2 (1 votes) · LW · GW

Could you explain more how you get from your premises to your conclusion, e.g. that the simulation argument is false?

Comment by avturchin on Russian x-risks newsletter Summer 2020 · 2020-09-01T15:11:53.811Z · score: 4 (2 votes) · LW · GW

The US could place the same capabilities now in Estonia or Ukraine, so not much changes in nuclear strategy here. However, Russia has an important long-distance communication center in Belarus for contact with its nuclear submarines.

Also, the Kaliningrad district would become much more vulnerable, as would export-import routes. In the case of a ground invasion, Belarus is also strategically located: both Napoleon and Hitler quickly advanced through Minsk in the direction of Moscow.

The biggest problem for Putin is that if Lukashenko falls, he will be next. So he is not interested in Lukashenko's demise, but he wants to make Lukashenko as weak as possible and then annex Belarus. He tried to do this last year, hoping to become president of a new country consisting of Belarus and Russia. Lukashenko said no, and Putin had to use his plan B: changing the constitution to remain in power after 2024.

Comment by avturchin on Is there a possibility of being subjected to eternal torture by aliens? · 2020-08-30T11:04:24.845Z · score: 2 (1 votes) · LW · GW

If an alien civilization is 1 billion light years from us in one direction (and that is the largest distance at which contact is possible), it implies that the median distance between civilizations is 1 billion ly, and there are 5 others: in the opposite direction, as well as up, down, left and right. So there is either 1 civilization, or at least 7 including ours, based on symmetry considerations. Two civilizations seem unlikely.

The idea of preventing s-risks via acausal trade is discussed by me here:

Comment by avturchin on My guide to lifelogging · 2020-08-28T22:25:14.193Z · score: 5 (3 votes) · LW · GW

I would add that lifelogging should include not only passive recording, but also active uploading of data in the form of memoirs, diaries, drawings, and the creation of objects of art.

Comment by avturchin on Is there a possibility of being subjected to eternal torture by aliens? · 2020-08-28T22:22:18.938Z · score: 2 (1 votes) · LW · GW

If there is one other alien race in the observable universe, there should be at least several more, and they may not like the idea of torture: they would be "exo-humanists", that is, like effective altruists but for other alien races.

A superintelligent AI on Earth which has the goal of global torture is worse, as it looks like help will never arrive (actually, it can arrive, but from other universes via complex acausal trade and indexical uncertainty).

Comment by avturchin on Is there a possibility of being subjected to eternal torture by aliens? · 2020-08-28T12:37:33.717Z · score: 3 (4 votes) · LW · GW

If aliens could torture us, another alien race could come and save us.

Comment by avturchin on Learning human preferences: optimistic and pessimistic scenarios · 2020-08-18T14:46:16.719Z · score: 2 (1 votes) · LW · GW

What do you think about the idea that neural nets, while modeling humans, will converge to natural abstractions, which will be human values, as described in this post?

Comment by avturchin on Alignment By Default · 2020-08-14T18:25:04.702Z · score: 4 (2 votes) · LW · GW

But human "wants" are not actually a good thing which AI should follow. If I am fasting, I obviously want to eat, but me decision is not eating today. And if I have a robot helping me, I prefer it care about my decisions, not my "wants". This distinction between desires and decisions was obvious for last 2.5 thousand years, and "human values" is obscure and not natural idea.