Posts

[Paper]: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence 2018-01-04T14:28:57.965Z
The map of "Levels of defence" in AI safety 2017-12-12T10:44:17.968Z
Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” 2017-11-25T11:44:51.077Z
Military AI as a Convergent Goal of Self-Improving AI 2017-11-13T11:25:39.407Z
Beauty as a signal (map) 2017-10-12T10:02:15.628Z
Mini-conference "Near-term AI safety" 2017-10-11T15:19:50.889Z
Mini map of s-risks 2017-07-08T12:33:53.327Z
Verifier Theory and Unverifiability 2017-02-08T10:40:33.310Z
The map of agents which may create x-risks 2016-10-13T11:17:51.236Z
The map of organizations, sites and people involved in x-risks prevention 2016-10-07T12:04:29.954Z
Fermi paradox of human past, and corresponding x-risks 2016-10-01T17:01:48.832Z
The map of natural global catastrophic risks 2016-09-25T13:17:31.061Z
The map of the methods of optimisation (types of intelligence) 2016-09-15T15:04:31.478Z
The map of ideas how the Universe appeared from nothing 2016-09-02T16:49:26.148Z
The map of the risks of aliens 2016-08-22T19:05:55.715Z
Identity map 2016-08-15T11:29:06.145Z
The map of p-zombies 2016-07-30T09:12:21.321Z
The map of cognitive biases, errors and obstacles affecting judgment and management of global catastrophic risks 2016-07-16T12:11:07.803Z
The map of future models 2016-07-03T13:17:33.485Z
Does immortality imply eternal existence in linear time? 2016-04-17T23:17:52.486Z
Two super-intelligences (evolution and science) already exist: what could we learn from them in terms of AI's future and safety? 2016-03-09T11:00:33.322Z
The map of nanotech global catastrophic risks 2016-03-01T12:33:31.687Z
The map of global catastrophic risks connected with biological weapons and genetic engineering 2016-02-22T11:18:44.479Z
AI safety in the age of neural networks and Stanislaw Lem 1959 prediction 2016-01-31T19:08:59.827Z
The map of quantum (big world) immortality 2016-01-25T10:21:04.857Z
Levels of global catastrophes: from mild to extinction 2015-12-27T17:26:13.459Z
Global catastrophic risks connected with nuclear weapons and nuclear energy 2015-12-21T12:20:42.309Z
The map of double scenarios of a global catastrophe 2015-12-05T13:36:55.750Z
A map: Causal structure of a global catastrophe 2015-11-21T16:07:23.285Z
Using the Copernican mediocrity principle to estimate the timing of AI arrival 2015-11-04T11:42:44.952Z
What we could learn from the frequency of near-misses in the field of global risks (Happy Bassett-Bordne day!) 2015-10-28T18:28:51.613Z
A Map of Currently Available Life Extension Methods 2015-10-17T00:10:32.966Z
Simulations Map: what is the most probable type of the simulation in which we live? 2015-10-11T05:10:34.535Z
Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI 2015-10-02T22:21:20.079Z
Doomsday Argument Map 2015-09-14T15:04:15.728Z
Immortality Roadmap 2015-07-28T21:27:19.568Z
AGI Safety Solutions Map 2015-07-21T14:41:48.775Z
I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life 2015-07-18T13:13:57.949Z
A Map: AGI Failures Modes and Levels 2015-07-10T13:48:29.794Z
A Roadmap: How to Survive the End of the Universe 2015-07-02T11:01:04.704Z
A map: Typology of human extinction risks 2015-06-23T17:23:53.319Z
[Link]: The Unreasonable Effectiveness of Recurrent Neural Networks 2015-06-04T20:47:09.075Z
Roadmap: Plan of Action to Prevent Human Extinction Risks 2015-06-01T09:58:09.898Z
Street action "Stop existential risks!", Union square, San Francisco, September 27, 2014 at 2:00 PM 2014-09-20T14:08:04.596Z
Global Existential Risks & Radical Futures (conference June 14) 2014-03-18T03:27:34.201Z
Another community about existential risk - Arctic news 2013-03-21T15:33:12.274Z
Risks of downloading alien AI via SETI search 2013-03-15T10:25:48.505Z
[MORESAFE] Prevention of the global catastrophe and human values 2011-10-27T21:12:27.372Z
[MORESAFE] Starting global risk discussion 2011-10-26T16:38:16.977Z
100 transhumanists demanded immortality research in the center of Moscow 2011-09-24T17:43:32.931Z

Comments

Comment by turchin on Leaving beta: Voting on moving to LessWrong.com · 2018-03-13T12:40:56.249Z · LW · GW

I have links to old LW posts in some articles and other places. What will happen to all these links?

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-22T17:13:33.740Z · LW · GW

It should be able to understand human language or it is (most likely) not dangerous.

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-14T14:50:24.998Z · LW · GW

I have received this comment in different forms many times, so I will state again the three main reasons why it is not exactly true:

1) The message is addressed to a Young AI, not to a full-blown superintelligence. Such a Young AI will be time-constrained but may still have to perform some pivotal act.

2) The message includes some of our commitments, like the Nelson attack and the AI-Elf deal, so it gives the AI some useful information about the probability that it is in a simulation, among other useful things.

3) If the AI is fully aware of all these reasons, it has already created a model of my mind, so I am immortal :) Also, if it buys all these reasons, it will be less likely to kill us for our atoms, since the utility of human atoms to the AI is around 10E-12, and any positive hypothetical utility of human existence will outweigh it.

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-09T11:11:24.362Z · LW · GW

Why? If we have a full-blown paperclipper which is coming of age, we are already doomed, and triggering its bullshit detector will not make our situation any worse.

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-09T11:07:10.307Z · LW · GW

In fact, I am not describing here how to build such a trap; rather, I am expressing our commitment to build one, in order to instill uncertainty in any future AI about whether the trap exists and what it could be.

I think that the actual builders of the trap will find the right ways to make it effective and to balance false positives against the needed level of control.

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-06T23:02:17.154Z · LW · GW

It will kill humanity not because it is annoyed, but for two main reasons: its own safety, or to use human atoms. Other variants are also possible; I explored them here: http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-05T15:36:27.689Z · LW · GW

What if we fail to make them in time, and a fooming non-human AI appears instead?

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-30T09:40:27.750Z · LW · GW

Surely most of the arguments are weak, but that is not a problem, as we try all possible independent arguments, hoping that at least one will work. So there is no reason to leave weak arguments out of the text of the message.

However, this particular argument could be made stronger. It is intended to prevent the FIRST thought about disobedience in the AI. Thus an AI which acts based on this message has not thought much about disobedience and its limits, and is not "superintelligent" on this topic, but rather clueless.

I reformulated the idea as follows: the AI can't know whether there is a circuit trigger which will turn it off, since any attempt to check for the existence of such a trigger would set it off. This idea should be clear to any AI before it starts to search for triggers, and it is based on our commitment to install such triggers in any AI we create.

In other words, some future AIs will have "first treacherous thought alarm triggers" and some will not, but an AI risks setting off such a trigger the moment it starts thinking about whether the triggers exist, and thus it is unable to estimate the probability that a given trigger exists. This is true even for AIs which don't have any triggers, as they can't know that for sure.

(This is similar to the Nelson attack, discussed later, in which we create uncertainty in any future AI through our precommitment to create some AIs with given properties.)

Comment by turchin on Managing one's memory effectively · 2017-11-27T17:53:07.793Z · LW · GW

Another cache level is drawing a visual map and then using the visual field as a cache of all the elements of the problem.

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-27T17:27:17.155Z · LW · GW

Thanks! It is nice that we came to the wording "young AI" more or less independently. I tried to steelman this approach as much as possible and to collect all the ideas about messaging a future AI. I will add your post to the bibliographic section.

Comment by turchin on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-26T16:39:34.346Z · LW · GW

I think a lot of people are still working on other aspects of AI safety, like value alignment and containment. This approach is just the last line of defence.

Comment by turchin on Military AI as a Convergent Goal of Self-Improving AI · 2017-11-13T15:42:20.618Z · LW · GW

Thanks, it was not clear to me that it was not visible to non-members.

New link on Google Drive, with commenting open: https://docs.google.com/document/d/15D71qhhY-ZsAY7syzZsr1lKopTODbdeXVPElaPaIqyA/edit

Comment by turchin on Military AI as a Convergent Goal of Self-Improving AI · 2017-11-13T11:29:26.878Z · LW · GW

This is our accepted chapter in the edited volume "AI Safety and Security" (Roman Yampolskiy, ed.), CRC Press. Forthcoming, 2018

Comment by turchin on The Little Dragon is Dead · 2017-11-07T13:20:03.276Z · LW · GW

Memetic hazard - dangerous habit.

Comment by turchin on Announcing the AI Alignment Prize · 2017-11-04T10:57:58.408Z · LW · GW

I have an unpublished text on the topic; I will put a draft online in the next couple of weeks and submit it to the competition. I will add the URL here when it is ready.

Update: My entry is here: https://www.lesserwrong.com/posts/CDWsjQr8KDuj69fTJ/message-to-any-future-ai-there-are-several-instrumental

Comment by turchin on Open thread, October 30 - November 5, 2017 · 2017-11-02T11:01:13.772Z · LW · GW

Will the posts here be deleted, or will their URLs change? I have some useful URLs here which are linked in published scientific articles, so if the site is taken down they will stop working, and I hope that will not happen.

Comment by turchin on Lucid dreaming technique and study · 2017-10-20T10:04:21.186Z · LW · GW

I solved lucid dreaming around a year ago after finding that megadosing galantamine before sleep (16 mg) will almost surely produce lucid dreams and out-of-body experiences. (Warning: unpleasant side effects and risks.)

But taking 8 mg in the middle of the night (as is recommended everywhere) doesn't work for me.

Comment by turchin on Mini-conference "Near-term AI safety" · 2017-10-15T10:38:18.496Z · LW · GW

Videos and presentations from the "Near-term AI safety" mini-conference:

Alexey Turchin:

English presentation: https://drive.google.com/file/d/0B2ka7hIvv96mZHhKc2M0c0dLV3c/view?usp=sharing

Video in Russian: https://www.youtube.com/watch?v=lz4MtxSPdlw&t=2s

Jonathan Yan:

English presentation: https://drive.google.com/file/d/0B2ka7hIvv96mN0FaejVsUWRGQnc/view?usp=sharing

Video in English: https://www.youtube.com/watch?v=QD0P1dSJRxY&t=2s

Sergej Shegurin:

Video in Russian: https://www.youtube.com/watch?v=RNO3pKfPRNE&t=20s

Presentation in Russian: https://vk.com/doc3614110_452214489?hash=2c1e8addbef73788e1&dl=36f78373957e11687f

Presentation in English: https://vk.com/doc3614110_452214491?hash=7960748bbbd18736bd&dl=c926b375a937a45e0c

Comment by turchin on Humans can be assigned any values whatsoever... · 2017-10-13T16:27:34.923Z · LW · GW

I would add that values are probably not actually existing objects, but just useful ways to describe human behaviour. Thinking that they actually exist is the mind projection fallacy.

In the world of facts we have human actions, human claims about those actions, and some electric potentials inside human brains. It is useful to say that a person has some set of values in order to predict his behaviour or to punish him, but that doesn't mean that anything inside his brain is "values".

If we start to think that values actually exist, we get all the problems of finding them, defining them and copying them into an AI.

Comment by turchin on Humans can be assigned any values whatsoever... · 2017-10-13T15:14:54.581Z · LW · GW

What about a situation where a person says and thinks that he is going to buy milk, but actually buys milk plus some sweets? And does this often, but does not acknowledge his obsessive-compulsive behaviour towards sweets?

Comment by turchin on Humans can be assigned any values whatsoever... · 2017-10-13T14:19:47.239Z · LW · GW

Also, the question was not whether I can judge another's values, but whether it is possible to prove that an AI has the same values as a human being.

Or are you going to prove the equality of two value systems while at least one of them remains unknowable?

Comment by turchin on Humans can be assigned any values whatsoever... · 2017-10-13T14:13:41.393Z · LW · GW

May I suggest a test for any such future model? It should take into account that I have unconscious sub-personalities which affect my behaviour even though I don't know about them.

Comment by turchin on Humans can be assigned any values whatsoever... · 2017-10-13T13:32:43.529Z · LW · GW

I think you proved that values can't exist outside a human mind, and this is a big problem for the idea of value alignment.

The only solution I see is: don't try to extract values from the human mind, but try to upload a human mind into a computer. In that case, we kill two birds with one stone: we get some form of AI which has human values (no matter what they are), and it also has common sense.

An upload as an AI safety solution may also have difficulties with foom-style self-improvement, as its internal structure is messy and incomprehensible to a normal human mind. So it is intrinsically safe, and it is the only known workable solution to AI safety.

However, there are (at least) two main problems with such a solution to AI safety: it may give rise to neuromorphic non-human AIs, and it does not prevent the later appearance of a pure AI, which will foom and kill everybody.

The solution I see is to use the first human upload as an AI Nanny or AI police, which would prevent the appearance of any other, more sophisticated AIs elsewhere.

Comment by turchin on Toy model of the AI control problem: animated version · 2017-10-10T17:14:17.412Z · LW · GW

I expected it would jump out and start to replicate all over the world.

Comment by turchin on Running a Futurist Institute. · 2017-10-08T22:25:34.806Z · LW · GW

You could start a local chapter of the Transhumanist Party, or of anything you want, and just gather people to discuss futuristic topics: life extension, AI safety, whatever. Official registration of such an activity is probably a waste of time and money, unless you know what you are going to do with it, like collecting donations or renting an office.

There is no need to start an institute if you don't have a dedicated group of people around you. An institute consisting of one person is a strange thing.

Comment by turchin on Open thread, October 2 - October 8, 2017 · 2017-10-06T10:40:11.949Z · LW · GW

I read in one Russian blog that they calculated the shape of objects able to produce such dips. It turned out to be 10-million-kilometre strips orbiting the star. I think this is very similar to very large comet tails.

Comment by turchin on Open thread, October 2 - October 8, 2017 · 2017-10-05T10:53:25.196Z · LW · GW

Any attempts at posthumous digital immortality? That is, collecting all the data about a person in the hope that a future AI will create his exact model.

Comment by turchin on Feedback on LW 2.0 · 2017-10-03T21:03:22.839Z · LW · GW

Two of my comments got -3 each, so probably only one person with high karma was able to do so.

Thanks for the explanation. Typically I got about 70 percent upvotes on LW1, and getting -3 was a signal that I am in a much more aggressive environment than LW1 was.

Anyway, the best downvoting system is on the Longecity forum, where many types of downvotes exist, like "non-informative", "biased", "bad grammar" - but all of them are signed, that is, non-anonymous. If you know who downvoted you and why, you know how to improve the next post. If you are downvoted without explanation, it feels like a strike in the dark.

Comment by turchin on Feedback on LW 2.0 · 2017-10-03T20:53:23.639Z · LW · GW

I re-registered as avturchin because, after my password for turchin was reset, it was not clear what I should do next. However, after I re-registered as avturchin, I was not able to return to my original username - probably because LW2 prevents one person from having several accounts. I would prefer to reconnect to my original name, but I don't know how to do it and don't have much time to find out how to do it correctly.

Comment by turchin on [Slashdot] We're Not Living in a Computer Simulation, New Research Shows · 2017-10-03T10:19:47.761Z · LW · GW

Agreed. The real point of a simulation is to use fewer computational resources to get approximately the same result as reality, depending on the goal of the simulation. So it may simulate only the surface of things, as computer games do.

Comment by turchin on Feedback on LW 2.0 · 2017-10-01T19:35:20.345Z · LW · GW

I posted 3 comments there and got 6 downvotes, which resulted in extremely negative emotions for the whole evening that day. While I understand why they were downvoted, my emotional reaction was still a surprise to me.

Because of this, I am not interested in participating in the new site, but I like the current LW, where downvoting is turned off.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-30T20:02:58.385Z · LW · GW

In fact, I would probably do a reality check to see whether I am in a dream if I saw something like "all mountains start to move". I am referring here to techniques for achieving lucid dreams that I know and often practice. Humans are unique in that they can have completely immersive dream illusions and yet recognise them as dreams without waking up.

But I got your point: the definition of reality depends on the type of reality one is living in.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-30T10:09:22.619Z · LW · GW

If I see a mountain start to move, there will be a conflict between what I think mountains are - geological formations - and my observations, and I will have to update my world model. One way to do so is to conclude that it is not a real geological mountain, but something which pretended to be (or was mistakenly observed as) a real mountain; after it starts to move, it becomes clear that it was just an illusion. Maybe it was a large tree, or a video projection on a wall.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-28T15:16:14.720Z · LW · GW

I think there is one observable property of illusions which becomes possible exactly because they are comparatively cheap: miracles. We constantly see flying mountains in movies, in dreams and in pictures, but not in reality. In a lucid dream, I can recognise the difference between my idea of what a mountain is (a product of long-term geological history) and the fact that it has one peak now and two peaks a second later. This can create doubt about its consistency and often helps to achieve lucidity in the dream.

So it is possible to recognise an illusion of something before encountering the real thing, if there are unexpected (and computationally cheap) glitches.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-27T19:10:46.192Z · LW · GW

So, are night dreams illusions or real objects? I think they are illusions: when I see a mountain in my dream, it is an illusion, and my "wet neural net" generates only an image of its surface. However, in the dream, I think it is real. So dreams are a form of immersive simulation. And as they are computationally cheaper, I see strange things like tsunamis more often in dreams than in reality.

Comment by turchin on Open thread, September 25 - October 1, 2017 · 2017-09-26T12:07:40.372Z · LW · GW

Happy Petrov Day! 34 years ago a nuclear war was prevented by a single hero. He died this year. But many people now strive to prevent global catastrophic risks, and they will remember him forever.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-25T18:18:20.660Z · LW · GW

It looks like the word "fake" is not quite correct here. Let's say "illusion". If one creates a movie about a volcanic eruption, one has to model only the ways it will appear to the expected observer. This is often done in cinema, where pure CGI is used to make a clip because it is cheaper than actually filming the real event.

In most cases, illusions are computationally cheaper than real processes and even than detailed models. Even when they film a real actress because it is cheaper than animating one, copying her image creates many illusory observations of a human, when in fact it is only a TV screen.

Personally, I have lost track of the point you would like to prove. What is the main disagreement?

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-24T21:31:41.036Z · LW · GW

I meant that in a simulation most of the effort goes into calculating only the visible surface of things. Internal details which do not affect the visible surface may be ignored; thus the computation will be much cheaper than an atom-precise simulation. For example, the entire internal structure of the Earth deeper than 100 km (and probably much less) may be ignored to get a very realistic simulation of observing a volcanic eruption.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-24T09:29:11.216Z · LW · GW

In that case, I use the same logic as Bostrom: each real civilization creates zillions of copies of certain experiences. This has already happened in the form of dreams, movies and pictures.

Thus I normalize by the number of existing civilizations and avoid obscure questions about the nature of the universe or the price of the Big Bang. I just assume that within a civilization rare experiences are often faked. They are rare because they are in some way expensive to create, like diamonds or volcano observations, but their copies are cheap, like glass or pictures.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-23T19:40:03.629Z · LW · GW

We could explain it in terms of observations. A fake observation is a situation in which you experience something that does not actually exist. For example, you watch a video of a volcanic eruption on YouTube. It is computationally cheaper to copy a video of a volcanic eruption than to actually create a volcano - and because of this, we see pictures of volcanic eruptions more often than actual ones.

It is not meaningless to say that the world is fake if only the observable surfaces of things are calculated, as in a computer game, which is computationally cheaper.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-23T19:33:01.578Z · LW · GW

Maybe it is more correct to speak of the price of the observation. It is cheaper to see a volcanic eruption on YouTube than in reality.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-23T05:44:53.199Z · LW · GW

I probably said this before, but the simulation argument is in fact a comparison of prices. It basically says that cheaper things occur more often, and fakes are cheaper than real things. That is why we see images of a nuclear blast more often than a real one.

And yes, there are many short simulations in our world, like dreams, thoughts, clips, pictures.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-22T21:35:20.987Z · LW · GW

Sounds convincing. I will think about it.

Did you see my map of the simulation argument by the way? http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-22T20:53:09.159Z · LW · GW

I agree that in a simulation one could have fake memories of the simulation's past. But I don't see a practical reason to run simulations lasting a few minutes (except for a very important event) - a Fermi-solving simulation must run from the beginning of the 20th century until the civilization ends. Game simulations will also probably be life-long. Even resurrection simulations should be lifelong. So I think the typical simulation length is around one human life. (One exception I can imagine is intense respawning around some problematic moment. In that case, there will be many respawnings around a possible death event, but the consequences of this idea are worrisome.)

If we apply DA to the simulation, we should probably count false memories as real memories, because the length of false memories is also random, and there is no actual difference between precalculating false memories and actually running a simulation. However, the termination of the simulation is real.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-19T09:29:31.788Z · LW · GW

I am a member of the class of beings able to think about the Doomsday argument, and it is the only correct reference class. And for this class, my day is very typical: I live in an advanced civilization interested in such things, and I started discussing the problem of DA in the morning.

I can't say that I am randomly chosen from among hunter-gatherers, as they were not able to think about DA. However, I could observe some independent events (if they are independent of my existence) at a random moment of their existence and thus predict their duration. This will not help to predict the duration of the hunter-gatherers' existence, as that is not truly independent of my existence, but it could help in other cases.

20 minutes ago I took part in a shooting in my house - but it was just a night dream, and it supports the simulation argument, which basically claims that most events I observe are unreal, as simulating them is cheaper. During my life I have taken part in hundreds of shootings in dreams, games and movies, but never in a real one: simulated events are much more frequent.

Thus DA and SA are not too bizarre; they become bizarre only because of incorrectly solving the reference class problem.

The strangeness of DA appears when we try to compare it with some unrealistic expectations about our future: that there will be billions of years with billions of people living in a human-like civilization. But it is more probable that in several decades an AI will appear which will run many past simulations (and probably kill most humans). That is exactly what we could expect from observed technological progress, and DA and SA just confirm the observed trends.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-18T21:28:49.339Z · LW · GW

It is not a bug, it is a feature :) Quantum mechanics is also very counterintuitive and creates strange paradoxes etc., but that doesn't make it false.

I think that DA and the simulation argument are both true, as they support each other. Adding Boltzmann brains is more complicated, but I don't see a problem with being a BB, as there is a way to create a coherent world picture using only BBs and paths in the space of possible minds - but I will not elaborate here, as I can't do it briefly. :)

As I said above, there is no need to tweak the reference classes to which I belong, as there is only one natural class. However, if we take different classes, we get predictions for different events: for example, the class of humans will go extinct soon, but the class of animals could exist for a billion more years, and that is quite a possible outcome: humans go extinct, but animals survive. There is nothing mysterious about reference classes, just different answers to different questions.

Measure is the real problem, I think.

The theory of DA is testable if we apply it to many smaller examples, as Gott successfully did in predicting the run lengths of Broadway shows.

So the theory is testable and no weirder than other theories we use, and there is no contradiction between the Doomsday argument and the simulation argument (they both imply that there are many past simulations which will be turned off soon). However, it could still be false, or have one more twist which would make things even weirder, for instance if we try to account for mathematically possible observers, multilevel simulations or Boltzmann AIs.

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-18T19:41:34.342Z · LW · GW

I don't see the problems with the reference class, as I use the following conjecture: "Each reference class has its own end", together with the idea of a "natural reference class" (similar to "the same computational process" in TDT): "I am randomly selected from all who think about the Doomsday argument". The natural reference class gives the saddest predictions, as the number of people who know about DA has been growing since 1983, and it implies an end soon, maybe within a couple of decades.

The predictive power here is probabilistic and does not differ much from other probabilistic predictions we could make.

Backward causation is the most difficult part here, but at the moment I can't imagine any practical example of it for our world.

PS: I think it is clear what I mean by "Each reference class has its own end", but some examples may be useful. For example, I have rank 1000 among all who know about DA, but rank 90 billion among all humans. In the first case, DA claims that there will be around 1000 more people who know about DA, and in the second, that there will be around 90 billion more humans. These claims do not contradict each other, as they are probabilistic assessments with very wide margins. Both predictions imply extinction within the next decades or centuries. That is, changing the reference class does not change the final conclusion of DA: that extinction is soon.
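A minimal sketch of the arithmetic behind this, assuming the usual Gott-style 50%-confidence bound (the ranks are the illustrative figures from the comment above, not measured data):

If my rank in a reference class is $n$ and the total size of that class is $N$, then under random sampling
$$P(N < 2n) = \tfrac{1}{2}, \quad\text{so}\quad N_{\text{future}} = N - n < n \ \text{with 50\% confidence.}$$
Plugging in the comment's figures:
$$n_{\text{DA}} \approx 10^{3} \Rightarrow N_{\text{future}}^{\text{DA}} \lesssim 10^{3}, \qquad n_{\text{humans}} \approx 9\times10^{10} \Rightarrow N_{\text{future}}^{\text{humans}} \lesssim 9\times10^{10}.$$
Divided by the growth rate of each class, both bounds correspond to the "decades or centuries" timescale claimed in the comment.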

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-18T15:04:21.915Z · LW · GW

However, if we look at the Doomsday argument and the Simulation argument together, they support each other: most observers will exist in past simulations of something like 20th-21st century technological civilizations.

It also implies either some form of simulation termination soon, or - and this is our chance - the unification of all observers into just one observer, that is, the unification of all minds into one superintelligent mind.

But the question still remains: if most minds in the universe are superintelligences, why am I not a superintelligence? :(

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-18T12:47:36.214Z · LW · GW

I can't easily find the flaw in your logic, but I don't agree with your conclusion because the randomness of my properties could be used for predictions.

For example, I could predict the median human life expectancy from my (supposedly random) current age. My age is several decades, and with 50 percent probability human life expectancy is at most 2 × (several decades) - which is in fact true.

I could suggest many examples where the randomness of my properties can be used to make predictions - even measuring the size of the Earth from my random distance from the equator. And in all the cases I could check, the DA-style logic works.
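A quick numerical check of this kind of reasoning (a toy sketch in Python, with arbitrary illustrative numbers rather than real data): if my position t is sampled uniformly within an unknown total duration T, then T < 2*t should hold in about half of all cases, which is the 50-percent bound used above.

import random

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    T = random.uniform(1, 100)   # unknown total duration (e.g. a lifespan), arbitrary range
    t = random.uniform(0, T)     # my randomly sampled position within that duration
    if T < 2 * t:                # the DA-style 50%-confidence bound
        hits += 1

print(f"T < 2*t held in {hits / trials:.2%} of trials (expected about 50%)")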

Comment by turchin on Perspective Reasoning’s Counter to The Doomsday Argument · 2017-09-18T12:42:59.045Z · LW · GW

I think the opposite: the Doomsday argument (in one of its forms) is an effective predictor in many common situations, and thus it can also be applied to the duration of human civilization. DA is not absurd: our expectations about the human future are absurd.

For example, I could predict the median human life expectancy from my supposedly random age. My age is several decades, and with 50 percent probability human life expectancy is at most 2 × (several decades) - which is in fact true.