Comment by avturchin on Cryonics before natural death. List of companies? · 2019-06-12T20:14:19.445Z · score: 12 (5 votes) · LW · GW

The thing you are looking for is called cryothanasia, and the first case happened recently: California Man Becomes the First ‘Death With Dignity’ Patient to Undergo Cryonic Preservation.

Comment by avturchin on Dissolving the zombie argument · 2019-06-10T11:33:05.128Z · score: 6 (3 votes) · LW · GW

Dissolving the "dissolving". The idea of p-zombies, as well as many other philosophical ideas (like consciousness), is based on a combination of many similar but eventually different ideas. "Dissolving" here is in fact creating a list of all subtypes. Another type of "dissolving" would be complete elimination of the idea, but in this case we just lose a descriptive instrument and get a feeling of an absent tooth on its place, which will be eventually replaced with some ad hoc constructions, like: "yes, we dissolved the idea of X, but as we still need to speak about something like X, we will continue to say "X", but must remember that X is actually dissolved."

I also tried to dissolve p-zombies by creating a classification of the many possible (imaginable) types of p-zombies here.


Comment by avturchin on Visiting the Bay Area from 17-30 June · 2019-06-07T15:52:48.625Z · score: 2 (1 votes) · LW · GW

I will also be visiting SF on these dates for EA Global, and I am interested in discussing things like fighting aging as an EA cause, as well as more fringe ideas like Boltzmann brains, simulation termination risks, and sending messages to future AI.

Comment by avturchin on Map of (old) MIRI's Research Agendas · 2019-06-07T15:18:48.161Z · score: 3 (2 votes) · LW · GW

Thanks, great map. It would also be interesting to see which AI-safety-related fields are not part of the MIRI agenda.

Comment by avturchin on How is Solomonoff induction calculated in practice? · 2019-06-04T19:25:28.312Z · score: 3 (2 votes) · LW · GW

I was also interested in this question, as it implies different answers about the nature of Occam's razor. If the probability-from-complexity function diminishes quickly, the simplest outcome is the most probable. However, if this function has a fat tail, its median could be somewhere halfway to infinite complexity, which would mean that most true theories are incredibly complex.
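
To make the point concrete, here is a toy Python sketch (my own illustration with made-up priors, not something from the original question): it compares the median description length under a fast-decaying prior proportional to 2^-k with a fat-tailed prior proportional to 1/k, both truncated at a finite maximum complexity.

```python
# Toy illustration (my own): how the tail of a complexity prior shifts
# the median complexity, using priors truncated at a finite cutoff.
import numpy as np

K = np.arange(1, 10_001).astype(float)   # description lengths (complexities) 1..10000

def median_complexity(weights):
    p = weights / weights.sum()          # normalize into a probability distribution
    return K[np.searchsorted(np.cumsum(p), 0.5)]

fast = 2.0 ** -K        # Solomonoff-style prior: mass decays exponentially
fat = 1.0 / K           # fat-tailed prior (normalizable only because of the cutoff)

print(median_complexity(fast))  # ~1: nearly all mass sits on the simplest hypotheses
print(median_complexity(fat))   # ~75 here, and it keeps growing as the cutoff grows
```

Under the exponential prior the median hypothesis is essentially the simplest one; under the truncated 1/k prior the median keeps drifting toward the cutoff as the cutoff grows.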

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-29T07:58:37.417Z · score: 2 (1 votes) · LW · GW

Good point. Actually, I think that almost all early "saucer photos" are home-made fakes.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T18:31:00.107Z · score: 3 (4 votes) · LW · GW

But if there is an antigravity-using (nuclear-powered?) drone capable of doing the things described by the pilots, it also probably needs advanced forms of AI to be controlled.

Also, the NSA is the biggest employer of mathematicians and is reputed to be 20-30 years ahead of civilian science in some areas of math.

TL;DR: if we assume some form of "supercivilization" inside the military complex, it should also include advanced AI.


Comment by avturchin on Drowning children are rare · 2019-05-28T18:24:43.941Z · score: 7 (5 votes) · LW · GW

Sure, in the best case; moreover, as poor countries become less poor, the price of saving a life in them is growing. (And one may add that a functional market economy is the only known way to eventually make them less poor.)

However, I also think that EA could reach even higher efficiency in saving lives in other cause areas, like fighting aging and preventing global risks.

Comment by avturchin on Drowning children are rare · 2019-05-28T18:02:03.990Z · score: 12 (3 votes) · LW · GW

In some sense, EA is arbitrage between the price of a life in rich and poor countries, and those prices will eventually become more equal.

Another point is that saving a life locally is sometimes possible almost for free, if you happen to be in the right place and thus have unique information. For example, calling 911 when you see a drowning child may be very effective and cost you almost nothing. There were several cases in my life when I had to draw a taxi driver's attention to a pedestrian ahead - I'm not sure whether I actually saved a life. But to save a life locally, one needs to pay attention to what is going on around them and know how to react effectively.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T17:34:24.750Z · score: 2 (1 votes) · LW · GW

Maybe they tested some radar-jamming tech. I also found more discussion about the new radars here.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T17:32:09.008Z · score: 2 (1 votes) · LW · GW

Maybe they deliberately tested some kind of radar-jamming technology which produces false targets on radar. However, if that were true, they would have kept the whole story secret.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T17:28:48.264Z · score: 1 (2 votes) · LW · GW

If this is true, it should also apply to AI. Should we expect that secret AIs are 20 years ahead of OpenAI? If yes, what about their safety?

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T12:47:37.960Z · score: -1 (3 votes) · LW · GW

1. In the Soviet Union there was a practice of publishing an article about UFOs in the tightly controlled central press just before a major food price increase was announced. I personally remember such an article being carbon-copied and discussed for days around 1988. (Fact check: it was 1985; the full story is here, in Russian.) But this could just be a conspirological interpretation of publishing practices.

2. Personally, I don't buy the extraterrestrial explanation of UFOs (it is so mid-20th-century) and prefer something more interesting, like self-replicating glitches in the matrix or intrusions of randomness into the chains of observer-moments of Boltzmann brains. This would explain the variety of such experiences and their absurdity.

3. If we stick to the extraterrestrial explanation of UFOs, then the Zoo hypothesis prevails. It goes like this: even if most colonisation waves by ETIs destroy potentially habitable planets, we could only find ourselves in a world where the ETI decided not to destroy life but to observe it. Thus we live in some kind of nature reserve. However, its ETI owners sell tickets for short trips to Earth to tourists from various other civilizations. This explains the variety of observed craft and their irrational behaviour. But this explanation is also too anthropomorphic.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T10:51:12.688Z · score: 5 (2 votes) · LW · GW

One possible answer is that most of our observational systems are highly specialised for their targets, and they ignore any other objects as noise. To record UFOs – if they do exist and can be recorded – one needs a network of telescopes with very wide fields of view, or many airplanes with different sensors. The latter is similar to a squadron of military aircraft.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T10:44:12.188Z · score: 3 (3 votes) · LW · GW

If we take the NY Times article as a true report, it is a strong argument against an American "secret, experimental, or stealth aircraft", as they would not risk crashing it by flying between two airplanes in tight formation. But other explanations are possible, like disinformation.

Comment by avturchin on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T10:41:07.303Z · score: 4 (4 votes) · LW · GW

It is difficult to take even a photo of the Moon with a smartphone: https://www.popsci.com/how-to-photograph-the-moon#page-2


What should rationalists think about the recent claims that air force pilots observed UFOs?

2019-05-27T22:02:49.041Z · score: -3 (12 votes)
Comment by avturchin on Are you in a Boltzmann simulation? · 2019-05-21T20:49:48.449Z · score: 2 (1 votes) · LW · GW

There is a possible type of causal BBs: a process whose bare causal skeleton is similar to the causal structure of an observer-moment (which itself has, to a first approximation, the causal structure of a convolutional neural net). In that case, there is causality inside just one OM.

Comment by avturchin on Are you in a Boltzmann simulation? · 2019-05-21T07:40:48.693Z · score: 2 (1 votes) · LW · GW

Thanks, I have seen them, but I have yet to make the connection between that topic and Boltzmann brains.

Comment by avturchin on Are you in a Boltzmann simulation? · 2019-05-18T19:37:46.244Z · score: 4 (2 votes) · LW · GW

Did you publish it? link?

Simulation Typology and Termination Risks

2019-05-18T12:42:28.700Z · score: 8 (2 votes)
Comment by avturchin on You are (mostly) a simulation. · 2019-05-17T21:29:27.312Z · score: 2 (1 votes) · LW · GW

if you are still here, check this: https://arxiv.org/abs/1712.01826

It seems you were right after all about dust theory.

Also, is there any way to see your original version of the post?

Comment by avturchin on Integrating disagreeing subagents · 2019-05-14T16:44:56.304Z · score: 2 (1 votes) · LW · GW

Interestingly, an agent with a unitary utility function may still find itself in a situation similar to akrasia if it can't make a choice between two lines of action which have almost equal weights. This was described by Lamport as the Buridan's ass problem, and he showed that the problem doesn't have easy solutions and causes real-life accidents.

Another part of the problem is that if I have to make a choice between equal alternatives – and situations of choice are always choices between seemingly equal alternatives, otherwise there would be no need to make a choice – then I have to search for additional evidence about which alternative is better, and as a result my choice is eventually decided by a very small piece of evidence. This makes me vulnerable to adversarial attacks by, say, sellers, who can push me into a choice by saying "there is a 5 per cent discount today."
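
A toy sketch of this vulnerability (my own illustration, with made-up numbers): when two options are almost tied, an arbitrarily small nudge, such as an announced discount, fully determines the argmax.

```python
# Toy illustration (made-up numbers): with almost-equal options, an arbitrarily
# small piece of "evidence" - possibly supplied by an adversary - decides the choice.
def choose(utilities):
    return max(utilities, key=utilities.get)   # pick the option with the highest utility

utilities = {"seller A": 100.0, "seller B": 100.1}
print(choose(utilities))                       # "seller B"

utilities["seller A"] += 100.0 * 0.05          # seller A announces a "5% discount today"
print(choose(utilities))                       # flips to "seller A"
```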

Comment by avturchin on On immortality · 2019-05-11T08:11:14.107Z · score: 2 (1 votes) · LW · GW

A new article on the topic:

"Boltzmannian Immortality" by Christian Loew

https://core.ac.uk/download/pdf/157868191.pdf

Comment by avturchin on Interpretations of "probability" · 2019-05-09T20:10:05.834Z · score: 2 (1 votes) · LW · GW

There are also two schools of Bayesian thinking: "It is popular to divide Bayesians into two main categories, “objective” and “subjective” Bayesians. The divide is sometimes made formal, there are conferences labelled as one but not the other, for example.

A caricature of subjective Bayes is that all probabilities are just opinion, and the best we can do with an opinion is make sure it isn’t self contradictory, and satisfying the rules of probability is a way of ensuring that. A caricature of objective Bayes is that there exists a correct probability for every hypothesis given certain information, and that different people with the same information should make exactly the same probability judgments."

Comment by avturchin on Claims & Assumptions made in Eternity in Six Hours · 2019-05-09T12:31:20.324Z · score: 0 (2 votes) · LW · GW

The main assumption, imho, is that if we put in very large capabilities, we will get out very large achievements. That is, if we assume the possibility of superintelligence and nanotech, we get the possibility of quick intergalactic travel.

However, arguing from infinities can be problematic, as it can produce contradictory results, as is well known in philosophy: https://en.wikipedia.org/wiki/Omnipotence_paradox

Comment by avturchin on What are some good examples of incorrigibility? · 2019-05-09T12:19:41.038Z · score: 4 (2 votes) · LW · GW

Safety systems must be foolproof. (I am now in the airport and am about to board a Russian plane which will fly almost the same route as the one which had a catastrophic fire a few days ago.)

Comment by avturchin on What are some good examples of incorrigibility? · 2019-05-08T07:49:05.938Z · score: 6 (3 votes) · LW · GW

One more example from aviation safety: a pilot let his son hold the control yoke, knowing that it was disengaged and the plane was controlled by the autopilot. However, when the yoke was turned about 15 degrees, the autopilot disengaged, as it had a "corrigibility" feature of yielding to manual input, and the plane crashed.

https://en.wikipedia.org/wiki/Aeroflot_Flight_593

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-05-04T13:46:39.707Z · score: 2 (1 votes) · LW · GW

Thanks, will correct in my working draft.

Comment by avturchin on What are some good examples of incorrigibility? · 2019-04-30T20:03:13.857Z · score: 4 (2 votes) · LW · GW

In Chernobyl, the boron control rods which were supposed to shut down the reactor had graphite tips (graphite displaces water and locally increases reactivity). This was done to make the reactor more controllable, so that smaller changes in rod position would produce larger changes in reactivity. However, it (obviously) didn't work well when they tried to use these rods to stop the near-critical reactor: as the rods entered the core, they first produced a jump in total reactivity, which contributed to the explosion. This is how I remember the story.

TL;DR: A system intended to increase corrigibility contributed to runaway changes during a nuclear accident.

Comment by avturchin on Reference request: human as "backup" · 2019-04-29T18:38:39.019Z · score: 3 (2 votes) · LW · GW

I wrote something like this in "Message to any future AI", where I suggest to the future AI that humans may have some currently unknown instrumental value, and thus it is not wise to kill them now (this is at the end of the post).

Also, in Global solution to AI safety (again near the end) I look at solutions where the AI consists of humans, and there is First human upload as AI Nanny, whose title is self-explanatory.

You may also have read The Age of Em by Hanson.

Comment by avturchin on Open Thread April 2019 · 2019-04-28T05:40:50.002Z · score: 2 (1 votes) · LW · GW

"preference "my decisions should be mine" - and many people seems to have it"

I think this could be explained by social games. A person whose decisions are unmovable is more likely to dominate eventually, and by demonstrating inflexibility a person signals higher status. Also, such a person escapes any possible exploits by preemptively playing the game of chicken.

Comment by avturchin on Open Thread April 2019 · 2019-04-27T07:53:07.412Z · score: 2 (1 votes) · LW · GW

If I have the preference "my decisions should be mine" - and many people seem to have it - then letting the taxi driver decide is not OK.

There are "friends" who claim to have the same goals as me, but later turns out that they have hidden motives.

Comment by avturchin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T12:06:04.719Z · score: 2 (1 votes) · LW · GW

But can a 10,000-IQ AI cheat a 1,000-IQ AI? If yes, only equally powerful AIs will cooperate.

Comment by avturchin on Open Thread April 2019 · 2019-04-26T11:19:33.997Z · score: 2 (1 votes) · LW · GW

Sometimes I overupdate on the evidence.

For example, I have an equal preference between going to my country house for the weekend and staying home, 50 to 50. I decide to go, but then I find that a taxi would take too long to arrive, and this shifts the expected utility toward the stay-home option (51 to 49). I decide to stay, but later I learn that the sakura have started to bloom, and I decide to go again (52 to 48), but then I find that a friend has invited me somewhere in the evening.

This has two negative results. First, I spend half a day meandering between options, like Buridan's ass.

The second consequence is that I hand power over my final decisions to small random events around me; moreover, a potential adversary could manipulate my decisions by providing me with small pieces of evidence which favour his interests.

Other people I know stick rigorously to any decision they have made, no matter what, and ignore any incoming evidence. This often turns out to be the winning strategy compared to the flexible strategy of constantly updating expected utility.

Anyone have a similar problem, or a solution?

Comment by avturchin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T09:39:42.874Z · score: 1 (3 votes) · LW · GW

This may fall into the following type of reasoning: "Superintelligent AI will be superhuman in any human capability X. Humans can cooperate. Thus SAI will have a superhuman capability to cooperate."

The problem with such a conjecture is that if we take an opposite human quality, not-X, SAI will also have a superhuman capability in it. For example, if X = cheating, then superintelligent AI will have a superhuman capability for cheating.

However, an SAI can't simultaneously be a super-cooperator and a super-cheater.

Comment by avturchin on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T09:34:04.945Z · score: 0 (2 votes) · LW · GW

AI could have a superhuman capability to find win-win solutions and sell it as a service to humans in the form of market arbitrage, courts, or partner matching (e.g. Tinder).

Based on this win-win-solution-finding capability, AI would not have to "take over the world" - it could negotiate its way to global power, and everyone would win because of it (at least, initially).

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-23T16:40:39.212Z · score: 3 (2 votes) · LW · GW

There are some troubles in creating a full and safe list of such human preferences, and there was an idea that AI will be capable of learning actual human preferences by observing human behaviour or by other means, like inverse reinforcement learning.

This post of mine basically shows that value learning will also run into trouble, as there are no real human values, so some other way to create such a list of preferences is needed.

How to align the AI with existing preferences presented in human language is another question. Yudkowsky wrote that without taking into account the complexity of value, we can't make safe AI, as it would wrongly interpret short commands without knowing the context.

Comment by avturchin on 1960: The Year The Singularity Was Cancelled · 2019-04-23T08:51:48.522Z · score: 3 (3 votes) · LW · GW

Interestingly, in 1960 von Foerster's law stopped working, but in 1965 Moore's law was born.

And in the 2010s Moore's law is dying, but OpenAI's observed law of growth in training compute, with a roughly 3-month doubling time, has appeared.

This should not be surprising from the point of view of Spencer's law of progress (1857), which says that the focal point of progress is constantly shifting to different domains, and only inside this focal point does exponential progress happen. (Note that this interpretation of Spencer's law comes from a university lecture of mine, not from reading his book, and may be my interpretation of my teacher's interpretation, but it seems correct if we look at earlier explosions of innovation in different fields, like aviation.)

I think that the focal point of progress is shifting towards self-improving AI: that is, the focal point where growth of productivity increases the growth of productivity is moving from material supporting systems, like population, to intelligent systems, like computers, and later to their programs.

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-23T07:35:56.231Z · score: 4 (2 votes) · LW · GW

I got the idea of the table of contents as the primary reading experience from Drexler's CAIS report, where each subsection's name is a short declarative sentence, like "I.6 The R&D automation model distinguishes development from functionality."

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-22T21:02:35.915Z · score: 5 (2 votes) · LW · GW

It occurred to me that, for a human being, there is no way not to make a choice between different preferences: at every next moment of time I am doing something, even if it is just continuing to think or indulging in procrastination. I either eat or run, so the conflict is always resolved.

However, an interesting thing is that sometimes a person tries to do two things simultaneously, for example when the content of speech and its tone do not match. This has happened to me – and I had to explain that only the content mattered and the tone should be ignored.

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-22T20:18:29.742Z · score: 2 (1 votes) · LW · GW

Terminal value monism may be possible as a pure philosophical model, but real biological humans have more complex motivational systems.

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-22T16:58:25.697Z · score: 2 (1 votes) · LW · GW

A good description of why any single value may not be good is in https://www.academia.edu/173502/A_plurality_of_values

I am sure you have more than one value - for example, the best way to prevent even the slightest possibility of suffering is suicide, but since you are alive, you evidently also value being alive. Moreover, I think that claims about values are not values - they are just good claims.

The real case of "one value person" are maniacs: that is a human version of a paperclipper. Typical examples of such maniacs are people obsessed with sex, money, or collecting of some random things; also drug addicts. Some of them are psychopaths: they look normal and are very effective, but do everything just for one goal.

Thanks for your comment - I will update the conclusion so that the bullet points are linked to the parts of the text which explain them.

Comment by avturchin on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-22T09:49:44.659Z · score: 2 (1 votes) · LW · GW

Before "being something", values need to actually exist as some kind of object. Non-existing object can't have properties. For example, Sun exists, and thus we can discuss its mass. Zeus doesn't exist, and it makes any discussion about his mass futile.

AI Alignment Problem: “Human Values” don’t Actually Exist

2019-04-22T09:23:02.408Z · score: 25 (11 votes)
Comment by avturchin on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T19:24:28.896Z · score: 2 (1 votes) · LW · GW

Another argument for ignoring "total measure" comes from the many-worlds interpretation: as the world branches, my total measure should decline by many orders of magnitude every second, but this doesn't affect my decision-making.

Comment by avturchin on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T18:53:03.610Z · score: 0 (2 votes) · LW · GW

If I care only about the relative share of the outcomes, the total resurrection probability doesn't matter. E.g. if there are 1,000,000 timelines, I will be resurrected in 1,000 of them, and 700 of those will be s-risk worlds, then my P(in an s-risk world | alive in the future) = 700/1000 = 0.7.

If I care about the total share of worlds (including the remaining 999,000 timelines), I should choose absurd actions which increase my total share in the world, for example forgetting things and merging with other timelines; more here.

Comment by avturchin on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T18:46:49.593Z · score: 4 (2 votes) · LW · GW

If we assume that the total share matters, we get some absurd capabilities to manipulate that share by selectively forgetting things, thus merging with our copies in different worlds and increasing our total share. I tried to explain this idea here. So only the relative share matters.

Comment by avturchin on How do S-Risk scenarios impact the decision to get cryonics? · 2019-04-21T16:50:32.638Z · score: 6 (3 votes) · LW · GW

Paradoxically, if a person doesn't sign up for cryonics and expresses the desire not to be resurrected by other means, say resurrectional simulation, she will be resurrected only in those worlds where the superintelligent AI doesn't care about her decisions. Many of these worlds are s-risk worlds.

Thus, by not signing up for cryonics, she increases the share of futures in which she is hostilely resurrected within the total share of her futures.

Comment by avturchin on No Safe AI and Creating Optionality · 2019-04-17T14:26:38.988Z · score: 1 (2 votes) · LW · GW

I hate to say it and do not endorse it, but only a large-scale nuclear war could stop AI development.

Comment by avturchin on Liar Paradox Revisited · 2019-04-17T11:31:49.159Z · score: 2 (1 votes) · LW · GW

A similar conjecture is: "An omniscient Omega tells you that what you see is an illusion."

It could be interpreted as: a) Omega is real, it told the truth, and thus what I see is an illusion; b) Omega is not real, and thus Omega is an illusion, no matter what it says. In both cases I see an illusion.

This paradox appears in discussions of the Simulation Argument in the following form. Some people object to the SA: if I am in a simulation, I can't draw any conclusions about the outside world, and thus I can't use estimates of computing power to prove future AI capabilities, and thus the SA does not work.

However, as it is already assumed that you are in a simulation, the SA's conclusion is already granted, no matter what you can or cannot conclude; this is similar to branch (b) of the Omega paradox above.

Comment by avturchin on Scrying for outcomes where the problem of deepfakes has been solved · 2019-04-15T13:34:18.160Z · score: 2 (1 votes) · LW · GW

Maybe use something like an externally projected cryptographic timestamp, e.g. projected onto the scene by the photo flash?

Comment by avturchin on Scrying for outcomes where the problem of deepfakes has been solved · 2019-04-15T09:39:38.055Z · score: 1 (5 votes) · LW · GW

Yes. But maybe instead of physical sealing we could use a blockchain which registers the time and geotag of the recording?
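
A minimal sketch of the commitment step (my own illustration; the function name and metadata fields are hypothetical): hash the raw recording together with its time and geotag, and publish only the resulting digest to a blockchain or timestamping service. The hard parts, proving the metadata itself is genuine and actually anchoring the digest on-chain, are not shown.

```python
# Sketch (hypothetical): compute a digest that binds a recording to its time and geotag,
# so that the digest could later be registered on a blockchain or timestamping service.
import hashlib
import json

def recording_commitment(video_path: str, timestamp: str, lat: float, lon: float) -> str:
    h = hashlib.sha256()
    with open(video_path, "rb") as f:                     # hash the raw video bytes
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    metadata = json.dumps({"time": timestamp, "lat": lat, "lon": lon}, sort_keys=True)
    h.update(metadata.encode("utf-8"))                    # bind the metadata to the same digest
    return h.hexdigest()                                  # this digest is what would go on-chain

# Hypothetical usage:
# digest = recording_commitment("clip.mp4", "2019-04-15T09:39:38Z", 55.75, 37.62)
```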

Will superintelligent AI be immortal?

2019-03-30T08:50:45.831Z · score: 9 (4 votes)

What should we expect from GPT-3?

2019-03-21T14:28:37.702Z · score: 11 (5 votes)

Cryopreservation of Valia Zeldin

2019-03-17T19:15:36.510Z · score: 20 (7 votes)

Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World

2019-03-11T10:30:58.676Z · score: 6 (2 votes)

Do we need a high-level programming language for AI and what it could be?

2019-03-06T15:39:35.158Z · score: 6 (2 votes)

For what do we need Superintelligent AI?

2019-01-25T15:01:01.772Z · score: 14 (8 votes)

Could declining interest to the Doomsday Argument explain the Doomsday Argument?

2019-01-23T11:51:57.012Z · score: 7 (8 votes)

What AI Safety Researchers Have Written About the Nature of Human Values

2019-01-16T13:59:31.522Z · score: 42 (11 votes)

Reverse Doomsday Argument is hitting preppers hard

2018-12-27T18:56:58.654Z · score: 9 (7 votes)

Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability

2018-12-15T21:32:55.180Z · score: 23 (9 votes)

Quantum immortality: Is decline of measure compensated by merging timelines?

2018-12-11T19:39:28.534Z · score: 10 (8 votes)

Wireheading as a Possible Contributor to Civilizational Decline

2018-11-12T20:33:39.947Z · score: 4 (2 votes)

Possible Dangers of the Unrestricted Value Learners

2018-10-23T09:15:36.582Z · score: 12 (5 votes)

Law without law: from observer states to physics via algorithmic information theory

2018-09-28T10:07:30.042Z · score: 14 (8 votes)

Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse

2018-09-27T10:09:56.182Z · score: 4 (3 votes)

Quantum theory cannot consistently describe the use of itself

2018-09-20T22:04:29.812Z · score: 8 (7 votes)

[Paper]: Islands as refuges for surviving global catastrophes

2018-09-13T14:04:49.679Z · score: 12 (6 votes)

Beauty bias: "Lost in Math" by Sabine Hossenfelder

2018-09-05T22:19:20.609Z · score: 9 (3 votes)

Resurrection of the dead via multiverse-wide acausual cooperation

2018-09-03T11:21:32.315Z · score: 20 (10 votes)

[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI

2018-08-28T21:32:16.717Z · score: 12 (7 votes)

Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence

2018-07-25T17:12:32.442Z · score: 13 (5 votes)

[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent)

2018-07-14T09:46:44.968Z · score: 11 (4 votes)

“Cheating Death in Damascus” Solution to the Fermi Paradox

2018-06-30T12:00:58.502Z · score: 13 (8 votes)

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks

2018-06-23T13:31:13.641Z · score: 5 (4 votes)

[Paper]: Classification of global catastrophic risks connected with artificial intelligence

2018-05-06T06:42:02.030Z · score: 4 (1 votes)

Levels of AI Self-Improvement

2018-04-29T11:45:42.425Z · score: 16 (5 votes)

[Preprint for commenting] Fighting Aging as an Effective Altruism Cause

2018-04-16T13:55:56.139Z · score: 24 (8 votes)

[Draft for commenting] Near-Term AI risks predictions

2018-04-03T10:29:08.665Z · score: 19 (5 votes)

[Preprint for commenting] Digital Immortality: Theory and Protocol for Indirect Mind Uploading

2018-03-27T11:49:31.141Z · score: 29 (7 votes)

[Paper] Surviving global risks through the preservation of humanity's data on the Moon

2018-03-04T07:07:20.808Z · score: 15 (5 votes)

The Utility of Human Atoms for the Paperclip Maximizer

2018-02-02T10:06:39.811Z · score: 8 (5 votes)

[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale

2018-01-14T10:29:49.926Z · score: 11 (3 votes)

Paper: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence

2018-01-04T14:21:40.945Z · score: 12 (3 votes)

The map of "Levels of defence" in AI safety

2017-12-12T10:45:29.430Z · score: 16 (6 votes)

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

2017-11-28T15:39:37.000Z · score: 0 (0 votes)

Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]

2017-11-25T11:28:04.420Z · score: 16 (9 votes)

Military AI as a Convergent Goal of Self-Improving AI

2017-11-13T12:17:53.467Z · score: 17 (5 votes)

Military AI as a Convergent Goal of Self-Improving AI

2017-11-13T12:09:45.000Z · score: 0 (0 votes)

Mini-conference "Near-term AI safety"

2017-10-11T14:54:10.147Z · score: 5 (4 votes)

AI safety in the age of neural networks and Stanislaw Lem 1959 prediction

2016-02-06T12:50:07.000Z · score: 0 (0 votes)