Comment by avturchin on Quantifying anthropic effects on the Fermi paradox · 2019-02-17T09:43:14.742Z · score: 1 (1 votes) · LW · GW

Thanks for the link to Paul's comment.

My objection was that the highest population and the highest population density don't necessarily correlate, even on Earth. For example, in India the highest population density may be in Bombay, where around 20 million people live, but most people (1 billion) live in rural areas with lower population density. It means that anthropic reasoning can't be used to estimate density without some prior consideration of the density distribution and the size of low-populated areas.

Comment by avturchin on Weird question: could we see distant aliens? · 2019-02-16T22:23:43.943Z · score: 1 (1 votes) · LW · GW

It could be a drawing, but one consisting of quasars, not of individual stars. A cube with a side of 1 billion ly could contain a few million galaxies, so the drawing's pattern could be rather complex and provide tens or hundreds of kilobytes of information. Alternatively, the drawing could be a rather simple beacon, like a circle.
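
A rough back-of-the-envelope check of that capacity estimate (a minimal sketch of my own; the galaxy count is just the assumption above):

```python
galaxies = 3_000_000     # "a few million" galaxies in the cube, as assumed above
bits = galaxies * 1      # at most ~1 bit per galaxy: it either is or is not part of the pattern
print(bits / 8 / 1024)   # ~366 KB upper bound, i.e. hundreds of kilobytes
```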

Comment by avturchin on Quantifying anthropic effects on the Fermi paradox · 2019-02-15T13:32:33.073Z · score: 3 (2 votes) · LW · GW

Really interesting article; it should be on arXiv.

A few notes:

I expected to find here a link to Grace's SIA Doomsday argument. She uses the same logic as you, but then turns to estimating the probability that the Great Filter is ahead of us. It looks like you ignore the possible high extinction rate implied by SIA (?). Also, the Universal DA by Vilenkin could be mentioned.

Another question, which is interesting to me, is how all this affects the possibility of a SETI-attack - sending malicious messages at the speed of light over intergalactic distances.

The third idea I had related to this is the possibility that "bad fine-tuning" of the universe will outweigh the expected gain in civilisation density from SIA. For example, if a universe is perfectly fine-tuned, every star will have a planet with life; however, this requires almost unbelievable fidelity of parameter tuning. More probable is the set of universes where the fine-tuning is not so good and habitable planets are very rare.

Now the question arises: where should I find myself - in the super-fine-tuned universe, or in the set of all badly-fine-tuned universes? The answer depends on the number of parameters in the fine-tuning, or the number of dimensions of the functional space. I will illustrate this with the following example: the highest density is in the center of the Sun (150 g/cm3); however, an average atom inside the Sun is located rather far from the center, at much lower densities (1.4 g/cm3), that is, a hundred times less.

This is the key possible objection to your argument: the total number of observers and the total density of observers may not correlate, if we take into account the existence of a very large set of universes with very small densities. I think it could be addressed by analysing the actual distribution of the parameters important for fine-tuning.

Comment by avturchin on How does OpenAI's language model affect our AI timeline estimates? · 2019-02-15T09:51:01.373Z · score: 2 (5 votes) · LW · GW

It shortens expected AI timelines, not only because it is such a great achievement, but also because it demonstrates that a large part of human thinking could be just generating plausible continuations of the input text.

Comment by avturchin on Who owns OpenAI's new language model? · 2019-02-14T19:58:07.216Z · score: -3 (8 votes) · LW · GW

The model is trained on web pages of differing (I think) legal copyright status. Most web page owners didn't provide a license for their content to be used to train a neural net. So the model is a massive copyright violation and could be seized by the government. However, the model is nothing without the supporting scientists, so it can't be used by an outsider.

Comment by avturchin on Humans interpreting humans · 2019-02-13T21:15:40.693Z · score: 1 (1 votes) · LW · GW

The human ability to model other humans' preferences may be evidence that alignment is possible: we evolved to present and predict each other's (and our own) goals. So our goals are expressed in ways which could be reconstructed by another agent.

However, "X is not about X" could be true here. What humans think to be their "goals" or "rationality", could be not it, but just some signals. For example, being angry on someone and being the one on whom someone is angry is very clear situation for both humans, but what it actually mean for outside non-human observer? Is it a temporary tantrum of a friend, or a precommitment to kill? Is it a joke, a theatre, an expression of love or an act of war?

Comment by avturchin on Learning preferences by looking at the world · 2019-02-13T20:01:41.708Z · score: 5 (3 votes) · LW · GW

How to clean streets after snow? Each snowflake is unique and will be destroyed.

Comment by avturchin on Learning preferences by looking at the world · 2019-02-13T10:36:56.042Z · score: 3 (2 votes) · LW · GW

But "ordered and low-entropy" objects could be also natural? E.g. spherical planets, ice crystals, anthills?

Comment by avturchin on Would I think for ten thousand years? · 2019-02-12T10:30:59.468Z · score: 1 (3 votes) · LW · GW

Also, thinking about one's own meaning of life is an important part of human activity, and if we delegate this to an AI, we will destroy a significant part of human values. In other words, if AI makes philosophers unemployed, they will not be happy. Alternatively, we will not accept the AI's answer and will continue to search for the ultimate goal.

One more thought: the more one argues for the creation of one's own copies for some task, the more one should suspect that one is already in such a task-solving simulation. Welcome to our value-solving matrix!

Comment by avturchin on Would I think for ten thousand years? · 2019-02-11T22:17:32.418Z · score: 8 (7 votes) · LW · GW

What worries me more is not value drift, but the hardening of values in the wrong position.

We can see examples of people whose values formed during their youth, and these values didn't evolve at an older age but instead became a rigid, self-supporting system, not connected with reality. These old-schoolers don't have any new wisdom to tell.

Obviously, brain aging plays a role here, but it is not only cellular aging, but also "informational aging", that is, in particular, the hardening of Pavlovian reflexes between thoughts. Personally, I found that I have the same thought in my mind every time I start eating, which is annoying (it is not related to food: it is basically a number).

Comment by avturchin on Some Thoughts on Metaphilosophy · 2019-02-11T08:23:39.465Z · score: 2 (2 votes) · LW · GW

Creating AI to solve hard philosophical problems is like passing a hot potato from the right hand to the left.

For example, I want to solve the problem of qualia. I can't solve it myself, but maybe I can create a superintelligent AI which will help me to solve it? Now I start working on AI, and soon encounter the control problem. Trying to solve the control problem, I would have to specify the nature of human values, and soon I will find the need to say something about the existence and nature of qualia. Now the circle is closed: I have the same problem of qualia, but packed inside the control problem. If I make some assumptions about what qualia should be, they will probably affect the AI's final answer.

However, I still could use some forms of AI to solve the qualia problem: if I use Google search, I could quickly find all the relevant articles, identify the most cited and the newest, and maybe create an argument map. This is where Drexler's CAIS may help.

Comment by avturchin on The Argument from Philosophical Difficulty · 2019-02-10T08:55:23.505Z · score: 1 (1 votes) · LW · GW

A possible solution: we decide not to solve philosophical problems in an irreversible way (e.g. "tiling the universe with orgasmatronium is good") - which obviously creates astronomical opportunity costs, but also prevents the astronomical risks of wrong solutions. Local agents solve different problems locally in different periods of time (the same way a normal human goes through many philosophical systems and beliefs during his life).

Comment by avturchin on Some Thoughts on Metaphilosophy · 2019-02-10T08:40:17.324Z · score: 7 (4 votes) · LW · GW

All else equal, I prefer an AI which is not capable of philosophy, as I am afraid of the completely alien conclusions it could come to (e.g. that insects are more important than humans).

Moreover, I am skeptical that going to the meta-level simplifies the problem to the point where it becomes solvable by humans (the same goes for meta-ethics and a theory of human values). For example, if someone says that he is not able to understand math, but will instead work on meta-mathematical problems, we would be skeptical about his ability to contribute. Why would the meta-level be simpler?

Comment by avturchin on HCH is not just Mechanical Turk · 2019-02-09T09:40:37.297Z · score: 1 (1 votes) · LW · GW

I am afraid that HCH could be affected by a Chinese-whispers-like situation: that is, the accumulation of errors because each person has a slightly different understanding of the meaning of words, especially in the Mechanical Turk scenarios.

Comment by avturchin on The Hamming Question · 2019-02-09T09:23:26.347Z · score: 1 (1 votes) · LW · GW

Sometimes the most important question carries less weight (say, 20 percent of the total) than the sum of the less important questions (say, 8x10=80 for 8 smaller problems). For example, if everybody works on AI safety, some smaller x-risks could be completely neglected.

Comment by avturchin on Test Cases for Impact Regularisation Methods · 2019-02-06T23:32:57.479Z · score: 0 (3 votes) · LW · GW

My favorite cases are those where ambiguity of the goal specification results in unintended behaviour:

1. A robot is asked to remove all ball-like objects from the room, and it cuts off the head of its owner.

2. A robot is asked to bring coffee to bed, and it pours the coffee into the bed.

These examples are derived from short joke stories which I heard elsewhere, but they highlight the inherent ambiguity of any goal specified in natural language. Thus, humor may be a source of intuitions about situations where such ambiguities could arise.

Comment by avturchin on Greatest Lower Bound for AGI · 2019-02-06T13:57:08.147Z · score: 1 (1 votes) · LW · GW

Here we use the assumption that the probability of AI creation is distributed uniformly along the interval of AI research - which is obviously false, as it should grow towards the end, maybe exponentially. If we assume that the field is doubling, say, every 5 years, Copernican reasoning tells us that if I am randomly selected from the members of this field, the field will end after the next doubling with something like 50 per cent probability, and with 75 per cent probability after 2 doublings.

TL;DR: anthropics + exponential growth = AGI by 2030.
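
A minimal sketch of this arithmetic (my own illustration; the 5-year doubling time is only the assumption above): if cumulative researcher-years double every d years and I am a uniformly random researcher-year, then P(AGI within t years) = 1 - 2^(-t/d).

```python
def p_agi_within(t_years, doubling_time=5.0):
    """P(the field ends within t_years), assuming cumulative researcher-years
    keep doubling every `doubling_time` years until AGI and 'I' am a uniformly
    random researcher-year from the field's whole history."""
    return 1 - 2 ** (-t_years / doubling_time)

for t in (5, 10, 20):
    print(t, round(p_agi_within(t), 2))   # 5 -> 0.5, 10 -> 0.75, 20 -> 0.94
```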

Comment by avturchin on When to use quantilization · 2019-02-06T12:28:58.573Z · score: 3 (3 votes) · LW · GW

Someone may be interested in what "quantilization" means in more layman terms; it is explained in section 2 of Jessica's article.

Comment by avturchin on Greatest Lower Bound for AGI · 2019-02-06T11:44:38.093Z · score: 1 (1 votes) · LW · GW

Gott's equation can be found on the wiki, and the main idea is that if I am randomly observing some external process, its age can be used to estimate its future time of existence, as most likely I am observing it somewhere in the middle of its existence. Gott himself used this logic to predict the fall of the Berlin Wall when he was a student, and the Wall indeed fell within the predicted time frame, by which point Gott was already a prominent scientist; his article about the method was published in Nature.
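
For reference, the standard form of Gott's delta-t argument: if the moment of observation is uniformly distributed over the total lifetime of the process, then with confidence $c$

$$\frac{1-c}{1+c}\,t_{\text{past}} < t_{\text{future}} < \frac{1+c}{1-c}\,t_{\text{past}},$$

so in particular $P(t_{\text{future}} < t_{\text{past}}) = 1/2$, and for $c = 0.95$ the future duration lies between $t_{\text{past}}/39$ and $39\,t_{\text{past}}$.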

If we account for the exponential growth of AI research and assume that I am randomly chosen from all AI researchers, the end will be much nearer - but all of this becomes more speculative, as accounting for AI winters dilutes the prediction, etc.

Comment by avturchin on Greatest Lower Bound for AGI · 2019-02-06T10:25:08.770Z · score: 2 (2 votes) · LW · GW

Oops, yes.

Comment by avturchin on Greatest Lower Bound for AGI · 2019-02-06T10:23:30.965Z · score: 1 (1 votes) · LW · GW

BTW, this particular paper is wrong, as can be seen from his bet about predicting the ages of old dogs. The Doomsday argument is statistical, so it can't be refuted by nitpicking an example with a specific age.

Comment by avturchin on Greatest Lower Bound for AGI · 2019-02-05T21:29:15.970Z · score: 6 (4 votes) · LW · GW

2019, based on anthropic reasoning. We are randomly located between the beginning of AI research in 1956 and the moment of AGI. 1956 was 62 years ago, which implies a 50 per cent probability of creating AGI in the next 62 years, according to Gott's equation. This is roughly equal to 50/62 = 0.81 per cent yearly probability.
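
A minimal sketch of that arithmetic (my own illustration of the numbers above):

```python
t_past = 62               # years of AI research since 1956
p_within = 0.5            # Gott: P(AGI within the next 62 years) = 0.5
print(p_within / t_past)  # ~0.008 -> the rough ~0.8% yearly probability quoted above
# A constant-hazard reading of the same claim gives a similar figure:
h = 1 - (1 - p_within) ** (1 / t_past)
print(h)                  # ~0.011 -> ~1.1% per year
```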

Comment by avturchin on What are some of bizarre theories based on anthropic reasoning? · 2019-02-05T08:40:26.872Z · score: 1 (1 votes) · LW · GW

We are not currently in an s-risk situation, so it is not a typical state of affairs.

Comment by avturchin on What are some of bizarre theories based on anthropic reasoning? · 2019-02-04T20:32:23.207Z · score: 8 (5 votes) · LW · GW

I have an urge to create a complete list:

Immortality is impossible.

AI with IQ significantly higher than human is impossible, Arxiv

We will kill aliens, Arxiv

S-risks are rare.

You could manipulate probabilities by forgetting things, flux universe.

Earth is a typical civilization in the whole multiverse. Nothing interesting anywhere.

Climate change could be a much worse existential risk because of observational selection effects and the underestimated fragility of our environment.

We could cure past suffering via some advanced acausal trade, as well as resurrect the dead.

You are now in the middle of your life. You will not die in the next second (reverse DA).

We could blackmail any future AI using reverse RB and make it safe.

We could use a random strategy to escape the Fermi paradox.

Comment by avturchin on (Why) Does the Basilisk Argument fail? · 2019-02-04T18:36:05.952Z · score: 1 (1 votes) · LW · GW

If Omega offers you a blackmail deal, it has calculated that there is some probability that you will accept it.

Comment by avturchin on (Why) Does the Basilisk Argument fail? · 2019-02-04T18:34:23.654Z · score: 2 (2 votes) · LW · GW

In real life, you can reverse blackmail by saying: "Blackmail is a serious felony, and you could get a year in jail in the US for it, so now you have to pay me for not reporting the blackmail to the police." (I don't recommend this in real life, as you both will be arrested, but such an aggressive posture may stop the blackmail.)

In the same way, acausal blackmail by an AI could be reversed: you can threaten the AI that you have precommitted to create thousands of other AIs which will simulate this whole setup and will punish the AI if it tries to torture any simulated being. This could be used to make a random paperclipper behave as a benevolent AI; the idea was suggested by Rolf Nelson. I analysed it in detail in the text.

Comment by avturchin on (Why) Does the Basilisk Argument fail? · 2019-02-04T09:39:41.515Z · score: 4 (3 votes) · LW · GW

I think that RB fails because of human laziness – or, more generally speaking, because human psychology can't process acausal blackmail. Thus nobody changes his or her investment in beneficial AI creation based on RB.

However, I have met two people (independent of each other and of the RB idea) who hoped that a future AI will reward them for their projects which increase the probability of the AI's creation, which is basically the same idea presented in more human language.

Comment by avturchin on How to stay concentrated for a long period of time? · 2019-02-03T14:53:22.521Z · score: 2 (2 votes) · LW · GW

I am bad at concentrating, but good at getting things done by a deadline. So I think that concentration itself is not so important, if you can do the work. I get distracted every 5 minutes to check Facebook, news, etc. I interpret this as my "pomodoros" being very short, so I just need to rest every 5 minutes.

Comment by avturchin on How to notice being mind-hacked · 2019-02-03T10:30:56.220Z · score: 1 (1 votes) · LW · GW

The problem is that a "good" model is a "viral" model, not a predictive model. Being predictive helps a model to be more viral, but it is not necessary.

Comment by avturchin on How to notice being mind-hacked · 2019-02-03T09:28:19.941Z · score: 11 (7 votes) · LW · GW

This seems to be true, but it changes the nature of truth: truth is just an effective hack that you come to believe in.

Side note: I once took acid, and after it I became hyper-suggestible. A person near me said that he had an allergy to strawberries, and I started to have panic attacks when I ate strawberries, despite knowing that I don't have this allergy.

Comment by avturchin on Building up to an Internal Family Systems model · 2019-02-01T09:33:04.676Z · score: 1 (1 votes) · LW · GW

I don't think its rational part is based on any "morphic fields". If a person thinks that her mother is a god, her father was a devil, and suppresses any thoughts about the grandfather, this is the expected (but damaged) family structure imprinted in her brain, and she will repeat it again when she tries to build her own relationships. The best way to learn more about family constellations is just to try one in a local group - at least in my case, it helped me resolve a long conflict with my mother. A less effective way may be to read Bert Hellinger's early books: they provide a theory, but without some experience it may look a little strange.

Comment by avturchin on Building up to an Internal Family Systems model · 2019-01-31T20:52:45.609Z · score: 4 (2 votes) · LW · GW

When I first read the post, I expected that "family systems" were related to Hellinger's family constellations: this is a different method of psychotherapy which assumes a completely different set of "subagents" to describe the human mind and its problems. Hellinger's constellation method assumes that a person's actual family relations have the biggest impact on the person's wellbeing (and motivation), and that the family structure is somehow internalised. This family structure can be invoked by a group of people (assigned by a psychotherapist) playing the roles of "father", "mother", etc., and this group can be reorganised to be healthier.

https://en.wikipedia.org/wiki/Family_Constellations

Comment by avturchin on Wireheading is in the eye of the beholder · 2019-01-30T22:41:23.572Z · score: 2 (2 votes) · LW · GW

Yes, that is what I meant.

Comment by avturchin on Wireheading is in the eye of the beholder · 2019-01-30T19:39:59.502Z · score: 5 (4 votes) · LW · GW

Could we say that wireheading is direct access to one's reward function via self-modification, setting it to the maximal level, which makes the function insensitive to any changes in the outside world? I think that such a definition is stronger than just Goodharting.

Comment by avturchin on How much can value learning be disentangled? · 2019-01-29T21:25:05.342Z · score: 1 (1 votes) · LW · GW

Even a zero-impact AI which is limited to pure observation may not be acceptable to many people (not everybody wants his or her sex life to be recorded and analysed).

Comment by avturchin on Techniques for optimizing worst-case performance · 2019-01-28T22:09:30.211Z · score: 1 (1 votes) · LW · GW

I think some other approaches could also fit the directions listed in this post:

1) Active boxing, or catching a treacherous turn: one AI observes the behaviour of another AI and predicts when it starts to fail.

2) AI tripling: three very similar AIs work (independently) on the same problem, and if one of them diverges sufficiently from the other two, it is switched off (a minimal sketch of such a check is below).
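
A minimal sketch of such a divergence check (my own illustration; the agent names, the scalar outputs and the threshold are hypothetical):

```python
from statistics import median

def divergent_agent(outputs, threshold):
    """outputs: dict mapping agent name -> a numeric output on the shared problem.
    Returns the agent whose output lies furthest from the median of the three,
    if that distance exceeds `threshold`; otherwise None (nobody is switched off)."""
    med = median(outputs.values())
    worst = max(outputs, key=lambda name: abs(outputs[name] - med))
    return worst if abs(outputs[worst] - med) > threshold else None

print(divergent_agent({"A": 1.00, "B": 1.02, "C": 7.5}, threshold=0.5))  # -> "C"
```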

Comment by avturchin on Solomonoff induction and belief in God · 2019-01-28T09:32:00.216Z · score: 1 (1 votes) · LW · GW

If we assume that the Mathematical Universe Hypothesis is true, as EY and Tegmark do, and all possible mathematical structures actually exist, it means that any possible AIs also exist. Moreover, for any AI there is another AI (and not just one) which is simulating the first AI and its world.

Thus we are simulated by an infinitely large AI, which could be interpreted as a proof that "God exists". However, the practical meaning of this may only be some update of the simulation hypothesis.

Comment by avturchin on Río Grande: judgment calls · 2019-01-27T16:26:27.528Z · score: 4 (3 votes) · LW · GW

My rule of thumb is: "If I doubt whether the food is safe, it is unsafe." It replaced my previous bad strategy: "If I doubt whether the food is safe, I will eat only a small part of it, inversely proportional to my doubts, and wait to see what happens."

Comment by avturchin on Building up to an Internal Family Systems model · 2019-01-27T16:18:03.904Z · score: 1 (1 votes) · LW · GW

In fact, different people have different levels of schizotypy, or maybe it would be better called fragmentedness of mind. On one side are purely monolithic humans, and on the other are people with genuine multiple personality disorder, which is very rare.

Comment by avturchin on Building up to an Internal Family Systems model · 2019-01-26T19:17:39.605Z · score: 1 (1 votes) · LW · GW

My 2 cents:

1 cent: It seems that sub-personalities do not actually exist, but are created by the human mind at the moment of the query. The best way to explain this is to look at improvisation theatre, as described in Valentine's post on the intelligent social web. The consequence of this non-actual existence of subpersonalities is that we could have different expectations about the types of personalities and still get therapeutically useful and consistent-sounding results. For example, some people try to cure psychological problems by making a person remember trauma-associated past lives. The human mind is very good at creating expected narratives, and plausible-sounding stories about past lives can be immediately created by many people. I know this because I personally experimented with that practice and heard dozens of "past lives" stories, which obviously didn't provide any historically checkable information, but just recombined some background knowledge.

2 cents: In the similar "voice dialogue" method which I practised, all these types of subpersonalities are postulated under slightly different names; e.g. "exiles" are called "suppressed subpersonalities". However, in voice dialogue there is an overarching subpersonality, the Controller, which works as an OS for the different program-subpersonalities and regulates when and how such subpersonalities can be called to action. The Controller is also a sum of all the protectors-firefighters. It can be called up by a special procedure. Again, it doesn't actually exist.

Comment by avturchin on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T21:58:09.092Z · score: 4 (7 votes) · LW · GW

OK, I read it too after my comment above... And I thought that when a future evil superintelligence starts shooting at people in the streets, the same commenters will say: "No, it is not a superintelligence, it is just good at the tactical use of guns; it just knows where humans are located and never misses, but its strategy is awful." Or, in other words:

Weak strategy + perfect skills = dangerous AI

Comment by avturchin on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T20:59:05.053Z · score: 15 (7 votes) · LW · GW

They are now explaining this in the reddit AMA: "We are capping APM. Blizzard in game APM applies some multipliers to some actions, that's why you are seeing a higher number. https://github.com/deepmind/pysc2/blob/master/docs/environment.md#apm-calculation"

"We consulted with TLO and Blizzard about APMs, and also added a hard limit to APMs. In particular, we set a maximum of 600 APMs over 5 second periods, 400 over 15 second periods, 320 over 30 second periods, and 300 over 60 second period. If the agent issues more actions in such periods, we drop / ignore the actions. "

"Our network has about 70M parameters."

AMA: https://www.reddit.com/r/MachineLearning/comments/ajgzoc/we_are_oriol_vinyals_and_david_silver_from/eexs0pd/?context=3
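
A minimal sketch of how such multi-window caps could be enforced (my own illustration of the quoted numbers, not DeepMind's actual implementation):

```python
from collections import deque

# (window length in seconds, APM limit over that window), per the quote above
APM_CAPS = [(5, 600), (15, 400), (30, 320), (60, 300)]

class ApmLimiter:
    """Drops actions that would exceed any of the rolling APM caps."""
    def __init__(self, caps=APM_CAPS):
        # Convert each APM limit into a maximum number of actions per window.
        self.caps = [(window, apm * window / 60.0) for window, apm in caps]
        self.history = deque()  # timestamps (seconds) of accepted actions

    def try_act(self, t):
        """Return True if an action at time t is allowed, False if it is dropped."""
        longest = max(window for window, _ in self.caps)
        while self.history and t - self.history[0] >= longest:
            self.history.popleft()
        for window, max_actions in self.caps:
            recent = sum(1 for ts in self.history if t - ts < window)
            if recent + 1 > max_actions:
                return False  # drop / ignore the action, as described in the AMA
        self.history.append(t)
        return True
```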

Comment by avturchin on For what do we need Superintelligent AI? · 2019-01-25T20:50:20.579Z · score: 1 (1 votes) · LW · GW

That seems reasonable, but maybe around-human-level AI will be enough to automate food production, and superintelligence is not needed for it? Let's make GMO crops and robotic farms in the oceans, and we will provide much more food for everybody.

Comment by avturchin on For what do we need Superintelligent AI? · 2019-01-25T20:16:21.210Z · score: 2 (2 votes) · LW · GW

Yes, but it is not necessarily a good thing, as it may cause unemployment. People like to do things, and even for many unpleasant tasks there are people who like to do them. For example, I knew a bus driver who had severe depression after retirement, which ended only after he started making pictures.

Comment by avturchin on For what do we need Superintelligent AI? · 2019-01-25T16:35:34.644Z · score: 2 (2 votes) · LW · GW

Not everywhere, but China is surprisingly close to it. However, the most difficult question is how to put such a system in every corner of the Earth without starting a world war. Oops, I forgot about Facebook.

Comment by avturchin on For what do we need Superintelligent AI? · 2019-01-25T15:58:48.323Z · score: 4 (3 votes) · LW · GW

For x-risk prevention, we should assume that the risk from the quick creation of AI is lower than all other x-risks combined, and this is highly uncertain from both sides. For example, I think that biorisks are underestimated in the long run.

But to solve many x-risks we probably don't need a full-blown superintelligence, just a good global control system - something which combines ubiquitous surveillance and image recognition.

Comment by avturchin on For what do we need Superintelligent AI? · 2019-01-25T15:54:13.669Z · score: 1 (1 votes) · LW · GW

Most of the AI work in life extension could be done by narrow AIs, like the data-crunching needed for modelling genetic networks or the control of medical nanobots. The quick ascent of a self-improving - and benevolent - AI may be the last chance of survival for an old person who will not live long enough to see these narrow AI services, but then again, such a person could make a safer bet on cryonics.

For what do we need Superintelligent AI?

2019-01-25T15:01:01.772Z · score: 14 (8 votes)
Comment by avturchin on Following human norms · 2019-01-25T14:23:09.299Z · score: 1 (1 votes) · LW · GW

Just returning to this post to mention a rather obvious assumption: following the norms assumes that these norms are stable over the whole temporal and spatial extent of the group, which is clearly not true for large, old or internally diverse groups. Thus, some model of the group's boundaries and/or internal structure should be either hand-coded or meta-learned before norm-learning.

Comment by avturchin on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T14:14:28.306Z · score: 1 (3 votes) · LW · GW

This assumes that human intelligence arises from something other than training on a very large dataset of books, movies, parents' chat, etc.

Comment by avturchin on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T11:59:46.995Z · score: 12 (6 votes) · LW · GW

What especially worries me is that the same type of cheating could happen during the safety evaluation of future, more advanced AIs.

Could declining interest to the Doomsday Argument explain the Doomsday Argument?

2019-01-23T11:51:57.012Z · score: 7 (8 votes)

What AI Safety Researchers Have Written About the Nature of Human Values

2019-01-16T13:59:31.522Z · score: 41 (10 votes)

Reverse Doomsday Argument is hitting preppers hard

2018-12-27T18:56:58.654Z · score: 9 (7 votes)

Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability

2018-12-15T21:32:55.180Z · score: 23 (9 votes)

Quantum immortality: Is decline of measure compensated by merging timelines?

2018-12-11T19:39:28.534Z · score: 12 (7 votes)

Wireheading as a Possible Contributor to Civilizational Decline

2018-11-12T20:33:39.947Z · score: 4 (2 votes)

Possible Dangers of the Unrestricted Value Learners

2018-10-23T09:15:36.582Z · score: 12 (5 votes)

Law without law: from observer states to physics via algorithmic information theory

2018-09-28T10:07:30.042Z · score: 14 (8 votes)

Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse

2018-09-27T10:09:56.182Z · score: 4 (3 votes)

Quantum theory cannot consistently describe the use of itself

2018-09-20T22:04:29.812Z · score: 8 (7 votes)

[Paper]: Islands as refuges for surviving global catastrophes

2018-09-13T14:04:49.679Z · score: 12 (6 votes)

Beauty bias: "Lost in Math" by Sabine Hossenfelder

2018-09-05T22:19:20.609Z · score: 9 (3 votes)

Resurrection of the dead via multiverse-wide acausual cooperation

2018-09-03T11:21:32.315Z · score: 20 (10 votes)

[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI

2018-08-28T21:32:16.717Z · score: 12 (7 votes)

Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence

2018-07-25T17:12:32.442Z · score: 13 (5 votes)

[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent)

2018-07-14T09:46:44.968Z · score: 11 (4 votes)

“Cheating Death in Damascus” Solution to the Fermi Paradox

2018-06-30T12:00:58.502Z · score: 13 (8 votes)

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks

2018-06-23T13:31:13.641Z · score: 5 (4 votes)

[Paper]: Classification of global catastrophic risks connected with artificial intelligence

2018-05-06T06:42:02.030Z · score: 4 (1 votes)

Levels of AI Self-Improvement

2018-04-29T11:45:42.425Z · score: 16 (5 votes)

[Preprint for commenting] Fighting Aging as an Effective Altruism Cause

2018-04-16T13:55:56.139Z · score: 24 (8 votes)

[Draft for commenting] Near-Term AI risks predictions

2018-04-03T10:29:08.665Z · score: 19 (5 votes)

[Preprint for commenting] Digital Immortality: Theory and Protocol for Indirect Mind Uploading

2018-03-27T11:49:31.141Z · score: 29 (7 votes)

[Paper] Surviving global risks through the preservation of humanity's data on the Moon

2018-03-04T07:07:20.808Z · score: 15 (5 votes)

The Utility of Human Atoms for the Paperclip Maximizer

2018-02-02T10:06:39.811Z · score: 8 (5 votes)

[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale

2018-01-14T10:29:49.926Z · score: 11 (3 votes)

Paper: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence

2018-01-04T14:21:40.945Z · score: 12 (3 votes)

The map of "Levels of defence" in AI safety

2017-12-12T10:45:29.430Z · score: 16 (6 votes)

Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”

2017-11-28T15:39:37.000Z · score: 0 (0 votes)

Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]

2017-11-25T11:28:04.420Z · score: 16 (9 votes)

Military AI as a Convergent Goal of Self-Improving AI

2017-11-13T12:17:53.467Z · score: 17 (5 votes)

Military AI as a Convergent Goal of Self-Improving AI

2017-11-13T12:09:45.000Z · score: 0 (0 votes)

Mini-conference "Near-term AI safety"

2017-10-11T14:54:10.147Z · score: 5 (4 votes)

AI safety in the age of neural networks and Stanislaw Lem 1959 prediction

2016-02-06T12:50:07.000Z · score: 0 (0 votes)