Posts

Russian x-risks newsletter #2, fall 2019 2019-12-03T16:54:02.784Z · score: 22 (9 votes)
Russian x-risks newsletter, summer 2019 2019-09-07T09:50:51.397Z · score: 41 (21 votes)
OpenGPT-2: We Replicated GPT-2 Because You Can Too 2019-08-23T11:32:43.191Z · score: 12 (4 votes)
Cerebras Systems unveils a record 1.2 trillion transistor chip for AI 2019-08-20T14:36:24.935Z · score: 8 (3 votes)
avturchin's Shortform 2019-08-13T17:15:26.435Z · score: 6 (1 votes)
Types of Boltzmann Brains 2019-07-10T08:22:22.482Z · score: 9 (4 votes)
What should rationalists think about the recent claims that air force pilots observed UFOs? 2019-05-27T22:02:49.041Z · score: -3 (12 votes)
Simulation Typology and Termination Risks 2019-05-18T12:42:28.700Z · score: 8 (2 votes)
AI Alignment Problem: “Human Values” don’t Actually Exist 2019-04-22T09:23:02.408Z · score: 32 (12 votes)
Will superintelligent AI be immortal? 2019-03-30T08:50:45.831Z · score: 9 (4 votes)
What should we expect from GPT-3? 2019-03-21T14:28:37.702Z · score: 11 (5 votes)
Cryopreservation of Valia Zeldin 2019-03-17T19:15:36.510Z · score: 22 (8 votes)
Meta-Doomsday Argument: Uncertainty About the Validity of the Probabilistic Prediction of the End of the World 2019-03-11T10:30:58.676Z · score: 6 (2 votes)
Do we need a high-level programming language for AI and what it could be? 2019-03-06T15:39:35.158Z · score: 6 (2 votes)
For what do we need Superintelligent AI? 2019-01-25T15:01:01.772Z · score: 14 (8 votes)
Could declining interest to the Doomsday Argument explain the Doomsday Argument? 2019-01-23T11:51:57.012Z · score: 7 (8 votes)
What AI Safety Researchers Have Written About the Nature of Human Values 2019-01-16T13:59:31.522Z · score: 43 (12 votes)
Reverse Doomsday Argument is hitting preppers hard 2018-12-27T18:56:58.654Z · score: 9 (7 votes)
Gwern about centaurs: there is no chance that any useful man+machine combination will work together for more than 10 years, as humans soon will be only a liability 2018-12-15T21:32:55.180Z · score: 23 (9 votes)
Quantum immortality: Is decline of measure compensated by merging timelines? 2018-12-11T19:39:28.534Z · score: 10 (8 votes)
Wireheading as a Possible Contributor to Civilizational Decline 2018-11-12T20:33:39.947Z · score: 4 (2 votes)
Possible Dangers of the Unrestricted Value Learners 2018-10-23T09:15:36.582Z · score: 12 (5 votes)
Law without law: from observer states to physics via algorithmic information theory 2018-09-28T10:07:30.042Z · score: 14 (8 votes)
Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse 2018-09-27T10:09:56.182Z · score: 4 (3 votes)
Quantum theory cannot consistently describe the use of itself 2018-09-20T22:04:29.812Z · score: 8 (7 votes)
[Paper]: Islands as refuges for surviving global catastrophes 2018-09-13T14:04:49.679Z · score: 12 (6 votes)
Beauty bias: "Lost in Math" by Sabine Hossenfelder 2018-09-05T22:19:20.609Z · score: 9 (3 votes)
Resurrection of the dead via multiverse-wide acausual cooperation 2018-09-03T11:21:32.315Z · score: 20 (10 votes)
[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI 2018-08-28T21:32:16.717Z · score: 12 (7 votes)
Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence 2018-07-25T17:12:32.442Z · score: 13 (5 votes)
[1607.08289] "Mammalian Value Systems" (as a starting point for human value system model created by IRL agent) 2018-07-14T09:46:44.968Z · score: 11 (4 votes)
“Cheating Death in Damascus” Solution to the Fermi Paradox 2018-06-30T12:00:58.502Z · score: 13 (8 votes)
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks 2018-06-23T13:31:13.641Z · score: 5 (4 votes)
[Paper]: Classification of global catastrophic risks connected with artificial intelligence 2018-05-06T06:42:02.030Z · score: 4 (1 votes)
Levels of AI Self-Improvement 2018-04-29T11:45:42.425Z · score: 16 (5 votes)
[Preprint for commenting] Fighting Aging as an Effective Altruism Cause 2018-04-16T13:55:56.139Z · score: 24 (8 votes)
[Draft for commenting] Near-Term AI risks predictions 2018-04-03T10:29:08.665Z · score: 19 (5 votes)
[Preprint for commenting] Digital Immortality: Theory and Protocol for Indirect Mind Uploading 2018-03-27T11:49:31.141Z · score: 29 (7 votes)
[Paper] Surviving global risks through the preservation of humanity's data on the Moon 2018-03-04T07:07:20.808Z · score: 15 (5 votes)
The Utility of Human Atoms for the Paperclip Maximizer 2018-02-02T10:06:39.811Z · score: 8 (5 votes)
[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale 2018-01-14T10:29:49.926Z · score: 11 (3 votes)
Paper: Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence 2018-01-04T14:21:40.945Z · score: 12 (3 votes)
The map of "Levels of defence" in AI safety 2017-12-12T10:45:29.430Z · score: 16 (6 votes)
Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” 2017-11-28T15:39:37.000Z · score: 0 (0 votes)
Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry] 2017-11-25T11:28:04.420Z · score: 16 (9 votes)
Military AI as a Convergent Goal of Self-Improving AI 2017-11-13T12:17:53.467Z · score: 17 (5 votes)
Military AI as a Convergent Goal of Self-Improving AI 2017-11-13T12:09:45.000Z · score: 0 (0 votes)
Mini-conference "Near-term AI safety" 2017-10-11T14:54:10.147Z · score: 5 (4 votes)
AI safety in the age of neural networks and Stanislaw Lem 1959 prediction 2016-02-06T12:50:07.000Z · score: 0 (0 votes)

Comments

Comment by avturchin on Understanding “Deep Double Descent” · 2019-12-08T11:10:00.944Z · score: 2 (1 votes) · LW · GW

I read it somewhere around 10 years ago and don't remember the source. However, I remember the explanation they provided: "correct answers" propagate more quickly through the brain's neural net, but later they are silenced by errors that arrive via longer trajectories. Eventually the correct answer is reinforced by learning and becomes strong again.

Comment by avturchin on Understanding “Deep Double Descent” · 2019-12-06T10:20:54.671Z · score: 4 (2 votes) · LW · GW

I have observed, and read, that this also happens in human learning. In my third lesson of X, I reached a level of performance that I was not able to reach again until the 30th lesson.

Comment by avturchin on Values, Valence, and Alignment · 2019-12-06T09:43:17.982Z · score: 2 (1 votes) · LW · GW

Where does human valence come from? Is it biologically encoded, like the positive valence of orgasm, or is it learned, like the positive valence of Coca-Cola?

If it is all biological, does that mean our valence is shaped by the convergent goals of Darwinian evolution?

Comment by avturchin on Seeking Power is Provably Instrumentally Convergent in MDPs · 2019-12-05T10:02:15.841Z · score: 0 (4 votes) · LW · GW

We explored a similar idea in "Military AI as a Convergent Goal of Self-Improving AI". In that article we suggested that any advanced AI will have a convergent goal of taking over the world, and because of this, it will have a convergent subgoal of developing weapons in the broad sense of the word: not only tanks or drones, but any instrument for enforcing its will over others, or for destroying them or their goals.

We wrote in the abstract: "We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. This militarization trend increases global catastrophic risk or even existential risk during AI takeoff, which includes the use of nuclear weapons against rival AIs, blackmail by the threat of creating a global catastrophe, and the consequences of a war between two AIs. As a result, even benevolent AI may evolve into potentially dangerous military AI. The type and intensity of militarization drive depend on the relative speed of the AI takeoff and the number of potential rivals."

Comment by avturchin on What are the requirements for being "citable?" · 2019-11-28T23:22:59.582Z · score: 2 (1 votes) · LW · GW

Entries from PhilPapers are automatically indexed by Google Scholar, but they need to be formatted as scientific articles. So if the best LW posts were crossposted to PhilPapers, it would increase their scientific visibility, though not their citations (based on my experience).

Really groundbreaking posts like Meditations on Moloch by Scott Alexander will be cited anyway, just because they are great.

Comment by avturchin on avturchin's Shortform · 2019-11-28T12:48:52.012Z · score: 2 (1 votes) · LW · GW

How to Survive the End of the Universe

Abstract. The problem of surviving the end of the observable universe may seem very remote, but there are several reasons it may be important now: a) we may soon need to define the final goals of runaway space colonization and of superintelligent AI, b) the existence of a solution would demonstrate the plausibility of indefinite life extension, and c) understanding the risks of the universe's end will help us escape dangers like artificial false vacuum decay. A possible solution depends on the type of ending that may be expected: a very slow heat death or some abrupt end, like a Big Rip or Big Crunch. We have reviewed the literature and identified several possible ways of surviving the end of the universe, and we also suggest several new ones. There are seven main approaches to escaping the end of the universe: use the energy of the catastrophic process for computations, move to a parallel world, prevent the end, survive the end, manipulate time, avoid the problem entirely, or find some meta-level solution.

https://forum.effectivealtruism.org/posts/M4i83QAwcCJ2ppEfe/how-to-survive-the-end-of-the-universe

Comment by avturchin on The Pavlov Strategy · 2019-11-28T10:49:01.044Z · score: 2 (1 votes) · LW · GW

I continued to work with a partner who cheated me, without punishing him, and he cheated even more.

Comment by avturchin on The Pavlov Strategy · 2019-11-27T11:05:02.589Z · score: 2 (1 votes) · LW · GW

It was insightful for me and helped me understand my failures in business.

Comment by avturchin on A LessWrong Crypto Autopsy · 2019-11-27T11:02:47.541Z · score: 1 (3 votes) · LW · GW

It is important to understand why we fail.

Comment by avturchin on Breaking Oracles: superrationality and acausal trade · 2019-11-26T09:51:29.169Z · score: 4 (2 votes) · LW · GW

I have a vague thought about anti-acausal-cooperation agents, which are created to make acausal cooperation less profitable. Every time two agents could acausally cooperate to get more paperclips, the anti-agent predicts this and starts destroying paperclips. Thus the net number of paperclips does not change, and the acausal cooperation becomes useless.

Comment by avturchin on avturchin's Shortform · 2019-11-23T10:46:58.112Z · score: 3 (2 votes) · LW · GW

I converted my Immortality roadmap into an article: Multilevel Strategy for Personal Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality.

Comment by avturchin on Analysing: Dangerous messages from future UFAI via Oracles · 2019-11-22T19:08:22.182Z · score: 5 (5 votes) · LW · GW

It looks like a reincarnation of the RB idea, now as a chain rather than a one-shot game.

If there are many possible UFAIs in the future, they could acausally compete for the Oracle's reward channel, which would create some noise and might work as a protection.

It also reminds me of the SETI-attack, now in time rather than space. Recently I had a random shower thought: if all quantum computers turned out to be connected with each other via some form of entanglement, then aliens could infiltrate our quantum computers, as their quantum computers would be connected to such a parasitic net too. It is unlikely to be true, but it illustrates that an unfriendly superintelligence could find unexpected ways to penetrate through space and time.

Comment by avturchin on Ultra-simplified research agenda · 2019-11-22T15:36:49.910Z · score: 5 (3 votes) · LW · GW

Maybe we could try to factor the theory of mind out of the brackets? In that case, the following type of claim would be meaningful: "Under theory of mind T1, a human H has the set of preferences P1, and under another theory of mind T2 he has P2." Now we could compare P1 and P2, and if we find some invariants, they could be used as more robust representations of the preferences.

Comment by avturchin on Hard to find factors messing up experiments: Examples? · 2019-11-16T16:52:05.774Z · score: 6 (4 votes) · LW · GW

A friend told me this story many years ago: he was repairing some electronic equipment, and one unit had a green light that stayed on when the unit was turned off by a switch but not disconnected from the power line. However, there were no short circuits in it. After a long investigation, mostly out of curiosity, he found that one piece of metal overlapped another, and together they formed a capacitor, which passed the AC component of the incoming signal and powered the light in the gadget.

Comment by avturchin on Evolution of Modularity · 2019-11-15T15:57:34.119Z · score: 2 (1 votes) · LW · GW

Interestingly, many body parts have 2-3 different functions despite modularity. A mouth can be used for drinking, eating, biting, speaking, and breathing; legs for running and fighting.

Comment by avturchin on Platonic rewards, reward features, and rewards as information · 2019-11-13T13:14:46.942Z · score: 4 (2 votes) · LW · GW

Would black-boxing the reward function help, either physically or cryptographically? It should also include obscuring the boundary between the black box and the AI's internal computations, so that the AI does not know which data actually trigger the black box's reaction.

This is how the human reward function seems to work. It is well protected from internal hacking: if I imagine that I got 100 USD, it will not create as much pleasure as actually getting 100 USD. When I send a mental image of 100 USD into my reward box, the box "knows" that I am lying and does not generate the reward. Since I don't know much about how the real human reward function works, I have to get a real 100 USD.

Comment by avturchin on What should we expect from GPT-3? · 2019-11-12T10:16:34.595Z · score: 2 (1 votes) · LW · GW

In October 2019, Google trained a model on 750 GB of training data, with 11 billion parameters (vs. 40 GB of data and 1.5B parameters for GPT-2, released 8 months before that).

Comment by avturchin on Operationalizing Newcomb's Problem · 2019-11-12T09:23:23.016Z · score: 1 (2 votes) · LW · GW

I could use a fair coin to decide whether to open the envelope. In that case, I become unpredictable.

Comment by avturchin on The randomness/ignorance model solves many anthropic problems · 2019-11-11T18:03:20.655Z · score: 5 (3 votes) · LW · GW

If the universe is infinite and contains all possible things, then doesn't most ignorance become randomness?

Comment by avturchin on The problem/solution matrix: Calculating the probability of AI safety "on the back of an envelope" · 2019-11-05T09:09:47.472Z · score: 2 (1 votes) · LW · GW

Assuming we have no fewer than 20 problems, and for each problem an 80 per cent chance of success (if we knew more, it would not be a problem), we get only about a 1 per cent total probability of success.
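A minimal Python sanity check of that multiplication, assuming the problems are independent and using the 20 problems / 80 per cent figures above (both numbers are just the illustrative ones in this comment):

```python
# Chance that all N independent problems get solved, if each succeeds with probability p.
def total_success_probability(p_each: float, n_problems: int) -> float:
    return p_each ** n_problems

print(total_success_probability(0.8, 20))  # ~0.0115, i.e. roughly 1 per cent
```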

So this method produces very pessimistic expectations even if the problems themselves seem solvable. EY wrote somewhere that multiplying probabilities is a bad way to estimate the chances of success of cryonics, as this method underestimates the growth of the problem solver's experience.

Another takeaway could be that we should search for total AI safety solutions with fewer unknowns.

Comment by avturchin on Total horse takeover · 2019-11-05T08:04:28.099Z · score: 4 (2 votes) · LW · GW

One wrong take on "taking over the world" is "having causal power to change everything". The reason is that, because of the "butterfly effect", every action of mine will change the fates of all future people, though in a completely unknown way.

Comment by avturchin on What are human values? - Thoughts and challenges · 2019-11-02T13:58:09.469Z · score: 4 (2 votes) · LW · GW

"Normative assumptions" by Stuart Armstrong discussion seems relevant here.

Comment by avturchin on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:21:35.019Z · score: 9 (6 votes) · LW · GW

My personal estimate is 10 per cent in 10 years. If it is distributed linearly, that is around 0.2 per cent before the end of 2019, most likely from an unknown secret project.
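A small sketch of that linear spreading-out, under my assumption that "until the end of 2019" means the roughly two months left after this comment was written (the 10 per cent over 10 years figure is the estimate above):

```python
# Spread a 10% probability uniformly over 10 years and take the slice
# covering the remaining ~2 months of 2019.
total_probability = 0.10
horizon_years = 10
remaining_years = 2 / 12  # November and December 2019

print(total_probability / horizon_years * remaining_years)  # ~0.0017, roughly 0.2 per cent
```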

Comment by avturchin on Explaining Visual Thinking · 2019-11-02T12:05:49.849Z · score: 4 (2 votes) · LW · GW

I am a bad visual thinker, but I was able to reach much higher performance on dual n-back after I found the trick of writing all the numbers down on an imaginary board.

Comment by avturchin on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T10:13:17.188Z · score: 9 (2 votes) · LW · GW

For me, it is evidence for AGI, as it suggests that we are only one step, maybe even one idea, behind it: we need to solve "genuine causal reasoning". Something like "train a neural net to recognise patterns in an AI's plans corresponding to some strategic principles".

Comment by avturchin on Why are people so bad at dating? · 2019-10-28T16:53:52.266Z · score: 6 (4 votes) · LW · GW

Dating is over-advertised as an effective way of getting a GF, because it benefits cafes, flower sellers, the girls themselves, etc. In my experience, I've got GFs in two ways:

1) A relationship evolved from friendship over the course of years.

2) I lived my own life, a girl fell in love with me based on her own internal processes, she showed interest, I responded, and we started dating.

So the secret is: live your own interesting life, and if a girl falls in love with you, start dating her. But stop chasing those 9s and 10s!


Comment by avturchin on Why are people so bad at dating? · 2019-10-28T16:21:18.802Z · score: 4 (5 votes) · LW · GW

My experience: dating is just the wrong way to arrive at a relationship. I have had several relationships, and never through proper dating. I read "Mate" and it didn't help.

Comment by avturchin on Two explanations for variation in human abilities · 2019-10-27T19:20:12.711Z · score: 4 (2 votes) · LW · GW

Yes, they had Hungarian-Jewish ancestry, which is known for producing many geniuses, the so-called Martians.

Comment by avturchin on cousin_it's Shortform · 2019-10-27T12:28:12.687Z · score: 2 (1 votes) · LW · GW

It is interesting whether quantum computations could be used as evidence against "we are in a simulation on a classical computer". Quantum supremacy can't be modeled on a classical computer. But the simulators could easily overcome the obstacle by using quantum computers too. However, this does place some limits on the nature of their hardware.

One could imagine an experiment to test whether we are in a classical simulation by performing a very complex computation on a quantum computer. But this could also end our world.

Comment by avturchin on Two explanations for variation in human abilities · 2019-10-27T11:15:14.137Z · score: 4 (3 votes) · LW · GW

This was tested in chess. A family decided to teach their kids chess from early childhood and produced three grandmasters (the Polgar sisters): https://www.psychologytoday.com/intl/articles/200507/the-grandmaster-experiment (But not one Kasparov, so some variability in personal capabilities obviously exists.)


Comment by avturchin on Two explanations for variation in human abilities · 2019-10-26T10:09:59.250Z · score: 9 (7 votes) · LW · GW

Another way to look at human intelligence is to treat it as a property of humanity as a whole, much more than of individual humans. If Einstein had been born in a forest as a feral child, he would not have been able to demonstrate much of his intelligence. Only because he received special training was he able to build his special theory of relativity as an increment on already existing theories.

Comment by avturchin on What economic gains are there in life extension treatments? · 2019-10-24T07:13:58.774Z · score: 3 (2 votes) · LW · GW

There are no proven, working anti-aging technologies. Some possible interventions are relatively cheap (e.g. metformin costs a few dollars per kilogram in bulk), but they could provide at best 1-3 extra years of life expectancy.

Given the exponential decline in the price of any new technology, future anti-aging tech (e.g. a nanobot-based vaccine against aging) will eventually be as cheap as a new smartphone. Other current life-saving drugs are already very cheap: vitamins, vaccines, antibiotics.

Thus saving a life will be cheaper than raising a new person (at least until mind uploading, when copying will become cheaper).



Comment by avturchin on AI Safety "Success Stories" · 2019-10-18T09:11:25.354Z · score: 2 (1 votes) · LW · GW

Other possible success stories are semi-success stories, where the outcome is not very good, but some humans survive and a significant part of human values is preserved.

One semi-success story is that many sovereign AIs control different countries or territories and implement different values in them. In some of these territories, the AI's values will be very close to the best possible implementation of aligned AI; other AIs could be completely inhuman. A slow takeoff could end in such a world of many AIs.

Another case is that an unfriendly AI decides not to kill humans for some instrumental reason (research, acausal trade with other AIs, or just not bothering to kill them). It could even run many simulations of human history, including simulations of friendly sovereign AIs and their human civilizations. In that case, many people would live very happy lives despite being controlled by an unfriendly AI, just as some people were happy under Saddam Hussein's rule.

Semi-success stories could be seen as a more natural outcome, as we typically don't get perfect things in life.

Comment by avturchin on Categories: models of models · 2019-10-17T15:21:03.993Z · score: 2 (1 votes) · LW · GW

Thanks for this sequence.

Comment by avturchin on Thoughts on "Human-Compatible" · 2019-10-10T15:55:45.024Z · score: 2 (1 votes) · LW · GW

Decoupled AI 4: figure out which action will reach the goal without affecting the outside world (low-impact AI).

Comment by avturchin on Thoughts on "Human-Compatible" · 2019-10-10T15:51:25.287Z · score: 2 (1 votes) · LW · GW

Risks: any decoupled AI "wants" to be coupled. That is, it will converge on solutions that actually affect the world, as those provide the highest expected utility.

Comment by avturchin on Does the US nuclear policy still target cities? · 2019-10-03T10:12:24.059Z · score: 3 (2 votes) · LW · GW

Agree. Also some cities have military ports, like San Diego.

Comment by avturchin on Does the US nuclear policy still target cities? · 2019-10-02T18:46:50.814Z · score: 3 (2 votes) · LW · GW

Note that military installations (launch control rooms and bunkers for the leadership) go 800 meters below the Kremlin, and in case of war they could be reached only by many megaton-class explosions. Moscow has around 20 million people now.

Other large Russian cities may also have military targets inside city limits, precisely because of the hope that cities will not be attacked.

Comment by avturchin on [AN #63] How architecture search, meta learning, and environment design could lead to general intelligence · 2019-09-10T21:20:51.439Z · score: 1 (3 votes) · LW · GW
very powerful and sample efficient learning algorithm

simple?

Comment by avturchin on Is my result wrong? Maths vs intuition vs evolution in learning human preferences · 2019-09-10T10:41:32.465Z · score: 8 (4 votes) · LW · GW

I would add that people overestimate their ability to guess others' preferences: "He just wants money" or "She just wants to marry him". Such oversimplified models may be not just useful simplifications, but blatantly wrong.

Comment by avturchin on Looking for answers about quantum immortality. · 2019-09-09T16:55:43.732Z · score: 2 (1 votes) · LW · GW

To avoid creating merely random minds, the future AI has to create a simulation of the history of the whole of humanity, and that simulation is still running, not merely maintained. I explored the topic of resurrectional simulations here: https://philpapers.org/rec/TURYOL

Comment by avturchin on Looking for answers about quantum immortality. · 2019-09-09T16:21:55.475Z · score: 2 (1 votes) · LW · GW
How would measure affect this? If you're forced to follow certain paths due to not existing in any others, then why does it matter how much measure it has?

Agree, but some don't.

We could be (and probably are) in an AI-created simulation; maybe it is a "resurrectional simulation". But if friendly AIs dominate, there will be no drastic changes.

Comment by avturchin on Looking for answers about quantum immortality. · 2019-09-09T14:18:20.005Z · score: 3 (2 votes) · LW · GW

QI works only if at least three main assumptions hold, but we don't know for sure whether they are true. The first is the very large size of the universe, the second is the "unification of identical experiences", and the third is that we can ignore the decline of measure corresponding to survival in MWI. So QI's validity is uncertain. Personally, I think it is more likely true than not.

It was just a toy example of a rare but stable world. If friendly AIs dominate the measure, you will most likely be resurrected by a friendly AI. Moreover, a friendly AI may try to dominate the total measure in order to increase humans' chances of being resurrected by it, and it could try to rescue humans from evil AIs.


Comment by avturchin on Looking for answers about quantum immortality. · 2019-09-09T13:05:05.999Z · score: 2 (1 votes) · LW · GW

The world where someone wants to revive you has low measure (maybe not, but let's assume so), but if they do it, they will preserve you there for a very long time. For example, some semi-evil AI may want to revive you only to show you red fish for the next 10 billion years. It is a very unlikely world, but still possible. And once you are in it, it is very stable.

Comment by avturchin on Looking for answers about quantum immortality. · 2019-09-09T12:28:58.935Z · score: 3 (2 votes) · LW · GW

If QI is true, then no matter how small the share of worlds where radical life extension is possible, I will eventually find myself in one, if not in 100 years, then maybe in 1000.


Comment by avturchin on Looking for answers about quantum immortality. · 2019-09-09T10:39:21.286Z · score: 5 (3 votes) · LW · GW

I wrote the article quoted above. I think I understand your feelings: when I came to the idea of QI, I realised, after an initial period of excitement, that it implies the possibility of eternal suffering. However, in the current situation of rapid technological progress, such eternal suffering is unlikely, as within 100 years some life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.

If you have thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or of bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists, seek professional help.

While death is impossible in the QI setup, a partial death is still possible, when a person forgets the parts of themselves that want to die. Partial death has already happened many times to the average adult, who has forgotten their childhood personality.

Comment by avturchin on If the "one cortical algorithm" hypothesis is true, how should one update about timelines and takeoff speed? · 2019-08-26T09:43:42.348Z · score: -2 (5 votes) · LW · GW

It will appear at a random moment, when someone guesses it. However, this "randomness" is not evenly distributed. The probability of guessing the correct algorithm rises over time (as more people try), and it is higher in a DeepMind-like company than in a random basement, as DeepMind (or a similar company) has already hired the best minds. A larger company also has a greater capacity to test ideas, as it has more computational power and other resources.

Comment by avturchin on Soft takeoff can still lead to decisive strategic advantage · 2019-08-23T19:13:04.524Z · score: 7 (5 votes) · LW · GW

One possible route to decisive strategic advantage is to combine a rather mediocre AI with some equally mediocre but rare real-world capability.

Toy example: an AI is created that is capable of winning a nuclear war by choosing the right targets and other elements of nuclear strategy. The AI itself is not a superintelligence; it is maybe something like AlphaZero for nukes. Many companies and people are capable of creating such an AI. However, only a power with a large nuclear arsenal could actually get any advantage from it, which means only the US, Russia, and China. Let's assume that such an AI gives +1000 Elo in nuclear strategy relative to the other nuclear superpowers. The first of the three countries to get it would then have a temporary decisive strategic advantage. This is only a toy example, as it is unlikely that the first country to get such a "nuclear AI decisive advantage" would take the risk of a first strike.
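As a rough illustration of those toy numbers (my own sketch, using only the standard Elo expected-score formula; the +1000 figure is the assumption from this comment):

```python
# Expected score of a player rated `delta` Elo points above the opponent.
def elo_expected_score(delta: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

print(elo_expected_score(1000))  # ~0.997: the stronger side wins almost every confrontation
```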

There are several other real-world capabilities which could be combined with a mediocre AI to get a decisive strategic advantage: access to very large training datasets, to large surveillance capabilities like PRISM, to large untapped computing power, to funds, to a pool of scientists, to other secret military capabilities, or to drone manufacturing capabilities.

All these capabilities are concentrated in the largest military powers and their intelligence and military services. Thus, combining a rather mediocre AI with the full capabilities of a nuclear superpower could create a temporary strategic advantage. Assuming there are around three nuclear superpowers, one of them could get a temporary strategic advantage via AI. But each of them has internal problems in implementing such a project.


Comment by avturchin on Has Moore's Law actually slowed down? · 2019-08-21T12:29:51.491Z · score: 4 (3 votes) · LW · GW

There are two interesting developments this year.

The first is very large wafer-scale chips with 1.2 trillion transistors, well above trend.

Second is "chiplets" - small silicon ships which are manufactured independently but are stacked on each other for higher connectivity.

Comment by avturchin on Cerebras Systems unveils a record 1.2 trillion transistor chip for AI · 2019-08-21T12:17:33.268Z · score: 2 (1 votes) · LW · GW

They also claim increased energy efficiency, as they eliminate useless multiplications by zero, which are frequent in matrix multiplication.
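A toy Python sketch of the zero-skipping idea (this is not Cerebras' actual hardware logic, just an illustration of why skipping zero operands in a sparse matrix-vector product saves multiplications):

```python
import numpy as np

def sparse_aware_matvec(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
    """Multiply-accumulate only where the matrix entry is non-zero."""
    result = np.zeros(matrix.shape[0])
    skipped = 0
    for i in range(matrix.shape[0]):
        for j in range(matrix.shape[1]):
            if matrix[i, j] == 0.0:
                skipped += 1  # zero operand: no multiplication needs to be issued
                continue
            result[i] += matrix[i, j] * vector[j]
    print(f"skipped {skipped} of {matrix.size} multiplications")
    return result

# Example: a weight matrix with ~70% zeros, as in a pruned neural network layer.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8)) * (rng.random((8, 8)) > 0.7)
x = rng.normal(size=8)
assert np.allclose(sparse_aware_matvec(weights, x), weights @ x)
```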