The map of agents which may create x-risks

post by turchin · 2016-10-13T11:17:51.236Z · LW · GW · Legacy · 6 comments

Recently Phil Torres wrote an article where he raises a new topic in existential risk research: the question of who the possible agents in the creation of a global catastrophe might be. He identifies five main types of agents, and two main reasons why they might create a catastrophe (error and terror).

He discusses the following types of agents: 

 

(1) Superintelligence. 

(2) Idiosyncratic actors.  

(3) Ecoterrorists.  

(4) Religious terrorists.  

(5) Rogue states.  

 

Inspired by his work, I decided to create a map of all possible agents, as well as their possible reasons for creating x-risks. Several new ideas appeared during this work.

I think that significant additions to the list of agents should be: superpowers, as they are known to have created most global risks in the 20th century; corporations, as they are now on the front line of AGI creation; and pseudo-rational agents who could create a Doomsday weapon in the future and use it for global blackmail (perhaps even with positive values), or who could risk civilization's fate for their own benefit (dangerous experiments).

The x-risk prevention community could also be an agent of risk if it fails to prevent obvious risks, if it uses smaller catastrophes to prevent larger ones, or if it generates new dangerous ideas about possible risks which could inspire potential terrorists.

As technology progresses, more and more types of agents will gain access to dangerous technologies, including even teenagers (see, for example, "Why This 14-Year-Old Kid Built a Nuclear Reactor").

In this situation only the number of agents with access to risky technology will matter, not the exact motivations of each one. But if we are unable to control the technology, we could try to control potential agents, or at least their average ("medium") mood.

The map shows various types of agents, starting with non-agents and ending with types of agential behavior which could result in catastrophic consequences (error, terror, risk, etc.). It also shows the types of risks that are most probable for each type of agent. I think the explanation in each case should be self-evident.

We could also show how the set of x-risk agents will change as technology progresses. In the beginning there are no agents; later there are superpowers, then smaller and smaller agents, until there are millions of people with biotech labs at home. In the end there will be only one agent: a SuperAI.

So, lessening the number of agents and increasing their "morality" and intelligence seem to be the most plausible directions for lowering risk. Special organizations or social networks could be created to control the riskiest types of agents. Different agents probably need different types of control. Some ideas for this agent-specific control are listed in the map, but a real control system would have to be much more complex and specific.

The map shows many agents: some of them are real and exist now (but do not yet have dangerous capabilities), while others are possible only in a moral or technical sense.

 

So there are four types of agents, and I show them in the map in different colours:

 

1) Existing and dangerous, i.e. already possessing the technology to destroy humanity: superpowers, arrogant scientists – Red

2) Existing and willing to end the world, but lacking the needed technologies (ISIS, VHEMT) – Yellow

3) Morally possible, but not yet existing: we can imagine logically consistent value systems which could result in human extinction, such as Doomsday blackmail – Green

4) Agents which will pose a risk only after supertechnologies appear, like AI hackers or child biohackers – Blue

 

Many agent types do not fit this classification, so I left them white in the map.

 

The pdf of the map is here: http://immortality-roadmap.com/agentrisk11.pdf

(The jpg of the map is below; because the sidebar was covering part of it, I put it higher on the page.)

6 comments

Comments sorted by top scores.

comment by Daniel_Burfoot · 2016-10-15T15:32:41.337Z · LW(p) · GW(p)

Cool chart.

I noticed that you wrote "not soon" for the likelihood of Value Drift. I think Value Drift is a very real possibility, maybe an inevitability.

Look back a couple of hundred years from the present; you'll find that the values of the people at that time were incomprehensibly, unmistakably different. Religion was of paramount importance; it was considered acceptable or obvious that some races were superior to others; people accepted the notion of hereditary aristocracy, and so on.

Based on the historical record, I predict that the values of humans and human civilization 300 years from now will be unrecognizably different from modern Western 21st century values. The question is: is that a problem? I personally don't care that much, but I'm unlike most other LWers in that I'm not very interested in optimizing the region of spacetime outside of my own small bubble.

Replies from: turchin
comment by turchin · 2016-10-15T20:33:45.913Z · LW(p) · GW(p)

I think that the natural evolution of values is part of what it is to be human (and that is why I am against CEV). But here I mean some kind of disruptive revolution in values over a shorter time period, like 20 years. And I think it will not happen within 20 years, as humans have a kind of value inertia.

But on a longer time horizon, new technologies could help spread new "meme-values" more quickly, and they would be like computer viruses for human brains, perhaps disseminating through brain implants. That could be quick and catastrophic.

comment by woodchopper · 2016-10-26T10:41:12.065Z · LW(p) · GW(p)

I really like that you mention world government as an existential risk. It's one of the biggest ones. Competition is a very good risk-reduction process. It has been said before that if we all lived in North Korea, the future of humanity might well be quite bleak indeed. North Korea is less stable now than it would be if it were the world's government, because all sorts of outside pressures contribute to its instability (technology created by freer nations, pressure from foreign governments, etc).

No organisation can ever get it right all the time. Even knowing what "right" is is pretty hard to do, and the main way humans do it is with competition. We know certain things work and certain things don't simply because of policy diversity between nations - we can look and see which countries are successful and which aren't. A world government would destroy this. Under a world government I would totally write off humanity. I suspect we would all be doomed to die on this rock. People very much forget how precarious our current civilisation is. For thousands of years humanity floundered until Britain hit upon the ability to create continued progress through the chance development of certain institutions (rule of law, property rights, contracts, education, reading, writing, etc).

Replies from: g_pepper, ChristianKl, turchin
comment by g_pepper · 2016-10-27T03:58:01.351Z · LW(p) · GW(p)

I agree with your concerns regarding one world government. However, I am curious why you think that the following were "chance developments" of Britain: rule of law, property rights, contracts, education, reading, writing. Pretty much all of those things were in use in multiple times/locales throughout the ancient world. Are you arguing that Britain originated those things? Or that they were developed in Britain independently of their prior existence elsewhere?

comment by ChristianKl · 2016-10-26T19:27:50.892Z · LW(p) · GW(p)

North Korea is less stable now than it would be if it were the world's government, because all sorts of outside pressures contribute to its instability (technology created by freer nations, pressure from foreign governments, etc).

The outside world also contributes to its stability. The current leader was educated in Switzerland, and he might be a less rational actor if he had simply been educated at a North Korean school.

comment by turchin · 2016-10-26T19:15:02.151Z · LW(p) · GW(p)

While a world government could be an x-risk if it makes mistakes, several fighting nation-states could also be an x-risk, and I don't know which is better.