A map: Causal structure of a global catastrophe

post by turchin · 2015-11-21T16:07:23.285Z · LW · GW · Legacy · 2 comments

PDF: http://immortality-roadmap.com/levelglobcat.pdf

2 comments

comment by HungryHobo · 2015-11-24T17:37:21.497Z · LW(p) · GW(p)

Where's the arrow from "Creation of Strong benevolent AI" to "Agent becomes malicious"? No matter how smart, your AI can make mistakes.

Also, I'm sure someone will point out that your chart makes a large number of assumptions and assumes that ours is the only answer.

Would you find this chart convincing if it were written by someone who believed we should switch to a low-tech agrarian society like the Amish, and instead of everything in yellow it had a chain down the side reading "reject technology", "turn to God", etc.?

Or, instead of most of the boxes after the current day, a chain which reads something like "culture stagnates" -> "technological progress slows" -> "technological progress regresses, but not all the way"?

Also, your "roadmap" sets some mental alarm bells ringing, since such massive nests of arrows rarely accompany good positions.

You've really got the LHC in there under "unintended consequences" leading to "agent becomes malicious". Really?

comment by turchin · 2015-11-26T18:32:15.458Z · LW(p) · GW(p)

"Where's the arrow from "Creation of Strong benevolent AI" to "Agent becomes malicious". No matter how smart, your AI can make mistakes." Yes it is true, and I have another map about risks of AI where this possibility is shown. I can't add too many arrows as the map is already too complex ))

"Also, I'm sure someone will point out that your chart makes a large number of assumptions and assumes that ours is the only answer.

Would you find this chart convincing if it were written by someone who believed we should switch to a low-tech agrarian society like the Amish, and instead of everything in yellow it had a chain down the side reading "reject technology", "turn to God", etc.?"

The part of this map dedicated to x-risk prevention was deliberately made small, as I have another map about ways of preventing x-risks, where all the known ideas about it are presented in logical order. But it seems to me that your argument is more general, and basically you are asking "Are you sure of what you are sure of?". I think that any map (or even article) may rest on a lot of assumptions, and it is impossible to list all of them in advance. Anyway, it is not clear how to present assumptions in the map and still keep it readable.

"You've really got the LHC in there under "unintended consequences" leading to "agent becomes malicious". Really?"

Yes. An unintended consequence of the Large Hadron Collider may be the creation of a small black hole. The small black hole may become "malicious" if it starts to eat the surrounding matter and grow exponentially. While the term "malicious" may sound strange in the case of a black hole, in my opinion it captures the difference between a "simple" black hole and a black hole that is able to eat matter. I think that explaining each arrow would need a whole page of text (which I partly did in the book "Risks of Human Extinction", but its translation into English is still in draft).
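
(To make "grow exponentially" concrete, here is a minimal sketch of the intuition, not a physical estimate: if one assumes the rate at which the hole swallows mass is proportional to its current mass, with some illustrative constant k, then

$$\frac{dM}{dt} = k M \quad\Longrightarrow\quad M(t) = M_0 e^{k t},$$

so the mass doubles every $\ln 2 / k$ units of time; this runaway doubling is the behaviour the arrow refers to.)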