Global Catastrophe Prevention Plan Comprehensive Working Outline (wip)
post by Sailor_Vulcan · 2017-10-18T03:05:30.130Z
This is a link post for https://docs.google.com/document/d/1b_FHAypOIB2VcrC93Im1_09qpo7vyQ84b3WCOamCdao/edit?usp=sharing
Hi. It occurred to me to wonder whether there is any unified, practical, step-by-step plan for getting our species to actually survive. I figured this is the kind of thing that would benefit from being organized into a collaborative to-do list or goal/subgoal map of sorts. Looking online, the closest things I could find were the Immortality Road Map website's "Existential Risks Prevention Roadmap" and a research paper by the Global Catastrophic Risk Institute titled "Towards an Integrated Assessment of Global Catastrophic Risk".
As far as I can tell, the "Existential Risks Prevention Roadmap" does not lay out a specific, unified, practical step-by-step plan for saving the world; it is instead a very general map of different endeavors that could help at different points in our technological and sociocultural development. The GCRI assessment I just mentioned does not appear to lay out such a plan either.
I've read a lot about the subject over the past few years, though, and I decided to try writing a rudimentary first draft of such a plan. I think I might have done a better job of it than I expected, though it is still far from complete. Could you guys perhaps look over what I have written so far and give some feedback?
Thanks!
https://docs.google.com/document/d/1b_FHAypOIB2VcrC93Im1_09qpo7vyQ84b3WCOamCdao/edit?usp=sharing
3 comments
comment by Raemon · 2017-10-19T04:09:04.483Z
I haven't read this in detail yet, but this post does two things I'd like to highlight:
1) Using a Google Doc as a WIP that people can comment on, to work out a practical problem you're in the middle of thinking about (which I think is a good practice)
2) Providing context for a link post so that it's easier to start a discussion.
comment by whpearson · 2017-10-18T18:41:57.575Z
I think there is a big question of what to do once we have something close to aligned AI. Who attempts to take it towards superintelligence? How will we make sure it is actually beneficial to humanity, not just the subgroup of people building it? If the builders cannot convince the rest of the world of this, there will be multiple attempts to build it, which might be bad under some scenarios.
Things have been very quiet on the coherent extrapolated volition front.
comment by StefanDeYoung · 2017-10-18T18:33:34.314Z
Your plan currently only addresses x-risk from AGI. However, there are several other problems that should be considered if your goal is to prevent global catastrophe. I have recently been reading 80,000 Hours, and they have the following list of causes that may need to be included in your plan: https://80000hours.org/articles/cause-selection/
In general, I think it's difficult to survey a wide topic like AI alignment or existential risk and, with any granularity, write out a to-do list for solving it. I believe that people who work more intimately with each x-risk would be better suited to develop the on-the-ground action plan.
It is likely that a variety of x-risks would be helped by pursuing similar goals, in which case high-level coordinated action plans developed by groups focused on each x-risk would be useful to the community. If possible, try to attend events such as EA conferences, where groups focusing on each of the possible global catastrophes will be present, and you can try to capture their shared action plans.