Posts

Comments

Comment by heath_rezabek on How can I reduce existential risk from AI? · 2012-11-12T18:08:10.814Z · LW · GW

Greetings. I'm new to LessWrong, but I'm particularly compelled by the discussion of existential risk.

It seems like one of the priorities should be to ease the path for people, once they become aware of existential risk, to move swiftly from meta work through strategy work to direct work. For myself, once I became aware of existential risk as a whole, it became an attractor for a whole range of prior ideas, and I had to find a way towards direct work as soon as possible. That's easier said than done.

Yet it seems like the shortest path would be to catalyse a prosperous industry around addressing the topic. With Bostrom's newer classification scheme, and its inclusion of outcomes such as Permanent Stagnation and Flawed Realization, the problem space opens far wider than if we were forced to deal with a simple laundry list of extinction events.

So: what about accelerating startups, hiring, career paths, and industry around minimizing Permanent Stagnation and Flawed Realization as existential risk subtypes, always with existential risk as a whole in mind? I've started an IdeaScale along these lines (additions welcomed): i.e., what activities could accelerate the growth of options for those seeking to pour their energy into a livelihood spent mitigating existential risk?

http://vesselcodex.ideascale.com

(The title comes from my own work on the topic, which has to do with the long-term archival and preservation of human potential. I presented this proposal, for what I call Vessel Archives, at the 100 Year Starship Symposium in September 2012: http://goo.gl/X4Fr9 - though this is quite secondary to the pressing question of accelerating and incubating existential-risk-reducing livelihoods, as above.)