[Link] Atlantic Interview with Nick Bostrom - "We're Underestimating the Risk of Human Extinction"

post by Airedale · 2012-03-07T16:25:05.253Z · LW · GW · Legacy · 3 comments

http://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/

My apologies if I've missed this posted anywhere else (a Google search and a scan of the sidebar didn't turn it up). I'm not sure there's much that will be new to those who have been following existential risk elsewhere, but it's nice to see an article like this in a relatively mainstream publication. Bostrom discusses the concept of existential risk, certain specific types of existential risk, why we as humans underestimate that risk, possible strategies for addressing it, the simulation argument, how Hollywood and literature generally fail to portray existential risk helpfully, and other topics.


3 comments


comment by DanielVarga · 2012-03-08T23:54:58.324Z · LW(p) · GW(p)

> Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?

> Well suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially; somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do.

I think that's a very ineffective way to start such an interview. I reject both of these moral positions, even though demographically I am at the center of their target audience. Maybe I underestimate the average newspaper reader, but I think he or she wouldn't even understand why anyone would take such positions at all.

comment by RobertLumley · 2012-03-07T16:55:45.175Z · LW(p) · GW(p)

I love that I saw this somewhere else (Instapundit) first. I was about to post it, but I'm glad you did.

comment by Jonathan_Graehl · 2012-03-11T01:57:45.517Z · LW(p) · GW(p)

So long that I only now read it. Good find. Thanks.

I was relieved to hear that Bostrom also has doubts as to the validity of the reasoning "if there are to be 10^12 humans before extinction, then any given human is unlikely to be merely the 2 billionth one, so probably there won't be that many" (an "observation selection effect").
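For concreteness, here is a minimal sketch of the arithmetic behind that worry, under the self-sampling assumption; the birth rank 2 × 10^9 and the total 10^12 are the illustrative figures from the paragraph above, not numbers Bostrom endorses:

```latex
% Doomsday-style calculation under the self-sampling assumption:
% treat your birth rank r as drawn uniformly from the N humans
% who will ever live. With the illustrative figures above:
\[
  P\bigl(r \le 2 \times 10^{9} \,\big|\, N = 10^{12}\bigr)
    = \frac{2 \times 10^{9}}{10^{12}}
    = 0.002
\]
% A low observed rank is therefore far more likely under hypotheses
% with small N, which is the Bayesian pull toward "probably there
% won't be that many" that the argument (and the doubt about its
% validity) is concerned with.
```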

As for the simulation argument, I think it holds up so long as you feel comfortable assigning a prior to other base-level universes existing (I don't). I do notice that, in the argument, a simulation of our universe by a containing intelligence unlike us in origin has too small a measure to matter.