Nick Bostrom's TED talk and setting priorities
post by ChrisHallquist · 2012-07-09T05:01:50.955Z · LW · GW · Legacy · 11 comments
I just watched Nick Bostrom's TED talk titled "Humanity's biggest problems aren't what you think they are." I was expecting Bostrom to give his take on the three biggest existential (or at least catastrophic) risks, but instead "existential risk" was just one item on the list. The other two were "death" and "life isn't usually as wonderful as it could be."
Putting these other two in the same category as "existential risk" seems like a mistake. This seems especially obvious in the case of (the present, normal rate of) death and existential risk. Bostrom's talk gives an annual death rate of 56 million, whereas if you take future generations into account, a 1% reduction in existential risk could save 10^32 lives.
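To put the two figures side by side (a rough back-of-the-envelope comparison, using only the numbers above and a one-century time frame):

$$56 \times 10^{6} \ \tfrac{\text{deaths}}{\text{year}} \times 100 \ \text{years} \approx 5.6 \times 10^{9} \ \text{deaths}, \qquad \frac{10^{32}}{5.6 \times 10^{9}} \approx 2 \times 10^{22}.$$

On these numbers, the expected value of the 1% risk reduction exceeds a full century of deaths at the present rate by more than twenty orders of magnitude.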
More importantly, if we screw up solving "death" and "life isn't usually as wonderful as it could be" in the next century, there will be other centuries where we can solve them. On the other hand, if we screw up existential risk in the next century, it means that's it, humanity's run will be over. There are no second chances when it comes to averting existential risk.
One possible counterargument is that the sooner we solve "death" and "life isn't usually as wonderful as it could be," the sooner we can start spreading our utopia throughout the galaxy and even to other galaxies, and with exponential growth a century's head start could lead to a many-fold increase in the number of utils in the history of the universe.
However, given the difficulties of building probes that travel at even a significant fraction of the speed of light, and the fact that colonizing new star systems may be a slow process even with advanced nanotech, a century may not matter much when it comes to colonizing the galaxy. Furthermore, colonizing the galaxy (or universe) may not be the sort of thing that follows an exponential curve; it may instead follow a cubic curve, as probes spread out in a sphere.
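As a toy illustration of that last point (a minimal sketch, not a model of actual colonization; the probe speed and doubling time are made-up parameters chosen only to show the shapes of the curves):

```python
import math

# Toy comparison: the volume reachable by probes expanding outward at a
# constant speed grows cubically in time, whereas a hypothetical
# self-replicating growth process doubles at a fixed interval.
# All parameters are illustrative assumptions, not real estimates.

def cubic_reach(t_years, speed_ly_per_year=0.1):
    """Volume (cubic light-years) of a sphere expanding at constant speed."""
    radius = speed_ly_per_year * t_years
    return (4.0 / 3.0) * math.pi * radius ** 3

def exponential_reach(t_years, doubling_time_years=100.0):
    """Hypothetical quantity that doubles every `doubling_time_years`."""
    return 2.0 ** (t_years / doubling_time_years)

for t in (1_000, 10_000, 100_000):
    print(f"t = {t:>7} yr   cubic: {cubic_reach(t):.3e}   exponential: {exponential_reach(t):.3e}")

# A century's head start multiplies the exponential curve by a constant
# factor (2x here) forever, but barely shifts the cubic curve:
# cubic_reach(100_100) / cubic_reach(100_000) is about 1.003.
```

Under the cubic picture, a one-century head start changes the eventual totals by a fraction of a percent rather than by a constant multiplicative factor, which is the intuition behind the paragraph above.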
So I lean towards thinking that averting existential risks should be a much higher priority than creating a death-free, always wonderful utopia. Or maybe not. Either way, the answer would seem to be very important for questions like how we should focus our resources, and also whether you should push the button to turn on a machine that will allegedly create a utopia.
11 comments
comment by Lapsed_Lurker · 2012-07-09T10:21:36.171Z · LW(p) · GW(p)
If you mostly solve the 'Ageing' and 'Unnecessary Unhappiness' problems, the youthful, happy populace will probably give a lot more weight to 'Things That Might Kill Everyone'.
I don't know about putting these things into proper categories, but I'm sure I'd be a lot more worried about the (more distant than a few decades) future if I had a stronger expectation of living to see it and I spent less time being depressed.
comment by moridinamael · 2012-07-09T15:36:58.835Z · LW(p) · GW(p)
TED is more about "here are some cool ideas you haven't been exposed to" than "take this as your singular policy prescription for the rest of your lives." Those three ideas may not all carry the same weight, but they are connected ideas that most people don't hear about or take seriously.
comment by buybuydandavis · 2012-07-09T07:57:12.483Z · LW(p) · GW(p)
Putting these other two in the same category as "existential risk" seems like a mistake. This seems especially obvious in the case of (the present, normal rate of) death and existential risk. Bostrom's talk gives an annual death rate of 56 million, whereas if you take future generations into account, a 1% reduction in existential risk could save 10^32 lives.
And if you don't take people who don't exist into account, is it still a mistake?
comment by 84728E30 · 2012-07-09T11:58:41.109Z · LW(p) · GW(p)
if you take future generations into account, a 1% reduction in existential risk could save 10^32 lives.
And if you don't ever solve aging, and take future generations into account, then 100% of all human beings ever born will die, which kills at least as many people as any existential risk. (It isn't obvious that aging will inevitably be solved given enough time but no particular will or effort towards it, so it's not just a matter of saving a few decades' worth of 56 million lives a year.)
comment by komponisto · 2012-07-09T05:23:26.210Z · LW(p) · GW(p)
See Levels of Action. Creating utopia is analogous to an object-level action, while avoiding extinction would be a meta-level action: avoiding extinction supports the creation of utopia (in fact it is, obviously, necessary). However, at some point, meta-level actions have to bottom out in an object-level action -- otherwise they were pointless.
See also Lost Purposes. If we devote too much of ourselves to avoiding extinction, our values may drift away and we may simply become survival-maximizing agents who never end up creating utopia.
↑ comment by Viliam_Bur · 2012-07-10T18:55:54.078Z · LW(p) · GW(p)
Some kinds of utopia may also be meta-level actions. Imagine a world without wars and starvation, where all people can spend time doing their hobbies... and some of them choose to work on existential risks. Give everyone on this planet a decent education, and you can have a hundred times more people on LW.
comment by billswift · 2012-07-09T11:04:36.202Z · LW(p) · GW(p)
The biggest risk of "existential risk mitigation" is that it will be used by the "precautionary principle" zealots to shut down scientific research. There is some evidence that it has been attempted already, see the fear-mongering associated with the startup of the new collider at CERN.
A slowdown, much less an actual halt, in new science is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.
One point Hayek made in his last book, The Fatal Conceit, is that you need to be aware of how others will use whatever policies you support. His arguments were primarily about how "well-meaning" academics ended up paving the way for the Communist dictatorships, but the general argument is equally true of many other policies.
comment by NancyLebovitz · 2012-07-09T07:08:26.793Z · LW(p) · GW(p)
Anti-aging tech would free up a lot of resources for working on existential risks.
comment by VincentYu · 2012-07-09T08:22:21.086Z · LW(p) · GW(p)
Bostrom is working on a paper showing how existential risk prevention can be seen as the most important task for humanity (PDF and HTML).
From the last section ("Outlook"):
We have seen that reducing existential risk emerges as a dominant priority in many aggregative consequentialist moral theories (and as a very important concern in many other moral theories). The concept of existential risk can thus help the morally or altruistically motivated to identify actions that have the highest expected value. In particular, given certain assumptions, the problem of making the right decision simplifies to that of following the maxipok principle.
comment by TGM · 2012-07-14T20:26:44.164Z · LW(p) · GW(p)
I think it is very easy to believe that "death" and "life isn't usually as wonderful as it could be" are as important as existential risk if you weight heavily in favour of the well-being of yourself, people you know, and people who are in other senses "close" to you.
Caring more about that is also very natural. If I were to tell a typical person that someone close to them was going to die tomorrow, their reaction would be stronger than if I told them that a stranger was going to die under the same circumstances, etc.
Of course, shut up and multiply, but only if you actually care about all of the events equally.