Don't Build Fallout Shelters
post by katydee · 2013-01-07T14:38:48.275Z · LW · GW · Legacy · 126 comments
Related: Circular Altruism
One thing that many people misunderstand is the concept of personal versus societal safety. These concepts are often conflated despite the appropriate mindsets being quite different.
Simply put, personal safety is personal.
In other words, the appropriate actions to take for personal safety are whichever actions reduce your chance of being injured or killed within reasonable cost boundaries. These actions are largely based on situational factors because the elements of risk that two given people experience may be wildly disparate.
For instance, if you are currently a young computer programmer living in a typical American city, you may want to look at eating better, driving your car less often, and giving up unhealthy habits like smoking. However, if you are currently an infantryman about to deploy to Afghanistan, you may want to look at improving your reaction time, training your situational awareness, and practicing rifle shooting under stressful conditions.
One common mistake is to attempt to preserve personal safety for extreme circumstances such as nuclear wars. Some individuals invest sizeable amounts of money into fallout shelters, years' worth of emergency supplies, etc.
While it is certainly true that a nuclear war would kill or severely disrupt you if it occurred, this is not necessarily a fully convincing argument in favor of building a fallout shelter. One has to consider the cost of building a fallout shelter, the chance that your fallout shelter will actually save you in the event of a nuclear war, and the odds of a nuclear war actually occurring.
Further, one must consider the quality of life reduction that one would likely experience in a post-nuclear war world. It's also important to remember that, in the long run, your survival is contingent on access to medicine and scientific progress. Future medical advances may even extend your lifespan very dramatically, and potentially provide very large amounts of utility. Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.
Thus even if your fallout shelter succeeds, you will likely live a shorter and less pleasant life than you would otherwise. In the end, building a fallout shelter looks like an unwise investment unless you are extremely confident that a nuclear war will occur shortly-- and if you are, I want to see your data!
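The cost-benefit argument above can be sketched as a quick expected-value calculation. A minimal sketch in Python, where every input (the probabilities, the cost, the lifespan and quality figures) is a made-up assumption for illustration, not an estimate from the post:

```python
# Illustrative expected-value sketch of the shelter decision.
# All inputs below are assumptions for demonstration only.

p_war = 0.01             # assumed chance of nuclear war over the shelter's lifetime
p_shelter_saves = 0.3    # assumed chance the shelter actually saves you, given a war
shelter_cost = 50_000    # assumed dollar cost of building and stocking the shelter

years_if_saved = 20      # assumed years of post-war life gained if it works
quality_factor = 0.5     # assumed quality-of-life discount in a post-war world

expected_quality_years = p_war * p_shelter_saves * years_if_saved * quality_factor
cost_per_quality_year = shelter_cost / expected_quality_years

print(f"Expected quality-adjusted years gained: {expected_quality_years:.3f}")
print(f"Cost per quality-adjusted year: ${cost_per_quality_year:,.0f}")
```

Under these (invented) numbers the shelter buys quality-adjusted life at well over a million dollars per year, which is what makes helmets and diet look like lower-hanging fruit.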
When taking personal precautionary measures, worrying about such catastrophes is generally silly, especially given the risks we all take on a regular basis-- risks that, in most cases, are much easier to avoid than nuclear wars. Societal disasters are generally extremely expensive for the individual to protect against, and carry a large amount of disutility even if protections succeed.
To make matters worse, if there's a nuclear war tomorrow and your house is hit directly, you'll be just as dead as if you fall off your bike and break your neck. Dying in a more dramatic fashion does not, generally speaking, produce more disutility than dying in a mundane fashion does. In other words, when optimizing for personal safety, focus on accidents, not nuclear wars; buy a bike helmet, not a fallout shelter.
The flip side to this, of course, is that if there is a full-scale nuclear war, hundreds of millions-- if not billions-- of people will die and society will be permanently disrupted. If you die in a bike accident tomorrow, perhaps a half dozen people will be killed at most. So when we focus on non-selfish actions, the big picture is far, far, far more important. If you can reduce the odds of a nuclear war by one one-thousandth of one percent, more lives will be saved on average than if you can prevent hundreds of fatal accidents.
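The trade-off described above can be checked with rough arithmetic. All figures below are illustrative assumptions, not numbers from the post:

```python
# Back-of-the-envelope comparison of expected lives saved.
# All figures are illustrative assumptions for the sake of the comparison.

war_deaths = 500_000_000        # assumed death toll of a full-scale nuclear war
accidents_prevented = 300       # "hundreds of fatal accidents"
deaths_per_accident = 1.5       # assumed average fatalities per accident

# A reduction of one one-thousandth of one percent is a factor of 1/100,000.
expected_saved_by_policy = war_deaths / 100_000
expected_saved_by_accidents = accidents_prevented * deaths_per_accident

print(expected_saved_by_policy)     # 5000.0
print(expected_saved_by_accidents)  # 450.0
```

Even with a much smaller assumed death toll, the expected-value comparison comes out lopsided in the same direction.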
When optimizing for overall safety, focus on the biggest possible threats that you can have an impact on. In other words, when dealing with societal-level risks, your projected impact will be much higher if you try to focus on protecting society instead of protecting yourself.
In the end, building fallout shelters is probably silly, but attempting to reduce the risk of nuclear war sure as hell isn't. And if you do end up worrying about whether a nuclear war is about to happen, remember that if you can reduce the risk of said war-- which might be as easy as making a movie-- your actions will have a much, much greater overall impact than building a shelter ever could.
comment by Shmi (shminux) · 2013-01-07T20:55:22.063Z · LW(p) · GW(p)
The type of people who build fallout and other fortified shelters intuitively assign a rather high prior to at least one of the scenarios in which such a shelter is very useful unfolding in the near time frame, such as a few decades. This is far from the mainstream, but so is a similar time-frame estimate of cryonic revival or the Singularity, which are taken quite seriously here. If you give, say, 50-50 odds for a global disaster happening within 20 years or so, building a shelter becomes pretty rational, and the extra expense of making it radiation-resistant is probably small enough to be worth absorbing.
As for the "life will not be worth living" argument, having a shelter may make a difference between propagating your genes or perishing, and I suspect that the perceived importance of survival of one's bloodline correlates with other survivalist traits.
↑ comment by katydee · 2013-01-07T22:08:54.460Z · LW(p) · GW(p)
The type of people who build fallout and other fortified shelters intuitively assign a rather high prior to at least one of the scenarios in which such a shelter is very useful unfolding in the near time frame, such as a few decades.
Right. I argue that even if you do anticipate this, more utility is likely captured by attempting to decrease the odds of such scenarios occurring than by attempting to protect yourself (and perhaps a few friends/family) from their effects.
As for the "life will not be worth living" argument, having a shelter may make a difference between propagating your genes or perishing, and I suspect that the perceived importance of survival of one's bloodline correlates with other survivalist traits.
That seems plausible, though I'll point out that many common antinatalist arguments seem much stronger in a postapocalyptic setting.
↑ comment by Said Achmiz (SaidAchmiz) · 2013-01-07T23:22:41.352Z · LW(p) · GW(p)
It is also possible that people who subscribe to survivalist views derive less utility (both perceived and actual) from modern amenities, and would therefore have their utility reduced less in a post-apocalyptic setting than your average middle-class city-dweller.
↑ comment by MaoShan · 2013-01-08T04:11:40.783Z · LW(p) · GW(p)
There's also the happiness caused by imagining oneself being well-off compared to the rest of society, whether that means someone buying a lottery ticket to live like a king among commoners, or someone building a fallout shelter to live like a commoner among wretches; it's essentially the same wish, but the disaster scenario is actually statistically much more likely. So wouldn't building a fallout shelter be more rational than buying a lottery ticket?
↑ comment by NancyLebovitz · 2013-01-08T13:10:46.553Z · LW(p) · GW(p)
It would be if it weren't so much more expensive.
↑ comment by MaoShan · 2013-01-09T02:40:19.314Z · LW(p) · GW(p)
Would the best way be, then, to scale the effort as resources allow? Learning to make your own fire is much cheaper than building a fallout shelter--next in line might be a survival kit (including matches, lighter, and flint, in case you find that you are incompetent at producing fire--like me). I've come to the opinion that all people should have a basic survival skill-set; it would be pitiful to have to rediscover animal trapping and such.
↑ comment by MugaSofer · 2013-01-13T15:01:39.669Z · LW(p) · GW(p)
I'll point out that many common antinatalist arguments seem much stronger in a postapocalyptic setting.
I guess it might depend on the apocalypse, but could you provide an example?
Full disclosure: I am more skeptical of antinatalism than the LW norm.
↑ comment by katydee · 2013-01-14T23:42:50.651Z · LW(p) · GW(p)
I guess it might depend on the apocalypse, but could you provide an example?
Sure. The most basic antinatalist argument is that creating a new human life on average creates more disutility (in the form of human suffering) than it creates utility (in the form of human happiness). Whether or not you accept this argument depends on what you think the prospects of suffering and happiness are for the average human life.
At present, I view human lives as involving potentially very high gains. Nuclear war would not only stop the potential for many such gains, but it would likely halt or reverse gains that have already been made. For instance, absence of access to modern medical techniques, dentistry, painkillers, etc. would likely create substantial suffering.
On the plus side, it would also make life shorter, but I have a feeling that would be cold comfort-- at least to non-antinatalists!
The View from Hell provides a solid overview of a lot of antinatalist thought if you're interested in learning more.
↑ comment by MugaSofer · 2013-01-15T10:23:32.335Z · LW(p) · GW(p)
It would create more suffering per human life, sure, but I don't see how it could be enough that I start endorsing antinatalism. Then again, I'm not sure where exactly the line falls in any case; and allowing humanity to go extinct seems like it would bring such vast disutility I'm not sure any amount of suffering could outweigh it (unless there are other sentient beings available or something.)
↑ comment by hankx7787 · 2013-01-14T23:25:19.368Z · LW(p) · GW(p)
er, it's not anything about the "perceived importance of survival of one's bloodline" - it's about rebuilding civilization and trying again at the Singularity, and hopefully preserving as many people, cryonics patients, or whatever we best can, through the rough times. In a very worst-case scenario, reproduction could be a useful way to help carry on that mission beyond your own personal capabilities (which it already is in some ways).
comment by Vladimir_Nesov · 2013-01-07T21:09:47.226Z · LW(p) · GW(p)
While it is certainly true that a nuclear war would kill or severely disrupt you if it occurred, this is not necessarily an argument in favor of building a fallout shelter.
It is clearly an argument in favor of building a fallout shelter. There are just other arguments against building a fallout shelter, such as its cost, that are stronger. The presence of those arguments doesn't stop nuclear war from being an argument for building a shelter.
↑ comment by katydee · 2013-01-07T21:50:34.920Z · LW(p) · GW(p)
Fixed, thanks.
↑ comment by Vladimir_Nesov · 2013-01-07T22:14:45.843Z · LW(p) · GW(p)
... this is not necessarily a fully convincing argument in favor of building a fallout shelter ...
This could still be interpreted as the same error: the argument seems clearly true, and so in this sense it's "fully convincing"; there are no objections to the argument itself. Since it seems useful to call "unconvincing" those arguments that seem false, it doesn't seem like a good idea to call "not fully convincing" an argument whose fault is not in its falsity.
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-08T11:28:27.302Z · LW(p) · GW(p)
This might make sense for an individual, but on a civilization level, I like the idea of there being crazy survivalists to keep humanity going if something bad happens.
↑ comment by Error · 2013-01-08T20:54:04.056Z · LW(p) · GW(p)
Maybe the crazy survivalists like the idea, too. Hypothesis: Some reasonable portion of the people who build shelters aren't buying nuclear-war insurance; they're buying the fantasy of being the romantic postapocalyptic survivor. Like buying the fantasy of being rich via lottery tickets, or the fantasy of being fit and pretty via exercise machines.
↑ comment by hankx7787 · 2013-01-14T23:19:37.110Z · LW(p) · GW(p)
What about the destruction of civilization is romantic? That's one of the worst possible outcomes...
Just because people invest in their health (exercise machines and gyms), [hygiene], health insurance, life insurance (cryonics), security, etc, doesn't mean they are crazy. A good bit of LW literature argues that some paranoia can be healthy, that we tend to be overconfident and over optimistic, etc.
↑ comment by TheOtherDave · 2013-01-08T15:03:35.419Z · LW(p) · GW(p)
What's your estimate of how much more likely a crazy survivalist is to survive something bad than a non-(crazy survivalist)? Or, put slightly differently: supposing that something bad happens and only N humans survive, what's your estimate of how many of N are crazy survivalists?
↑ comment by TheLooniBomber · 2013-01-27T01:58:17.862Z · LW(p) · GW(p)
It would seem that a crazy survivalist would be less likely to survive a catastrophe requiring rational thinking than a non-crazy survivalist would. Seems redundant to have to articulate.
↑ comment by maxweber · 2013-03-13T19:41:55.774Z · LW(p) · GW(p)
UN says 1.4B people don't use electricity. What happens in the "modern" world doesn't affect them much. The reproduction rate for many animals is faster than the cancer rate in Chernobyl from what I remember; so, even nukes might not really destroy those folks. Plus, many are outside the realms of major effect. So, clearly, North Americans who do not prep will be in trouble (just as non-Nazis in Germany); but, aborigines probably don't need to prep. I'd say a prepper in the USA has a 75% chance of survival whereas a non-prepper has a 0.5% chance. (Given my expectation that on-the-ground war is the most probable outcome in the next 0.1 to 10 years). For other scenarios such as a comet, an overly strong solar flare, a massive pandemic, or such, preppers have maybe a 95% chance and non-preppers have maybe a 20% chance. The probability of those happening is probably 1% in our lifespan but could be as high as 20% for specific locations.
↑ comment by TheOtherDave · 2013-03-13T22:33:03.032Z · LW(p) · GW(p)
OK.
So on your account it follows that if N% of the U.S. population comprises preppers, then after a nuclear event we should expect to see ~1.4B "non-modern" people, (.005*[1-N]*us_pop) non-prepping USAers, and (.75*N*us_pop) prepping USAers, among others. After some other scenarios we should expect to see ~1.4B "non-modern" people, (.2*[1-N]*us_pop) non-prepping USAers, and (.95*N*us_pop) prepping USAers, among others.
Yes?
So, OK. If us_pop is 315486161 and N is 0% then in the first scenario we expect the survivors to include (.005*1*315486161=) ~1.6 million USAers and in the second scenario we expect the survivors to include (.2*1*315486161=) 63 million USAers.
At the other extreme, if N is 100%, then in the first scenario we expect the survivors to include (.75*1*us_pop=) 237 million USAers and in the second scenario we expect the survivors to include (.95*1*us_pop) 300 million USAers.
In all of these scenarios we also expect the survivors to include ~1.4B non-"modern" people, plus some modern survivors not from the U.S.
Yes?
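The arithmetic above can be checked directly. A minimal sketch using the us_pop figure and maxweber's stated survival rates from the comments above, with N (the prepper fraction of the US population) left as the unknown:

```python
# Survivor counts implied by maxweber's survival rates.
# n is the fraction of the US population who prep.

US_POP = 315_486_161

def survivors(n, p_prepper, p_non_prepper):
    """Expected US survivors given a prepper fraction and per-group survival rates."""
    preppers = n * US_POP * p_prepper
    non_preppers = (1 - n) * US_POP * p_non_prepper
    return preppers + non_preppers

# Nuclear-war scenario: preppers 75%, non-preppers 0.5%
print(survivors(0.0, 0.75, 0.005))   # ~1.6 million
print(survivors(1.0, 0.75, 0.005))   # ~237 million

# Other-catastrophe scenario (comet, flare, pandemic): preppers 95%, non-preppers 20%
print(survivors(0.0, 0.95, 0.20))    # ~63 million
print(survivors(1.0, 0.95, 0.20))    # ~300 million
```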
Given those estimates, and assuming that the referents for your "prepper" and JMIV's "crazy survivalist" are roughly comparable, I don't find myself caring very much about Ns smaller than about 1%.
↑ comment by CronoDAS · 2013-01-08T16:10:44.786Z · LW(p) · GW(p)
For all I know, the "crazy survivalists" might make things worse.
(Yeah, yeah, fictional evidence and all that.)
comment by Desrtopa · 2013-01-08T17:44:42.543Z · LW(p) · GW(p)
When optimizing for overall safety, focus on the biggest possible threats that you can have an impact on. In other words, when dealing with societal-level risks, your projected impact will be much higher if you try to focus on protecting society instead of protecting yourself.
This sounds dubious to me. Yes, if you prevent society from being impacted by a nuclear war, that's a much bigger utility return than protecting yourself, but the odds of your efforts being decisive in preventing a nuclear war are much, much lower. Protecting society is a huge coordination problem, where everyone benefits most by the problem being solved by someone else without their participation, so they can save their time and money for other things. Building a fallout shelter is not a coordination problem.
comment by PhilGoetz · 2013-01-10T23:54:47.594Z · LW(p) · GW(p)
You're confusing two messages. One is that building a fallout shelter is not a good way to optimize personal safety. The other is that optimizing society safety is, for some unspecified reason, more-virtuous than optimizing personal safety.
The first point is historically wrong. In the time when people in the US built fallout shelters, most people who built them thought it was more likely than not that there would be a nuclear war soon. They made the correct calculation given this assumption.
The second point is simply a referral back to a set of presumptions about ethics (selfishness is bad) that should themselves be argued over, rather than the examples here.
The argument that you shouldn't build a fallout shelter because the life you'd live after civilization was destroyed wouldn't really be worth living is contrary to what we know about happiness. It is a highly-suspect argument for other reasons as well.
↑ comment by katydee · 2013-01-11T23:25:43.873Z · LW(p) · GW(p)
You're confusing two messages. One is that building a fallout shelter is not a good way to optimize personal safety. The other is that optimizing society safety is, for some unspecified reason, more-virtuous than optimizing personal safety.
I consider both those arguments relevant to this post. What I'm saying is that building fallout shelters is unlikely to be optimal for personal safety because there is generally much lower-hanging fruit. Further, in the event that building fallout shelters is optimal for personal safety, your efforts would be likely better spent elsewhere because pursuing personal-level solutions for society-level hazards is highly inefficient.
I omitted the obvious third argument against fallout shelters (that they increase the odds of nuclear war, albeit only slightly) because I evaluated it as likely to make people think that this post was actually about fallout shelters.
The first point is historically wrong. In the time when people in the US built fallout shelters, most people who built them thought it was more likely than not that there would be a nuclear war soon. They made the correct calculation given this assumption.
I'm not sure that that's reasonable to say. As I pointed out, personal safety is personal, and thus your decision to build a fallout shelter is subject to a wide range of confounding factors. I believe that it is likely that most people who built fallout shelters could have purchased expected years of survival for cheaper, even on a personal level. Typically fallout shelters seem extremely unlikely to actually be the lowest-hanging fruit in someone's life.
The second point is simply a referral back to a set of presumptions about ethics (selfishness is bad) that should themselves be argued over, rather than the examples here.
I assumed, perhaps wrongly, that that was a given on this site, given previous discussions here. There's probably an argument to be made that all such actions are merely purchasing fuzzies and that protecting yourself is purchasing utilons, but I'd like to think that we're better than that.
The argument that you shouldn't build a fallout shelter because the life you'd live after civilization was destroyed wouldn't really be worth living is contrary to what we know about happiness. It is a highly-suspect argument for other reasons as well.
I'm aware of the studies and arguments used to claim that happiness will reset regardless of what happens to you, but I think that a full-scale nuclear war falls outside the outside view's domain.
↑ comment by katydee · 2013-01-12T22:09:32.897Z · LW(p) · GW(p)
Can I get an explanation for the downvotes here?
↑ comment by woodside · 2013-01-13T07:47:57.074Z · LW(p) · GW(p)
I wasn't one of the downvoters, but I'll hazard a guess.
- pursuing personal-level solutions for society-level hazards is highly inefficient.
Viscerally for me, this immediately flags as not being right. I might not understand what you mean by that statement though. It's very difficult to make an impact on the probability of society-level hazards occurring, one way or the other, so if you think there's a non-trivial chance of one of them occurring, a personal-level solution seems like the obvious choice.
- I assumed, perhaps wrongly, that that was a given on this site, given previous discussions here. There's probably an argument to be made that all such actions are merely purchasing fuzzies and that protecting yourself is purchasing utilons, but I'd like to think that we're better than that.
I think you're significantly overestimating the uniformity of LW readers. The high-impact posters seem to have similar ethical views but I imagine most of the readers arrive here through an interest in transhumanism. On the scale from pathological philanthropists to being indifferent to the whole world burning if it doesn't include you subjectively experiencing it, I bet the average reader is a lot closer to the latter than you would like. I certainly am. I care on an abstract, intellectual level, but it's very very difficult for me to be emotionally impacted by possible futures that don't include me. I think a lot of people downvote when you make assumptions about them (that turn out to be incorrect).
That being said, I don't have a problem with anything you wrote.
↑ comment by katydee · 2013-01-13T10:28:10.242Z · LW(p) · GW(p)
Thanks for the reply!
Viscerally for me, this immediately flags as not being right. I might not understand what you mean by that statement though. It's very difficult to make an impact on the probability of society-level hazards occurring, one way or the other, so if you think there's a non-trivial chance of one of them occurring, a personal-level solution seems like the obvious choice.
What I am trying to say is that preparing personal defenses for society-level issues is very expensive per expected lifespan gained/dollar relative to preparing personal defenses for personal-level issues. Further, it is possible to actually remove the harm from many personal-level issues completely through personal precautions, while the same is not really likely for societal-level issues.
If you learn a better way of running and don't injure your knees, the knee injuries never happen. If you build a bomb shelter and are in your shelter when the nuclear war happens and the shelter holds up and you have sufficient supplies to wait out the radiation, society is still essentially destroyed, you just happened to live through it. Most, if not all, of the overall harm has not been mitigated.
I also think the difficulty of making an impact on the probability of society-level hazards occurring is overestimated by most, but that's a separate issue.
On the scale from pathological philanthropists to being indifferent to the whole world burning if it doesn't include you subjectively experiencing it I bet the average reader is a lot closer to the latter than you would like... I think a lot of people downvote when you make assumptions about them (that turn out to be incorrect).
I hope that you are wrong here, but it seems quite plausible that you are right.
↑ comment by MugaSofer · 2013-01-13T13:59:29.570Z · LW(p) · GW(p)
The high-impact posters seem to have similar ethical views but I imagine most of the readers arrive here through an interest in transhumanism. On the scale from pathological philanthropists to being indifferent to the whole world burning if it doesn't include you subjectively experiencing it I bet the average reader is a lot closer to the latter than you would like. I certainly am. I care on an abstract, intellectual level, but it's very very difficult for me to be emotionally impacted by possible futures that don't include me.
Really? Hmm. That seems like a problem we should be fixing.
↑ comment by MugaSofer · 2013-01-13T14:34:25.297Z · LW(p) · GW(p)
You might want to work harder on distinguishing between what is moral and what is best for the individual's happiness.
EDIT: Actually, you did so perfectly well. PhilGoetz appears to be arguing against helping other people, without providing any arguments for this position. Strange.
↑ comment by MugaSofer · 2013-01-13T14:47:06.053Z · LW(p) · GW(p)
You're confusing two messages. One is that building a fallout shelter is not a good way to optimize personal safety. The other is that optimizing society safety is, for some unspecified reason, more-virtuous than optimizing personal safety.
[...]
The second point is simply a referral back to a set of presumptions about ethics (selfishness is bad) that should themselves be argued over, rather than the examples here.
Downvoted for this. If you think that "screw humanity, I want to live!" is moral, then I would love to see you defend that claim.
EDIT:
The first point is historically wrong. In the time when people in the US built fallout shelters, most people who built them thought it was more likely than not that there would be a nuclear war soon. They made the correct calculation given this assumption.
... is also left undefended.
↑ comment by Kawoomba · 2013-01-13T15:03:17.763Z · LW(p) · GW(p)
If you think that "screw humanity, I want to live!" is moral, then I would love to see you defend that claim.
Why would anyone need to defend moral claims, and to whom?
↑ comment by MugaSofer · 2013-01-13T15:19:50.326Z · LW(p) · GW(p)
Unless he is a psychopath, PG attaches utility to other people not dying horribly with extremely high probability. The same is true of most (all?) LW members.
If he is, in fact, a psychopath, then what is "selfishly moral" for him is irrelevant to what most Lesswrongers are trying to maximize. If he wishes to claim that it is not, then I would like to see some damn evidence.
↑ comment by Kawoomba · 2013-01-13T15:28:08.259Z · LW(p) · GW(p)
I understood "screw humanity, I want to live" not to mean "no preference regarding others 'dying horribly'", but to mean "preferences regarding humanity in general outweighed by preference for one's own survival".
I, for one, would choose the survival of my family unit over that of arbitrarily many other humans, no matter the details of their demise.
Does that make me a psychopath in your eyes?
↑ comment by MugaSofer · 2013-01-13T17:21:54.083Z · LW(p) · GW(p)
Wait a minute, I know that example ...
It's you, isn't it! From that argument about parents and children! Are you going to bring this up every time I talk about morality?
Psychopaths, obviously, don't care about their family unit on an emotional level, so no. It does, however, make you hopelessly biased in my eyes. You already know this.
However, I'm not sure I believe you.
Let's say your family lives on a spaceship, Lost In Space style. You encounter Galactus, the world-eating space monster, and discover to your horror that he's on a direct course for Earth! However, your ship is powerful enough to ram him, destroying you both. Would you choose to abandon Earth - which is, of course, filled with children, lovers, people - and fly off into the night? Or would you tell the children to close their eyes, hug your [insert correct gendered spouse here], grit your teeth, and ...
Hold that thought.
I would like to see you write a top-level post defending that position. If you believe that most of LW is irrational on this topic - saving the world - then it seems that you should be fixing that. If, on the other hand, you believe that I am unusually irrational in this regard, you will doubtless get lots of tasty karma for your trouble.
Full disclosure: I intend to post on this topic myself.
↑ comment by Kawoomba · 2013-01-13T19:11:07.735Z · LW(p) · GW(p)
First off, regarding your hypothetical, it would be no contest. Replacing earth with a box of ice cream would have about the same decision time. You could frame it as a more active choice, akin to the trolley problem - have me convince Galactus to change course towards Earth - I wouldn't mind.
Now where you go wrong is assuming that somehow implies that I do not value the lives of my fellow human beings, or of mankind overall. On the contrary, I am quite concerned about x-risk, and I would be too if there were no family to be affected. It is just not the priority goal.
Consider you had a choice between the life of a non-human primate, xor that of a human. Just because you (hypothetically) quickly decide to save the human does not mean you do not value the non-human primate, or that without another high-priority preference involved, you would not invest a lot into saving that animal. Do you see the point?
If you believe that most of LW is irrational on this topic - saving the world - then it seems that you should be fixing that.
No, why would you think that? I do share that value, and I obviously would as a derived value even if I only did care about my family (repercussions). But even without such considerations, I'd care about it. I'd just accept no tradeoff whatsoever compromising between other humans and "my" humans. What's to defend about that? As I wrote elsewhere, it's not a "bias" in any sense of the word as it's used around here.
Lastly, again with that curious choice of calling preferences "irrational". As if we could just argue towards which preferences to rationally choose. Which ice cream flavor we really should enjoy the most.
↑ comment by ygert · 2013-01-13T19:22:16.924Z · LW(p) · GW(p)
I just want to ask... Are these really your preferences? You'd commit genocide to save your family? That seems atrociously evil. How do you morally justify that to yourself? (Not a rhetorical question, I'd like to know the answer.)
...
Really? You wouldn't trade the lives of your family for the lives of billions? I have trouble getting my mind around the implications of something that, that, that... I don't have a word for it.
↑ comment by TheOtherDave · 2013-01-13T19:29:31.999Z · LW(p) · GW(p)
How do you morally justify that to yourself? (Not a rhetorical question, I'd like to know the answer.)
What do you expect such an answer to look like?
Put a different way: how would you respond to the equivalent question? ("Do you really have the opposite preference? You'd kill your family to avoid genocide? That seems atrociously evil. How do you morally justify that to yourself?")
My preferences are more like yours than Kawoomba's here, but I am not sure the kind of moral justification you're asking for is anything other than a rhetorical way of claiming the primacy of our values over theirs.
Replies from: ygert↑ comment by ygert · 2013-01-13T20:20:49.345Z · LW(p) · GW(p)
No... Although I did see it could be read that way, so I added the disclaimer. I do admit that the disclaimer does not add much as there was no cost to me to write it. I'm sorry if I sounded that way.
("Do you really have the opposite preference? You'd kill your family to avoid genocide? That seems atrociously evil. How do you morally justify that to yourself?")
I will attempt to show my thought processes on this as best I can. An answer like this is what my question was trying to elicit. Yes, I understand that drawing the line is fuzzy, but it can be good to get a somewhat deeper look.
Think of the people of the world. Think of all the things people go around doing in day to day life. The families, the enjoyment people get. I am sure that this is something you value. Of course, you might have a higher weighting of the moral value of this for certain groups rather than others, like perhaps your family. But to have a weighting that much higher on your family members would have certain implications. If you had a weighting high enough to make you commit genocide rather than have your family die, that weighting must be very high, more than a billion to one. (Of course this depends on the size of your family. If you consider half the planet your family, we are discussing something else entirely.)
Let's repeat that for emphasis. A 1000000000:1 ratio. What does that actually mean? It means that rather than accept a minor inconvenience to a family member, you would prefer something a billion times worse happening to a non-family member. To use an often-used example, you would rather have a stranger tortured for years than have a dust speck get in your family member's eye. This is something very much at odds with the normal human perception of morality. That is, while it may be self-consistent, it absolutely contradicts what we normally consider morality. This is a strong indicator (though not definitive, of course) that something fishy is going on with that argument.
(There are some more points to be said, but this post is long enough already. For example, why do I assume that you can scale things this way? In other words why is scope insensitivity bad? If you want to talk about that more I will, but that is not the point of my comment.)
So basically, what I was asking might better be written this way: Given the vastly different moral point of view you get from such a system of ethics, how do you justify it? That is to say, you need to be able to come up with some other factor explaining how your system fits in with our moral intuitions, and I genuinely cannot think of such an explanation.
Replies from: fubarobfusco, TheOtherDave, Kawoomba↑ comment by fubarobfusco · 2013-01-13T22:26:05.735Z · LW(p) · GW(p)
it means that rather than accept a minor inconvenience to a family member, you would prefer something a billion times worse happening to a non-family member. To use an often-used example, you would rather have a stranger tortured for years than have a dust speck get in your family member's eye.
For five years of torture, I'd estimate that as 34 trillion times worse, assuming a perception takes about 100 msec and a human can register 20 logarithmic degrees of discomfort.
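One way to reconstruct that arithmetic: the 100 ms perceptual moment and the 20 logarithmic degrees are from the comment, but the multiplicative step between degrees is not stated, so the factor of ~1.65 per degree below is my own assumption, chosen because it reproduces a figure on the order of the quoted 34 trillion.

```python
# Hedged reconstruction of fubarobfusco's estimate. The 100 ms moment and
# the 20 logarithmic degrees come from the comment; the per-degree factor
# of 1.65 is an assumption, chosen to land near the quoted figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
PERCEPTION_SEC = 0.1          # one perceptual moment ~ 100 ms
DEGREES = 20                  # logarithmic degrees of discomfort
FACTOR_PER_DEGREE = 1.65      # assumed multiplicative step between degrees

moments = 5 * SECONDS_PER_YEAR / PERCEPTION_SEC   # moments in 5 years of torture
intensity_ratio = FACTOR_PER_DEGREE ** DEGREES    # torture moment vs. speck moment
total_ratio = moments * intensity_ratio

print(f"{moments:.3g} moments, total ratio {total_ratio:.3g}")
# ~1.58e9 moments; total ratio ~3.5e13, the same order as "34 trillion"
```

Under these assumptions the estimate is dominated by the per-moment intensity ratio, which is why Eliezer's reply about the number of degrees can shift it by orders of magnitude.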
Replies from: Eliezer_Yudkowsky, army1987, ygert↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-14T21:08:34.768Z · LW(p) · GW(p)
Thank you for FINALLY calculating that number. It's very likely off by a few orders of magnitude due to the 20-logarithmic-degrees part (our hearing ranges more widely than this, I think) but at least you tried to bloody calculate it.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-01-14T21:47:15.065Z · LW(p) · GW(p)
Here is a relevant paper which lets one estimate the number of bits sufficient to encode pain, by dividing the top firing rate by the baseline firing rate variability of a nociceptor and taking base 2 logarithm (the paper does not do it, but the data is there). My quick guess is that it's at most a few bits (4 to 6), not 20, which is much less sensitive than hearing.
Replies from: fubarobfusco, army1987↑ comment by fubarobfusco · 2013-01-15T03:43:12.999Z · LW(p) · GW(p)
I didn't suggest 20 bits; I suggested 20 distinguishable degrees of discomfort. Medical diagnosis sometimes uses ten (or is it six?), which I thought was wrong at the low end — a dust speck is much less discomfort than anyone goes to the doctor for. 4 to 6 bits could encode 16 to 64 degrees of discomfort. I did presume that discomfort is logarithmic (since other senses are), and I conflated pain with irritation, which are not really subjectively the same.
↑ comment by A1987dM (army1987) · 2013-01-15T19:57:29.822Z · LW(p) · GW(p)
I suppose humans have more than one nociceptor each? ;-)
Replies from: shminux↑ comment by Shmi (shminux) · 2013-01-15T21:23:11.068Z · LW(p) · GW(p)
If your point is that perceived pain is aggregated, you are right, of course. The above analysis is misguided, one should really look at the brain structures that make us perceive torture pain as a long-lasting unpleasant experience. A quick search suggests that the region of the brain primarily responsible for the unpleasantness of pain (as opposed to its perception) is the nociceptive area (area 24) of the Anterior cingulate cortex. I could not find, however, a reasonable way to calculate the dynamic range of the pain affect beyond the usual 10-level scale self-assessment.
↑ comment by A1987dM (army1987) · 2013-01-14T17:39:14.968Z · LW(p) · GW(p)
It's not obvious that disutility would scale linearly with amount of torture; would you be indifferent between a 100% chance of getting a dust speck in your eye and a 1 in 34 trillion chance of being tortured for five years?
(My intuition probably doesn't work right with such small numbers, so I don't know myself.)
↑ comment by ygert · 2013-01-14T11:44:25.673Z · LW(p) · GW(p)
Thanks for pointing that out. The comment that you linked to seems a valuable contribution to the discussion of torture versus dust specks. I just used torture versus dust specks in my comment for familiarity value. To consider the question more formally, of course, you would need to find two things, one trivial and one major, whose ratio of badness is exactly one to a billion. The exact details do not matter to my point, but you are right that the example I gave is not technically accurate.
↑ comment by TheOtherDave · 2013-01-14T02:50:54.416Z · LW(p) · GW(p)
If I've followed your thought process correctly, you justify your moral intuitions because they are shared by most other humans, and since Kawoomba's intuitions aren't so popular, they require some other justification.
Yes?
Fair enough; that answers my question. Thanks.
For my own part, I think that's not much of a justification, but then I don't think that justifying moral intuitions is a particularly valuable exercise. They are what they are. If my moral intuitions are shared by a more powerful and influential group than yours, then our society will reflect my moral intuitions and not yours. For me to then demand that you explain how your moral intuitions "fit in" with mine makes about as much sense as demanding that a Swahili speaker explain how their grammatical intuitions "fit in" with mine.
Replies from: ygert↑ comment by ygert · 2013-01-14T11:37:06.663Z · LW(p) · GW(p)
Indeed. You summarized my point far more effectively than I did. Thank you. I was a bit unclear about what I was saying. You are right about it not being much of a justification, but that is basically the only type of moral justification possible. But I get your point about it not being a very productive task to try to give moral justifications.
↑ comment by Kawoomba · 2013-01-13T20:35:23.841Z · LW(p) · GW(p)
Doesn't follow; you don't need to grade linearly, i.e. you can consider avoiding corporeal or mental damage / anguish above a certain threshold exponentially more important than avoiding dust specks.
Think of an AI taking care of a nuclear power plant, consider it has a priority system: "Core temperature critical? If yes, always prioritize this. Else: Remote control cleaner bots to clean the facility. Else: (...)" Or a process throwing an exception which gets priority-handled.
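Kawoomba's power-plant analogy is essentially a lexicographic (strict-priority) ordering. A minimal sketch of that idea (the function and its arguments are mine, for illustration):

```python
# Minimal sketch of a lexicographic preference, as in the power-plant
# analogy: outcomes compare on the highest-priority component first, and
# lower tiers only break ties. Python tuples compare exactly this way,
# so no amount of lower-tier badness outweighs any higher-tier difference.
# Lower tuple = preferred outcome.
def badness(family_harms: int, stranger_dust_specks: int) -> tuple:
    return (family_harms, stranger_dust_specks)

# Under this ordering, a billion dust specks to strangers is still
# preferred to a single harm to a family member.
assert badness(0, 10**9) < badness(1, 0)
```

This is why no finite per-unit weighting (a billion to one, or any other ratio) captures the stated preference: a lexicographic ordering refuses the trade at every exchange rate.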
↑ comment by Qiaochu_Yuan · 2013-01-13T19:27:28.816Z · LW(p) · GW(p)
But even without such considerations, I'd care about it. I'd just accept no tradeoff whatsoever compromising between other humans and "my" humans.
Oog. See, this is why I'm so terrified of the prospect of becoming a parent (and the idea that other humans can easily become parents). I don't know if I can trust anybody with the power to instill this kind of loyalty in anybody else.
Replies from: Kawoomba, MugaSofer↑ comment by Kawoomba · 2013-01-13T19:30:32.781Z · LW(p) · GW(p)
I know what you mean, it was on my mind when writing this comment.
↑ comment by Elithrion · 2013-01-21T00:44:38.480Z · LW(p) · GW(p)
How about this? I realize a lot of the points stretch credulity, but I think you should be able to imagine the situation.
Your family member requires a kidney transplant or they will die in 6 months. With the transplant, you can expect them to live an average of 10 additional days. Normal channels of obtaining one have completely failed. By some happenstance, you know of a 25-year-old, pretty average-seeming woman who is a signed-up donor (you are not personally acquainted with her), and you happen to know that if she dies, your family member will receive the transplant. Do you kill her and make it look like an accident in order to get the transplant, given that you know you would definitely get away with it?
Replies from: Ahuizotl↑ comment by Ahuizotl · 2013-01-29T20:57:03.517Z · LW(p) · GW(p)
Only 10 additional days? I'm sorry, but the expected utility in quality of life is far too low to make that investment. Undertaking a kidney transplant (of any kind) will result in a great deal of pain for my loved one, and the time spent in preparations, surgery, and recovery would consume most of the 10 additional days gained by the surgery. To say nothing of the monetary expenses and moral problems that would result from committing murder.
In such a scenario, I would be much better off investing my resources into making my loved one's remaining days pleasant, their death as painless as possible, and perhaps investing in cryonics so that they may be revived at a later date.
A great deal of this decision is inspired by reading the Wall Street Journal article Why Doctors Die Differently, which states that the majority of healthcare professionals seem to prefer dying peacefully at home rather than undergoing risky life-extending treatments.
While I doubt a family member dying at home from a kidney disease would count as 'peaceful' in most definitions of the word, undergoing invasive surgery in an attempt to gain a few extra days simply isn't worth it from a quality of life standpoint.
Replies from: Elithrion↑ comment by Elithrion · 2013-01-29T22:50:37.786Z · LW(p) · GW(p)
I take your point that you could argue that the ten days would produce disutility, or at least very little utility; however, the point is to answer the question in the least convenient possible world - where the ten days actually are about as good as regular days. If you're having trouble imagining that, make it twenty or thirty days, or whatever you think would be equivalent to ten regular days.
To say nothing of the monetary expenses and moral problems that would result from committing murder.
Well, the whole point is that the revealed preferences from Kawoomba's post above should easily overrule such considerations, and therefore checking whether they do or not should clarify whether he's acting under extreme scope insensitivity or some other confounding factor.
Replies from: Ahuizotl↑ comment by Ahuizotl · 2013-01-30T04:04:56.452Z · LW(p) · GW(p)
Well, the whole point is that the revealed preferences from Kawoomba's post above should easily overrule such considerations, and therefore checking whether they do or not should clarify whether he's acting under extreme scope insensitivity or some other confounding factor.
Ah, my mistake.
comment by paulfchristiano · 2013-01-08T09:18:31.817Z · LW(p) · GW(p)
A similar calculus suggests you shouldn't work on life extension, if your goal is to live longer. I think both arguments are valid and useful to remember, but they overlook some important considerations, particularly in relation to motivation and social affiliation, and particularly when the project entails a real social benefit in addition to the perceived personal benefit.
Independently, I think you may underestimate the value of building shelters (though it's surely not a good play on the utilitarian calculus). On the altruistic account, it's better if I survive in worlds where others don't. And in such worlds I also stand to have more numerous descendants, though not higher quality of life. So I don't think you should invoke that particular argument---that survival is less valuable post-apocalypse---against shelters.
(edit: "if not" --> "though not")
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-01-08T10:01:42.196Z · LW(p) · GW(p)
And in such worlds I also stand to have more numerous descendants, if not higher quality of life.
Are you serious? Most worlds I can imagine in which huge numbers of people are killed but those in fallout shelters survive would have hellish quality of life for the survivors.
Replies from: paulfchristiano↑ comment by paulfchristiano · 2013-01-08T18:14:17.422Z · LW(p) · GW(p)
I meant literally "though quality of life is not higher," forgetting that "if not" typically means "and possibly."
comment by sixes_and_sevens · 2013-01-11T00:42:42.582Z · LW(p) · GW(p)
One common mistake is to attempt to preserve personal safety for extreme circumstances such as nuclear wars.
This is a type of statement that I only ever see on Less Wrong. I have yet to come up with a good name for them.
Replies from: katydee
comment by NancyLebovitz · 2013-01-07T22:22:14.167Z · LW(p) · GW(p)
It seems to me that if you can make a reasonable estimate of where to live so as to avoid the brunt of the likely disasters and live there without much loss of utility, that's the way to go.
Replies from: Armok_GoB, Error↑ comment by Error · 2013-01-08T20:58:50.569Z · LW(p) · GW(p)
That might be possible in this age of telecommuting, though still difficult. The trouble with safe places is that part of the reason they're safe is that there's nothing there worth nuking...or living near.
I'm not sure if that generalizes to natural disasters. Are they more common in desirable areas, perhaps because geographical features that invite disaster (e.g. faultlines) correlate to features humans tend to live and build near (e.g. rivers, coastlines)?
Replies from: Nornagest, TheLooniBomber, NancyLebovitz↑ comment by Nornagest · 2013-01-08T21:10:27.776Z · LW(p) · GW(p)
Hmm. Well, earthquakes and volcanoes tend to correspond to active plate boundaries, and those often coincide with coastlines, although not every plate boundary is active and not every coastline is near a plate boundary. Volcanic soil is often fertile, too, and high (i.e. volcanic) islands are a lot more attractive for human habitation than low (i.e. coral) ones in places where the distinction is meaningful. Floodplains are good for farming but are also vulnerable to disaster; the clue's in the name. And of course semitropical coastlines are exactly where you'd expect to find hurricanes.
So yeah, it seems plausible that areas which are attractive for dense human settlement are also more disaster-prone on average, though the variance is pretty high.
Replies from: None↑ comment by [deleted] · 2013-01-09T01:46:23.677Z · LW(p) · GW(p)
I'm trying to think of a biome that isn't disaster-prone...
Replies from: None, None↑ comment by [deleted] · 2013-01-09T05:32:26.353Z · LW(p) · GW(p)
Once the world warms a bit over the next few centuries, with the poles warming quite a bit more than the equator, much of central Canada along Hudson Bay could have rather nice weather and pretty much zero tectonic risk of any kind. Depends on how the tornado belts shift, though.
Replies from: None↑ comment by TheLooniBomber · 2013-01-27T01:51:30.113Z · LW(p) · GW(p)
Would the most logical strategy of nuclear war be to nuke the places that would be most worth living near in a post-nuclear-war situation, or to destroy epicenters of civilization (cities) and strategic enemy military outposts? A major city wouldn't be a very desirable place to live, since cities rely upon complex infrastructure that would be destroyed in a nuclear war. A river and a wooded area may not be worth nuking in a strategic sense, but running water and a natural food source are definitely worth living near.
↑ comment by NancyLebovitz · 2013-01-09T02:47:11.499Z · LW(p) · GW(p)
I don't think faultlines are necessarily attractive, but the harbors and rivers are, and the ocean may be becoming more of a hazard. On the other hand, you don't want to be someplace that's seriously drought-prone, either.
The big issue is that population concentration is a risk factor in itself if the infrastructure takes much damage. If you can only be happy in a big city, then you can't get much out of trying to avoid disasters, though some thought about which cities are most at risk might be in order.
comment by simplicio · 2013-01-07T21:12:04.475Z · LW(p) · GW(p)
Excellent post. If anybody is interested, the stats for causes of death can be found here; they give a pretty good idea of what to focus on vis-a-vis personal risk.
Unfortunately, they are not ranked by how cheaply one can reduce one's risk in all those categories. Paging Doctor Yvain? ;)
comment by hankx7787 · 2013-01-14T23:16:27.093Z · LW(p) · GW(p)
er, I'm not sure this is generally applicable advice. If you're doing anything serious, privacy is a very difficult issue and it helps to have as much physical security as possible (including e.g. bomb shelters).
Also this kind of thing makes sense from a general insurance perspective. I already pay for health insurance, cryonics (life insurance), etc. Security is another natural way to spend money when it comes to general insurance, and some kind of shelter or safe room could be a good option depending on the situation.
and yes I can advocate for whatever political positions I think are least likely to result in catastrophic war, but that's generally more expensive and I am anticipating slim to no change in the outcome as a result of those sort of efforts anyway.
Replies from: katydee↑ comment by katydee · 2013-01-14T23:38:26.555Z · LW(p) · GW(p)
If you're doing anything serious, privacy is a very difficult issue and it helps to have as much physical security as possible (including e.g. bomb shelters).
Can you be more specific, both with regards to "anything serious" and "physical security?"
Also this kind of thing makes sense from a general insurance perspective. I already pay for health insurance, cryonics (life insurance), etc. Security is another natural way to spend money when it comes to general insurance, and some kind of shelter or safe room could be a good option depending on the situation.
It's important to keep in mind that safe rooms aren't what I'm talking about here-- they may be practical solutions to personal threats such as armed home invasions, though I think if you anticipate such risks with high enough probability to justify a safe room, moving to a better neighborhood might be more practical.
This post primarily discusses fallout shelters in the context of attempting to build personal defenses for societal-level threats.
Replies from: hankx7787↑ comment by hankx7787 · 2013-01-14T23:40:00.921Z · LW(p) · GW(p)
I have to ask you to imagine the IP security issues of AGI and AGI development. Think both internal and external.
Also, I have to add, what the heck makes you think majoritiarian/political strategies are somehow more effective than local/in-house approaches?
In the end, building fallout shelters is probably silly, but attempting to reduce the risk of nuclear war sure as hell isn't. And if you do end up worrying about whether a nuclear war is about to happen, remember that if you can reduce the risk of said war-- which might be as easy as making a movie-- your actions will have a much, much greater overall impact than building a shelter ever could.
er, I would be happy to trade my money for something that can solve societal risks so effectively as to eliminate any necessity of this scale of personal security, but I would have to really see the details on that movie to see what could possibly justify such a hell of an extraordinary claim as, say, somehow making a significant impact on societal-level risks...
Replies from: katydee↑ comment by katydee · 2013-01-14T23:58:56.511Z · LW(p) · GW(p)
I have to ask you to imagine the IP security issues of AGI and AGI development. Think both internal and external.
Are there any non x-risks related examples you could provide? I think sustained discussion of this specific example may be mind-killing given LW local norms.
Also, I have to add, what the heck makes you think majoritiarian/political strategies are somehow more effective than local/in-house approaches?
I would have to really see the details on that movie to see what could possibly justify such a hell of an extraordinary claim as, say, somehow making a significant impact on societal-level risks...
I link to the Wikipedia page that discusses this in the article, but basically that film was seen by over 100 million people, including President Reagan. According to Reagan it was "greatly depressing," changed his mind on how he should view nuclear war, and ultimately led to new nuclear disarmament treaties that resulted in the dismantling of 2,500+ nuclear weapons.
Replies from: hankx7787↑ comment by hankx7787 · 2013-01-15T00:14:14.635Z · LW(p) · GW(p)
No, I chose my example because it's exactly relevant.
I'm not disagreeing that we need an optimal utilitarian solution. I'm arguing that your thesis here fails toward that end in general.
What makes you think dismantling the United States' nuclear weapons makes you safer?
Replies from: katydee↑ comment by katydee · 2013-01-15T01:19:43.289Z · LW(p) · GW(p)
No, I chose my example because it's exactly relevant.
I'm not willing to discuss that issue here, so unless you have another example I am withdrawing from the discussion of that point.
What makes you think dismantling the United States nuclear weapons makes you safer?
Please start reading links, they are there for a reason.
The vast majority of weapons dismantled as a result of the treaty were on the Soviet side. Besides, even if you don't believe that arms reductions make you safer, the film also produced significant outlook changes on the parts of key decision-makers.
Replies from: hankx7787↑ comment by hankx7787 · 2013-01-15T13:22:19.515Z · LW(p) · GW(p)
Please start reading links, they are there for a reason.
The vast majority of weapons dismantled as a result of the treaty were on the Soviet side. Besides, even if you don't believe that arms reductions make you safer, the film also produced significant outlook changes on the parts of key decision-makers.
Ok, thanks, but even assuming it was a significant positive impact on societal risk, what in the world makes you think you can reproduce that kind of result? It seems like you kind of left the central point of your post rather unsubstantiated/undefended, to say the least.
I want to see your data!
I can't predict nuclear war, but there are plenty of solid reasons why the risk of some major catastrophe of some sort is increasing (UFAI being one of them).
EDIT:
after actually reading your post, I think I get what you are saying now, which is this: focus your resources in an optimal utilitarian fashion, esp. e.g. focusing on more likely existential risks (UFAI included).
which completely makes sense to me. I'm just arguing that bomb shelters in particular are not necessarily contrary to those interests, so I don't really like your article as you've written it...
Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.
meh, I think you're underestimating how doable it is to rebuild everything from the ground up. the main problems are political I think. and some planet-destroying full-scale nuclear war is pretty unlikely as far as catastrophes go anyway. Remember, most people don't actually want to destroy the world.
Replies from: hankx7787
comment by DaFranker · 2013-01-07T21:21:08.331Z · LW(p) · GW(p)
I really want - emotionally - to upvote this, but I'm looking for some kind of content I haven't already read in simpler form and not finding any.
Perhaps I'm too tired to catch on to the real message? All I can see is "Maximize expected utility to the best of your knowledge and ability. No, really, do that." Then it gets refactored with reasonable-sounding categories and labels that seem useful to describe general patterns of expected utility in the more restrained domains of conceptspace that they're meant for, along with good tips of things to remember to take into account for the EU calculation.
Anyone care to enlighten me as to what I'm missing? Or perhaps I'm just missing the usefulness of the post.
Replies from: ygert, Eugine_Nier↑ comment by ygert · 2013-01-07T21:26:18.381Z · LW(p) · GW(p)
Please note that often a rephrasing and reformulation of existing knowledge can be a very good thing, almost as good as original research. If someone writes a post explaining some points in clear language, people can read it and gain a deeper understanding of those points. For that reason, posts like this one are most certainly a Good Thing, and definitely praiseworthy.
Replies from: Qiaochu_Yuan, DaFranker↑ comment by Qiaochu_Yuan · 2013-01-08T05:07:09.242Z · LW(p) · GW(p)
Agreed. In general, I think people vastly overvalue having new ideas relative to having better explanations and a better understanding of old ideas.
↑ comment by Eugine_Nier · 2013-01-08T21:10:29.418Z · LW(p) · GW(p)
Anyone care to enlighten me as to what I'm missing?
A very "clever" utility calculation, where by "clever" I mean wrong.
comment by lavalamp · 2013-01-07T15:08:28.473Z · LW(p) · GW(p)
There's also the quantum suicidish argument, "the less likely I am to survive a nuclear holocaust, the less likely it is that I will find myself in universes where a nuclear holocaust has occurred." Depending on how you weight post-apocalyptic existence, things that tend to make you experience more of it might be negative utility even if they work...
Replies from: DanArmak↑ comment by DanArmak · 2013-01-07T23:15:50.094Z · LW(p) · GW(p)
How is that different from saying, the less likely I am to survive a lethal infection, the less likely it is that I will find myself in universes where I was infected, so I won't get vaccinated?
Replies from: lavalamp↑ comment by lavalamp · 2013-01-08T01:48:55.264Z · LW(p) · GW(p)
Post-infection versions of yourself experience generally the same versions of worlds as pre-infection versions of you. Post-apocalypse yous find themselves inhabiting drastically different worlds from your present self.
If you had an extremely large negative term for "having ever been infected" in your utility function or the infection was one that left you in horrible pain, then this reasoning applies equally well. Although with the caveat that not getting immunized makes you partly responsible for the infection of many others, so it's still not exactly the same.
Replies from: DanArmak↑ comment by DanArmak · 2013-01-08T13:18:50.825Z · LW(p) · GW(p)
[If] the infection was one that left you in horrible pain, then this reasoning applies equally well.
Then that's good enough as a reductio.
Although with the caveat that not getting immunized makes you partly responsible for the infection of many others, so it's still not exactly the same.
Not being vaccinated makes you just a little responsible for other deaths. Not building a fallout shelter big enough for five people makes you totally responsible for five deaths. It seems commensurable.
Replies from: lavalamp↑ comment by lavalamp · 2013-01-08T14:56:04.228Z · LW(p) · GW(p)
Hm. My intuition says it doesn't balance, but of course this will depend on the specifics of the disease, so you could be right. We can also assume you can construct some disease such that the probability of surviving it without the vaccine is similar to the probability of surviving the apocalypse without a bunker.
But that's beside the point. I think this argument doesn't really work because, to be consistent, you have to prefer suicide to living in a post-(apocalypse, infection) world. Or rather, it does work, but only for really extreme situations. Vaccines are typically much less expensive than bunkers, so the cost of deferring your suicide decision is much less (although you may not want to if e.g. the disease leaves people paralyzed).
comment by Decius · 2013-01-08T04:45:32.074Z · LW(p) · GW(p)
How does this exact same logic not provide a reason not to buy fire insurance?
There are more cost-effective ways to reduce your expected disutility due to fire, like having and understanding fire alarms and fire extinguishers; the monetary payment of insurance will typically not be more than the value lost; and the risk of fire is, in absolute terms, fairly low.
Replies from: Nornagest↑ comment by Nornagest · 2013-01-08T06:11:11.996Z · LW(p) · GW(p)
I don't know a thing about fire insurance in absolute terms, but one big difference seems to be that constructing a fallout shelter represents a large fixed cost, while insurance pricing is (ideally; roughly) proportional to downside risk. There is of course some margin in there, but if that margin's not too high (which it may very well be; again, not an expert) then given the nonlinearity of utility in money there should exist situations where the expected loss from fire outweighs the expected loss from purchasing insurance against it.
Replies from: Decius↑ comment by Decius · 2013-01-08T06:41:17.414Z · LW(p) · GW(p)
It's not the expected loss from fire that's relevant- it's the expected gain from insurance. If you think that e.g. $100,000 is more valuable immediately after a fire which has a tiny chance of happening, then buying insurance with that payout in the event of a fire is worth more to you than buying a lottery ticket which has independent odds and the same expected $payoff.
But the fallout shelter has more value after a limited apocalypse as well. The nonlinearity of utility in available shelter is even more pronounced than that of money; it drops off very sharply at 'enough'.
We can disagree with the survivalists on the odds of a limited apocalypse as compared to other hazards, but you can't honestly put a lower value on 'shelter(given an apocalyptic event)' than you would give to shelter once an apocalyptic event occurred. (I tried to develop an example, but so many things change in value during such an event; of course you'd give all your money for shelter after the event, because the expected instrumental value of money in the absence of civilization is nil)
Replies from: Nornagest↑ comment by Nornagest · 2013-01-08T07:24:19.301Z · LW(p) · GW(p)
If you think that e.g. $100,000 is more valuable immediately after a fire which has a tiny chance of happening, then buying insurance with that payout in the event of a fire is worth more to you than buying a lottery ticket which has independent odds and the same expected $payoff.
Provided that the "valuable" in that sentence is denominated in utility rather than dollars, that seems obviously true to me. Since the utility curve over money is more or less logarithmic, a payment of $100,000 tied to a black swan that wipes out most or all of your net worth (as might be expected in the event of e.g. a fire totaling a house you haven't finished paying off) carries much more utility than an equal-odds lottery payout of $100,000 to someone with an ordinary middle-class income and savings. We can scale those windfalls down to arbitrarily low odds of happening without breaking the logic. There are various other factors that could make buying insurance a bad idea in some particular case, of course, but none of them seem especially relevant.
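The logarithmic-utility point can be sketched numerically. All dollar figures below are invented for illustration; the only assumption doing the work is that utility is logarithmic in wealth:

```python
import math

def log_utility(wealth):
    # Logarithmic utility: each extra dollar matters less the richer you are.
    return math.log(wealth)

wealth = 250_000   # hypothetical net worth, home equity included
loss = 200_000     # value destroyed if the house burns
payout = 100_000   # insurance payout / lottery prize
p = 0.001          # equal odds of the fire and of winning the lottery

# Utility gained from $100,000 arriving right after the loss (insurance)...
gain_insurance = p * (log_utility(wealth - loss + payout) - log_utility(wealth - loss))
# ...versus the same $100,000 arriving on top of intact wealth (lottery).
gain_lottery = p * (log_utility(wealth + payout) - log_utility(wealth))

assert gain_insurance > gain_lottery
```

Scaling `p` down leaves the inequality intact, since the same factor multiplies both sides; that is the "arbitrarily low odds" claim above.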
The nonlinearity of utility in available shelter is even more pronounced than that of money; it drops off very sharply at 'enough'.
I haven't been thinking of the utility of a fallout shelter primarily in terms of shelter, but in terms of improved survival rates during the initial apocalypse. Having shelter thereafter is nice and should probably be factored into your utility calculations if you're considering becoming a survivalist, but I'm pretty sure the project's overall utility would still be dominated by your chances of being flash-incinerated/poisoned by fallout/crushed by falling debris/et cetera. And as the OP says, there are almost certainly more cost-effective ways of improving your expected lifespan given all factors, unless you are already quite rich and very conscientious.
Replies from: Decius↑ comment by Decius · 2013-01-08T17:58:29.449Z · LW(p) · GW(p)
The cost-effectiveness of preparing for an apocalyptic event (versus preparing for a mundane event like a car crash) varies with the perceived likelihood of the apocalyptic event.
I think the most cost-effective thing to do is research the likelihood of apocalyptic events more seriously. How much would you pay right now for a magic charm which protected you and your immediate peers from all unfriendly AIs (and nothing else) permanently?
comment by [deleted] · 2013-01-07T15:00:53.614Z · LW(p) · GW(p)
Thus even if your fallout shelter succeeds, you will likely live a shorter and less pleasant life than you would otherwise.
and carry a large amount of disutility even if protections succeed
Your wording here is implying a comparison of the wrong things. With a given probability of nuclear war, we don't care about the utility difference between war and not war; we care about the difference between preparations succeed and preparations fail, which is the probability we are trying to control when buying a fallout shelter.
As you say though, a harsh post-apocalyptic world will kill you quickly, so a fallout shelter can only save you so much compared to other precautions (like a bike helmet, or anti-war measures).
Replies from: katydee↑ comment by katydee · 2013-01-07T15:18:24.621Z · LW(p) · GW(p)
With a given probability of nuclear war, we don't care about the utility difference between war and not war; we care about the difference between preparations succeed and preparations fail, which is the probability we are trying to control when buying a fallout shelter.
I'm not sure I agree. When optimizing for utility across one's lifespan, it's important to note that years of post-nuke life are both more expensive and carry less utility than years of non-nuke life. So when you evaluate the utility/dollar of building a fallout shelter and compare it to the utility/dollar of other potential investments, you need to put a discount factor on the years of life you expect your shelter to gain for you in the event of a war.
For instance, if I expected with 50% confidence a nuclear war that will certainly kill me if it occurs while I am unprotected and were presented with the following options:
Option A: Purchase a bomb shelter that will grant ten years of post-nuke life in the event of a nuclear war but will grant no benefit in the event of no nuclear war
Option B: Purchase an experimental health intervention that will grant on average five years of additional healthy life in the event of no nuclear war, but have no effect in the event of a nuclear war (as I'll die before getting to benefit)
I would probably consider option B to be superior to option A, because my intuitions suggest that the utility of post-nuclear life would be massively discounted.
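A minimal sketch of that comparison, with the post-nuke discount made explicit (the 0.3 figure is an arbitrary stand-in for those intuitions):

```python
p_war = 0.5     # stipulated probability of nuclear war
discount = 0.3  # assumed utility of a post-nuke year relative to a normal year

# Option A: ten post-nuke years if war occurs, no benefit otherwise.
eu_shelter = p_war * 10 * discount
# Option B: five normal years if no war occurs, no benefit otherwise.
eu_health = (1 - p_war) * 5 * 1.0

assert eu_health > eu_shelter  # Option B wins under this discount
```

The break-even point is a discount of 0.5: Option B dominates exactly when a post-nuke year is judged worth less than half a normal one.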
Replies from: Strange7, Vaniver, Eugine_Nier↑ comment by Strange7 · 2013-01-08T02:56:44.898Z · LW(p) · GW(p)
Be careful about over-discounting, though. After a few years of post-nuke life, a lot of the utility penalties from lack of modern support would go away as you found alternatives or simply got used to it, and some people might be invigorated by the challenges of a "bad-ass horror wake."
↑ comment by Vaniver · 2013-01-07T21:13:29.160Z · LW(p) · GW(p)
nyan_sandwich is correct that your wording, specifically the "thus," is incorrect. The argument "fallout shelters need to be cost-effective compared to other preventative measures to be wise, and they probably aren't" is a good one; even the narrow "nuclear wars are unpleasant to survive, and we should discount preparations accordingly" is fine; the argument "nuclear wars are unpleasant to survive, thus we shouldn't prepare for them" isn't a good one.
Put another way, the argument reads as "Because other medicine will be destroyed, you should not provide your own medicine," which is odd; no, when other medicine is destroyed is the best time to provide my own medicine! The insurance might not be cost-effective but there's no denying that it's insurance.
Replies from: katydee, Eugine_Nier↑ comment by katydee · 2013-01-07T22:14:16.328Z · LW(p) · GW(p)
The actual quote from the original post is:
Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.
Thus even if your fallout shelter succeeds, you will likely live a shorter and less pleasant life than you would otherwise.
This does not seem as if it is stating "nuclear wars are unpleasant to survive, thus we shouldn't prepare for them;" it seems as if it is stating "nuclear wars are unpleasant to survive, and we should discount preparations accordingly." What am I missing?
Replies from: Vaniver↑ comment by Vaniver · 2013-01-07T22:58:40.519Z · LW(p) · GW(p)
I think it's the combination of "thus" and "otherwise" being insufficiently clear. There are two main possible interpretations:
1. Conditioned on a nuclear war happening, if your fallout shelter succeeds, you will likely live a shorter and less pleasant life than if your fallout shelter fails.
2. If a nuclear war happens and your fallout shelter succeeds, you will likely live a shorter and less pleasant life than if nuclear war does not occur.
The first is obviously wrong; the second is incomplete, because it penalizes the act of building the shelter (the variable under control) for the occurrence of the nuclear war without penalizing the act of not building the shelter in the event of a nuclear war occurring. The full analysis is a 2x2 matrix, where the fallout shelter actually does make you better off if the war occurs, and actually does make you worse off if the war doesn't occur.
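That 2x2 matrix can be made concrete with toy numbers (every utility value and the war probability below are invented):

```python
# Utility of each (action, world) cell; the numbers are illustrative only.
U = {
    ("shelter", "war"): 3,         # survive into a harsh world, shelter cost paid
    ("shelter", "no_war"): 9,      # normal life, minus the shelter's cost
    ("no_shelter", "war"): 0,      # likely death
    ("no_shelter", "no_war"): 10,  # normal life, money kept
}
p_war = 0.1  # assumed probability of nuclear war

def expected_utility(action):
    return p_war * U[(action, "war")] + (1 - p_war) * U[(action, "no_war")]

# The shelter helps in the war column and hurts in the no-war column;
# the decision turns on p_war and all four cells, not on any single one.
assert expected_utility("no_shelter") > expected_utility("shelter")
```

With these particular cells the shelter loses at p_war = 0.1 (9.0 vs 8.4), but the decision flips once p_war exceeds 0.25.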
Replies from: katydee↑ comment by katydee · 2013-01-08T20:16:41.064Z · LW(p) · GW(p)
Thanks for the clarification. What do you think of the following revision to that passage?
Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.
Thus, even if your fallout shelter succeeds, it will only partially mitigate the harm done to you by nuclear war, not erase it completely. You must apply a discount factor to the years of life that you expect your fallout shelter to buy you in the event of a war.
Replies from: Vaniver
↑ comment by Vaniver · 2013-01-08T20:31:58.351Z · LW(p) · GW(p)
That's fine; I might move the conclusion up to the introduction, like this (my edited version):
Further, one must consider the quality of life reduction that one would likely experience in a post-nuclear war world and discount accordingly. Even if your fallout shelter succeeds, it will only partially mitigate the harm done to you by nuclear war, not erase it completely. You may have enough medicine stockpiled to prevent enough diseases that you eventually die of old age, but the prospects of curing old age or undoing death require medical and scientific progress that require large and advanced human civilization. Unfortunately, full-scale nuclear war is very likely to impair medicine and science for quite some time, perhaps permanently.
Seeking to buy QALYs by investing in a fallout shelter is buying them when they're lower quality, and unlikely to be delivered, and thus probably underperforms other investments.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2013-01-08T21:48:25.014Z · LW(p) · GW(p)
Seeking to buy QALYs by investing in a fallout shelter is buying them when they're lower quality, expensive
This is highly dubious. You probably have much cheaper low hanging fruit in the event of a disaster, than otherwise.
Replies from: Vaniver↑ comment by Eugine_Nier · 2013-01-08T21:03:47.229Z · LW(p) · GW(p)
even the narrow "nuclear wars are unpleasant to survive, and we should discount preparations accordingly" is fine
Um, no. That's like saying "people in third world countries have unpleasant lives, therefore we should discount the value of donating to the charities helping them accordingly".
Replies from: katydee↑ comment by katydee · 2013-01-08T21:06:15.170Z · LW(p) · GW(p)
Um, no. That's like saying "people in third world countries have unpleasant lives, therefore we should discount the value of donating to the charities helping them accordingly".
...but that's correct? Saving ten years of pleasant life creates/preserves more utility than saving ten years of unpleasant life, all else being equal.
Replies from: Eugine_Nier, OrphanWilde↑ comment by Eugine_Nier · 2013-01-08T21:28:23.637Z · LW(p) · GW(p)
The point is that it takes less money to increase the utility of someone living in a third-world country by a fixed amount than to increase the utility of someone living in a first-world country by the same amount.
Replies from: DaFranker, katydee↑ comment by DaFranker · 2013-01-08T21:36:20.109Z · LW(p) · GW(p)
Yes, that's exactly what is being said. You calculate the value of both types of lives, divide by the costs, and go for whichever provides the highest resulting payoff.
In other words, you have not naively assigned the same utility to every life saved, and you have calculated things in proportion to your best guess as to their actual expected utility. You shut up and multiply. This is exactly what the sentence you objected to was trying to say.
Perhaps you already grok this principle so well that you were assuming the sentence was meant to say something else? Otherwise I'm confused why you feel the need to make that point.
↑ comment by katydee · 2013-01-08T21:34:31.006Z · LW(p) · GW(p)
That's true, but the discount factor still applies. Helping people in the third world is cheap enough relative to helping people in the first world that it makes up for the reduced utility per year of life saved.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-01-08T21:38:19.686Z · LW(p) · GW(p)
The same discount factor applies to helping yourself in probability branches where you experience disasters, at the expense of ones where you don't.
↑ comment by OrphanWilde · 2013-01-08T21:18:42.915Z · LW(p) · GW(p)
All else isn't equal, though. It's not a comparison between pleasant life and unpleasant life, it's a comparison between a comparatively unpleasant life and oblivion.
Some people might attach negative utility to an unpleasant life, but, like people who mischaracterize how unhappy a debilitating injury will make them, they're probably overestimating the relationship between their current life and their current level of happiness.
Replies from: DaFranker↑ comment by DaFranker · 2013-01-08T21:30:56.127Z · LW(p) · GW(p)
That's not where the misunderstanding lies, though. If we take the sentence:
"people in third world countries have unpleasant lives, therefore we should discount the value of donating to the charities helping them accordingly"
It is very much true, almost trivially so. The value of the donation gets reduced by a factor proportional to the unpleasantness of the life versus some other, more pleasant life in a high-prosperity region. So if saving either life costs the same, or if the difference in cost does not cover the difference in unpleasantness, then it is better to save the pleasant life with this money.
However, what seems to be the issue here is that "discount" and "accordingly" are being charged with connotation, rather than taken as mathematical factors in an equation. It is true that in the current state of the world, the expected utility of saving a life in a third-world country is better than that of saving a life in a first-world one, because it is much cheaper, and because cost doesn't correlate that well with life-pleasantness anyway. This may be where the objections are coming from.
So what's being said is that you should calculate the expected utility of a post-apocalypse (or third-world) life as lower than that of a modern life. Then, calculate the costs as normal. Then, calculate the probabilities as normal. Then, calculate expected utility in proper fashion, having accounted for the difference in value.
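That recipe reduces to a one-line comparison. All numbers below are invented for illustration:

```python
def utility_per_dollar(years_saved, quality_factor, cost):
    # Quality-discounted life-years bought per dollar spent.
    return years_saved * quality_factor / cost

# Hypothetical interventions: the quality discount cuts against the
# third-world life, but the cost gap swamps it.
third_world = utility_per_dollar(years_saved=30, quality_factor=0.6, cost=5_000)
first_world = utility_per_dollar(years_saved=30, quality_factor=1.0, cost=500_000)

assert third_world > first_world
```

Swap in a steeper quality discount or a smaller cost gap and the comparison can flip; the point is only that both factors enter the same product.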
It's all very much straightforward to me and implied by most utilitarian calculus I've seen, so I'm somewhat baffled by the presence of so many objections to that claim.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-01-08T22:45:30.579Z · LW(p) · GW(p)
Suppose the fallout shelter would guarantee your survival. Suppose furthermore that the massive meteor storm or whatever it is guaranteed to save your life from is guaranteed to hit the planet (or whatever) in five years. How do you feel about your discount rate in this scenario, with the other variables stripped away?
Suppose furthermore that fallout shelters are expensive enough that you either spend the five years living a very spartan existence, which will continue after the fact, or living it up with every luxury you've ever denied yourself in the five years you're going to get.
↑ comment by Eugine_Nier · 2013-01-08T21:01:49.384Z · LW(p) · GW(p)
I'm not sure I agree. When optimizing for utility across one's lifespan, it's important to note that years of post-nuke life are both more expensive and carry less utility than years of non-nuke life. So when you evaluate the utility/dollar of building a fallout shelter and compare it to the utility/dollar of other potential investments, you need to put a discount factor on the years of life you expect your shelter to gain for you in the event of a war.
Um, utility tends to have diminishing returns in material possessions, hence the utility comparison goes the other way.
comment by lsl8303@outlook.com · 2019-01-11T00:11:29.587Z · LW(p) · GW(p)
Hmm... if you are only building a shelter for bombings, then yeah, it would be pretty pointless, but today's shelter is typically a multi-functioning unit that can serve as a safe place from everything from intruders to tornadoes, or just extra storage. If you're going to spend $40,000 on a giant empty basement (or worse, one full of garbage you'll never use and the mice that live in it), why not spend $40,000 on a smaller shelter that serves a purpose? The point is they are no longer just 'bomb shelters'; they are multi-functioning units that can be used for many different things, ranging from riots to extra storage. They might also be used as a shelter that doesn't interfere with the beauty of nature, or perhaps a hunting hideaway. Also, I can think of many other dumb things we spend our money on, like going out to eat and going to the bar. Most would argue they are to be passed down through families, so that 100-200 years from now, if cared for, a shelter can be given to your family. I also would say that most people who would invest in a bomb shelter probably already invest in things like bike helmets, a healthy mind and body, and protection against common accidental death. This notion that 'preppers' are some kind of moronic breed who eats red meat, chews tobacco, and drives drunk is what an article like this is built on. If you are building a bomb shelter out of some sort of paranoia that we are going to be eradicated by Russians, and it serves no other purpose, perhaps it's dumb; otherwise it's a good investment.
comment by katydee · 2013-01-08T20:08:23.730Z · LW(p) · GW(p)
Relevant note: This post isn't about fallout shelters.
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-09T03:04:55.016Z · LW(p) · GW(p)
Then what is it about?
comment by Yosarian2 · 2013-01-08T02:12:47.969Z · LW(p) · GW(p)
It does depend on what values you are maximizing for, though.
Are you maximizing for your own survival, or for the survival of the human race? If you think there's a 10% chance of a nuclear war in the next 50 years large enough to wipe out the human race, and you think that our species spending a billion dollars on fallout shelters increases our species' chances of surviving that scenario by 5%, then spending that money increases our chance of surviving the next fifty years by .5%. That doesn't sound like a bad deal to me; if it's not worth doing at that cost, you're valuing the "survival of the human race" at less than 200 billion dollars. (Mitigating a .5% risk is worth a billion dollars if the total value is greater than 200 billion dollars.) Of course, I did just pull those numbers out of thin air; the 10% figure is probably either reasonable or somewhat optimistic, given the last 60 years, but I don't know how to estimate the odds that fallout shelters might decrease the extinction chance.
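Written out explicitly (the inputs are the comment's own admittedly made-up numbers):

```python
p_extinction_war = 0.10  # assumed chance of an extinction-level nuclear war in 50 years
p_improvement = 0.05     # assumed boost to species survival odds from the shelters
cost = 1e9               # the $1 billion shelter program

risk_reduced = p_extinction_war * p_improvement  # absolute extinction risk removed
break_even = cost / risk_reduced                 # species value at which the program breaks even

assert abs(risk_reduced - 0.005) < 1e-12  # the .5% figure
assert abs(break_even - 200e9) < 1.0      # the $200 billion valuation
```

So the quoted figures are mutually consistent; the load-bearing uncertainty is in the two probabilities, not the arithmetic.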
You're probably right that if you're only worried about protecting your own life, it's probably not the best investment. But as far as existential risk mitigation goes, that might be more cost-effective in the short term than, say, spending a billion dollars on asteroid impact avoidance. (Of course, efforts that actually reduce the chances of nuclear war are still better.)
Edit: math error corrected
comment by maxweber · 2013-03-13T19:30:57.481Z · LW(p) · GW(p)
The intellect implication, applied to economically hostile forces, implies they either do not view their activities as financial nukes, or they believe they will survive unscathed and not need a bunker. ;-) Based on UN forces on the ground in the USA, non-military departments building up military powers, and other activities, the latter is the probable case.
comment by maxweber · 2013-03-13T19:27:49.589Z · LW(p) · GW(p)
Welp. One thing is that most people build fallout shelters (bugout bunkers) for their families, so maybe that's not directly "personal safety". At any rate, there is also a general belief among preppers that they are a set of people, a group inspired to be prepared; this generally also defines those who do not prep as not prepared. So you are talking about one society, but the preppers see two societies. The preppers have websites, books, TV shows, and more. Just as an army prepares in units, those who built a fallout shelter were not preparing in isolation. They did not expect they would be the only persons to survive. One can surely argue the inability of hostile forces to build and deploy a nuke is significant: seems some relationship exists between the intellect needed to make these things and the intellect needed to refuse to make or deploy these.
Replies from: bsterrett↑ comment by bsterrett · 2013-03-13T20:50:44.250Z · LW(p) · GW(p)
One can surely argue the inability of hostile forces to build and deploy a nuke is significant: seems some relationship exists between the intellect needed to make these things and the intellect needed to refuse to make or deploy these.
Could you state the relationship more explicitly? Your implication is not clear to me.
↑ comment by A1987dM (army1987) · 2013-01-09T09:55:25.680Z · LW(p) · GW(p)
Yeah, but it makes it easier to realize that the garbage is inconsistent.
Replies from: private_messaging↑ comment by private_messaging · 2013-01-09T11:02:54.405Z · LW(p) · GW(p)
Well, but there's also the issue of the sums being at all times partial. Low-probability, high-impact scenarios are inherently problematic because an enormous number of such scenarios can be constructed (that's where their low probability comes from), and ultimately your action will depend not on utility but on which types of scenarios you are more likely to construct or encounter.
There's also the issue of the predictability of actions. E.g. you can, with a carefully placed flap of butterfly wings, save or kill an enormous number of people, but it all balances out if you are equally able to construct arguments for and against the flap. That's easy for a butterfly, but it is not so easy for other actions, such as donating. Whereas there are clearly possible scenarios (an accidental nuclear exchange) that you can save yourself from with your fallout shelter, and that does not balance out.
Ultimately, it all comes down to the ability to predict what happens. You can't really predict what happens when you give money to someone to prevent a robot apocalypse. Maybe they'll produce useful insights. Maybe the reason they are so concerned is that their thinking about artificial intelligence is inside a box full of particularly dangerous AIs, and that's where they do all their research, and this actually increases risk or creates even worse scenarios (AIs that torture everyone). Maybe they are promoting the notion of the risk. Maybe Frankenstein and the Terminator already saturated that. Maybe they look bad or act annoying (non-credentialed people intruding into highly technical fields tend to have such an effect, especially on cultures that hold scholarship and testing in high regard - e.g. Asians, the former Soviet Union, even Europe) and discredit the concerns, making important research harder to publish. You can't evaluate all of that, nor can you produce a representative and sufficiently large sample of the concerns, so the expected utility is exactly zero (minus the predictable consequences of you having less money to spend on any future deals). The fallout shelter, on the other hand, is not exactly zero; it may not be the best idea, but you have a clear world model of it not cancelling out.