Some Considerations Against Short-Term and/or Explicit Focus on Existential Risk Reduction
post by multifoliaterose · 2011-02-27T04:31:03.693Z · LW · GW · Legacy
Over the past six months I've been repeatedly going back and forth on my attitude toward the value of short-term and/or exclusive focus on existential risk. Here I'll offer some reasons why a utilitarian who recognizes the upside of preventing human extinction may refrain from a direct focus on existential risk reduction. I remain undecided on my attitude toward short-term and/or exclusive focus on existential risk - this article is not rhetorical in intent; I'm just throwing some relevant issues out there.
1. On the subject of FAI research, Prase stated that:
The whole business is based on future predictions of several tens or possibly hundreds of years in advance, which is historically a very unsuccessful discipline. And I can't help but include it in that reference class.
The same can be said of much of the speculation concerning existential risk in general - not so much existential risk due to asteroid strikes or Venusian global warming, but rather the higher-probability yet much more amorphous existential risks connected with advanced technologies (general artificial intelligence, whole brain emulation, nanoweapons, genetically engineered viruses, etc.).
A principle held by many highly educated people is that it's virtually impossible to predict the future more than a few decades out. Now, one can attempt to quantify "virtually impossible" as a small probability that one's model of the future is correct, and multiply it by the numbers that emerge as outputs of that model in Fermi calculations, but the multiplier corresponding to "virtually impossible" may be considerably smaller than one might naively suppose...
2. As AnnaSalamon said in Goals for which Less Wrong does (and doesn't) help,
conjunctions are unlikely
Assuming that A and B are independent events, the probability of their conjunction is p(A)p(B). So, for example, an event that's the conjunction of n independent events each with probability 0.1 occurs with probability 10^(-n). As humans are systematically biased toward believing that conjunctions are more likely than their conjuncts (at least in certain settings), there's a strong possibility of exponentially overestimating probabilities in the course of Fermi calculations. This is true both of the probability that one's model is correct (given the amount of uncertainty involved in the future as reflected by historical precedent) and of the individual probabilities involved assuming that one's model is correct.
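To make the arithmetic concrete, here is a minimal sketch in Python (with made-up numbers, not estimates of any real risk) of how a modest per-conjunct overestimate compounds into an exponentially large overestimate of the conjunction:

```python
import math

# Five independent events, each with "true" probability 0.1
# (made-up numbers, purely for illustration).
true_probs = [0.1] * 5

# A forecaster who overrates each conjunct by a factor of 3
# (0.1 events judged to be 0.3 likely) - a modest-looking per-step error.
biased_probs = [min(1.0, 3 * p) for p in true_probs]

true_conjunction = math.prod(true_probs)      # 0.1**5 = 1e-05
biased_conjunction = math.prod(biased_probs)  # 0.3**5 ~= 2.4e-03

print(f"true P(all five events):   {true_conjunction:.1e}")
print(f"biased P(all five events): {biased_conjunction:.1e}")
print(f"overestimate factor: {biased_conjunction / true_conjunction:.0f}x")  # 3**5 = 243x
```

A 3x error on each of five conjuncts yields a 3^5 = 243x error on the conjunction; this is the sense in which the overestimation is "exponential" in the number of (possibly hidden) conjuncts.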
Note that I'm not casting doubt on the utility of Fermi calculations as a general matter - Carl Shulman has been writing an interesting series of posts arguing that one can use Fermi calculations to draw reasonable conclusions about political advocacy as philanthropy. However, Carl's posts have been data-driven in a much stronger sense than Fermi calculations about the probabilities of technologically driven existential risks have been.
3. While the efficient market hypothesis may not hold in the context of philanthropy, it's arguable that the philanthropic world is efficient given the human resources and social institutions that are on the table. Majoritarianism is epistemically wrong, but society is quite rigid and whether or not successful advocacy of a particular cause is tenable depends in some measure on whether society is ready for it. In Public Choice and the Altruist's Burden Roko wrote
I personally have suffered, as have many, from low-level punishment from and worsening of relationships with my family, and social pressure from friends; being perceived as weird. I have also become more weird - spending one's time optimally for social status and personal growth is not at all like spending one's time in a way so as to reduce existential risks. Furthermore, thinking that the world is in grave danger but only you and a select group of people understand makes you feel like you are in a cult due to the huge cognitive dissonance it induces.
Even when focus on fringe causes is epistemically justified in the abstract, it may take too much of a psychological toll on serious supporters for them to be effective in pursuing their goals. To the extent that focus on existential risk requires radical self-sacrificing altruism, there are dangers of the type described in a comment by Carl Shulman:
Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.
4. Because the upside of ensuring human survival is so huge, there's an implicit worldview among certain people on Less Wrong that, e.g., existential risk reduction charities offer opportunities for optimal philanthropy. I think that existential risk reduction charities may offer opportunities for optimal philanthropy, but the premise that this is so largely independently of the quality of the work that these charities are doing is essentially parallel to the premise behind Pascal's Wager. In Making your explicit reasoning trustworthy Anna Salamon wrote
I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things. [...] examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse.
Use raw motivation, emotion, and behavior to determine at least part of your priorities.
I'm not able to offer a strong logical argument against the use of Pascal's wager or infinite ethics but nevertheless feel right to reject them as absurd. Similarly, though I'm unable to offer a strong logical argument for doing so (although I've listed some of the relevant intuitions above), I feel right to restrict support to existential risk reduction opportunities that meet some minimal standard for "sufficiently well-conceived and compelling" well above that of multiplying the value of ensuring human survival by a crude guess as to the probability that a given intervention will succeed.
Intuitively, the position "it doesn't matter how well executed charity X's activities are; since charity X is an existential risk reduction charity, charity X trumps non-existential risk charities" is for me a reductio ad absurdum of adopting a conscious, explicit, single-minded focus on existential risk reduction.
Disclaimer: I do not intend for my comments about the necessity of meeting a minimal standard to apply specifically to any existential risk reduction charity on the table. I have huge uncertainties as to the significance of most of the points that I make in this post. Depending on one's assessment of their significance, one could end up either in favor of or against short-term and/or explicit focus on existential risk reduction.
22 comments
comment by CarlShulman · 2011-02-27T05:16:46.714Z · LW(p) · GW(p)
not so much existential risk due to asteroid strike or Venusian global warming
This is fairly true of asteroids with a short enough time horizon, but there are still uncertainties over the future costs of asteroid defense and detection, military applications of asteroid defense interceptors, and so forth.
Standard cost-benefit analysis on non-Venusian global warming involves (implicit or explicit) projections of climate sensitivity, technological change, economic and population growth, risks of nuclear war and other global catastrophic risks, economic damages of climate change, and more, 90 (!!!) years into the future or even centuries. There are huge areas where subjective estimates play big roles there.
Venusian runaway surprise climate change of the kind discussed by Martin Weitzman or Broome - the sort threatening human extinction (through means other than non-recovery from a generic social collapse) - involves working with right-tail outcomes and small probabilities of big impacts, added on to all the other social, technological, and other uncertainties. Nonetheless, one can put reasonable probability distributions on these and conclude that there are low-hanging fruit worth plucking (as part of a big enough global x-risk reduction fund). However, it can't be done without dealing with this uncertainty.
Regarding social costs and being an "odd duck": note that Weitzman, in his widely celebrated article on uncertainty about seemingly implausible hard-to-analyze high-impact climate change, also calls for work on risks of AI and engineered pathogens as some of the handful of serious x-risks demanding attention.
Likewise judge-economist Richard Posner called for preliminary work on AI extinction risk in his book Catastrophe. Philosopher John Leslie in his book on human extinction discussed AI risk at length. Bill Gates went out of his way to mention it as a possibility.
Regarding Fermi calculations, the specific argument in the post is wrong for the reasons JGWeissman mentions: Drake Equation style methods are the canonical way of combining independent probabilities of failure to get the correct (multiplicative) penalty for a conjunction (although one should also remember that some important conclusions are disjunctive, and one needs to take all the disjuncts into account). The conjunction fallacy involves failing to multiply penalties in that way.
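As a rough illustration of the two combination rules being contrasted here (all probabilities invented for the example), a conjunctive conclusion multiplies its penalties, Drake Equation style, while a disjunctive conclusion is combined through the complement of the failure probabilities:

```python
# Invented probabilities, for illustration only.

# Conjunctive case: the outcome requires every step in a chain to hold,
# so the (multiplicative) penalties stack, Drake Equation style.
conjunctive_steps = [0.5, 0.3, 0.2, 0.1]
p_conjunction = 1.0
for p in conjunctive_steps:
    p_conjunction *= p
print(f"P(all steps hold)     = {p_conjunction:.4f}")      # 0.0030

# Disjunctive case: the conclusion follows if *any* of several independent
# routes goes through, so the routes combine via the complement rule.
disjunctive_routes = [0.05, 0.10, 0.20]
p_no_route = 1.0
for p in disjunctive_routes:
    p_no_route *= (1.0 - p)
print(f"P(at least one route) = {1.0 - p_no_route:.4f}")   # 0.3160
```

Treating a disjunctive conclusion as though it were conjunctive (or forgetting some of the disjuncts) understates it, which is the asymmetry flagged above.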
With respect to sacrifice/demandingness, that's pretty orthogonal to efficacy. Whether the most effective cause is contributing to saving local children in your own community, deworming African children, paying for cleaning asteroid-watcher telescopes, or researching AI risks, one could make large or small sacrifices. GiveWell-type organizations can act as "utility monsters," a never-ending maw to consume all luxuries, but even most efficient charity enthusiasts are able to deal with that adequately. One can buy fuzzies and utilons and personal indulgences separately, compensating for social weirdness of more effective causes by retaining more for other pursuits.
Regarding Pascal's Mugging, that involves numbers much more extreme than show up in the area of x-risk, by many orders of magnitude.
Intuitively, the position "it doesn't matter how well executed charity X's activities are; since charity X is an existential risk reduction charity, charity X trumps non-existential risk charities" is for me a reductio ad absurdum of adopting a conscious, explicit, single-minded focus on existential risk reduction.
Sure, most people are not unitary total utilitarians. I certainly am not one, although one of my motives is to avert "astronomical waste" style losses. But this is a bit of an irrelevant comparison: we have strong negative associations with weak execution, which are pretty well grounded, since one can usually find something trying to do the same task more efficiently. That applies to x-risk as well. The meaningful question is: "considering the best way to reduce existential risk I can find, including investing in the creation or identification of new opportunities and holding resources in hope of finding such in the future, do I prefer it to some charity that reduces existential risk less but displays more indicators of virtue and benefits current people in the near-term in conventional ways more?"
I feel right to restrict support to existential risk reduction opportunities that meet some minimal standard for "sufficiently well-conceived and compelling" well above that of multiplying the value of ensuring human survival by a crude guess as to the probability that a given intervention will succeed.
Does the NTI pass your tests? Why?
↑ comment by multifoliaterose · 2011-02-27T05:59:23.151Z · LW(p) · GW(p)
Very good question. I haven't looked closely at NTI; will let you know when I take a closer look.
↑ comment by multifoliaterose · 2011-02-27T22:55:54.029Z · LW(p) · GW(p)
Standard cost-benefit analysis on non-Venusian global warming involves (implicit or explicit) projections of climate sensitivity, technological change, economic and population growth, risks of nuclear war and other global catastrophic risks, economic damages of climate change, and more, 90 (!!!) years into the future or even centuries. There are huge areas where subjective estimates play big roles there.
Right, so maybe the reference to global warming was a bad example, because there too one is dealing with vast uncertainties. Note that global warming passes "test" (3) above.
Nonetheless, one can put reasonable probability distributions on these and conclude that there are low-hanging fruit worth plucking (as part of a big enough global x-risk reduction fund).
I'm curious about this.
Regarding social costs and being an "odd duck": note that Weitzman, in his widely celebrated article on uncertainty about seemingly implausible hard-to-analyze high-impact climate change, also calls for work on risks of AI and engineered pathogens as some of the handful of serious x-risks demanding attention.
Likewise judge-economist Richard Posner called for preliminary work on AI extinction risk in his book Catastrophe. Philosopher John Leslie in his book on human extinction discussed AI risk at length. Bill Gates went out of his way to mention it as a possibility.
These are pertinent examples but I think it's still fair to say that interest in reducing AI risks marks one as an odd duck at present and that this gives rise to an equilibrating force against successful work on preventing AI extinction risk (how large I don't know). I can imagine this changing in the near future.
Regarding Fermi calculations, the specific argument in the post is wrong for the reasons JGWeissman mentions:
I attempted to clarify in the comments.
With respect to sacrifice/demandingness, that's pretty orthogonal to efficacy
What I was trying to get at here is that, partly for social reasons and partly because the inherent open-endedness and uncertainty (spanning many orders of magnitude) is conducive to psychological instability, the minimum sacrifice needed to consciously and usefully reduce existential risk may be too great for people to work effectively on reducing it by design.
Regarding Pascal's Mugging, that involves numbers much more extreme than show up in the area of x-risk, by many orders of magnitude.
This is true for the Eliezer/Bostrom case study, but my intuition is that the same considerations apply. Even if the best estimates of the probability that Christianity is true aren't presently > 10^(-50), there was some point in the past when, in Europe, the best estimates of the probability that Christianity is true were higher than some of the probabilities that show up in the area of x-risk.
I guess I would say that humans are sufficiently bad at reasoning about small-probability events, when the estimates are not strongly data-driven, that acting on such estimates without somewhat independent arguments for the same action is likely a far mode failure. I'm particularly concerned about the availability heuristic here.
Sure, most people are not unitary total utilitarians.
I'm unclear on whether my intuition here comes from a deviation from unitary total utilitarianism or from, e.g., game-theoretic considerations that I don't understand explicitly but which are compatible with unitary total utilitarianism.
we have strong negative associations with weak execution, which are pretty well grounded, since one can usually find something trying to do the same task more efficiently.
Agree.
That applies to x-risk as well.
Except insofar as there's relatively little interest in x-risk and few organizations involved (again, not making a judgment about particular organizations here).
The meaningful question is: "considering the best way to reduce existential risk I can find, including investing in the creation or identification of new opportunities and holding resources in hope of finding such in the future, do I prefer it to some charity that reduces existential risk less but displays more indicators of virtue and benefits current people in the near-term in conventional ways more?"
My own intuition points me toward favoring a focus on existential risk reduction but I have uncertainty as to whether it's right (at least for me personally) because:
(i) I've found thinking about existential risk reduction destabilizing, on account of the poor quality of the information available and the multitude of relevant considerations. As Anna says in Making your explicit reasoning trustworthy:
Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they're currently pondering, or the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions.
(ii) Most of the people who I know are not in favor of near-term overt focus on existential risk reduction. I don't know whether this is because I have implicit knowledge that they don't have, because they have implicit knowledge that I don't have or because they're motivated to be opposed to such near-term overt focus for reasons unrelated to global welfare. I lean toward thinking that the situation is some combination of the latter two of the three. I'm quite confused about this matter.
↑ comment by timtyler · 2011-03-02T16:56:57.328Z · LW(p) · GW(p)
Most of the people who I know are not in favor of near-term overt focus on existential risk reduction. I don't know whether this is because I have implicit knowledge that they don't have, because they have implicit knowledge that I don't have or because they're motivated to be opposed to such near-term overt focus for reasons unrelated to global welfare. I lean toward thinking that the situation is some combination of the latter two of the three.
I think you would normally expect genuine concern about saving the world to be rare among evolved creatures. It is a problem that our ancestors rarely faced. It is also someone else's problem.
Saving the world may make sense as a superstimulus to the human desire for a grand cause, though. Humans are attracted to such causes for reasons that appear to be primarily to do with social signalling. I think a signalling perspective makes reasonable sense of the variation in the extent to which people are interested in the area.
comment by JGWeissman · 2011-02-27T04:50:03.978Z · LW(p) · GW(p)
As humans are systematically biased toward believing that conjunctions are more likely than their conjuncts (at least in certain settings),
The experimental evidence for the conjunction fallacy does not support "systematically". It looks to me like the effect comes from one of the conjoined events acting as an argument in favor of the other that the subjects had not considered when asked about it alone. If the experiment had asked for the probability that diplomatic relations break down between the Soviet Union and America and that there will be major flooding in California, I predict you would not see the effect. Really, I think that "conjunction fallacy" is a bad name, because it describes how the researchers can see that something must have gone wrong in the subjects' reasoning, but it doesn't describe what went wrong.
there's a strong possibility of exponentially overestimating probabilities in the course of Fermi calculations.
Fermi calculations are done explicitly to account for the conjunction of individual events in the correct way. This again is not supported by the experiment which did not involve the subjects using Fermi calculations to prevent errors in probability theory.
↑ comment by CarlShulman · 2011-02-27T05:14:30.937Z · LW(p) · GW(p)
there's a strong possibility of exponentially overestimating probabilities in the course of Fermi calculations.
Fermi calculations are done explicitly to account for the conjunction of individual events in the correct way. This again is not supported by the experiment which did not involve the subjects using Fermi calculations to prevent errors in probability theory.
One way is to apply an affective bias or motivated reasoning to each of several parameters, knowing in each case which direction would boost the favored conclusion. This makes more difference with more variables to mess with, each with multiplicative effect.
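A small illustration of the effect (parameters and ranges invented): give each factor in a Fermi estimate a plausible range, then compare an estimator who takes midpoints with one who, knowing which direction boosts the favored conclusion, takes the favorable end of every range:

```python
# Three parameters, each with an invented plausible range (low, high).
ranges = [(0.01, 0.10), (0.05, 0.50), (0.10, 0.60)]

midpoint_estimate = 1.0
motivated_estimate = 1.0
for lo, hi in ranges:
    midpoint_estimate *= (lo + hi) / 2
    motivated_estimate *= hi  # always pick the end that favors the conclusion

print(f"midpoint estimate:  {midpoint_estimate:.2e}")   # ~5.3e-03
print(f"motivated estimate: {motivated_estimate:.2e}")  # 3.0e-02
print(f"ratio: {motivated_estimate / midpoint_estimate:.1f}x")  # ~5.7x here
```

With only three parameters the gap is already several-fold; each additional parameter multiplies it further, which is why long Fermi chains are the most vulnerable.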
↑ comment by JGWeissman · 2011-02-27T05:21:24.442Z · LW(p) · GW(p)
Well, yes, if you start by writing on the bottom line the probability you want, and then fudge the individual events' probabilities to get that desired outcome, you totally ruin the point of Fermi calculations. But I wouldn't accuse anyone presenting a Fermi calculation of having done that unless there was some particular reason to suspect it in that case.
↑ comment by multifoliaterose · 2011-02-27T04:54:16.181Z · LW(p) · GW(p)
Fermi calculations are done explicitly to account for the conjunction of individual events in the correct way. This again is not supported by the experiment which did not involve the subjects using Fermi calculations to prevent errors in probability theory.
Yes, but sometimes one overestimates the probability of one of the given individual events on account of failing to recognize that it's implicitly a conjunction.
↑ comment by JGWeissman · 2011-02-27T05:07:38.665Z · LW(p) · GW(p)
Yes, but sometimes one overestimates the probability of one of the given individual events on account of failing to recognize that it's implicitly a conjunction.
First of all, that is a completely separate argument that does not rescue or excuse your previous invalid argument.
Secondly, absent evidence that the overwhelming majority of Fermi calculations have this mistake, you should be pointing to a specific event in a specific Fermi calculation that is assigned too high a probability because it is implicitly a conjunction, not attempting a fully general counterargument against all Fermi calculations.
↑ comment by multifoliaterose · 2011-02-27T05:58:04.380Z · LW(p) · GW(p)
First of all, that is a completely separate argument that does not rescue or excuse your previous invalid argument.
What I was trying to say is that getting any of the factors exponentially wrong greatly affects the outcome and that this can easily occur on account of a hidden conjunction.
Secondly, absent evidence that the overwhelming majority of Fermi calculations have this mistake, you should be pointing to a specific event in a specific Fermi calculation that is assigned too high a probability because it is implicitly a conjunction, not attempting a fully general counterargument against all Fermi calculations.
I was not attempting to give an incontrovertible argument; rather, I was raising some points for consideration. Nor was I attempting to give a fully general counterargument against all Fermi calculations; as I said above:
Note that I'm not casting doubt on the utility of Fermi calculations as a general matter
Most of the probabilities used in Fermi calculations about existential risk are unfalsifiable, which makes it difficult to point to an indisputable example of the phenomenon that I have in mind.
↑ comment by JGWeissman · 2011-02-27T06:19:27.363Z · LW(p) · GW(p)
Nor was I attempting to give a fully general counterargument against all Fermi calculations
You may not have been attempting to make a fully general counterargument, but you did in fact make an argument against Fermi calculations, without referring to any specific Fermi calculation, that fails to distinguish between good Fermi calculations and bad Fermi calculations.
comment by Wei Dai (Wei_Dai) · 2011-03-01T23:34:15.241Z · LW(p) · GW(p)
Multifoliaterose, if you weren't explicitly focused on existential risk reduction in the short term, what would you be doing instead?
comment by paulfchristiano · 2011-03-01T22:53:27.985Z · LW(p) · GW(p)
I am interested in the trade-off between directing funds/energy towards explicitly addressing existential risk and directing funds/energy towards education. In anything but the very near term, the number of altruistic, intelligent rationalists appears to be an extremely important determinant of prosperity, chance of survival, etc. There also appears to be a lot of low-hanging fruit, both in improving the rationality of exceptionally intelligent individuals and in increasing the number of moderately intelligent individuals who become exceptionally intelligent.
Right now, investment (especially of intelligent rationalists' time) in education seems much more valuable than direct investment in existential risk reduction.
Eliezer's assessment seems to be that the two projects have approximately balanced payoffs, so that spending time on either at the expense of the other is justified. Is this correct? How do other people here feel?
↑ comment by Wei Dai (Wei_Dai) · 2011-03-01T23:31:30.684Z · LW(p) · GW(p)
It seems to me that increasing the number of altruistic, intelligent rationalists via education is just a means of explicitly addressing existential risk, so your comment, while interesting, is not directly relevant to multifoliaterose's post.
↑ comment by paulfchristiano · 2011-03-02T00:03:18.355Z · LW(p) · GW(p)
The question in the post is whether we should direct our energies explicitly towards risk reduction. I suspect the answer may be irrelevant at the moment, because the best way to reduce existential risk in the long term and the best way to achieve our other goals may both be through education / outreach.
My uncertainty also bears on the question: should I donate to risk reduction charities? I question whether risk reduction charities are the best approach to reducing risk.
↑ comment by Wei Dai (Wei_Dai) · 2011-03-02T01:17:28.587Z · LW(p) · GW(p)
I suspect the answer may be irrelevant at the moment, because the best way to reduce existential risk in the long term and the best way to achieve our other goals may both be through education / outreach.
Possibly, but the people you want to target for education/outreach may depend on what you'd like them to eventually do, so it still seems useful to work that out first.
My uncertainty also bears on the question: should I donate to risk reduction charities? I question whether risk reduction charities are the best approach to reducing risk.
The people running such charities have surely already thought of the idea that education/outreach is currently the best way to reduce risk. For example, SIAI is apparently already spending almost all of its money and volunteer time on education and outreach (such as LW, Eliezer's rationality book, the visiting fellows program, the Singularity Summit).
↑ comment by paulfchristiano · 2011-03-02T02:24:07.079Z · LW(p) · GW(p)
The people running such charities have surely already thought of the idea that education/outreach is currently the best way to reduce risk. For example, SIAI is apparently already spending almost all of its money and volunteer time on education and outreach (such as LW, Eliezer's rationality book, the visiting fellows program, the Singularity Summit).
If you believe that education has a significant effect on existential risk, then charities not explicitly concerned with existential risk may nevertheless be more effectively mitigating it as a byproduct than, say, the SIAI. In particular, you shouldn't dismiss non risk-reducing charities out of hand because of a supposed difference of scale.
At face value, you should still expect someone with a reasonable ultimate goal to have a more effective focus. But this effect may be counteracted if there is a large difference in competence, or other mitigating factors such as social influence.
comment by James_Miller · 2011-02-27T05:34:50.341Z · LW(p) · GW(p)
I feel right to restrict support to existential risk reduction opportunities that meet some minimal standard for "sufficiently well-conceived and compelling" well above that of multiplying the value of ensuring human survival by a crude guess as to the probability that a given intervention will succeed.
You don't need a "crude guess" but rather a "crude lower-bound".
↑ comment by multifoliaterose · 2011-02-27T05:51:47.056Z · LW(p) · GW(p)
This is true.
comment by rwallace · 2011-02-27T14:49:35.481Z · LW(p) · GW(p)
The biggest problem with attempting to work on existential risk isn't the expenditure of resources. The problem is that our beliefs on the matter are - more or less necessarily - far mode. And the problem with far mode beliefs is that they have no connection to reality. To us smart, rational people it seems as though intelligence and rationality should be sufficient to draw true conclusions about reality itself even in the absence of immediate data, but when this is actually tried outside narrow technical domains like physics, the accuracy of the results is generally worse than random chance.
The one thing that protects us from the inaccuracy of far mode beliefs is compartmentalization. We believe all sorts of stories that have no connection to reality, but this is usually quite harmless because we instinctively keep them in a separate compartment, and only allow near mode beliefs (which are usually reasonably accurate, thanks to feedback from the real world) to influence our actions.
Unfortunately this can break down for intellectuals, because we import the idea of consistency - which certainly sounds like a good thing on the face of it - and sometimes allow it to persuade us to override our compartmentalization instincts, so that we start actually behaving in accordance with our far mode beliefs. The results range from a waste of resources (e.g. SETI) to horrific (e.g. 9/11).
I've come to the conclusion that the take-home message probably isn't so much "hold accurate far mode beliefs" - I have no evidence, after all, that this is even possible (and yes, I've looked hard, with strong motivation to find something). The take-home message, I now think, is that compartmentalization evolved for good reason, it is a vital safeguard, and whatever else we do we cannot afford to let it fail.
In short, the best advice I can give is, whatever you do, do it in a domain where there is some real world feedback on the benefits, a domain where you can make decisions on the basis of probably reasonably accurate near mode beliefs. Low-leverage benefit is better than high-leverage harm.
↑ comment by TheOtherDave · 2011-02-27T15:24:11.844Z · LW(p) · GW(p)
I agree!
I also disagree.
But the parts of me that disagree are kept separate in my mind from the parts that agree, so that's all right.
But parts of me endorse consistency, so I suppose it's only a matter of time before this ends badly.