# Cookies vs Existential Risk

post by FrankAdamek · 2009-08-30T03:56:31.701Z · LW · GW · Legacy · 23 comments


I've been thinking for a while now about the possible trade-offs between present recreation and small reductions in existential risk, and I've finally gotten around to a (consequentialist) utilitarian analysis.

**ETA:** Most of the similar mathematical treatments I've seen assume a sort of duty to unrealized people, such as Bostrom's "Astronomical Waste" paper. In addition to avoiding that assumption, my aim was to provide a more general formula for someone to use, in which they can enter differing beliefs and hypotheses. Lastly I include 3 examples using widely varying ideas, and explore the results.

Let's say that you've got a mind to make a batch of cookies. That action has a certain amount of utility, from the process itself and/or the delicious cookies. But it might lessen (or increase) the chances of you reducing existential risk, and hence affect the chance of existential disaster itself. Now if these cookies will help x-risk reduction efforts (networking!) and be enjoyable, the decision is an easy one. Same thing if they'll hurt your efforts and you hate making, eating, and giving away cookies. Any conflict arises when cookie making/eating is in opposition to x-risk reduction. If you were sufficiently egoist then risk of death would be comparable to existential disaster, and you should consider the two risks together. For readability I’ll refer simply to existential risk.

The question I'll attempt to answer is: what reduction in the probability of existential disaster makes refraining from an activity an equally good choice in terms of expected utility? If you think that by refraining and doing something else you would reduce the risk at least that much, then rationally you should pursue the alternative. If refraining would cut risk by less than this value, then head to the kitchen.

**ASSUMPTIONS**: For simplicity I'll treat existential disaster as an abstract singular event, which we’ll survive or not. If we do, it is assumed that we do so in a way such that there are no further x-risks. Further I'll assume the utility realized past that point is not dependent on the cookie-making decision in question, and that the utility realized before that point is not dependent on whether existential disaster will occur.

*The utility calculation is also unbounded*, as that is easier to specify. Those not seeking to approximate such a utility function can hopefully modify the treatment to serve their needs.

*E(U|cookies) = E(U|cookies, existential disaster) + U_post-risk-future • P(x-risk survival | cookies)*

*E(U|alternative) = E(U|alternative, existential disaster) + U_post-risk-future • P(x-risk survival | alternative)*

Setting these two expected utilities to be equal we get:

*E(U|cookies, existential disaster) - E(U|alternative, existential disaster) = U_post-risk-future • (P(x-risk survival | alternative) - P(x-risk survival | cookies))*

or

*ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future*

Where *ΔP(x-risk survival) = P(x-risk survival | alternative) - P(x-risk survival | cookies)*

and *ΔE(U|existential disaster) = E(U|cookies, existential disaster) - E(U|alternative, existential disaster)*

*I’m assuming both of these quantities are positive. Otherwise, there’s no conflict.*
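The break-even formula can be expressed as a small function for plugging in your own numbers. This is my own illustrative sketch; the function and argument names are not from the post:

```python
# Break-even ΔP(x-risk survival): the risk reduction at which refraining
# from the activity becomes exactly as good as doing it.
# Function and variable names are illustrative choices, not the post's.

def break_even_risk_reduction(eu_activity, eu_alternative, u_post_risk_future):
    """Return the ΔP(x-risk survival) at which the two choices tie.

    eu_activity: E(U | cookies, existential disaster)
    eu_alternative: E(U | alternative, existential disaster)
    u_post_risk_future: utility of the post-risk future
    """
    delta_eu = eu_activity - eu_alternative  # assumed positive, else no conflict
    return delta_eu / u_post_risk_future
```

If your alternative would cut risk by more than this returned value, pursue the alternative; otherwise, head to the kitchen.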

Now to determine the utilities:

The total is a sum over morally relevant entities *i* (assuming yourself as *i = 1*) of an integral over the time period of interest:

*U_post-risk-future = Σ_i ∫ base value_(utility/time) • s_i(t) • h_i(t) • D_i(t) dt*

*base value_(utility/time)* is a constant for normalizing to *ΔE(U|existential disaster)* and factors out of our ratio, but it can give us a scale of comparison. Obviously you should use the same time scale for the integral limits.

*s_i(t)* (range ≥ 0) is the multiplier for the change in subjective time due to faster cognition, *h_i(t)* (range = all real numbers) is the multiplier for the change in the *base value_(utility/time)*, and *D_i(t)* (0 ≤ range ≤ 1) is your discount function. All of these functions are with reference to each morally relevant entity *i*.

There are of course a variety of ways to do this kind of calculation. I felt the multiplication of a discount function with increases in both subjective time quantity and quality, integrated over the time period of interest and summed across conscious entities, was both general and intuitive.
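As a rough numerical sketch, the sum-and-integrate construction might look like this; the function name, the `(s, h, d)` triple representation, and the trapezoidal scheme are my own choices, not the post's:

```python
import math

# Numerical sketch of U_post-risk-future: summed over entities i, integrate
# base_value * s_i(t) * h_i(t) * D_i(t). Names and the trapezoidal scheme
# are mine, not the post's.

def post_risk_utility(base_value, entities, t_start, t_end, steps=100_000):
    """entities: iterable of (s, h, d) triples, each a function of time."""
    dt = (t_end - t_start) / steps
    total = 0.0
    for s, h, d in entities:
        f = lambda t, s=s, h=h, d=d: base_value * s(t) * h(t) * d(t)
        acc = 0.5 * (f(t_start) + f(t_end))          # trapezoidal rule
        acc += sum(f(t_start + k * dt) for k in range(1, steps))
        total += acc * dt
    return total

# One undiscounted entity with s = h = 1 over years 30..1000:
# post_risk_utility(1, [(lambda t: 1, lambda t: 1, lambda t: 1)], 30, 1000) ≈ 970
```

With a discount function like e^(-t/20) plugged in for d, this reproduces the discounted-years figures used in the examples below.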

There are far too many variables here to summarize all possibilities with examples, but I'll do a few, from both pure egoist and agent-neutral utilitarian perspectives (equal consideration of your and others' wellbeing). I'll assume the existential disaster would occur in 30 years, keeping in mind that the prior/common probability of disaster doesn't actually affect the calculation. I’ll also set most of the functions to constants to keep it straightforward.

**Static World**

Here we assume that life span does not increase, nor does cognitive speed or quality of life. You're contemplating making cookies, which will take 1 hour. The *base value_(utility/time)* of current life is 1 utility/hour; you expect to receive 2 extra utility by making cookies and will also obtain 1 utility/hour you live in a post-risk-future, which will be 175,200 hours over an assumed extra 20 years. For simplicity we'll assume no discounting, and start off with a pure egoist perspective. Then:

*ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 2/175,200 = 0.00114%*, which might be too much to expect from working for one hour instead.

For an agent-neutral utilitarian, we'll assume there's another 2 utility that others gain from your cookies. We'll include only the ≈6.7 billion currently existing people, who have a current world life expectancy of 67 years and average age of 28.4, which would give them each 75,336 utility over 8.6 years in a post-risk-future. Then:

*ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 4/(75,336 • 6,700,000,000) = 0.000000000000792%*. You can probably reduce existential risk this much with one hour of work, but then you’re probably not a pure agent-neutral utilitarian with no time discounting. I’m certainly not.
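For readers who want to check the arithmetic, here is the Static World example re-derived; all inputs are the post's own figures, the variable names are mine:

```python
# Re-deriving the Static World figures. Inputs (1 utility/hour, 2 extra
# utility from cookies, 20 post-risk years, 6.7e9 people with 8.6
# post-risk years each) come from the post itself.

HOURS_PER_YEAR = 24 * 365                    # 8,760

# Pure egoist:
u_future_egoist = 20 * HOURS_PER_YEAR        # 175,200 utility
dp_egoist = 2 / u_future_egoist              # ≈ 1.14e-5, i.e. 0.00114%

# Agent-neutral (4 total utility from the cookies):
u_per_person = 8.6 * HOURS_PER_YEAR          # 75,336 utility per person
dp_neutral = 4 / (u_per_person * 6.7e9)      # ≈ 7.9e-15, i.e. ~7.9e-13 %
```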

**Conservative Transhuman World**

In this world we’ll assume that people live about a thousand years, a little over 10 times conventional expectancy. We’ll also assume they think 10 times as fast and each subjective moment has 10 times higher utility. I’m taking that kind of increase from the hedonistic imperative idea, but you’d get the same effect by just thinking 100 times faster than we do now. Keeping it simple I’ll treat these improvements as happening instantaneously upon entering a post-risk-future. On a conscious level I don’t discount posthuman futures, but I’ll set *D_i(t) = e^(-t/20)* anyway. For those who want to check my math, the integral of that function from 30 to 1000 is 4.463.

Though I phrased the equations in terms of baked goods, they of course apply to any decision that trades greater existential risk for enjoyment. Let’s assume you’re able to forgo all pleasure now for the sake of the greatest future pleasure, through existential risk reduction. In our calculation, this course of action is “*alternative*”, and living like a person unaware of existential risk is “*cookies*”. Our *base value_(utility/time)* is an expected 1 utility/year of “normal” life (a very different scale from the last example), and your total focus would realize a flat 0 utility for those first 30 years. For a pure egoist:

*ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 30/446.26 = 6.72%*. This might be possible with 30 years of the total dedication we’re considering, especially with so few people working on this, but maybe it wouldn’t.

For our agent-neutral calculation, we’ll assume that your total focus on the large scale costs 5 utility for those who won’t have as much fun with the “next person” as they would have with you, less the amount you might uniquely improve the lives of those you meet while trying to save the world. If they all realize utility similar to yours in a post-risk world, then:

*ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 35/(446.26 • 6,700,000,000) = 0.00000000117%*. With 30 years of dedicated work this seems extremely feasible.

And if you hadn’t used a discount rate in this example, the *ΔP(x-risk survival)* required to justify those short-term self-sacrifices would be over 217 times smaller.
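The Conservative Transhuman numbers can be verified the same way; the inputs (10× speed, 10× quality, D(t) = e^(-t/20), years 30 to 1000) are the post's, the variable names mine:

```python
import math

# Verifying the Conservative Transhuman World arithmetic; inputs taken
# from the post (10x cognition speed, 10x moment quality, ~1000-year
# lives starting at year 30, D(t) = e^(-t/20)).

# Closed-form integral of e^(-t/20) from 30 to 1000:
discounted_years = 20 * (math.exp(-30 / 20) - math.exp(-1000 / 20))  # ≈ 4.463

u_future = 10 * 10 * discounted_years    # s = 10, h = 10 -> ≈ 446.26
dp_egoist = 30 / u_future                # ≈ 0.0672, i.e. 6.72%

# Without discounting, a flat 100 utility/year over years 30..1000:
u_future_flat = 100 * (1000 - 30)        # 97,000
ratio = u_future_flat / u_future         # ≈ 217, hence "over 217 times smaller"
```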

**Nick Bostrom’s Utopia**

Lastly I’ll consider the world described in Bostrom’s “Letter From Utopia”. We’ll use the same *base value_(utility/time)* of 1 utility/year of “normal” life as the last example. Bostrom writes from the perspective of your future self: “*And yet, what you had in your best moment is not close to what I have now – a beckoning scintilla at most. If the distance between base and apex for you is eight kilometers, then to reach my dwellings would take a million light-year ascent.*” Taken literally this translates to *h_i(t) = 1.183 • 10^18*. I won’t bother treating *s_i(t)* as more than unity; though likely to be greater, that seems like overkill for this calculation. We’ll assume people live till most stars burn out, approximately *10^14* years from now (if we find a way during that time to stop or meaningfully survive the entire heat death of the universe, it may be difficult to assign any finite bound to your utility). I’ll start by assuming no discount rate.

Assuming again that you’re considering focusing entirely on preventing existential risk, then *ΔP(x-risk survival) = ΔE(U|existential disaster) / U_post-risk-future = 30/(1.183 • 10^32) = 0.0000000000000000000000000000254%*. Even if you were almost completely paralyzed, able only to blink your eyes, you could pull this off. For an agent-neutral utilitarian, the change in existential risk could be about 7 billion times smaller and still justify such dedication. While I don’t believe in any kind of obligation to create new people, if our civilization did seed the galaxy with eudaimonic lives, you might sacrifice unnecessary daily pleasures for a reduction in risk 1,000,000,000,000,000,000,000 times smaller still. Even with the discount function specified in the last example, a pure egoist would still achieve the greatest expected utility or enjoyment from an extreme dedication that achieved an existential risk reduction of only *0.000000000000000568%*.
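These Utopia figures can be spot-checked numerically. The million-light-year versus 8 km ratio and the 10^14-year lifespan are the post's inputs; the metres-per-light-year constant and the variable names are mine:

```python
import math

# Spot-checking the Utopia figures. The million-light-year / 8 km ratio
# and the 1e14-year lifespan are the post's inputs; the metres-per-
# light-year constant and all names are mine.

LY_IN_M = 9.461e15                  # metres per light-year
h = (1e6 * LY_IN_M) / 8000          # ≈ 1.183e18
u_future = h * 1e14                 # no discounting, 1 utility/year baseline
dp_egoist = 30 / u_future           # ≈ 2.54e-31, i.e. ~2.54e-29 %

# With D(t) = e^(-t/20), essentially only ~4.46 discounted years remain:
u_disc = h * 20 * math.exp(-30 / 20)
dp_disc = 30 / u_disc               # ≈ 5.68e-18, i.e. ~5.68e-16 %
```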

**Summary**

The above are meant only as illustrative examples. As long as we maintain our freedom to improve and change, and do so wisely, I put high probability on post-risk-futures gravitating in the direction of Bostrom’s Utopia. But if you agree with or can tolerate my original assumptions, my intention is for you to play around, enter values you find plausible, and see whether or how much your beliefs justify short term enjoyment for its own sake.

Lastly, keep in mind that maximizing your ability to reduce existential risk almost certainly does not mean forgoing all enjoyment. For one thing, you'll have at least a *little* fun fighting existential risk. Secondly, we aren’t (yet) robots: we generally need breaks, *some* time to relax and rejuvenate, and some friendship to keep our morale up (not to mention staying stimulated, or even sane). Over time, habit-formation and other self-optimizations can reduce some of those needs, but that will only happen if you treat short term enjoyment as little more than an element of reducing existential risk (assuming your analysis suggests doing so). But everyone requires “balance”, by definition, and raw application of willpower won’t get you nearly far enough. It’s an exhaustible resource, and while it can carry you through several hours or a few days, it’s not going to carry you through several decades.

The absolute *worst* thing you could do, assuming once again that your analysis justifies a given short term sacrifice for greater long term gain, is to *give up*. If your resolve is about to fail, or already has, just take a break to *really relax*, for however long you honestly need (and you will *need* some time). Anticipating how effective you’ll be in different motivational states (which can’t be represented by a single number), and how best to balance motivation with direct application, is an incredibly complex problem that is difficult or impossible to quantify. Even the best solutions are approximations; people usually apply themselves too little and sometimes too much. But overshooting and suffering burnout provides no rational basis for throwing up your hands in desperation and calling it quits, at least for longer than you need to. To an extent we might not yet be able to imagine, someday billions or trillions of future persons, including yourself, may express gratitude that you didn’t.

## 23 comments

Comments sorted by top scores.

## comment by Johnicholas · 2009-08-30T12:50:31.964Z · LW(p) · GW(p)

As I perceive it, the structure of your argument is:

- How do utilitarians choose between options such as cookies and reduction of existential risk?

- An unreadably dense block of assumptions, mathematization of assumptions, mathematical reasoning, and interpretation of mathematical conclusions

- A reasonably clear conclusion - you feel that reduction of existential risk should be a high priority for many utilitarians.

First, there is no claim at the beginning of your article of what is novel or interesting about your analysis - why should the reader continue?

Second, in order for your argument to convince me (or even provoke substantive counterarguments), I need the middle block to be more readable. I'm not a particularly math-phobic reader, but each of your assumptions should have an informal reason for why you feel it is reasonable to make. Each of your mathematization steps (from english text to symbols) needs to be separate from the assumptions and each other. The mathematical operations don't need to be expanded (indeed, they should be as terse as possible without compromising verifiability), but the interpretation steps (from symbols back to english text) also need to be clear and separate from each other.

## ↑ comment by FrankAdamek · 2009-08-30T13:44:17.734Z · LW(p) · GW(p)

Thank you for the detailed criticism, I appreciate it. I've tried to improve some of the elements, though I don't see obvious improvements to most of the mathematical treatment and explanation; feel free to point out specific things. The values in the examples are somewhat arbitrary, meant to cover a wide spectrum of possibilities, and are placeholders for your own assumptions. As long as I've made the underlying variables sufficiently general and their explanations sufficiently clear, my hope is for this to be straightforward. The final result is the percent reduction in existential disaster that would have to be expected in order to justify sacrificing a recreation in order to work on existential risk reduction, which is noted way up in the 4th paragraph. Please let me know if there is something I can do beyond this interpretation.

## comment by MichaelVassar · 2009-09-01T00:20:21.978Z · LW(p) · GW(p)

I think that this isn't very good "decision theory for humans, a project I have been working on informally for years.
The best decision theory for humans for a particular person probably usually amounts to something very much like a virtue ethic, though not always the same virtue ethic across people.

Whatever you do, the more closely you adhere to a model rather than to tradition, the more confident you must be that your model is exactly correct, and "I am a unitary uncaused decision-making process" isn't very close.

## ↑ comment by SforSingularity · 2009-09-01T23:13:42.009Z · LW(p) · GW(p)

Michael, I'd like to hear more about virtue ethics as effective decision theories for humans.

## ↑ comment by anonym · 2009-09-01T02:58:52.661Z · LW(p) · GW(p)

> I think that this isn't very good "decision theory for humans, a project I have been working on informally for years.

FYI: if you mean that it isn't a good "decision theory for humans", which happens to be something you've been working on informally for years, you picked a very confusing way to say that.

## comment by RichardKennaway · 2009-08-30T14:34:41.885Z · LW(p) · GW(p)

I'm reminded of the proverb: "He who would be Pope must think of nothing else." So, "He who would save the world must think of nothing else."

## ↑ comment by Vladimir_Nesov · 2009-08-30T16:40:02.742Z · LW(p) · GW(p)

But since saving the world is outsourced to the "heroes", people have a rationalization for not worrying too much about this issue. After all, if you don't know the relevant math or even where to begin, it's beyond your personal abilities to proceed and actually learn the stuff. Have a cookie.

## ↑ comment by FrankAdamek · 2009-08-31T15:11:13.600Z · LW(p) · GW(p)

Though I'm taking rationalization in the pejorative sense, it does seem a real concern that people would think that way.

As for finding out where to begin, I've recently been in a lot of discussion with Anna Salamon, Carl Shulman and others about this, which has been very useful. If they are free and a person is interested in making a real effort, I'd expect they'd be very happy to discuss ideas and strategies. Anna suggested a thread to focus such a discussion, getting ideas from various LW readers, which should be going up instanter.

## comment by Angela · 2015-02-11T09:35:04.924Z · LW(p) · GW(p)

I can concentrate much better after I've spent time running around outdoors, watching sunsets or listening to good music. I do not believe that the pleasure of being outside is more important than my other goals, but when I force myself to stay indoors and spend more time working I become too moody to concentrate and I get less work done in total than I would if I had 'wasted' more time. Cookies are different though, because the tedium of baking them outweighs the pleasure of eating them.

## comment by rwallace · 2009-08-30T22:59:38.594Z · LW(p) · GW(p)

The biggest problem with trying to weigh the two is that most work aimed at "mitigating existential risks" focuses on imaginary ones and actually exacerbates the danger from the real ones, while most of the work that improves humanity's long-term chances of survival is actually done for purely short-term reasons. So it's a false dichotomy.

## ↑ comment by FrankAdamek · 2009-08-31T15:06:44.512Z · LW(p) · GW(p)

This is an interesting idea that seems worth my looking into. Do you have sources, links, etc.? It certainly could be helpful to draw attention to risk mitigation that is done for short term reasons; it might be easier to get people to work on.

## ↑ comment by rwallace · 2009-08-31T18:22:03.057Z · LW(p) · GW(p)

I don't have sources to hand, but here's a post I wrote about the negative side: http://lesswrong.com/lw/10n/why_safety_is_not_safe/

On the positive side, consider playing video games, an activity certainly carried out for short-term reasons, yet one of the major sources of funding for development of higher performance computers, an important ingredient in just about every kind of technological progress today.

Or consider how much research in medicine (another key long-term technology) is paid for by individual patients in the present day with the very immediate concern that they don't want to suffer and die right now.

## ↑ comment by FrankAdamek · 2009-09-02T04:09:19.977Z · LW(p) · GW(p)

I don't think lack of hardware progress is a major problem in avoiding existential disaster.

I read your post, but I don't see a reason that a lack of understanding for certain past events should bring us to devalue our current best estimates for ways to reduce danger. I wouldn't be remotely surprised if there are dangers we don't (yet?) understand, but why presume an unknown danger isn't localized in the same areas as known dangers? Keep in mind that reversed stupidity is not intelligence.

## ↑ comment by rwallace · 2009-09-02T11:47:44.385Z · LW(p) · GW(p)

Because it has empirically turned out not to be. Reversed stupidity is not intelligence, but it is avoidance of stupidity. When we know a particular source gives wrong answers, that doesn't tell us the right answers, but it does tell us what to avoid.

## comment by steven0461 · 2009-08-30T08:49:11.989Z · LW(p) · GW(p)

> But everyone requires “balance”, by definition, and raw application of willpower won’t get you nearly far enough. It’s an exhaustible resource, and while it can carry you through several hours or a few days, it’s not going to carry you through several decades.

And you just drained mine by talking about cookies so much. Would it kill you to say "celery" instead?!

## ↑ comment by FrankAdamek · 2009-08-30T13:12:18.213Z · LW(p) · GW(p)

I actually have a very rare verbo-medical condition, funny you should ask...

Did the mention of cookies get truly tiresome? Besides a cluster in one of the first paragraphs it doesn't seem bad to me, and I'm not sure how serious your comment is.

## ↑ comment by CarlShulman · 2009-08-31T00:10:18.452Z · LW(p) · GW(p)

Steven is referring to data showing that resisting the temptation to eat cookies drains willpower and mental acuity. However, I don't think the effect would be strong if there were no actual cookies nearby to tempt him.

## comment by pnrjulius · 2012-05-22T18:24:51.716Z · LW(p) · GW(p)

From your examples, the decision seems to be too sensitive to parameterization---i.e. if I vary slightly the different variables, I can come out with completely different results. Since I don't trust myself to have a precise enough estimate of even the present utility of baking cookies versus working for the Singularity Institute, I can't even begin to assign values to things as uncertain as the future population of transhuman society, the proper discount rate for events 10 billion years in the future, or the future rate of computation of a self-modifying AI.

Also, it's not clear to me that I CAN affect even these very tiny increments of probability, simply because I am a butterfly that doesn't know which way to flap its wings. Suppose I am paralyzed and can only blink; when should I blink? Could blinking now versus 10 seconds from now have some 10^-18 increment in probability for the singularity? Yes, I think so---but here's the rub: Which one is which? Is blinking now better, or worse, than blinking ten seconds from now? I simply don't have access to that information.

There's a third problem, perhaps the worst of all: Taken seriously, this would require us all to devote our full energy, 24/7/52, to working for the singularity. We are only allowed to eat and sleep insofar as it extends the time we may work. There is no allowance for leisure, no pleasure, no friendship, not even justice or peace in the ordinary sense---no allowance for a life actually worth living at all. We are made instruments, tools for the benefit of some possible future being who may in fact never exist. I think you just hit yourself with a Pascal's Mugging.

## comment by SforSingularity · 2009-08-31T02:32:45.539Z · LW(p) · GW(p)

I found this a bit dense; I think that you could say what you wanted to say more concisely and relegate the integrals, math etc to a footnote.

I think that the essential point is that the universe is a *damn* big place, full of energy and matter and ways to use it to produce gazillions of highly-optimized, pleasurable, worthwhile (post)human lives.

Therefore, if you are an aggregative consequentialist who sums up the utility of each individual life without really aggressive time-discounting, a small reduction in existential risk creates a massive increase in expected utility.

The summary is good. I'd like to hear a bit more analysis of what it actually feels like to have that weight on one's shoulders and how to deal with the scope insensitivty. Also, one should question the aggregative assumption. Does it really capture our intuition? How does this all relate to Pascal's Mugging?

I'm a fan of your blog too!

Replies from: FrankAdamek## ↑ comment by FrankAdamek · 2009-08-31T15:17:39.478Z · LW(p) · GW(p)

Huh, I had gained the impression my last posts were not mathematical enough. If so, at least it implies that I have the ability to strike a happy medium.

I'm very glad to hear of people finding the blog worthwhile, thanks for the thumbs up!

## comment by John_Maxwell (John_Maxwell_IV) · 2009-08-30T04:00:01.575Z · LW(p) · GW(p)

Might want to make the font size a bit bigger.