Exterminating life is rational

post by PhilGoetz · 2009-08-06T16:17:43.983Z · LW · GW · Legacy · 279 comments

Contents

  We can outrun the danger.
  Technology will stabilize in a safe state.
  People will stop having conflicts.
  Rational agents incorporate the benefits to others into their utility functions.
  Rational agents with long lifespans will protect the future for themselves.
  A benevolent singleton will save us all.
  In conclusion

Followup to This Failing Earth, Our society lacks good self-preservation mechanisms, Is short term planning in humans due to a short life or due to bias?

I don't mean that deciding to exterminate life is rational.  But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.

Ed Regis reports on p. 216 of “Great Mambo Chicken and the TransHuman Condition” (Penguin Books, London, 1992):

Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.

Was this a bad decision?  Well, consider the expected value to the people involved.  Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese.  The loss to them if they ignited the atmosphere would be another 30 or so years of life.  The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life.  The loss in being conquered would also be large.  Easy decision, really.

Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it.  Then our expected survival time is 100 times the sum from n=1 to infinity of np(1-p)^(n-1), which equals 100/p ≈ 33 million years.
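A minimal sketch of that arithmetic, assuming the 3-in-a-million risk is independent from century to century:

```python
# The number of centuries survived is geometrically distributed, so
# sum_{n>=1} n * p * (1-p)**(n-1) = 1/p.
p = 3e-6                         # assumed chance per century of ending all life
expected_years = 100 / p
print(f"{expected_years:,.0f}")  # 33,333,333 years
```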

This supposition seems reasonable to me.  There is a balance between offensive and defensive capability that shifts as technology develops.  If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed.  In the near future, biological weapons will be more able to wipe out life than we are able to defend against them.  We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.

If you believe in accelerating change, then the number of important events in a given time interval increases exponentially - or, equivalently, the time intervals offering equivalent opportunities for important events shrink exponentially.  The 33 million years remaining to life are then subjective years, and must be mapped into realtime.  If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of about 2000 more realtime years.  If we instead use Ray Kurzweil's doubling time of about 2 years, this gives life about 40 remaining realtime years.  (I don't recommend Ray's figure.  I'm just giving it for those who do.)
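A rough check of that mapping, as a minimal sketch; the continuous doubling model and the starting subjective/real ratio of 1 are assumptions, not figures from the text:

```python
import math

def realtime_needed(subjective_years, doubling_years):
    # If the subjective/real time ratio starts at 1 and doubles every
    # doubling_years real years, the subjective time accumulated after t real
    # years is the integral of 2**(s/d), i.e. (d / ln 2) * (2**(t/d) - 1).
    # Invert that to get the real time needed for a given subjective time.
    d = doubling_years
    return d * math.log2(1 + subjective_years * math.log(2) / d)

print(realtime_needed(33e6, 100))  # ~1800 real years, on the order of the 2000 above
print(realtime_needed(33e6, 2))    # ~47 real years, the same ballpark as the ~40 above
```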

Please understand that I am not yet another "prophet" bemoaning the foolishness of humanity.  Just the opposite:  I'm saying this is not something we will outgrow.  If anything, becoming more rational only makes our doom more certain.  For the agents who must actually make these decisions, it would be irrational not to take these risks.  The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.

I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:

  • We can outrun the danger.
  • Technology will stabilize in a safe state.
  • People will stop having conflicts.
  • Rational agents incorporate the benefits to others into their utility functions.
  • Rational agents with long lifespans will protect the future for themselves.
  • A benevolent singleton will save us all.

Let's look at these one by one:

We can outrun the danger.

We will colonize other planets; but we may also figure out how to make the Sun go nova on demand.  We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.

One problem with this idea is that apocalypses are correlated; one may trigger another.  A disease may spread to another planet.  The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet.  It's not clear that spreading out and increasing in population actually makes life safer.  Thinking in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.

But neither of those is my final objection.  More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.

Technology will stabilize in a safe state.

Maybe technology will stabilize, and we'll run out of things to discover.  If that were to happen, I would expect conflicts to increase, because people would get bored.  As I mentioned in another thread, one good explanation for the incessant and counterproductive wars of the Middle Ages - a reason some of the actors themselves gave in their writings - is that the nobility were bored.  They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.

But that's not my final rejection.  The big problem is that by "safe", I mean really, really safe.  We're talking about bringing existential threats to chances less than 1 in a million per century.  I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.

People will stop having conflicts.

That's a nice thought.  A lot of people - maybe the majority of people - believe that we are inevitably progressing along a path to less violence and greater peace.

They thought that just before World War I.  But that's not my final rejection.  Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts.  Those that avoid conflict will be out-competed by those that do not.

But that's not my final rejection either.  The bigger problem is that this isn't something that arises only in conflicts.  All we need are desires.  We're willing to tolerate risk to increase our utility.  For instance, we're willing to take some unknown, but clearly greater than one in a million chance, of the collapse of much of civilization due to climate warming.  In return for this risk, we can enjoy a better lifestyle now.

Also, we haven't burned all physics textbooks along with all physicists.  Yet I'm confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the earth to powder, if not to crack spacetime itself and undo the entire universe.  (In fact, I'd guess the chance is nearer to 1 in 10.)¹  We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods.  And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, balanced by roughly a 1% chance of destroying the Universe.
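A one-line version of that trade-off, as a minimal sketch assuming the agent's current utility U is positive, destruction of the Universe has utility zero, and the agent maximizes expected utility:

```python
# Accept a gamble with destruction probability q and a fractional utility gain
# g otherwise iff (1 - q) * (1 + g) * U >= U, i.e. iff q <= g / (1 + g).
g = 0.01
print(g / (1 + g))  # ~0.0099: a 1% gain balances just under a 1% chance of destruction
```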

The Wikipedia entry on Large Hadron Collider risk says, "In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole."  The more authoritative "Review of the Safety of LHC Collisions" by the LHC Safety Assessment Group concluded that there was at most a 1 in 10³¹ chance of destroying the Earth.

The LHC Safety Assessment Group's figures are criminally low.  Their evidence was this: "Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun - and the Sun still exists."  There followed a couple of sentences of handwaving to the effect that if any other stars had turned into black holes due to collisions with cosmic rays, we would know it - apparently due to our flawless ability to detect black holes and ascertain what caused them - and therefore we can multiply this figure by the number of stars in the universe.

I believe there is much more than a one-in-a-billion chance that our understanding of one of the steps used in arriving at these figures is incorrect.  Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance of a basic arithmetic error in their paper that no one has noticed yet.  My own estimate of the risk is more like one in a million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument.  (That's based on a belief that the prior for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)

The Trinity test was done for the sake of winning World War II.  But the LHC was turned on for... well, no practical advantage that I've heard of yet.  It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit.  And this is rational, since the LHC will probably improve our lives by more than one part in a million.

Rational agents incorporate the benefits to others into their utility functions.

"But," you say, "I wouldn't risk a 1% chance of destroying the universe for a 1% increase in my utility!"

Well... yes, you would, if you're a rational expectation maximizer.  It's possible that you would take a much higher risk, if your utility is at risk of going negative; it's not possible that you would refuse a 0.99% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.  (This seems difficult, but is worth exploring.)  If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience.  It doesn't.  It's a 1% increase in your utility.  If you factor the rest of your universe into your utility function, then it's already in there.

The US national debt should be enough to convince you that people act in their self-interest.  Even the most moral people - in fact, especially the "most moral" people - do not incorporate the benefits to others, especially future others, into their utility functions.  If we did that, we would engage in massive eugenics programs.  But eugenics is considered the greatest immorality.

But maybe they're just not as rational as you.  Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth.  Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would have if spent to repair cleft palates or distribute vaccines, mosquito nets, or water pumps in Africa.  Maybe it's really true that if, all in the same week, you met the girl of your dreams and she loved you, you won the lottery, put out an album that went platinum, and got published in Science, it would make an imperceptible change in your utility compared with a week in which everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.

It doesn't matter.  Because you would be adding up everyone else's utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.

But that will stop you from risking atmospheric ignition to defeat the Nazis, right?  Because you'll incorporate them into your utility function?  Well, that is a subset of the claim "People will stop having conflicts."  See above.

And even if you somehow worked around all these arguments, evolution, again, thwarts you.²  Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents.  The claim that rational agents are not selfish implies that rational agents are unfit.

Rational agents with long lifespans will protect the future for themselves.

The most familiar idea here is that, if people expect to live for millions of years, they will be "wiser" and take fewer risks with that time.  But the flip side is that they also have more time to lose.  If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.

Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods.  They may be less willing to take an existential risk for something that will benefit them only temporarily.  But benefits have a way of increasing, not decreasing, over time.  The discoveries of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th century.

But that's not my final rejection.  More important is time-discounting.  Agents will time-discount, probably exponentially, due to uncertainty.  If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn't even waste time trying to figure out what you wanted.  And, since future generations will be able to get more utility out of the same resources, we'd all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.

Time discounting is always (so far) exponential, because non-asymptotic functions don't make sense.  I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.

Could a continued exponential population explosion outweigh exponential time-discounting?  Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant.  (I leave the details as an exercise to the reader.)
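A minimal sketch of that exercise, assuming population can grow no faster than the volume swept out by a lightspeed expansion (roughly t cubed) while discounting stays exponential:

```python
# Polynomial growth (~t**3) loses to exponential discounting (delta**t):
# the discounted total over all future years converges.
delta = 0.99  # assumed per-year discount factor
print(sum(t**3 * delta**t for t in range(1, 100_000)))  # ~5.9e8 -- finite
```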

Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting.  You can't stay you forever.  If you change, the future you will be less like you, and weigh less strongly in your utility function.  Objections to this generally assume that it makes sense to trace your identity by following your physical body.  Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea.  And if you don't change, well, what's the point of living?

Evolutionary arguments may help us with self-discounting.  Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent's selfness function.  The major variable is reproduction rate over lifespan.  This applies to genes or memes.  But they can't help us with time-discounting.

I think there may be a way to make this one work.  I just haven't thought of it yet.

A benevolent singleton will save us all.

This case takes more analysis than I am willing to do right now.  My short answer is that I place a very low expected utility on singleton scenarios.  I would almost rather have the universe eat, drink, and be merry for 33 million years, and then die.

I'm not ready to place my faith in a singleton.  I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.

(Please don't conclude from my arguments that you should go out and create a singleton.  Creating a singleton is hard to undo.  It should be deferred nearly as long as possible.  Maybe we don't have 33 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)

In conclusion

I think that the figures I've given here are conservative.  I expect existential risk to be much greater than 3/1,000,000 per century.  I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose.  I expect population and technology to continue to increase, and existential risk to be proportional to population times technology.  Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.

Our greatest chance for survival is that there's some other possibility I haven't thought of yet.  Perhaps some of you will.

 

¹ If you argue that the laws of physics may turn out to make this impossible, you don't understand what "probability" means.

² Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures, which enable us to make predictions farther into the future and with greater confidence than seems intuitively reasonable.

279 comments

Comments sorted by top scores.

comment by PhilGoetz · 2009-08-06T21:09:06.602Z · LW(p) · GW(p)

Here's a possible problem with my analysis:

Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

How many cards do you draw?

I'm pretty sure that someone who believes in many worlds will keep drawing cards until they die. But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible possibility. I don't think that would help much anyway.)
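A minimal sketch of why, assuming the remaining 40% of cards are blanks, death sets utility to 0, and you start at utility 1:

```python
# Each planned draw multiplies expected utility by 0.1*0 + 0.5*2 + 0.4*1 = 1.4,
# while the chance of surviving it is only 0.9 -- so expected utility grows
# without bound even as the probability of ending up dead goes to 1.
for n in (1, 10, 100):
    print(n, 1.4 ** n, 1 - 0.9 ** n)  # draws planned, E[utility], P(dead by then)
```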

So this whole post may boil down to "maximizing expected utility" not actually being the right thing to do. Also see my earlier, equally unpopular post on why expectation maximization implies average utilitarianism. If you agree that average utilitarianism seems wrong, that's another piece of evidence that maximizing expected utility is wrong.

Replies from: HopeFox, Vladimir_Nesov, taw, conchis, None, army1987, Jonathan_Graehl, orthonormal, None, tut, Aurini, Alicorn, CannibalSmith
comment by HopeFox · 2011-05-17T10:47:38.643Z · LW(p) · GW(p)

"Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

Sorry if this question has already been answered (I've read the comments but probably didn't catch all of it), but...

I have a problem with "double your utility for the rest of your life". Are we talking about utilons per second? Or do you mean "double the utility of your life", or just "double your utility"? How does dying a couple of minutes later affect your utility? Do you get the entire (now doubled) utility for those few minutes? Do you get pro rata utility for those few minutes divided by your expected lifespan?

Related to this is the question of the utility penalty of dying. If your utility function includes benefits for other people, then your best bet is to draw cards until you die, because the benefits to the rest of the universe will massively outweigh the inevitability of your death.

If, on the other hand, death sets your utility to zero (presumably because your utility function is strictly only a function of your own experiences), then... yeah. If Omega really can double your utility every time you win, then I guess you keep drawing until you die. It's an absurd (but mathematically plausible) situation, so the absurd (but mathematically plausible) answer is correct. I guess.

comment by Vladimir_Nesov · 2009-08-06T22:46:31.483Z · LW(p) · GW(p)

Reformulation to weed out uninteresting objections: Omega knows the expected utility according to your preference if you go on without its intervention, U1, and your utility if it kills you, U0 < U1.

My answer: even in a deterministic world, I take the lottery as many times as Omega has to offer, knowing that the probability of death tends to certainty as I go on. This example is only invalid for money because of diminishing returns. If you really do possess the ability to double utility, low probability of positive outcome gets squashed by high utility of that outcome.

Replies from: TimFreeman, PhilGoetz
comment by TimFreeman · 2011-05-16T20:28:58.151Z · LW(p) · GW(p)

There's an excellent paper by Peter de Blanc indicating that under reasonable assumptions, if your utility function is unbounded, then you can't compute finite expected utilities. So if Omega can double your utility an unlimited number of times, you have other problems that cripple you in the absence of involvement from Omega. Doubling your utility should be a mathematical impossibility at some point.

That demolishes "Shut up and Multiply", IMO.

SIAI apparently paid Peter to produce that. It should get more attention here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-16T20:55:29.948Z · LW(p) · GW(p)

So if Omega can double your utility an unlimited number of times

This was not assumed, I even explicitly said things like "I take the lottery as many times as Omega has to offer" and "If you really do possess the ability to double utility". To the extent doubling of utility is actually provided (and no more), we should take the lottery.

Replies from: Larks
comment by Larks · 2011-05-16T21:06:13.888Z · LW(p) · GW(p)

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.

Replies from: TimFreeman
comment by TimFreeman · 2011-05-16T21:14:40.586Z · LW(p) · GW(p)

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply.

How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-16T21:35:45.472Z · LW(p) · GW(p)

The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Not so. There's also logical knowledge and logical decision-making where nothing ever changes and no new observations ever arrive, but the game still can be infinitely long, and contain all the essential parts, such as learning of new facts and determination of new decisions.

(This is of course not relevant to Peter's model, but if you want to look at the underlying questions, then these strange constructions apply.)

comment by PhilGoetz · 2009-08-06T23:09:55.818Z · LW(p) · GW(p)

Does my entire post boil down to this seeming paradox?

(Yes, I assume Omega can actually double utility.)

The use of U1 and U0 is needlessly confusing. And it changes the game, because now, U0 is a utility associated with a single draw, and the analysis of doing repeated draws will give different answers. There's also too much change in going from "you die" to "you get utility U0". There's some semantic trickiness there.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-07T00:37:56.561Z · LW(p) · GW(p)

Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T03:52:55.244Z · LW(p) · GW(p)

Well, that leaves me even less optimistic than before. As long as it's just me saying, "We have options A, B, and C, but I don't think any of them work," there are a thousand possible ways I could turn out to be wrong. But if it reduces to a math problem, and we can't figure out a way around that math problem, hope is harder.

comment by taw · 2009-08-07T00:20:59.655Z · LW(p) · GW(p)

Can utility go arbitrarily high? There are diminishing returns on almost every kind of good thing. I have difficulty imagining life with utility orders of magnitude higher than what we have now. Infinitely long youth might be worth a lot, but even that is only so many doublings due to discounting.

I'm curious why it's getting downvoted without reply. Related thread here. How high do you think "utility" can go?

Replies from: PhilGoetz, Wei_Dai
comment by PhilGoetz · 2009-08-07T14:53:14.477Z · LW(p) · GW(p)

I would guess you're being downvoted by someone who is frustrated not by you so much as by all the other people before you who keep bringing up diminishing returns even though the concept of "utility" was invented to get around that objection.

"Utility" is what you have after you've factored in diminishing returns.

We do have difficulty imagining orders of magnitude higher utility. That doesn't mean it's nonsensical. I think I have orders of magnitude higher utility than a microbe, and that the microbe can't understand that. One reason we develop mathematical models is that they let us work with things that we don't intuitively understand.

If you say "Utility can't go that high", you're also rejecting utility maximization. Just in a different way.

Replies from: taw
comment by taw · 2009-08-07T16:54:13.261Z · LW(p) · GW(p)

Nothing about utility maximization model says utility function is unbounded - the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0.

If the function is let's say U(x) = 1 - 1/(1+x), U'(x) = (x+1)^-2, then it's a properly behaving utility function, yet it never even reaches 1.

And utility maximization is just a model that breaks easily - it can be useful for humans to some limited extent, but we know humans break it all the time. Trying to imagine utilities orders of magnitude higher than current gets it way past its breaking point.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-08-07T17:35:44.599Z · LW(p) · GW(p)

Nothing about utility maximization model says utility function is unbounded

Yep.

the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0

Utility functions aren't necessarily over domains that allow their derivatives to be scalar, or even meaningful (my notional u.f., over 4D world-histories or something similar, sure isn't). Even if one is, or if you're holding fixed all but one (real-valued) of the parameters, this is far too strong a constraint for non-pathological behavior. E.g., most people's (notional) utility is presumably strictly decreasing in the number of times they're hit with a baseball bat, and non-monotonic in the amount of salt on their food.

comment by Wei Dai (Wei_Dai) · 2009-08-07T00:55:37.117Z · LW(p) · GW(p)

Can utility go arbitrarily high?

We could have a contest, where each contestant tries to describe a scenario that has the largest utility to a judge. I bet that after a few rounds of this, we'll converge on some scenario of maximum utility, no matter who the judge is.

Does this show that utility can't go arbitrarily high?

ETA: The above perhaps only shows the difficulty of not getting stuck in a local maximum. Maybe a better argument is that a human mind can only consider a finite subset of configuration space. The point in that subset with the largest utility must be the maximum utility for that mind.

comment by conchis · 2009-08-11T22:05:13.562Z · LW(p) · GW(p)

Sorry for coming late to this party. ;)

Much of this discussion seems to me to rest on a similar confusion to that evidenced in "Expectation maximization implies average utilitarianism".

As I just pointed out again, the vNM axioms merely imply that "rational" decisions can be represented as maximising the expectation of some function mapping world histories into the reals. This function is conventionally called a utility function. In this sense of "utility function", your preferences over gambles determine your utility (up to an affine transform), so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer".* Given standard assumptions about Omega, this pretty obviously means that you accept the offer.

The confusion seems to arise because there are other mappings from world histories into the reals that are also conventionally called utility functions, but which have nothing in particular to do with the vNM utility function. When we read "I'll double your utility" I think we intuitively parse the phrase as referring to one of these other utility functions, which is when problems start to ensue.

Maximising expected vNM utility is the right thing to do. But "maximise expected vNM utility" is not especially useful advice, because we have no access to our vNM utility function unless we already know our preferences (or can reasonably extrapolate them from preferences we do have access to). Maximising expected utilons is not necessarily the right thing to do. You can maximize any (potentially bounded!) positive monotonic transform of utilons and you'll still be "rational".

* There are sets of "rational" preferences for which such a statement could never be true (your preferences could be represented by a bounded utility function where doubling would go above the bound). If you had such preferences and Omega possessed the usual Omega-properties, then she would never claim to be able to double your utility: ergo the hypothetical implicitly rules out such preferences.

NB: I'm aware that I'm fudging a couple of things here, but they don't affect the point, and unfudging them seemed likely to be more confusing than helpful.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-11T22:16:47.874Z · LW(p) · GW(p)

so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer"

It's not that easy. As humans are not formally rational, the problem is about whether to bite this particular bullet, showing a form that following the decision procedure could take and asking if it's a good idea to adopt a decision procedure that forces such decisions. If you already accept the decision procedure, of course the problem becomes trivial.

Replies from: conchis
comment by conchis · 2009-08-11T23:16:53.354Z · LW(p) · GW(p)

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

The former doesn't force such decisions at all. That's precisely why I said that it's not useful advice: all it says is that you should take the gamble if you prefer to take the gamble.* (Moreover, if you did not prefer to take the gamble, the hypothetical doubling of vNM utility could never happen, so the set up already assumes you prefer the gamble. This seems to make the hypothetical not especially useful either.)

On the other hand "maximize expected utilons" does provide concrete advice. It's just that (AFAIK) there's no reason to listen to that advice unless you're risk-neutral over utilons. If you were sufficiently risk averse over utilons then a 50% chance of doubling them might not induce you to take the gamble, and nothing in the vNM axioms would say that you're behaving irrationally. The really interesting question then becomes whether there are other good reasons to have particular risk preferences with respect to utilons, but it's a question I've never heard a particularly good answer to.

* At least provided doing so would not result in an inconsistency in your preferences. [ETA: Actually, if your preferences are inconsistent, then they won't have a vNM utility representation, and Omega's claim that she will double your vNM utility can't actually mean anything. The set-up therefore seems to imply that you preferences are necessarily consistent. There sure seem to be a lot of surreptitious assumptions built in here!]

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-12T14:10:42.463Z · LW(p) · GW(p)

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

[...] you should take the gamble if you prefer to take the gamble

The "prefer" here isn't immediate. People have (internal) arguments about what should be done in what situations precisely because they don't know what they really prefer. There is an easy answer to go with the whim, but that's not preference people care about, and so we deliberate.

When all confusion is defeated, and the preference is laid out explicitly, as a decision procedure that just crunches numbers and produces a decision, that is by construction exactly the most preferable action, there is nothing to argue about. Argument is not a part of this form of decision procedure.

In real life, argument is an important part of any decision procedure, and it is the means by which we could select a decision procedure that doesn't involve argument. You look at the possible solutions produced by many tools, and judge which of them to implement. This makes the decision procedure different from the first kind.

One of the tools you consider may be a "utility maximization" thingy. You can't say that it's by definition the right decision procedure, as first you have to accept it as such through argument. And this applies not only to the particular choice of prior and utility, but also to the algorithm itself, to the possibility of representing your true preference in this form.

The "utilons" of the post linked above look different from the vN-M expected utility because their discussion involved argument, informal steps. This doesn't preclude the topic the argument is about, the "utilons", from being exactly the same (expected) utility values, approximated to suit more informal discussion. The difference is that the informal part of decision-making is considered as part of decision procedure in that post, unlike what happens with the formal tool itself (that is discussed there informally).

By considering the double-my-utility thought experiment, the following question can be considered: assuming that the best possible utility+prior are chosen within the expected utility maximization framework, do the decisions generated by the resulting procedure look satisfactory? That is, is this form of decision procedure adequate, as an ultimate solution, for all situations? The answer can be "no", which would mean that expected utility maximization isn't a way to go, or that you'd need to apply it differently to the problem.

Replies from: conchis
comment by conchis · 2009-08-12T16:02:27.191Z · LW(p) · GW(p)

I'm struggling to figure out whether we're actually disagreeing about anything here, and if so, what it is. I agree with most of what you've said, but can't quite see how it connects to the point I'm trying to make. It seems like we're somehow managing to talk past each other, but unfortunately I can't tell whether I'm missing your point, you're missing mine, or something else entirely. Let's try again... let me know if/when you think I'm going off the rails here.

If I understand you correctly, you want to evaluate a particular decision procedure "maximize expected utility" (MEU) by seeing whether the results it gives in this situation seem correct. (Is that right?)

My point was that the result given by MEU, and the evidence that this can provide, both depend crucially on what you mean by utility.

One possibility is that by utility, you mean vNM utility. In this case, MEU clearly says you should accept the offer. As a result, it's tempting to say that if you think accepting the offer would be a bad idea, then this provides evidence against MEU (or equivalently, since the vNM axioms imply MEU, that you think it's ok to violate the vNM axioms). The problem is that if you violate the vNM axioms, your choices will have no vNM utility representation, and Omega couldn't possibly promise to double your vNM utility, because there's no such thing. So for the hypothetical to make sense at all, we have to assume that your preferences conform to the vNM axioms. Moreover, because the vNM axioms necessarily imply MEU, the hypothetical also assumes MEU, and it therefore can't provide evidence either for or against it.*

If the hypothetical is going to be useful, then utility needs to mean something other than vNM utility. It could mean hedons, it could mean valutilons,** it could mean something else. I do think that responses to the hypothetical in these cases can provide useful evidence about the value of decision procedures such as "maximize expected hedons" (MEH) or "maximize expected valutilons" (MEV). My point on this score was simply that there is no particular reason to think that either MEH or MEV were likely to be an optimal decision procedure to begin with. They're certainly not implied by the vNM axioms, which require only that you should maximise the expectation of some (positive) monotonic transform of hedons or valutilons or whatever.*** [ETA: As a specific example, if you decide to maximize the expectation of a bounded concave function of hedons/valutilons, then even if hedons/valutilons are unbounded, you'll at some point stop taking bets to double your hedons/valutilons, but still be an expected vNM utility maximizer.]
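As a toy version of that ETA (my own example, not anything from the thread): take vNM utility V(h) = 1 - exp(-h) over unbounded hedons h, with V = 0 on death, and the card from the post (50% double h, 40% unchanged, 10% death):

```python
import math

V = lambda h: 1 - math.exp(-h)  # bounded, concave transform of hedons

def worth_drawing(h):
    # Draw iff expected V afterwards beats V now:
    # 0.5 * V(2h) + 0.4 * V(h) + 0.1 * 0 > V(h)
    return 0.5 * V(2 * h) + 0.4 * V(h) > V(h)

print(worth_drawing(1.0))  # True: doubling hedons still adds a lot of V
print(worth_drawing(2.0))  # False: past h = ln 5 the agent stops drawing,
                           # while still maximizing expected vNM utility
```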

Does that make sense?

* This also means that if you think MEU gives the "wrong" answer in this case, you've gotten confused somewhere - most likely about what it means to double vNM utility.

** I define these here as the output of a function that maps a specific, certain, world history (no gambles!) into the reals according to how well that particular world history measures up against my values. (Apologies for the proliferation of terminology - I'm trying to guard against the possibility that we're using "utilons" to mean different things without inadvertently ending up in a messy definitional argument. ;))

*** A corollary of this is that rejecting MEH or MEV does not constitute evidence against the vNM axioms.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-12T16:38:33.387Z · LW(p) · GW(p)

You are placing on a test the following well-defined tool: expected utility maximizer with a prior and "utility" function, that evaluates the events on the world. By "utility" function here I mean just some function, so you can drop the word "utility". Even if people can't represent their preference as expected some-function maximization, such tool could still be constructed. The question is whether such a tool can be made that always agrees with human preference.

An easy question is what happens when you use "hedons" or something else equally inadequate in the role of utility function: the tool starts to make decisions with which we disagree. Case closed. But maybe there are other settings under which the tool is in perfect agreement with human judgment (after reflection).

Utility-doubling thought experiment compares what is better according to the judgment of the tool (to take the card) with what is better according to the judgment of a person (maybe not take the card). As the tool's decision in this thought experiment is made invariant on the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection), proves that there is no way to set up "utility" and prior so that the "utility" maximization tool represents that person's preference.

Replies from: conchis
comment by conchis · 2009-08-12T18:08:16.950Z · LW(p) · GW(p)

As the tool's decision in this thought experiment is made invariant on the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection), proves that there is no way to set up "utility"

My argument is that, if Omega is offering to double vNM utility, the set-up of the thought experiment rules out the possibility that the decision could be wrong according to a person's considered preference (because the claim to be doubling vNM utility embodies an assumption about what a person's considered preference is). AFAICT, the thought experiment then amounts to asking: "If I should maximize expected utility, should I maximize expected utility?" Regardless of whether I should actually maximize expected utility or not, the correct answer to this question is still "yes". But the thought experiment is completely uninformative.

Do you understand my argument for this conclusion? (Fourth para of my previous comment.) If you do, can you point out where you think it goes astray? If you don't, could you tell me what part you don't understand so I can try to clarify my thinking?

On the other hand, if Omega is offering to double something other than vNM utility (hedons/valutilons/whatever) then I don't think we have any disagreement. (Do we? Do you disagree with anything I said in para 5 of my previous comment?)

My point is just that the thought experiment is underspecified unless we're clear about what the doubling applies to, and that people sometimes seem to shift back and forth between different meanings.

Replies from: PhilGoetz, Vladimir_Nesov
comment by PhilGoetz · 2009-08-12T18:33:13.671Z · LW(p) · GW(p)

What you just said seems correct.

What was originally at issue is whether we should act in ways that will eventually destroy ourselves.

I think the big-picture conclusion from what you just wrote is that, if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

However, in our circumstances, time discounting and "identity discounting" encourage us to start enjoying and dooming ourselves now; whereas it would probably be better to spread life to a few other galaxies first, and then enjoy ourselves.

(I admit that my use of the word "better" is problematic.)

Replies from: conchis
comment by conchis · 2009-08-13T09:15:03.061Z · LW(p) · GW(p)

if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

Well, I don't disagree with this, but I would still agree with it if you substituted "right" for "wrong", so it doesn't seem like much of a conclusion. ;)

Replies from: orthonormal
comment by orthonormal · 2009-08-15T21:27:23.323Z · LW(p) · GW(p)

it doesn't seem like much of a conclusion.

Moving back toward your ignorance prior on a topic can still increase your log-score if the hypothesis was concentrating probability mass in the wrong areas (failing to concentrate a substantial amount in a right area).

comment by Vladimir_Nesov · 2009-08-13T07:38:49.453Z · LW(p) · GW(p)

You argue that the thought experiment is trivial and doesn't solve any problems. In my comments above I described a specific setup that shows how to use (interpret) the thought experiment to potentially obtain non-trivial results.

Replies from: conchis
comment by conchis · 2009-08-13T08:52:01.728Z · LW(p) · GW(p)

I argue that the thought experiment is ambiguous, and that for a certain definition of utility (vNM utility), it is trivial and doesn't solve any problems. For this definition of utility I argue that your example doesn't work. You do not appear to have engaged with this argument, despite repeated requests to point out either where it goes wrong, or where it is unclear. If it goes wrong, I want to know why, but this conversation isn't really helping.

For other definitions of utility, I do not, and have never claimed that the thought experiment is trivial. In fact, I think it is very interesting.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-13T09:02:11.005Z · LW(p) · GW(p)

I argue that the thought experiment is ambiguous, and that for a certain definition of utility (vNM utility), it is trivial and doesn't solve any problems. For this definition of utility I argue that your example doesn't work.

If by "your example" you refer to the setup described in this comment, I don't understand what you are saying here. I don't use any "definition of utility", it's just a parameter of the tool.

Replies from: conchis
comment by conchis · 2009-08-13T09:10:50.094Z · LW(p) · GW(p)

it's just a parameter of the tool.

It's also an entity in the problem set-up. When Omega says "I'll double your utility", what is she offering to double? Without defining this, the problem isn't well-specified.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-13T09:19:31.500Z · LW(p) · GW(p)

Certainly, you need to resolve any underspecification. There are ways to do this usefully (or not).

Replies from: conchis
comment by conchis · 2009-08-13T09:22:08.917Z · LW(p) · GW(p)

Agreed. My point is simply that one particular (tempting) way of resolving the underspecification is non-useful. ;)

comment by [deleted] · 2009-08-06T22:48:13.753Z · LW(p) · GW(p)

It seems like you are assuming that the only effect of dying is that it brings your utility to 0. I agree that after you are dead your utility is 0, but before you are dead you have to die, and I think that is a strongly negative utility event. When I picture my utility playing this game, I think that if I start with X, then I draw a star and have 2X. Then I draw a skull, I look at the skull, my utility drops to -10000X as I shit my pants and beg Omega to let me live, and then he kills me and my utility is 0.

I don't know how much sense that makes mathematically. But it certainly feels to me like fear of death makes dying a more negative event than just a drop to utility 0.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T23:11:15.663Z · LW(p) · GW(p)

The skull cards are electrocuted, and will kill you instantly and painlessly as soon as you touch them.

(Be careful to touch only the cards you take.)

comment by A1987dM (army1987) · 2012-05-29T15:49:39.485Z · LW(p) · GW(p)

But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible possibility. I don't think that would help much anyway.)

Or unless your utility function is bounded above, and the utility you assign to the status quo is more than the average of the utility of dying straight away and the upper bound of your utility function, in which case Omega couldn't possibly double your utility. (Indeed, I can't think of any X right now such that I'd prefer {50% X, 10% I die right now, 40% business as usual} to {100% business as usual}.)

comment by Jonathan_Graehl · 2009-08-06T23:36:13.972Z · LW(p) · GW(p)

Assuming the utility increase holds my remaining lifespan constant, I'd draw a card every few years (if allowed). I don't claim to maximize "expected integral of happiness over time" by doing so (substitute utility for happiness if you like; but perhaps utility should be forward-looking and include expected happiness over time as just one of my values?). Of course, by supposing my utility can be doubled, I'll never be fully satisfied.

Replies from: dclayh
comment by dclayh · 2009-08-07T07:08:30.042Z · LW(p) · GW(p)

The "justified expectation of pleasant surprises", as someone or other said.

comment by orthonormal · 2009-08-06T21:25:50.795Z · LW(p) · GW(p)

I'd wondered why nobody brought up MWI and anthropic probabilities yet.

As for this, it reminds me of a Dutch book argument Eliezer discussed some time ago. His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.

In my case (assuming I'm quite confident in Many-Worlds), I might decide to take a card or two, go off and enjoy myself for a week, come back and take another card or two, et cetera.

Replies from: Vladimir_Nesov, PhilGoetz
comment by Vladimir_Nesov · 2009-08-06T22:42:46.239Z · LW(p) · GW(p)

Many-worlds has nothing to do with the validity of suicidal decisions. If you have an answer that maximizes expected utility but gives almost-certain probability of total failure, you still take it in a deterministic world. There is no magic by which the deterministic world declares that the decision-theoretic calculation is invalid in this particular case, while many-worlds lets it be.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T22:48:13.609Z · LW(p) · GW(p)

I think you're right. Would you agree that this is a problem with following the policy of maximizing expected utility? Or would you keep drawing cards?

Replies from: Cyan
comment by Cyan · 2009-08-06T23:13:28.806Z · LW(p) · GW(p)

This is a variant on the St. Petersburg paradox, innit? My preferred resolution is to assert that any realizable utility function is bounded.

Replies from: PhilGoetz, Vladimir_Nesov
comment by PhilGoetz · 2009-08-07T00:26:42.915Z · LW(p) · GW(p)

Thanks for the link - this is another form of the same paradox orthonormal linked to, yes. The Wikipedia page proposes numerous "solutions", but most of them just dodge the question by taking advantage of the fact that the paradox was posed using "ducats" instead of "utility". It seems like the notion of "utility" was invented in response to this paradox. If you pose it again using the word "utility", these "solutions" fail. The only possibly workable solution offered on that Wikipedia page is:

Rejection of mathematical expectation

Various authors, including Jean le Rond d'Alembert and John Maynard Keynes, have rejected maximization of expectation (even of utility) as a proper rule of conduct. Keynes, in particular, insisted that the relative risk of an alternative could be sufficiently high to reject it even were its expectation enormous.

Replies from: Cyan
comment by Cyan · 2009-08-07T00:30:06.362Z · LW(p) · GW(p)

The page notes the reformulation in terms of utility, which it terms the "super St. Petersburg paradox". (It doesn't have its own section, or I'd have linked directly to that.) I agree that there doesn't seem to be a workable solution -- my last refuge was just destroyed by Vladimir Nesov.

Replies from: Wei_Dai, John_Maxwell_IV
comment by Wei Dai (Wei_Dai) · 2009-08-07T01:38:39.863Z · LW(p) · GW(p)

I agree that there doesn't seem to be a workable solution -- my last refuge was just destroyed by Vladimir Nesov.

I'm afraid I don't understand the difficulty here. Let's assume that Omega can access any point in configuration space and make that the reality. Then either (A) at some point it runs out of things with which to entice you to draw another card, in which case your utility function is bounded, or (B) it never runs out of such things, in which case your utility function is unbounded.

Why is this so paradoxical again?

Replies from: PhilGoetz, Wei_Dai, Cyan
comment by PhilGoetz · 2009-08-07T04:00:32.211Z · LW(p) · GW(p)

If it's not paradoxical, how many cards would you draw?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-07T09:09:59.041Z · LW(p) · GW(p)

I guess no more than 10 cards. That's based on not being able to imagine a scenario such that I'd prefer .999 probability of death + .001 probability of scenario to the status quo. But it's just a guess, because Omega might have better imagination than I do, or understand my utility function better than I do.

Replies from: Eliezer_Yudkowsky, PhilGoetz
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-07T19:10:51.271Z · LW(p) · GW(p)

Omega offers you the healing of all the rest of Reality; every other sentient being will be preserved at what would otherwise be death and allowed to live and grow forever, and all unbearable suffering not already in your causal past will be prevented. You alone will die.

You wouldn't take a trustworthy 0.001 probability of that reward and a 0.999 probability of death, over the status quo? I would go for it so fast that there'd be speed lines on my quarks.

Really, this whole debate is just about people being told "X utilons" and interpreting utility as having diminishing marginal utility - I don't see any reason to suppose there's more to it than that.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-07T19:48:14.783Z · LW(p) · GW(p)

You alone will die.

There's no reason for Omega to kill me in the winning outcome...

You wouldn't take a trustworthy 0.001 probability of that reward and a 0.999 probability of death, over the status quo?

Well, I'm not as altruistic as you are. But there must be some positive X such that even you wouldn't take a trustworthy X probability of that reward and a 1-X probability of death, over the status quo, right? Suppose you've drawn enough cards to win this prize, what new prize can Omega offer you to entice you to draw another card?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-07T23:51:54.684Z · LW(p) · GW(p)

There's no reason for Omega to kill me in the winning outcome...

Omega's a bastard. So what?

Well, I'm not as altruistic as you are.

WHAT? Are you honestly sure you're THAT not as altruistic as I am?

But there must be some positive X such that even you wouldn't take a trustworthy X probability of that reward and a 1-X probability of death, over the status quo, right?

There's the problem of whether the scenario I described which involves a "forever" and "over all space" actually has infinite utility compared to increments in my own life which even if I would otherwise live forever would be over an infinitesimal fraction of all space, but if we fix that with a rather smaller prize that I would still accept, then yes of course.

Suppose you've drawn enough cards to win this prize, what new prize can Omega offer you to entice you to draw another card?

Heal this Reality plus another three?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-08T23:25:01.967Z · LW(p) · GW(p)

Omega's a bastard. So what?

That's fine, I just didn't know if that detail had some implication that I was missing.

WHAT? Are you honestly sure you're THAT not as altruistic as I am?

Yes, I'm pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I'm surprised that you're so surprised.

It seems that I had missed the earlier posts on bounded vs. unbounded utility functions. I'll follow up there to avoid retreading old ground.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-09T18:20:39.970Z · LW(p) · GW(p)

Yes, I'm pretty sure, although I leave open the possibility that I may encounter an argument in the future that would persuade me to change my mind. My understanding is that most people have preferences like mine, so I'm surprised that you're so surprised.

I'm shocked, and I hadn't thought that most people had preferences like yours - at least would not verbally express such preferences; their "real" preferences being a whole separate moral issue beyond that. I would have thought that it would be mainly psychopaths, the Rand-damaged, and a few unfortunate moral philosophers with mistaken metaethics, who would decline that offer.

I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend's funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?

Or if I'm asking the wrong questions here, then what is going on? I would expect most humans to instinctively feel that their whole tribe, to say nothing of the entire rest of reality, was worth something; and I would expect a rationalist to understand that if their own life does not literally have lexicographic priority (i.e., lives of others have infinitesimal=0 value in the utility function) then the multiplication factor here is overwhelming; and I would also expect you, Wei Dai, to not mistakenly believe that you were rationally forced to be lexicographically selfish regardless of your feelings... so I'm really not clear on what could be going on here.

I guess my most important question would be: Do you feel that way, or are you deciding that way? In the former case, I might just need to make a movie showing one individual after another being healed, and after you'd seen enough of them, you would agree - the visceral emotional force having become great enough. In the latter case I'm not sure what's going on.

PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-09T21:57:02.476Z · LW(p) · GW(p)

I guess I would follow up with these questions: (1) When you see someone else hurting, or attend a friend's funeral, do you feel sad; (2) are you more viscerally afraid of your own death than the strength of that emotion, if comparing two single cases; (3) do you decline to multiply out of a deliberate belief that all events after your own death ought to have zero utility to you, even if they feel sad when you think about them now; or (4) do you just generally want to leave the intuitive judgment (2) with its innate lack of multiplication undisturbed?

1: Yes. 2: Yes. 3: No. 4: I see a number of reasons not to do straight multiplication:

  • Straight multiplication leads to an absurd degree of unconcern for oneself, given that the number of potential people is astronomical. It means, for example, that you can't watch a movie for enjoyment, unless that somehow increases your productivity for saving the world. (In the least convenient world, watching movies uses up time without increasing productivity.)
  • No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).
  • My current position resembles the "Proximity argument" from Revisiting torture vs. dust specks:

Proximity argument: don't ask me to value strangers equally to friends and relatives. If each additional person matters 1% less than the previous one, then even an infinite number of people getting dust specks in their eyes adds up to a finite and not especially large amount of suffering.

This agrees with my intuitive judgment and also seems to have relatively few philosophical problems, compared to valuing everyone equally without any kind of discounting.
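
As a quick sanity check of the arithmetic behind that discounting (a sketch; the 1% rate and the unit "one speck = one unit of suffering" are just the illustrative assumptions from the quote):

```python
# Total discounted suffering if each additional person matters 1% less than the previous:
# the geometric series sum of 0.99**n is bounded by 1 / (1 - 0.99) = 100 "speck-units",
# no matter how many people are affected.
discount = 0.99
partial_sum = sum(discount ** n for n in range(10_000))
print(partial_sum)  # ~100, essentially at the closed-form limit 1 / (1 - discount)
```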

I guess my most important question would be: Do you feel that way, or are you deciding that way?

My last bullet above already answered this, but I'll repeat for clarification: it's both.

PS again: Would you accept a 60% probability of death in exchange for healing the rest of reality?

This should be clear from my answers above as well, but yes.

Replies from: cousin_it, conchis
comment by cousin_it · 2009-08-12T13:57:26.128Z · LW(p) · GW(p)

Oh, 'ello. Glad to see somebody still remembers the proximity argument. But it's adapted to our world where you generally cannot kill a million distant people to make one close relative happy. If we move to a world where Omegas regularly ask people difficult questions, a lot of people adopting proximity reasoning will cause a huge tragedy of the commons.

About Eliezer's question, I'd exchange my life for a reliable 0.001 chance of healing reality, because I can't imagine living meaningfully after being offered such a wager and refusing it. Can't imagine how I'd look other LW users in the eye, that's for sure.

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2009-08-13T09:23:26.889Z · LW(p) · GW(p)

Can't imagine how I'd look other LW users in the eye, that's for sure.

I publicly rejected the offer, and don't feel like a pariah here. I wonder what is the actual degree of altruism among LW users. Should we set up a poll and gather some evidence?

comment by Vladimir_Nesov · 2009-08-12T14:34:19.396Z · LW(p) · GW(p)

Cooperation is a different consideration from preference. You can prefer only to keep your own "body" in certain dynamics, no matter what happens to the rest of the world, and still benefit the most from, roughly speaking, helping other agents. Which can include occasional self-sacrifice a la counterfactual mugging.

comment by conchis · 2009-08-12T14:22:10.546Z · LW(p) · GW(p)

No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).

I'd be interested to know what you think of Critical-Level Utilitarianism and Population-Relative Betterness as ways of avoiding the repugnant conclusion and other problems.

comment by PhilGoetz · 2009-08-07T14:54:45.174Z · LW(p) · GW(p)

So does your answer change once you've drawn 10 cards and are still alive?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-07T18:16:56.263Z · LW(p) · GW(p)

No, if my guess is correct, then some time before I'm offered the 11th card, Omega will say "I can't double your utility again" or equivalently, "There is no prize I can offer you such that you'd prefer a .5 probability of it to keeping what you have."

comment by Wei Dai (Wei_Dai) · 2009-08-07T21:52:57.161Z · LW(p) · GW(p)

After further thought, I see that case (B) can be quite paradoxical. Consider Eliezer's utility function, which is supposedly unbounded as a function of how many years he lives. In other words, Omega can increase Eliezer's utility without bound just by giving him increasingly longer lives. Expected utility maximization then dictates that he keeps drawing cards one after another, even though he knows that by doing so, with probability 1 he won't live to enjoy his rewards.

Replies from: Vladimir_Nesov, Alicorn
comment by Vladimir_Nesov · 2009-08-07T22:16:11.961Z · LW(p) · GW(p)

When you go to infinity, you'd need to define additional mathematical structure that answers your question. You can't just conclude that the correct course of action is to keep drawing cards for eternity, doing nothing else. Even if at each moment the right action is to draw one more card, when you consider the overall strategy, the strategy of drawing cards for all time may be a wrong strategy.

For example, consider the following preference on infinite strings. A string has utility 0, unless it has the form 11111.....11112222...., that is, a finite number of 1s followed by an infinite number of 2s, in which case its utility is the number of 1s. Clearly, a string of this form with one more 1 has higher utility than the same string without it, and so adding one more 1 should be preferred. But a string consisting only of 1s doesn't have the non-zero-utility form, because it lacks the infinite tail of 2s. It's a fallacy to follow an incremental argument to infinity. Instead, one must follow a one-step argument that considers the infinite objects as a whole.
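
A minimal sketch of that example, encoding each candidate string by its (possibly infinite) number of leading 1s before the tail of 2s (the encoding is my own illustrative choice):

```python
import math

def utility(num_ones):
    """Utility of the string 11...1222... with `num_ones` leading 1s.

    Strings of that special form score their count of 1s; the all-1s string
    (num_ones = infinity) never reaches the tail of 2s, so it scores 0.
    """
    return 0 if math.isinf(num_ones) else num_ones

# Each incremental step looks like an improvement...
assert utility(5) > utility(4) and utility(1_000_001) > utility(1_000_000)
# ...but the "limit" of always taking one more step is worse than stopping anywhere.
assert utility(math.inf) < utility(1)
```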

Replies from: Nick_Tarleton, Nick_Tarleton, Wei_Dai, DanArmak
comment by Wei Dai (Wei_Dai) · 2009-08-07T22:35:09.390Z · LW(p) · GW(p)

What you say sounds reasonable, but I'm not sure how I can apply it in this example. Can you elaborate?

Consider Eliezer's choice of strategies at the beginning of the game. He can either stop after drawing n cards for some integer n, or draw an infinite number of cards. First, (supposing it takes 10 seconds to draw a card)

EU(draw an infinite number of cards) = 1/2 U(live 10 seconds) + 1/4 U(live 20 seconds) + 1/8 U(live 30 seconds) ...

which obviously converges to a small number. On the other hand, EU(stop after n+1 cards) > EU(stop after n cards) for all n. So what should he do?
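
For concreteness, a rough numerical check of the first claim, under the illustrative assumption U(live 10k seconds) = k (the convergence only needs U to grow more slowly than 2^k):

```python
# EU(draw forever): die on draw k with probability 2**-k, having lived 10*k seconds.
partial_sum = 0.0
for k in range(1, 60):
    partial_sum += 2 ** -k * k   # 2**-k chance of dying exactly on draw k, utility k
print(partial_sum)               # approaches 2: a small, finite expected utility
```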

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-07T23:46:07.564Z · LW(p) · GW(p)

This exposes a hole in the problem statement: what does Omega's prize measure? We determined that U0 is the utility of the counterfactual where Omega kills you and U1 is the utility of the counterfactual where it does nothing, but what is U2=U1+3*(U1-U0)? This seems to be the expected utility of the event where you draw the lucky card, in which case this event contains, in particular, your future decisions to continue drawing cards. But if so, it places a limit on how much your utility can be improved during the later rounds, since if your utility continues to increase, it contradicts the statement in the first round that your utility is going to be only U2, and no more. Utility can't change, as each utility is a valuation of a specific event in the sample space.

So, the alternative formulation that removes this contradiction is for Omega to only assert that the expected utility, given that you receive a lucky card, is no less than U2. In this case the right strategy seems to be to continue drawing cards indefinitely, since the utility you receive could lie in something other than your own life, which is now spent only drawing cards.

This, however, seems to sidestep the issue. What if the only utility you see is in your future actions, which don't include picking cards, and you can't interleave card-drawing with other actions - that is, you must allot a given block of time to picking cards?

You can recast the problem of choosing each of the infinite number of decisions (or of choosing one among all the available, in some sense infinite, sequences of decisions) as the problem of choosing a finite "seed" strategy for making decisions. Say only a finite number of strategies is available - for example, only what fits in the memory of the computer that starts the enterprise, which could be expanded after the start of the experiment, but the first version has a specified limit. In this case, the right program is as close to a Busy Beaver as you can get: you draw cards as long as possible, but only finitely long, and after that you stop and go on to enjoy the actual life.

comment by DanArmak · 2009-08-07T22:28:24.915Z · LW(p) · GW(p)

Why are you treating time as infinite? Surely it's finite, just taking unbounded values?

Even if at each moment the right action is to draw one more card, when you consider the overall strategy, the strategy of drawing cards for all time may be a wrong strategy.

But you're not asked to decide a strategy for all of time. You can change your decision at every round freely.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-07T23:26:16.188Z · LW(p) · GW(p)

But you're not asked to decide a strategy for all of time. You can change your decision at every round freely.

You can't change any fixed thing, you can only determine it. Change is a timeful concept. Change appears when you compare now and tomorrow, not when you compare the same thing with itself. You can't change the past, and you can't change the future. What you can change about the future is your plan for the future, or your knowledge: as time goes on, your idea about a fact in the now becomes a different idea tomorrow.

When you "change" your strategy, what you are really doing is changing your mind about what you're planning. The question you are trying to answer is what to actually do, what decisions to implement at each point. A strategy for all time is a generator of decisions at each given moment, an algorithm that runs and outputs a stream of decisions. If you know something about each particular decision, you can make a general statement about the whole stream. If you know that each next decision is going to be "accept" as opposed to "decline", you can prove that the resulting stream is equivalent to an infinite stream that only answers "accept", at all steps. And at the end, you have a process, the consequences of your decision-making algorithm consist in all of the decisions. You can't change that consequence, as the consequence is what actually happens, if you changed your mind about making a particular decision along the way, the effect of that change is already factored in in the resulting stream of actions.

The consequentialist preference is going to compare the effect of the whole infinite stream of potential decisions, and until you know about the finiteness of the future, the state space is going to contain elements corresponding to the infinite decision traces. In this state space, there is an infinite stream corresponding to one deciding to continue picking cards for eternity.

Replies from: DanArmak, PhilGoetz
comment by DanArmak · 2009-08-07T23:48:40.849Z · LW(p) · GW(p)

Thanks, I understand now.

comment by PhilGoetz · 2009-08-07T23:34:29.460Z · LW(p) · GW(p)

Whoa.

Is there something I can take that would help me understand that better?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-08T00:03:48.037Z · LW(p) · GW(p)

I'm more or less talking just about infinite streams, which is a well-known structure in math. You can try looking at the following references. Or find something else.

P. Cousot & R. Cousot (1992). `Inductive definitions, semantics and abstract interpretations'. In POPL '92: Proceedings of the 19th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pp. 83-94, New York, NY, USA. ACM. http://www.di.ens.fr/~cousot/COUSOTpapers/POPL92.shtml

J. J. M. M. Rutten (2003). `Behavioural differential equations: a coinductive calculus of streams, automata, and power series'. Theor. Comput. Sci. 308(1-3):1-53. http://www.cwi.nl/~janr/papers/files-of-papers/tcs308.pdf

comment by Alicorn · 2009-08-07T21:55:54.107Z · LW(p) · GW(p)

Does Omega's utility doubling cover the contents of the as-yet-untouched deck? It seems to me that it'd be pretty spiffy re: my utility function for the deck to have a reduced chance of killing me.

Replies from: randallsquared, Alicorn
comment by randallsquared · 2009-08-09T00:17:22.822Z · LW(p) · GW(p)

At first I thought this was pretty funny, but even if you were joking, it may actually map to the extinction problem, since each new technology has a chance of making extinction less likely, as well. As an example, nuclear technology had some probability of killing everyone, but also some probability of making Orion ships possible, allowing diaspora.

comment by Alicorn · 2009-08-11T19:09:42.037Z · LW(p) · GW(p)

While I'm gaming the system, my lifetime utility function (if I have one) could probably be doubled by giving me a reasonable suite of superpowers, some of which would let me identify the rest of the cards in the deck (X-ray vision, precog powers, etc.) or be protected from whatever mechanism the skull cards use to kill me (immunity to electricity or just straight-up invulnerability). Is it a stipulation of the scenario that nothing Omega does to tweak the utility function upon drawing a star affects the risks of drawing from the deck, directly or indirectly?

Replies from: orthonormal
comment by orthonormal · 2009-08-11T19:23:26.704Z · LW(p) · GW(p)

It should be, especially since the existential-risk problems that we're trying to model aren't known to come with superpowers or other such escape hatches.

comment by Cyan · 2009-08-07T02:14:38.457Z · LW(p) · GW(p)

Yeesh. I'm changing my mind again tonight. My only excuse is that I'm sick, so I'm not thinking as straight as I might.

I was originally thinking that Vladimir Nesov's reformulation showed that I would always accept Omega's wager. But now I see that at some point U1+3*(U1-U0) must exceed any upper bound (assuming I survive that long).

Given U1 (utility of refusing initial wager), U0 (utility of death), U_max, and U_n (utility of refusing wager n assuming you survive that long), it might be possible that there is a sequence of wagers that (i) offer positive expected utility at each step; (ii) asymptotically approach the upper bound if you survive; and (iii) have a probability of survival approaching zero. I confess I'm in no state to cope with the math necessary to give such a sequence or disprove its existence.

Replies from: pengvado, PhilGoetz
comment by pengvado · 2009-08-07T03:23:55.433Z · LW(p) · GW(p)

There is no such sequence. Proof:

In order for wager n to have nonnegative expected utility, we need P(death)*U0 + (1-P(death))*U_(n+1) >= U_n. Equivalently, P(death this time | survived until n) <= (U_(n+1) - U_n) / (U_(n+1) - U0).

Assume the worst case, equality. Then the cumulative probability of survival decreases by exactly the same factor as your utility (conditioned on survival) increases. This is simple multiplication, so it's true of a sequence of borderline wagers too.

With a bounded utility function, the cumulative survival probability can fall at most by the factor (U_max-U0)/(U1-U0), i.e. in total P(survival) >= (U1-U0)/(U_max-U0). Which is exactly what you'd expect.
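
A numerical check of the telescoping step (a sketch; the bounded, increasing utility values below are arbitrary illustrative choices):

```python
# Borderline wagers satisfy p_n = (U_(n+1) - U_n) / (U_(n+1) - U0), so each survival
# factor 1 - p_n equals (U_n - U0) / (U_(n+1) - U0) and the product telescopes.
U0 = 0.0
utilities = [1.0, 2.0, 3.5, 5.0, 7.0, 7.9, 8.0]   # U_1 ... U_max, bounded by 8
survival = 1.0
for u_n, u_next in zip(utilities, utilities[1:]):
    p_death = (u_next - u_n) / (u_next - U0)       # worst acceptable wager n
    survival *= 1 - p_death
print(survival, (utilities[0] - U0) / (utilities[-1] - U0))   # both 0.125
```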

Replies from: Cyan
comment by Cyan · 2009-08-07T04:55:42.931Z · LW(p) · GW(p)

When there's an infinite number of wagers, there can be a distinction between accepting the whole sequence at one go and accepting each wager one after another. (There's a paradox associated with this distinction, but I forget what it's called.) Your second-last sentence seems to be a conclusion about accepting the whole sequence at one go, but I'm worried about accepting each wager one after another. Is the distinction important here?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T05:02:32.220Z · LW(p) · GW(p)

there can be a distinction between accepting the whole sequence at one go and accepting each wager one after another.

Are you thinking of the Riemann series theorem? That doesn't apply when the payoff matrix for each bet is the same (and finite).

Replies from: Cyan, Douglas_Knight
comment by Cyan · 2009-08-07T23:07:10.927Z · LW(p) · GW(p)

No, it was this thing. I just couldn't articulate it.

comment by Douglas_Knight · 2009-08-07T06:49:37.578Z · LW(p) · GW(p)

A bounded utility function probably gets you out of all problems along those lines.

Certainly it's good in the particular case: your expected utility (in the appropriate sense) is an increasing function of the bets you accept, and bounded increasing sequences don't have convergence issues.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T14:56:55.261Z · LW(p) · GW(p)

How would you bound your utility function? Just pick some arbitrary function f that converges to a bound, and set utility' = f(utility)? That seems arbitrary. I suspect it might also make theorems about expectation maximization break down.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-08-07T15:26:29.118Z · LW(p) · GW(p)

No, I'm not advocating changing utility functions. I'm just saying that if your utility function is bounded, you don't have either of these problems with infinity. You don't have the convergence problem nor the original problem of probability of the good outcome going to zero. Of course, you still have the result that you keep making bets till your utility is maxed out with very low probability, which bothers some people.

comment by PhilGoetz · 2009-08-07T04:01:27.054Z · LW(p) · GW(p)

How would it help if this sequence existed?

Replies from: Cyan
comment by Cyan · 2009-08-07T04:40:13.158Z · LW(p) · GW(p)

If the sequence exists, then the paradox* persists even in the face of bounded utility functions. (Or possibly it already persists, as Vladimir Nesov argued and you agreed, but my cold-virus-addled wits aren't sharp enough to see it.)

* The paradox is that each wager has positive expected utility, but accepting all wagers leads to death almost surely.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T04:59:55.614Z · LW(p) · GW(p)

Ah. So you don't want the sequence to exist.

Replies from: Cyan
comment by Cyan · 2009-08-07T12:45:29.914Z · LW(p) · GW(p)

In the sense that if it exists, then it's a bullet I will bite.

comment by John_Maxwell (John_Maxwell_IV) · 2009-08-07T20:37:05.350Z · LW(p) · GW(p)

Why is rejection of mathematical expectation an unworkable solution?

This isn't the only scenario where straight expectation is problematic. Pascal's Mugging, timeless decision theory, and maximization of expected growth rate come to mind. That makes four.

In my opinion, LWers should not give expected utility maximization the same axiomatic status that they award consequentialism. Is this worth a top level post?

Replies from: Pfft, Cyan
comment by Pfft · 2009-08-11T02:48:52.491Z · LW(p) · GW(p)

This is exactly my take on it also.

There is a model which is standard in economics which says "people maximize expected utility; risk averseness arises because utility functions are concave". This has always struck me as extremely fishy, for two reasons: (a) it gives rise to paradoxes like this, and (b) it doesn't at all match what making a choice feels like for me: if someone offers me a risky bet, I feel inclined to reject it because it is risky, not because I have done some extensive integration of my utility function over all possible outcomes. So it seems a much safer assumption to just assume that people's preferences are a function of probability distributions over outcomes, rather than making the more restrictive assumption that that function has to arise as an integral over the utilities of individual outcomes.
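
For what it's worth, the standard model being described can be illustrated with a toy calculation (a sketch; log utility and the particular 50/50 bet are my own illustrative assumptions):

```python
import math

wealth = 100.0
u = math.log   # a concave utility function

# A fair 50/50 bet: win or lose 50 with equal probability.
expected_wealth = 0.5 * (wealth + 50) + 0.5 * (wealth - 50)     # = 100, same as not betting
expected_utility = 0.5 * u(wealth + 50) + 0.5 * u(wealth - 50)  # < u(100) by Jensen's inequality
print(expected_wealth, expected_utility, u(wealth))
# A maximizer of expected concave utility declines the bet even though it is fair
# in money terms - which is how this model generates "risk aversion".
```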

So why is the "expected utility" model so popular? A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won't work).

So an economist who wants to apply game theory will be inclined to assume that actors are maximizing expected utility; but we LWers shouldn't necessarily.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-08-11T18:01:53.347Z · LW(p) · GW(p)

There is a model which is standard in economics which say "people maximize expected utility; risk averseness arises because utility functions are convex".

Do you mean concave?

A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won't work).

Technically speaking, isn't maximizing expected utility a special case of having preferences about probability distributions about outcomes? So maybe you should instead say "does not work elegantly if they have arbitrary preferences about probability distributions."

This is what I tend to do when I'm having conversations in real life; let's see how it works online :-)

Replies from: Pfft, Vladimir_Nesov
comment by Pfft · 2009-08-12T17:55:45.589Z · LW(p) · GW(p)

Do you mean concave?

Yes, thanks. I've fixed it.

comment by Vladimir_Nesov · 2009-08-11T18:06:01.793Z · LW(p) · GW(p)

What does it mean, technically, to have a preference "about" probability distributions?

Replies from: Pfft, John_Maxwell_IV
comment by Pfft · 2009-08-12T17:50:30.219Z · LW(p) · GW(p)

I think I and John Maxwell IV mean the same thing, but here is the way I would phrase it. Suppose someone is offering to let me pick a ticket for one of a range of different lotteries. Each lottery offers the same set of prizes, but depending on which lottery I participate in, the probability of winning them is different.

I am an agent, and we assume I have a preference order on the lotteries -- e.g. which ticket I want the most, which ticket I want the least, and which tickets I am indifferent between. The action that will be rational for me to take depends on which ticket I want.

I am saying that a general theory of rational action should deal with arbitrary preference orders for the tickets. The more standard theory restricts attention to preference orders that arise from first assigning a utility value to each prize and then computing the expected utility for each ticket.

comment by John_Maxwell (John_Maxwell_IV) · 2009-08-11T20:47:00.748Z · LW(p) · GW(p)

Let's define an "experiment" as something that randomly changes an agent's utility based on some probability density function. An agent's "desire" for a given experiment is the amount of utility Y such that the agent is indifferent between the experiment occurring and having their utility changed by Y.

From Pfft we see that economists assume that for any given agent and any given experiment, the agent's desire for the experiment is equal to ∫ x f(x) dx, where x is an amount of utility and f(x) gives the probability density of the experiment's outcome changing the agent's utility by x. In other words, economists assume that agents desire experiments according to their expectation, which is not necessarily a good assumption.
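
A discrete sketch of the distinction (the outcome distribution and the alternative, risk-penalizing "desire" rule are hypothetical examples of mine):

```python
# An "experiment": a discrete distribution over changes x in the agent's utility.
outcomes = [(-10.0, 0.5), (+12.0, 0.5)]   # pairs (x, probability of x)

# The economists' assumption: desire = expectation = sum of x * f(x).
desire_expectation = sum(x * p for x, p in outcomes)          # = 1.0

# One of many possible non-expectation preferences: penalize spread as well.
mean = desire_expectation
variance = sum(p * (x - mean) ** 2 for x, p in outcomes)      # = 121.0
desire_risk_averse = mean - 0.1 * variance                    # = -11.1

print(desire_expectation, desire_risk_averse)
# Both rules use the same probabilities; only the first is forced to equal the expectation.
```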

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-11T21:05:28.318Z · LW(p) · GW(p)

Hmm... I hope you interpret your own words so that what you write comes out correct, your language is imprecise and at first I didn't see a way to read what you wrote that made sense.

When I reread your comment to which I asked my question with this new perspective, the question disappeared. By "preference about probability distributions" you simply mean preference over events, that doesn't necessarily satisfy expected utility axioms.

ETA: Note that in this case, there isn't necessarily a way of assigning (subjective) probabilities, as subjective probabilities follow from preferences, but only if the preferences are of the right form. Thus, saying that those not-expected-utility preferences are over probability distributions is more conceptually problematic than saying that they are over events. If you don't use probabilities in the decision algorithm, probabilities don't mean anything.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-08-11T23:51:44.977Z · LW(p) · GW(p)

Hmm... I hope you interpret your own words so that what you write comes out correct, your language is imprecise and at first I didn't see a way to read what you wrote that made sense.

I am eager to improve. Please give specific suggestions.

By "preference about probability distributions" you simply mean preference over events, that doesn't necessarily satisfy expected utility axioms.

Right.

Note that in this case, there isn't necessarily a way of assigning (subjective) probabilities, as subjective probabilities follow from preferences, but only if the preferences are of the right form.

Hm? I thought subjective probabilities followed from prior probabilities and observed evidence and stuff. What do preferences have to do with them?

Thus, saying that those not-expected-utility preferences are over probability distributions is more conceptually problematic than saying that they are over events.

Are you using my technical definition of event or the standard definition?

Probably I should not have redefined "event"; I now see that my use is nonstandard. Hopefully I can clarify things. Let's say I am going to roll a die and give you a number of dollars equal to the number of spots on the face left pointing upward. According to my (poorly chosen) use of the word "event", the process of rolling the die is an "event". According to what I suspect the standard definition is, the die landing with 4 spots face up would be an "event". To clear things up, I suggest that we refer to the rolling of the die as an "experiment", and 4 spots landing face up as an "outcome". I'm going to rewrite my comment with this new terminology. I'm also replacing "value" with "desire", for what it's worth.

If you don't use probabilities in the decision algorithm, probabilities don't mean anything.

The way I want to evaluate the desirability of an experiment is more complicated than simply computing its expected value. But I still use probabilities. I would not give Pascal's mugger any money. I would think very carefully about an experiment that had a 99% probability of getting me killed and a 1% probability of generating 101 times as much utility as I expect to generate in my lifetime, whereas a perfect expected utility maximizer would take this deal in an instant. Etc.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-12T00:08:26.503Z · LW(p) · GW(p)

Roughly speaking, an event is a set of alternative possibilities. So the whole roll of a die is an event (the set of all possible outcomes of a roll), and so are specific outcomes (sets that contain a single outcome). See probability space for a more detailed definition.

One way of defining prior and utility is just by first taking a preference over the events of sample space, and then choosing any pair prior+utility such that expected utility calculated from them induces the same order on events. Of course, the original order on events has to be "nice" in some sense for it to be possible to find prior+utility that have this property.
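
To make the objects concrete, here is the forward direction of that construction for the die-roll example from upthread (a sketch; the uniform prior and dollar-valued utility are my illustrative choices):

```python
# Sample space: the six outcomes of one die roll.
sample_space = {1, 2, 3, 4, 5, 6}
prior = {w: 1 / 6 for w in sample_space}       # a fixed prior over outcomes
utility = {w: float(w) for w in sample_space}  # win w dollars on outcome w

def expected_utility(event):
    """Expected utility conditional on the event (a set of outcomes) occurring."""
    p_event = sum(prior[w] for w in event)
    return sum(prior[w] * utility[w] for w in event) / p_event

# Events are sets of outcomes; a prior+utility pair induces an order on them.
even, high = {2, 4, 6}, {4, 5, 6}
print(expected_utility(even), expected_utility(high))   # 4.0 vs 5.0, so "high" ranks above "even"
```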

Any observations and updating consist in choosing what events you work with. Once prior is fixed, it never changes.

(Of course, you should read up on the subject in greater detail than I hint at.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-08-12T01:13:32.814Z · LW(p) · GW(p)

One way of defining prior and utility is just by first taking a preference over the events of sample space, and then choosing any pair prior+utility such that expected utility calculated from them induces the same order on events.

Um, isn't that obviously wrong? It sounds like you are suggesting that we say "I like playing blackjack better than playing the lottery, so I should choose a prior probability of winning each and a utility associated with winning each so that that preference will remain consistent when I switch from 'preference mode' to 'utilitarian mode'." Wouldn't it be better to choose the utilities of winning based on the prizes they give? And choose the priors for each based on studying the history of each game carefully?

Any observations and updating consist in choosing what events you work with. Once prior is fixed, it never changes.

Events are sets of outcomes, right? It sounds like you are suggesting that people update their probabilities by reshuffling which outcomes go with which events. Aren't events just a layer of formality over outcomes? Isn't real learning what happens when you change your estimations of the probabilities of outcomes, not when you reclassify them?

It almost seems to me as if we are talking past each other... I think I need a better background on this stuff. Can you recommend any books that explain probability for the layman? I already read a large section of one, but apparently it wasn't very good...

Although I do think there is a chance you are wrong. I see you mixing up outcome-desirability estimates with chance-of-outcome estimates, which seems obviously bad.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-12T01:40:25.006Z · LW(p) · GW(p)

If you don't want the choice of preference to turn out bad for you, choose good preference ;-) There is no freedom in choosing your preference, as the "choice" is itself a decision-concept, defined in terms of preference, and can't be a party to the definition of preference. When you are speaking of a particular choice of preference being bad or foolish, you are judging this choice from the reference frame of some other preference, while with preference as foundation of decision-making, you can't go through this step. It really is that arbitrary. See also: Priors as Mathematical Objects, Probability is Subjectively Objective.

You are confusing the probability space and its prior (the fundamental structure that binds the rest together) with random variables and their probability distributions (things that are based on the probability space and that "interact" with each other through their definition in terms of the common probability space, restricted to common events). Informally, when you update a random variable given evidence (event) X, it means that you recalculate the probability distribution of that variable based only on the remaining elements of the probability space within event X. Since this can often be done using other probability distributions of various variables lying around, you don't always see the probability space explicitly.

comment by Cyan · 2009-08-07T23:09:28.027Z · LW(p) · GW(p)

Why is rejection of mathematical expectation an unworkable solution?

Well, rejection's not a solution per se until you pick something justifiable to replace it with.

I'd be interested in a top-level post on the subject.

comment by Vladimir_Nesov · 2009-08-06T23:22:02.944Z · LW(p) · GW(p)

If this condition makes a difference to you, your answer must also be to take as many cards as Omega has to offer.

Replies from: Cyan
comment by Cyan · 2009-08-06T23:29:58.671Z · LW(p) · GW(p)

I don't follow.

(My assertion implies that Omega cannot double my utility indefinitely, so it's inconsistent with the problem as given.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-06T23:35:43.406Z · LW(p) · GW(p)

You'll just have to construct a less convenient possible world where Omega has merely a trillion cards and not an infinite number of them, and answer the question about taking a trillion cards, which, if you accept the lottery all the way, leaves you with 2-to-the-trillionth-power odds of dying. Find my reformulation of the topic problem here.

Replies from: PhilGoetz, Cyan
comment by PhilGoetz · 2009-08-07T00:27:40.354Z · LW(p) · GW(p)

Agreed.

comment by Cyan · 2009-08-07T00:24:19.390Z · LW(p) · GW(p)

Gotcha. Nice reformulation.

comment by PhilGoetz · 2009-08-06T22:34:32.691Z · LW(p) · GW(p)

His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.

Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?

Hmm... it appears not. So I don't think that helps us.

Where did you get the term "satisfice"? I just read that dutch-book post, and while Eliezer points out the flaw in demanding that the Bayesian take the infinite bet, I didn't see the word 'satisficing' in there anywhere.

Replies from: orthonormal, Jonathan_Graehl
comment by orthonormal · 2009-08-07T03:31:13.770Z · LW(p) · GW(p)

Huh, I must have "remembered" that term into the post. What I mean is more succinctly put in this comment.

Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?

Hmm... it appears not. So I don't think that helps us.

This question still confuses me, though; if it's a reasonable strategy to stop at N in the infinite case, but not a reasonable strategy to stop at N if there are only N^^^N iterations... something about it disturbs me, and I'm not sure that Eliezer's answer is actually a good patch for the St. Petersburg Paradox.

comment by Jonathan_Graehl · 2009-08-07T00:11:21.594Z · LW(p) · GW(p)

It's an old AI term meaning roughly "find a solution that isn't (likely) optimal, but good enough for some purpose, without too much effort". It implies that either your computer is too slow for it to be economical to find the true optimum under your models, or that you're too dumb to come up with the right models, thus the popularity of the idea in AI research.

You can be impressed if someone starts with a criterion for what "good enough" means, and then comes up with a method they can prove meets the criterion. Otherwise it's spin.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-08-07T04:50:32.078Z · LW(p) · GW(p)

I'm more used to it as a psychology (or behavioral econ) term for a specific, psychologically realistic form of bounded rationality. In particular, I'm used to it being negative! (that is, a heuristic which often degenerates into a bias)

comment by [deleted] · 2009-08-07T02:26:17.089Z · LW(p) · GW(p)

If I draw cards until I die, my expected utility is positive infinity. Though I will almost surely die and end up with utility 0, it is logically possible that I will never die, and end up with a utility of positive infinity. In this case, 1*0 + 0*(positive infinity) = positive infinity.

The next paragraph requires that you assume our initial utility is 1.

If you want, warp the problem into an isomorphic problem where the probabilities are different and all utilities are finite. (Isn't it cool how you can do that?) In the original problem, there's always a 5/6 chance of utility doubling and a 1/6 chance of it going to 1/2. (Being dead isn't THAT bad, I guess.) Let's say that where your utility function was U(w), it is now f(U(w)), where f(x) = 1 - 1/(2 + log_2 x). In this case, the utilities 1/2, 1, 2, 4, 8, 16, . . . become 0, 1/2, 2/3, 3/4, 4/5, 5/6, . . . . So, your initial utility is 1/2, and Omega will either lower your utility to 0 or raise it by applying the function U' = 1/(2 - U). Your expected utility after drawing once was previously U' = (5/3)*U + 1/12; it's now... okay, my math-stamina has run out. But if you calculate expected utility, and then calculate the probability that results in that expected utility, I'm betting that you'll end up with a 1/2 probability of ever dying.

(The nut of the above paragraph: any universe can be interpreted as one where the probabilities are different and the utility function has been changed to match... often, probably.)
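
A quick check of the proposed re-scaling (a sketch; it just verifies that f maps repeated doublings onto the bounded ladder 0, 1/2, 2/3, 3/4, ...):

```python
import math

f = lambda x: 1 - 1 / (2 + math.log2(x))      # the bounded re-scaling of utility

originals = [0.5, 1, 2, 4, 8, 16]
print([round(f(x), 4) for x in originals])    # [0.0, 0.5, 0.6667, 0.75, 0.8, 0.8333]

# Doubling the original utility corresponds, on the new scale, to U' = 1 / (2 - U):
U = f(4)
assert abs(f(2 * 4) - 1 / (2 - U)) < 1e-12
```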

comment by tut · 2009-08-07T07:53:49.809Z · LW(p) · GW(p)

I don't believe in quantifiable utility (and thus not in doubled utility), so I take no cards. But yeah, that looks like a way to make being a utilitarian equivalent to being suicidal.

comment by Aurini · 2009-08-06T22:56:07.560Z · LW(p) · GW(p)

This is completely off topic (and maybe I'm just not getting the joke), but does Many Worlds necessarily imply many human worlds? Star Trek tropes aside, I was under the impression that Many Worlds only mattered to gluons and Schrödinger's Cat - that us macro creatures are pretty much screwed.

...

You were joking, weren't you? I like jokes.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T23:14:01.936Z · LW(p) · GW(p)

"Many worlds" here is shorthand for "every time some event happens that has more than one possible outcome, for every possible outcome, there is (or comes into being) a world in which that was the outcome."

As far as the truth or falsity of Many Worlds mattering to us - I don't think it can matter, if you maximize expected utility (over the many worlds).

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-08T11:19:00.158Z · LW(p) · GW(p)

That is not what Many Worlds says. It is only about quantum outcomes, not "possible" outcomes.

comment by Alicorn · 2009-08-06T21:28:54.818Z · LW(p) · GW(p)

Double your utility for the rest of your life compared to what? If you draw cards until you die, that sounds like it just means you have twice as much fun drawing cards as you would have without help. I guess that could be lots of fun if you're the kind of person who gets a rush off of Russian roulette under normal circumstances, but if you're not, you'd probably be better off flipping off Omega and watching some TV.

What if your utility would have been negative? Doesn't doubling it make it twice as bad?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T21:48:12.124Z · LW(p) · GW(p)

Good point. Better not draw a card if you have negative utility.

Just trust that Omega can double your utility, for the sake of argument. If you stop before you die, you get all those doublings of utility for the rest of your life.

I'd certainly draw one card. But would I stop drawing cards?

Thinking about this in commonsense terms is misleading, because we can't imagine the difference between 8x utility and 16x utility. But we have a mathematical theory about rationality. Just apply that, and you find the results seem unsatisfactory.

Replies from: HopeFox
comment by HopeFox · 2011-05-29T11:47:37.778Z · LW(p) · GW(p)

Thinking about this in commonsense terms is misleading, because we can't imagine the difference between 8x utility and 16x utility

I can't even imagine doubling my utility once, if we're only talking about selfish preferences. If I understand vNM utility correctly, then a doubling of my personal utility is a situation which I'd be willing to accept a 50% chance of death in order to achieve (assuming that my utility is scaled so that U(dead) = 0, and without setting a constant level, we can't talk about doubling utility). Given my life at the moment (apartment with mortgage, two chronically ill girlfriends, decent job with unpleasantly long commute, moderate physical and mental health), and thinking about the best possible life I could have (volcano lair, catgirls), I wouldn't be willing to take that bet. Intuition has already failed me on this one. If Omega can really deliver on his promise, then either he's offering a lifestyle literally beyond my wildest dreams, or he's letting me include my preferences for other people in my utility function, in which case I'll probably have cured cancer by the tenth draw or so, and I'll run into the same breakdown of intuition after about seventy draws, by which time everyone else in the world should have their own volcano lairs and catgirls.

With the problem as stated, any finite number of draws is the rational choice, because the proposed utility of N draws outweighs the risk of death, no matter how high N is. The probability of death is always less than 1 for a finite number of draws. I don't think that considering the limit as N approaches infinity is valid, because every time you have to decide whether or not to draw a card, you've only drawn a finite number of cards so far. Certainty of death also occurs in the same limit as infinite utility, and infinite utility has its own problems, as discussed elsewhere in this thread. It might also leave you open to Pascal's Scam - give me $5 and I'll give you infinite utility!
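
Putting rough numbers on that (a sketch, assuming a 1/6 chance of death per draw, utility doubling per surviving draw, and a scale on which the status quo is 1 and death is 0):

```python
# For N draws: the survival probability (5/6)**N stays positive for every finite N,
# while the payoff conditional on surviving doubles each time, so expected utility
# (5/3)**N grows without bound even as survival becomes vanishingly unlikely.
for N in (1, 10, 50, 100):
    p_survive = (5 / 6) ** N
    expected_utility = p_survive * 2 ** N    # = (5/3)**N
    print(N, p_survive, expected_utility)
```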

But we have a mathematical theory about rationality. Just apply that, and you find the results seem unsatisfactory.

I agree - to keep drawing until you draw a skull seems wrong. However, to say that something "seems unsatisfactory" is a statement of intuition, not mathematics. Our intuition can't weigh the value of exponentially increasing utility against the cost of an exponentionally diminishing chance of survival, so it's no wonder that the mathematically derived answer doesn't sit well with intuition.

comment by CannibalSmith · 2009-08-08T08:58:03.964Z · LW(p) · GW(p)

This is known as the Deck of Many Things in the D&D community. What you do is make commoners each draw a card and rob those that survive.

Edit: Haha, disregard that, I suck cocks.

Edit, edit: Seriously though, I assign minus infinity to my death. Thus I never knowingly endanger myself. Thus I draw no cards. I also round tiny probabilities down to zero so I can go outside despite the risk of meteors.

comment by Alicorn · 2009-08-06T17:27:28.221Z · LW(p) · GW(p)

if you met the girl of your dreams and she loved you

cough

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2009-08-06T18:17:05.023Z · LW(p) · GW(p)

How about, "the thing of your dreams"?

Replies from: MichaelVassar
comment by MichaelVassar · 2009-08-07T15:12:53.355Z · LW(p) · GW(p)

What's that, Precious? If you found The Precious and loved it. Much tastier.

comment by PhilGoetz · 2009-08-06T17:31:12.714Z · LW(p) · GW(p)

Yeah, I should always have to say "man|woman he|she". Because my writing just isn't wordy enough yet.

How about "the thing of your dreams"?

Replies from: Alicorn
comment by Alicorn · 2009-08-06T17:33:34.998Z · LW(p) · GW(p)

Thing? That's the best you can do? "Person".

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T18:59:36.324Z · LW(p) · GW(p)

I'm shocked at your insensitivity towards lovers of animals and inanimate objects.

Replies from: homunq
comment by homunq · 2009-08-07T10:17:54.226Z · LW(p) · GW(p)

You can love them, but they can't love you in the same way a person can. (Obviously animals can feel love under any reasonable definition, but they can't act on that love in the same way a person can).

But really this comment is to note that, as I write, Alicorn got 3 karma for helpfully pointing out (yet) an(other) instance of harmful bias, while PhilGoetz got 4 karma for a relatively flip answer. (Not that his answer is stupid, just that I think we could all come up with such clever quibbles in response to most of what anybody said ever, and this would clearly not be productive overall, therefore I'd argue that the quibbles are mainly used to signal "I'm not really taking you seriously".)

Replies from: cousin_it
comment by cousin_it · 2009-08-07T10:20:38.003Z · LW(p) · GW(p)

For what it's worth, I downvoted Alicorn's comment when it was at 7 because I didn't want to see yet another gender war at the top of a potentially interesting comment page. Honestly, at this point I wish she would stop doing what she's doing: it's more painful to my perception of LW than any hidden gender bias Phil may have overlooked in the post.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T20:37:34.619Z · LW(p) · GW(p)

I was aware that I was writing a gender-biased description when I wrote the original post. I decided to write it from one gender's point of view, and trust the reader to interpret it intelligently. I find gender-neutral text is usually stilted and distracting.

Replies from: DanArmak, Alicorn
comment by DanArmak · 2009-08-07T21:41:35.299Z · LW(p) · GW(p)

Whatever gender-neutral language may usually be like, in this case I don't think the correction to 'person' is very stilted or distracting, IMO. (Would be even better to replace 'he or she or it' with 'they', but I realize some people dislike this style.) There were also other possible modifications to the text.

Assuming you agree (since you changed your post) - part of the problem is that even in a case where a good solution was relatively easily available, you didn't look for it, even though you knew your phrasing might be offensive to some readers (or distracting or whatever you choose to call it). This implies, to those readers who are distracted by your phrasing and spend a few seconds thinking about the issue, that you didn't bother not to give offense. And that's what (some of them may be) really offended at, I think. Continuing this, your comment implies that any reader who takes offense is behaving "unintelligently".

While you say,

I decided to write it from one gender's point of view, and trust the reader to interpret it intelligently.

What you mean is, you trust readers of the "wrong" gender to interpret. Readers who are "like you" in this aspect, which ought to be completely irrelevant to the discussion at hand, don't need to interpret anything at all. And gender-biased text distracts the "wrong" readers a lot more than most gender-neutral text distracts you or most other readers.

While "intelligently" means here "whatever I meant even if the text I write doesn't express it well". I wish this kind of communication worked. But it doesn't. When people repeatedly tell you that some not-quite-literal turn of phrase you're using is misinterpreted compared to what you mean, I think you should stop using it.

Replies from: PhilGoetz, Alicorn
comment by PhilGoetz · 2009-08-07T21:59:09.783Z · LW(p) · GW(p)

Dude. I could have used "person", but would be left with a "he or she". Stilted.

Assuming you agree (since you changed your post)

I didn't realize that changing my post so as not to offend someone implies I agree with them. I will change it back.

you didn't look for it

I didn't? Funny, I thought I did. But I guess you know better.

What you mean is, you trust readers of the "wrong" gender to interpret.

Maybe I am a better source on what I mean than your malicious imagination.

If you look over previous things I've written, you'll see that sometimes I say "he", and sometimes I say "she". I have been conscious of every single time I wrote "he" or "she" probably since before you were born. But I write one post, over 3000 words long, in which I have exactly one case of gendered speech, and the coin flip comes up so that I write "he" instead of she, and you're all over me for being an insensitive sexist pig.

If all that my 20+ years of carefully writing gender-balanced text has done is to encourage people like you to feel entitled to lecture me from your moral high horse on any occasion when I don't measure up completely to your standards, then I'm done being gender-neutral. Apparently it just makes things worse.

I'm sorry that I originally replied flippantly. This whole exchange wouldn't have happened if I'd just quietly changed the text.

Let's hear from other readers. 2 readers are offended by non-gender-neutral language. If any of you think that authors should be allowed to use gendered language, let him or her speak, or forever hold his or her peace.

Replies from: DanArmak, AlanCrowe, SilasBarta, Alicorn, None
comment by DanArmak · 2009-08-07T22:21:49.028Z · LW(p) · GW(p)

Maybe I am a better source on what I mean than your malicious imagination.

My comment was precisely about the fact that people can misunderstand what you actually mean because your words are open to another interpretation.

I hope my imagination isn't particularly malicious (though as befits this site I won't assume such a thing). I intended to comment not about your actual meaning but about the way others, like Alicorn, appear to perceive it.

As for the part about "you trust readers of the "wrong" gender to interpret", I'm sure you didn't mean to think about only some readers; in fact you didn't think about only some of the readers. I was talking about the separate fact that hetero-male readers wouldn't need to interpret your words in any but the literal way.

Please, let others comment. Even if there's no consensus it's better to reach a status quo to avoid hashing this out again every few days. (Going by what I've read in LW before I started commenting.)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T22:33:30.692Z · LW(p) · GW(p)

My comment was precisely about the fact that people can misunderstand what you actually mean because your words are open to another interpretation.

No; your comment claimed to know what I think and what I mean.

in fact you didn't think about only some of the readers.

You're doing it again! And you're wrong, again.

Replies from: DanArmak
comment by DanArmak · 2009-08-07T22:42:00.833Z · LW(p) · GW(p)

First I said that you said, or your words implied, that you thought only about some of the readers. And you said I was wrong:

What you mean is, you trust readers of the "wrong" gender to interpret. Maybe I am a better source on what I mean than your malicious imagination.

Then I said, OK, I believe you, you did think about all of the readers. And you say I'm wrong again:

in fact you didn't think about only some of the readers. You're doing it again! And you're wrong, again.

Now I'm just confused. Possibly it's my mistake/misunderstanding.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T22:55:10.158Z · LW(p) · GW(p)

Then I said, OK, I believe you, you did think about all of the readers. And you say I'm wrong again:

Sorry. I parsed your sentence to mean something else.

This is my karmic payback for things I said to Eliezer.

comment by AlanCrowe · 2009-08-07T23:34:33.033Z · LW(p) · GW(p)

I find gender-neutral language nit-picking off-putting. This thread persuades me that LessWrong is a waste of my time and I should stay away.

Replies from: homunq
comment by homunq · 2010-07-18T22:34:01.674Z · LW(p) · GW(p)

That is valid logic if you're looking for pleasure from LessWrong. It is not valid if you are interested in being less wrong.

comment by SilasBarta · 2009-08-07T23:40:37.549Z · LW(p) · GW(p)

lrn 2 they

comment by Alicorn · 2009-08-07T22:20:30.412Z · LW(p) · GW(p)

If all that my 20+ years of carefully writing gender-balanced text has done is to encourage people like you to feel entitled to lecture me from your moral high horse on any occasion when I don't measure up completely to your standards, then I'm done being gender-neutral.

Does that mean that you are only gender neutral because you like approval from the gender neutrality cops, and if they stop approving of you, you have no reason to continue to pursue/improve your decades-long policy of trying to do the right thing?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T22:36:46.127Z · LW(p) · GW(p)

It means that being gender neutral has encouraged you to feel like you have the right to tell other people how to write, and to look down on anyone who uses the word "he".

Replies from: Alicorn
comment by Alicorn · 2009-08-07T22:41:05.722Z · LW(p) · GW(p)

I would not have objected to your use of a male-specific phrase if you had not written in the second person. I'd be willing to take your word for it that your choice was random and I wouldn't care - if it were about some hypothetical person who was male. It was about a "you" addressed in the post, and I, as a reader, was therefore excluded.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T22:59:03.859Z · LW(p) · GW(p)

I can understand that a little better.

I'd like to delete this conversation from Less Wrong. I'd rather have done this by email. Nobody else seems to be reading it anyway. You can reach me at @yahoo.

In my experience, disagreements get more heated when done in public posts than in private emails.

Replies from: Alicorn
comment by Alicorn · 2009-08-08T16:22:30.244Z · LW(p) · GW(p)

I don't like to delete things that have gone on for this long. In the future, you could PM people who make comments you'd like to reply to but think may develop into "heated disagreements". But if no one else is reading it, then some of the votes on the comments are unaccounted for.

comment by [deleted] · 2011-05-16T19:10:26.619Z · LW(p) · GW(p)

Hooray thread necromancy!

One option is to always use female-gendered language. Then women won't feel slighted by male-privileging language, and men will almost entirely not care or feel slighted.

Replies from: steven0461, AdeleneDawner, ArisKatsaris
comment by steven0461 · 2011-05-16T20:03:19.940Z · LW(p) · GW(p)

I voted this comment down because it wasn't good enough to justify digging up an old flame war.

Replies from: None
comment by [deleted] · 2011-05-16T20:18:27.443Z · LW(p) · GW(p)

What is it some folks have against additions to "old" discussions?

If you don't want to re-join, don't. The only negative effect that is apparent to me seems to be a piddlingly small amount of screen real estate under "Recent Comments" for a short period of time.

Also, I didn't actually contribute any flaming (no tearing down of anyone else's suggestions or behaviors). Only an attempt at a constructive solution to a recurrent problem that no one else seems to have suggested yet.

Replies from: steven0461, rhollerith_dot_com
comment by steven0461 · 2011-05-16T20:25:32.054Z · LW(p) · GW(p)

I agree that adding to old discussions isn't in itself bad, and that you didn't contribute to any flaming. What bothers me is there's a chance that others will read the thread and feel the need to respond, and then things might balloon.

Replies from: None
comment by [deleted] · 2011-05-16T20:30:48.811Z · LW(p) · GW(p)

I guess you fear other folks wasting their time via flaming each other more than I do.

I would hope we get to worry less about that sort of thing on this site.

Upvoted. Thanks for the explanation.

comment by RHollerith (rhollerith_dot_com) · 2011-05-17T05:06:57.863Z · LW(p) · GW(p)

If you don't want to re-join, don't. The only negative effect that is apparent to me seems to be a piddlingly small amount of screen real estate under "Recent Comments" for a short period of time.

So if I post a "Make Money Fast" ad every day, that is OK because the only negative effect is a piddlingly small amount of screen real estate for a short period of time every day?

Replies from: lessdazed, None
comment by lessdazed · 2011-05-17T09:04:30.378Z · LW(p) · GW(p)

Depends. Can I do it from home, part time?

Replies from: Alicorn
comment by Alicorn · 2011-05-17T09:06:01.723Z · LW(p) · GW(p)

Yes. And you can be your own boss and set your own hours, too.

comment by [deleted] · 2011-05-17T13:06:09.500Z · LW(p) · GW(p)

You don't see a distinction between attempts at constructive posting and spam? Or you just felt like being snarky?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2011-05-17T16:07:31.112Z · LW(p) · GW(p)

Yes, I see a difference. And agree with you that GGGGGP was an attempt at constructive posting.

When I saw your argument of the second paragraph of GGGP, I became worried that it would encourage people to lower their posting standards and consequently over time drive busy thoughtful readers away. And the first refutation of your argument that occured to me was to point out that the argument could be used to justify spam as well as to justify your comment.

What I neglected to notice is that the tone of my comment and the fact that I implicitly compare you to a spammer had a high probability of making you conclude that I do not welcome you here. That's not true, and please forgive my clumsiness.

Replies from: None
comment by [deleted] · 2011-05-17T16:18:15.858Z · LW(p) · GW(p)

To paragraph 2: Point taken. To paragraph 3: Forgiven-and-forgotten and appreciated. Thanks for clarifying.

comment by AdeleneDawner · 2011-05-16T23:45:47.923Z · LW(p) · GW(p)

Bad idea, downvoted.

Men shouldn't be considered obligated not to care any more than women should.

Replies from: None
comment by [deleted] · 2011-05-17T02:34:45.358Z · LW(p) · GW(p)

It's not a preference or a 'should' on my part; it's a fact about the vast majority of men. They won't feel slighted. (I didn't downvote you.)

comment by ArisKatsaris · 2011-05-17T09:18:59.231Z · LW(p) · GW(p)

I'm not downvoting you for adding the comment to an old thread; I'm downvoting you for the "hooray thread necromancy" sentence, which completely distracted from anything actually meaningful you had to say in your comment and turned the whole subthread into a discussion of whether it's okay to add comments to old threads.

Yes, it's okay to add comments to old threads. If your comment has utility enough to be part of the thread, then it'll have utility enough for people rereading the old thread after a few years too. Opposition to such is the product of the mechanics of other forums where revived old threads get boosted up thus drowning the newer ones -- this isn't the case here, so it doesn't apply.

But it's NOT OKAY to waste space patting yourself on the back about how you added a comment to an old thread. That causes distraction and disutility, as this subthread clearly proves.

Replies from: None
comment by [deleted] · 2011-05-17T13:15:20.480Z · LW(p) · GW(p)

The first time I posted on an old thread, the reply (from CronoDAS) was "Hooray thread necromancy" in sarcasm tags. So I was just pre-emptively recognizing that some people seem to think posting on old threads is an action not to be taken. Not patting myself on the back.

comment by Alicorn · 2009-08-07T21:51:42.577Z · LW(p) · GW(p)

Bravo! Thank you :)

comment by Alicorn · 2009-08-07T20:46:16.457Z · LW(p) · GW(p)

Your audience consists mostly of people other than you. You may write solely for your own preferences without annoying anyone when the venue is your diary.

comment by Psychohistorian · 2009-08-06T21:19:31.129Z · LW(p) · GW(p)

And even if you somehow worked around all these arguments, evolution, again, thwarts you. Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.

This is not how evolution works. Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this. Also, evolution can't really thwart you. You're done evolving; you can check it off your to-do list.

It's entirely plausible that being unselfish is adaptive; from a personal (non-gene, i.e. the perspective we actually have) perspective, having children is extremely unselfish.

Selfishness and unselfishness are arational. Rationality is about maximizing the output of your utility function (in this context). Selfishness is about what that utility function actually is.

Replies from: Aurini, PhilGoetz
comment by Aurini · 2009-08-06T22:50:49.022Z · LW(p) · GW(p)

Honestly, isn't this nitpicking? It's true that Lord Azatoth stopped selecting for genes in our species ten thousand years ago, but when that game stopped working for him he switched to making our memes compete against each other (in any sane world we'd be having this conversation in Chinese, and my mother's 'Scottish' surname wouldn't be Nordic).

You're absolutely right, and he did simplify this portion, but it doesn't undermine the weight of his argument any more than my saying "I'm not sexist, I'm a fully evolved male!" is rendered irrelevant by the fact that current social mores have little to nothing to do with evolutionary biology.

It's one thing to correct Phil's statement, or offer a suggested rewording that would improve the strength of the point he was trying to make, but it feels as if you're pinpointing this one poor choice of wording, and using it to imply that the entire premise is flawed.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-08-06T23:29:43.066Z · LW(p) · GW(p)

as if you're pinpointing this one poor choice of wording, and using it to imply that the entire premise is flawed.

Argumentum ad evolutionum is both common enough and horribly wrong enough that I would not call it "nitpicking." The claim that unselfish agents will be outcompeted by selfish agents is complex, context-dependent, and requires support. The idea that there will somehow be an equilibrium in which unselfish agents get crowded out seems absurd, and this is what "evolution" seems intended to evoke, because evolution is (in significant part) about competitively crowding out the sub-optimal.

He also makes a much bigger mistake, and I should have addressed that in greater detail. Utility curves are arational, and the term "selfish" gets confused way more than it should. It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I'm selfish; I don't care about what other people want or think. If my actual utility curve involves other people's utility, or it involves maximizing the number of paper clips in existence, there is absolutely no reason to believe I could better accomplish goals if I were "selfish" by this definition.

Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind "Rational agents are/are not selfish" is a type error; selfishness is entirely orthogonal to rationality.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T01:03:54.136Z · LW(p) · GW(p)

It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I'm selfish; I don't care about what other people want or think.

Instead of trying to interpret the context, you should believe that I mean what I say literally. I repeat:

If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn't. It's a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it's already in there.

In fact, I have already explained my usage of the word "selfish" to you in this same context, repeatedly, in a different post.

Psychohistorian wrote:

Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind "Rational agents are/are not selfish" is a type error; selfishness is entirely orthogonal to rationality.

I quote myself again:

If you act in the interest of others because it's in your self-interest, you're selfish. Rational "agents" are "selfish", by definition, because they try to maximize their utility functions. An "unselfish" agent would be one trying to also maximize someone else's utility function. That agent would either not be "rational", because it was not maximizing its utility function; or it would not be an "agent", because agenthood is found at the level of the utility function.

Replies from: Psychohistorian, Nick_Tarleton
comment by Psychohistorian · 2009-08-07T02:43:44.941Z · LW(p) · GW(p)

Rational agents incorporate the benefits to others into their utility functions.

as a section header may have thrown me off there.

That aside, I do understand what you're saying, and I did notice the original contrast between the 1%/1%. Though I'd note it doesn't follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; the universe being destroyed would probably output a negative, i.e. greater than 100% loss, so that's not an even bet.
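
As a minimal sketch of that arithmetic (my own illustration, with a baseline utility normalized to 1 and made-up values for the destroyed state):

```python
# Break-even risk q for trading a fractional utility gain against a chance of
# destroying everything.  The gamble is worth taking iff
#   (1 - q) * (1 + gain) * baseline_u + q * destroyed_u >= baseline_u
# Illustrative numbers only; "destroyed_u" is a stand-in for however negative
# you think universe-destruction is.

def break_even_risk(gain, baseline_u=1.0, destroyed_u=0.0):
    """Largest risk q at which the gamble is still (weakly) worth taking."""
    return gain * baseline_u / ((1 + gain) * baseline_u - destroyed_u)

print(break_even_risk(0.01))                   # ~0.0099: just under 1%
print(break_even_risk(0.01, destroyed_u=-10))  # ~0.0009: negative utility shrinks it fast
```

With the destroyed state at merely zero utility the break-even risk is already just under 1%, and any negative utility for that state pulls it down sharply.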

The whole arational point is my mistake; the whole paragraph:

But maybe they're just not as rational as you...

reads very much like it is using selfish in the strict rather than holistic utility sense, and that was what I was focusing on in this response. I was focusing specifically on that section and did not reread the whole post, so I got the wrong idea. My point on evolution remains, and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail. But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T03:42:04.107Z · LW(p) · GW(p)

and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail

That's why what I wrote in that section was:

it's not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.

You wrote:

But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.

I am supposing that. That's why it's in the title of the post. I don't mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that's the implication.

comment by Nick_Tarleton · 2009-08-07T03:19:29.070Z · LW(p) · GW(p)

Of course, you have already shown that you choose to pretend I am using the word "selfish" in the colloquial sense which I have repeatedly explicitly said is not the sense I am using it in, in this post and in others, so this isn't going to help.

If it isn't working, why don't you try something different?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T03:35:18.740Z · LW(p) · GW(p)

(I deleted that paragraph.)

Do you have an idea for something else to try?

Replies from: Psychohistorian, MichaelVassar
comment by Psychohistorian · 2009-08-07T08:57:47.032Z · LW(p) · GW(p)

I don't think it's really a necessary distinction; the idea of an unselfish utility maximizer doesn't quite make sense, because utility is defined so nebulously that pretty much everyone has to count as maximizing their utility.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T14:45:21.730Z · LW(p) · GW(p)

the idea of an unselfish utility maximizer doesn't quite make sense

You're right that it doesn't make sense, which is why some people assume I mean something else when I say "selfish". But a lot of commenters do seem to believe in unselfish utility maximizers, which is why I keep using the word.

comment by MichaelVassar · 2009-08-07T15:10:46.192Z · LW(p) · GW(p)

Avoiding morally charged words. If possible shy far far away from ANY pattern that people can automatically match against with system 2 so that system 1 stays engaged.
My article here http://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html is an attempt to do this.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-08-07T17:37:14.020Z · LW(p) · GW(p)

If possible shy far far away from ANY pattern that people can automatically match against with system 2 so that system 1 stays engaged.

Do you mean "system 1 ... system 2"?

comment by PhilGoetz · 2009-08-06T22:19:49.920Z · LW(p) · GW(p)

Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this.

Selection, acting on the individual, selects for those individuals who act in ways that cause their own offspring to survive more. That is what I mean by selfishness. Selfish genes. Selfish memes.

Once people no longer die, selection will not have so much to do with death and reproduction, but with the accumulation of resources. Think about that, and it will become more clear that that will select directly for selfishness in the conventional sense.

comment by cousin_it · 2009-08-06T19:22:14.881Z · LW(p) · GW(p)

Although your conclusions are very depressing, it seems I must accept them. The other commenters' reluctance to agree puzzles me.

Replies from: FrankAdamek
comment by FrankAdamek · 2009-08-09T19:22:15.539Z · LW(p) · GW(p)

I find the analysis largely convincing as well, and further feel a 3/1M chance per century of existential disaster is extremely conservative. But I also don't find the idea of a singleton depressing. Bostrom suggests the idea of a singleton being a world democratic government or a benevolent superintelligent machine, which Eliezer's CEV seems able to realize, at least with my initial understanding. It even seems possible that singletons such as that might dissolve themselves if that's what was desired (<-serious handwaving), but I admit that a singleton has such potential for staying power that it's probably best to assume it's "forever".

With my views on the varied risks we face, the unique potential of a singleton to solve many of them, and with a personal estimate of .5 probability of surviving this century at best, a singleton seems worth looking into. It's a huge danger itself, but I think we ought to investigate the best ways to make a "safe" singleton at the same time as looking for ways to avoid risk without one, not waiting until we are sure we absolutely need one.

I realize this was not the focus of the post and so apologize if it's too off-topic. I wanted to draw more attention to it as a potential solution, though I don't mean to withdraw attention from the post's central issues.

comment by Roko · 2010-06-28T10:06:01.779Z · LW(p) · GW(p)

The figure of 3/1,000,000 for the probability of the Trinity nuke destroying the world is almost certainly too low. Consider that, subjectively, the scientists should have assigned at least a 1 in 1000 probability that they'd made a mistake in their calculation of safety. Probably more like 1 in 100, considering that the technology was entirely new. In fact, the first serious mistake in a physical calculation that resulted in an actual disaster involving a nuke was Castle Bravo, which occurred probably only 50-150 detonations after Trinity. Since then, we have had Chernobyl, which is arguably somewhat different, but still 10,000 dead, and apparently set off by scientists who thought what they were doing was safe.

Another way to look at it is to ask yourself, if 100 similar incidents occurred (that is, instances of scientists developing a new and very destructive technology in wartime, and worrying that it just might blow the whole world up but that it's probably OK), how many instances of "fail" would you expect?

Looking at it this way, even 1 in 100 is too optimistic. The dominant failure mode is that the scientists fail to grasp a crucial consideration, like trying to build a military super-intelligence without understanding the need for friendly AI, which I suspect occurs with probability 1 in 8.

In fact, now that I think about it, the dominant failure mode is that the overall project leadership fails to listen to those scientists who say they might have found a crucial consideration, if there is one.
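
Here is that thought experiment as bare arithmetic (a sketch only; the per-incident probabilities are just the figures floated above and in the post, not estimates of mine):

```python
# Chance of at least one catastrophe across 100 independent "Trinity-like"
# incidents, for several per-incident failure probabilities.

def p_at_least_one(p_per_incident, n_incidents=100):
    return 1 - (1 - p_per_incident) ** n_incidents

for p in (3e-6, 1e-3, 1e-2, 1/8):
    print(f"p = {p:g} per incident -> P(>=1 failure in 100) = {p_at_least_one(p):.6f}")
```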

comment by Wei Dai (Wei_Dai) · 2009-08-06T21:17:59.448Z · LW(p) · GW(p)

What are the existential risks for a multi-galaxy super-civilization? Or even a multi-stellar civilization expanding outward at some fraction of light speed? I don't see how life can be exterminated once it has spread that far. "liberate much of the energy in the black hole at the center of our galaxy in a giant explosion" does not make sense, since a black hole is not considered a store of energy that can be liberated.

If you are speculating about new physics that haven't been discovered yet, then "subjective-time exponential" and risk per century seems irrelevant (we can just assume that all of physics will be discovered sooner or later), and a more pertinent question might be how much of physics are as yet undiscovered, and what is the likelihood that some new physics will allow a galaxy/universe killer to be built.

I argue that the amount of physics left to be discovered is finite, and therefore the likelihood that a galaxy/universe killer can be built in the future does not approach arbitrarily close to 1 as time goes to infinity.

Replies from: private_messaging, PhilGoetz
comment by private_messaging · 2014-05-29T04:59:29.303Z · LW(p) · GW(p)

Speaking of new physics, there was the discovery that stars are other suns rather than tiny holes in the celestial sphere... and in the future there's the possibility of discovering practically attainable interstellar travel. Discoveries in physics can have different effects.

And if we're to talk of limitless new and amazing physics, there may be superbombs, and there may be infinite subjective time within finite volume of spacetime, or something of that sort.

comment by PhilGoetz · 2009-08-06T22:25:54.401Z · LW(p) · GW(p)

I don't see how life can be exterminated once it has spread that far.

You may be right. It takes a long time to become a multi-galaxy super-civilization. IIRC our galaxy is 100,000 light-years across, and the nearest galaxy is about 2 million light-years away. We might make it in time. It depends a lot on how far time-compression goes, and on how correlated apocalypses are.

"liberate much of the energy in the black hole at the center of our galaxy in a giant explosion" does not make sense, since a black hole is not considered a store of energy that can be liberated.

Wrong. Google 'black hole explosions'.

I argue that the amount of physics left to be discovered is finite, and therefore the likelihood that a galaxy/universe killer can be built in the future does not approach arbitrarily close to 1 as time goes to infinity.

That's my hope as well.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-06T23:42:19.981Z · LW(p) · GW(p)

Wrong. Google 'black hole explosions'.

None of the results indicate a possibility that the "energy in the black hole at the center of our galaxy" can be liberated in a giant explosion.

The first result is a 1974 paper by Stephen Hawking predicting that black holes emit black-body radiation at a temperature inversely proportional to their mass. For large black holes this temperature is close to absolute zero, making them more useful as entropy dumps than energy sources.

On the other hand, if you could simultaneously convert a lot of ordinary matter into numerous tiny black holes, they would all instantly evaporate and have the effect of a single great explosion, so that's one risk to be worried about.
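
For a sense of the numbers, here is a rough sketch using the standard Hawking temperature formula T = ħc³ / (8πGMk_B); the masses are my own illustrative picks (the Sun, a ~4 million solar-mass hole as a stand-in for the galactic center, and a 1 kg hole for contrast).

```python
import math

# Hawking temperature T = hbar * c**3 / (8 * pi * G * M * k_B), SI units.
hbar  = 1.055e-34   # J*s
c     = 2.998e8     # m/s
G     = 6.674e-11   # m^3 kg^-1 s^-2
k_B   = 1.381e-23   # J/K
M_sun = 1.989e30    # kg

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))        # ~6e-8 K for a solar-mass hole
print(hawking_temperature(4e6 * M_sun))  # ~1.5e-14 K for a galactic-center-scale hole
print(hawking_temperature(1.0))          # ~1e23 K for a 1 kg hole, hence the violent evaporation
```

Both astrophysical cases sit far below the ~2.7 K cosmic microwave background, which is the sense in which large holes work better as entropy dumps than as power sources.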

Replies from: PhilGoetz, Vladimir_Nesov
comment by PhilGoetz · 2009-08-07T00:09:06.202Z · LW(p) · GW(p)

None of the results indicate a possibility that the "energy in the black hole at the center of our galaxy" can be liberated in a giant explosion.

You're right about that. But they do indicate that the energy in smaller black holes can be liberated in giant explosions. And they indicate that black holes could be used as energy sources. So when you said, "a black hole is not considered a store of energy that can be liberated," that was wrong; or at least it was wrong if you meant "a black hole is not considered a store of energy." And that was what I said was wrong.

comment by Vladimir_Nesov · 2009-08-06T23:45:07.562Z · LW(p) · GW(p)

Why are you continuing to talk about this one particular hypothetical risk?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-06T23:47:32.371Z · LW(p) · GW(p)

I asked for a list of possible risks, and nobody has given any other answer...

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-06T23:48:52.823Z · LW(p) · GW(p)

Still, the question of whether one particular risk is real has almost no bearing on the total existential risk.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-08-07T00:55:50.898Z · LW(p) · GW(p)

That's only true if there are lots of different existential risks besides this particular one. The fact that no one has answered my question with a list of such risks seems to argue against that. I also argued earlier that the amount of physics left to be discovered is finite, so the number of such risks is finite.

More generally, I guess it boils down to cognitive strategies. I like to start from specific examples, build intuitions, find similarities, then proceed to generalize. I program like this too. If I have to write two procedures that I know will end up sharing a lot of code, I will write one complete procedure first, then factor out the common code as I write the second one, instead of writing the common function first. I suppose this seems like a waste of time to someone used to working directly on the general/abstract issue.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-08-07T01:25:14.217Z · LW(p) · GW(p)

Well, you know my specific example of a risk. Even if you know all about physics, that is, the rules of the game, you can still lose to an opponent that can figure out a winning strategy.

Examples are good when you can confidently say something about them, and when their very existence is in question. But there are so many ways to sidestep a mere physical threat that it doesn't seem a good choice. An explosion is just something that happens to the local region, in a lawful physical way. You could cook up some dynamic redundancy to preserve computation in case of an explosion.

comment by PhilGoetz · 2009-08-09T19:32:13.738Z · LW(p) · GW(p)

If you don't believe black holes can ever be used as weapons, here's an article about a star 8000 light years away that some astronomers worry may harm life on Earth (to what extent it doesn't say).

comment by Psychohistorian · 2009-08-07T02:58:06.834Z · LW(p) · GW(p)

This whole point may reflect collective confusion surrounding the term "utility."

I do not presently have a coefficient in my utility function attached to John Doe, who is a mechanic in Des Moines (I'm assuming). I know nothing about him, and whatever happens to him does not affect my experience of happiness in the slightest. I wish him well, but it would make little sense to say he is reflected in my utility function. I would agree that, ceteris paribus, the better off he is, the better, but (particularly since I won't know it), this doesn't really weigh in my experience of life.

On the other hand, if you asked me if I would rather he die if it meant I got a thousand dollars, I'd have to turn down the offer. I care about his utility in the abstract, even if it doesn't actually affect my happiness.

There's a relevant distinction between abstract collective utility and personally experienced utility. The human mind is not powerful enough to comprehend true, complete, abstract utility, and if it were, you'd probably become terminally depressed. One can believe in the importance of maximizing abstract utility while not actually experiencing it. When Omega offers to double our utility, we think that means something we experience, and we don't experience the abstract utility of the entire planet. I believe that this distinction gets confused, leading to this post feeling contrary to intuition.

On which note, we really don't know what total utility looks like - it's too complex. So the 50-50 bet between the world getting destroyed and total utility getting doubled is not evaluable by the individual, because we don't know how to evaluate the disutility of the world being destroyed, other than that we'd rather not risk it.

This is all made that much more painful by the fact that reason alone cannot say which is preferable, the scratching of my finger or the destruction of the world. I think Hume may have beaten us to this.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T03:38:58.905Z · LW(p) · GW(p)

When Omega offers to double our utility, we think that means something we experience, and we don't experience the abstract utility of the entire planet. I believe that this distinction gets confused, leading to this post feeling contrary to intuition.

Actually, if you think of the paradox below in these terms, as being one where you're offered vague, unmeasurable rewards, it ceases to be a paradox. It's only a paradox because we've abstracted those confusing issues away.

It is a puzzle that is meant to get at the question of whether our mathematical models of rationality are correct. If you're not talking about mathematical models, you're having a different conversation.

comment by Vladimir_Nesov · 2009-08-06T21:55:00.319Z · LW(p) · GW(p)

But deciding to use utility functions that will [...] seems to be rational.

You can't decide your utility function. It's a given. You can only make decisions based on preference that, probably, can be represented in part as a utility function. Deciding to use a particular utility function (that doesn't happen to be exactly the one representing your own preference) constitutes throwing away your humanity and replacing it with whatever the new utility function says.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T23:00:20.445Z · LW(p) · GW(p)

Okay. Substitute "Maximizing your expected utility" instead.

comment by [deleted] · 2009-08-06T17:46:38.483Z · LW(p) · GW(p)

On "incorporating the benefits to others into their utility functions", you hint at a sharp dichotomy between Scrooges and Saints - people who act entirely in their own self-interest, and people who act in everyone's interest (because that is the nature of their own self-interest). But most humans are not at these poles - most of us act in the interest of at least several people. Mirroring (understanding) is partially a learned trait, but actually caring about other people who you mirror is emotionally "basic". By this I mean it's entirely in our self-interest to act in the interest of some others. That was to partially address your "unselfish agents will be out-competed by selfish agents" claim. False dichotomy.

You haven't given a convincing argument that people will stop having Life-And-Death conflicts.

On the LHC, it sounds like you're arguing for a more precautionary approach to science. What would be acceptable conditions in your book for turning on the LHC? (Also, that particular experiment received so much publicity, the numbers were undoubtedly checked hundreds of times).

(I've got to get to work, and I'm sure other posters will address the relevant points and problems before I get a chance.)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T18:14:18.381Z · LW(p) · GW(p)

By this I mean it's entirely in our self-interest to act in the interest of some others. That was to partially address your "unselfish agents will be out-competed by selfish agents" claim. False dichotomy.

It's not a false dichotomy. If you act in the interest of others because it's in your self-interest, you're selfish. Rational "agents" are "selfish", by definition, because they try to maximize their utility functions. An "unselfish" agent would be one trying to also maximize someone else's utility function. That agent would either not be "rational", because it was not maximizing its utility function; or it would not be an "agent", because agenthood is found at the level of the utility function. I tried to make this point in another thread, and lost like 20 karma points doing so. But it's still right. I request anyone down-voting this comment to provide some alternative interpretation under which a rational agent is not selfish.

EDIT: A great example of what I mean by "agenthood is found at the level of the utility function" is that you shouldn't consider an ant an agent.

The whole point of the essay is to try to find some way for it to be in everyone's self-interest to act in ways that will prevent us from taking small risks of exterminating life. And I failed to find any such way. So you see, the entire essay is predicated on the point that you're making.

You haven't given a convincing argument that people will stop having Life-And-Death conflicts.

Do you mean, I haven't given a convincing argument that people will not stop having Life-And-Death conflicts?

On the LHC, it sounds like you're arguing for a more precautionary approach to science.

Not actually. The next hundred thousand years are a special case.

Replies from: None
comment by [deleted] · 2009-08-06T21:15:16.719Z · LW(p) · GW(p)

I think we're agreeing on the first point - any rational agent is selfish. But then there's no such thing as an unselfish agent, right? Also, no need to use the term selfish, if it's implicit in rational agent. If unselfish agents don't exist, it's easy to out-compete them!

"trying to also maximize someone else's utility function... would not be an 'agent', because agenthood is found at the level of the utility function." What do you mean by this? I read this as saying that a utility function which is directly dependent on another's utility is not a utility function. In other words, anyone who cares about another, and takes direct pleasure from another's wellbeing, is not an agent. If that's what you mean, then most humans aren't agents. Otherwise, I'm not understanding.

On Life-And-Death conflicts, yes, that's what I meant. You haven't given any such argument!

On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2009-08-06T22:05:00.066Z · LW(p) · GW(p)

Also, no need to use the term selfish, if it's implicit in rational agent.

Right - now I remember, we've gone over this before. I think it is implicit in rational agent; but a lot of people forget this, as evidenced by the many responses that say something like, "But it's often in an agent's self-interest to act in the interest of others!"

If you think about why they're saying this in protest to my saying that a rational agent is selfish, it can't be because they are legitimately trying to point out that a selfish agent will sometimes act in ways that benefit others. That would be an uninteresting and uninformative point. No, I think the only thing they can mean is that they believe that decision theory is something like the Invisible Hand, and will magically result in an equilibrium where everybody is nice to each other, and so the agents really aren't selfish at all.

So I use the word "selfish" to emphasize that, yes, these agents really pursue their own utility.

Replies from: None
comment by [deleted] · 2009-08-06T22:51:50.158Z · LW(p) · GW(p)

(Well, "we" haven't - I'm pretty new on these forums, and missed that disagreement!)

You still haven't addressed any of my complaints with your argument. I never mentioned anything about time-discounting - it looked like you saw your second-to-last proposition to be the only one with merit, so I was totally addressing two that you dismissed.

In my first point, now that we are clear on definitions, I meant that you 1) implied a dichotomy between agents whose utility functions are entirely independent of other people's, and those whose utility functions are very heavily dependent (Scrooges and Saints). You then made the statement "unselfish agents will be out-competed by selfish agents." Since we agree that there's no such thing as an unselfish agent, you probably meant "people who care a lot about everyone will be out competed by people who care about nobody but themselves" (selfish rational agents with highly dependent vs. highly independent utility functions). This is a false dichotomy because most people don't fall into either extreme, but have a utility function that depends on some others, but not everyone and not to an equal degree.

And my two questions still stand, on conflict and the LHC.

(Interesting post, by the way!)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T04:53:10.891Z · LW(p) · GW(p)

you probably meant "people who care a lot about everyone will be out competed by people who care about nobody but themselves."

No, I didn't mean that. This is, I think, the 5th time I've denied saying this on Less Wrong. I've got to find a way of saying this more clearly. I was arguing against people who think that rational agents are not "selfish" in the sense that I've described elsewhere in these comments. If it helps, I'm using the word "selfish" in a way so that an agent could consciously desire strongly to help other people, but still be "selfish".

On life-and-death conflicts, I did give such an argument, but very briefly:

Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.

I realize this isn't enough for someone who isn't already familiar with the full argument, but it's after midnight and I'm going to bed.

On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?

The next 100,000 years are a special case because we may learn most of what we will learn over the next billion years in the next 100,000 years. During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.

Replies from: None
comment by [deleted] · 2009-08-07T05:58:30.644Z · LW(p) · GW(p)

My confusion isn't coming from the term selfish, but from the term unselfish agent. You clearly suggested that such a thing exists in the quoted statement, and I have no idea what this creature is.

On life-and-death conflicts, sorry if I'm inquiring on something widely known by everyone else, but I wouldn't mind a link or elaboration if you find the time. I agree that people will have conflicts both as a result of human nature and of finite resources, but I don't see why conflicts must always be deadly.

During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.

You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T20:34:27.961Z · LW(p) · GW(p)

My confusion isn't coming from the term selfish, but from the term unselfish agent. You clearly suggested that such a thing exists in the quoted statement, and I have no idea what this creature is.

I wrote, "Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents." The "unselfish agent" is a hypothetical that I don't believe in, but that the imaginary person I'm arguing with believes in; and I'm saying, "Even if there were such an agent, it wouldn't be competitive."

My argument was not very clear. I wouldn't worry too much over that point.

You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.

No; I said, "no practical advantage that I've heard of yet." First, the word "practical" means "put into practice", so that learning more theory doesn't count as practical. Second, "that I've heard of yet" was a qualifier because I suppose that some practical advantage might result from the LHC, but we might not know yet what that will be.

Replies from: Alicorn
comment by Alicorn · 2009-08-07T20:44:23.463Z · LW(p) · GW(p)

If "selfish" (as you use it) is a word that applies to every agent without significant exception, why would you ever need to use the word? Why not just say "agent"? It seems redundant, like saying "warm-blooded mammal" or something.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T20:48:05.103Z · LW(p) · GW(p)

Yes, it's redundant. I explained why I used it nonetheless in the great-great-great-grandparent of the comment you just made. Summary: You might say "warm-blooded mammal" if you were talking with people who believed in cold-blooded mammals.

Replies from: Alicorn
comment by Alicorn · 2009-08-07T21:15:50.554Z · LW(p) · GW(p)

Someone who believes in cold-blooded mammals is either misusing the term "mammal" or the term "cold-blooded" or both, and I don't think I'd refer to "cold-blooded mammals" without addressing the question of where that misunderstanding is. If people don't understand you when you say "selfish" (because I think you are using an unpopular definition, if nothing else) why don't you leave it out or try another word? If I was talking to someone who insisted that mammals were cold-blooded because they thought "warm" was synonymous with "water boils at this temperature" or something, I'd probably first try to correct them - which you seem to have attempted for "selfish" with mixed results - and then give up and switch to "endothermic".

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T21:23:29.589Z · LW(p) · GW(p)

Sounds like good advice.

comment by PhilGoetz · 2009-08-06T22:10:39.525Z · LW(p) · GW(p)

I read this as saying that a utility function which is directly dependent on another's utility is not a utility function.

No; I meant that each agent has a utility function, and tries to maximize that utility function.

If we can find an evolutionarily-stable cognitive makeup for an agent that allows it to have a utility function that weighs the consequences in the distant future equally with the consequences to the present, then we may be saved. In other words, we need to eliminate time-discounting.

One thing I didn't explain clearly, is that it may be that uncertainty alone provides a large enough time-discounting to make universe-death inevitable. Because you're more and more uncertain what the impact of a decision will be the farther you look into the future, you weigh that impact less and less the farther into the future you go.

But maybe this is not inevitably the right thing to do, if you can find a way to predict future impacts that is uncertain, but also unbiased!

EDIT: No. Unbiased doesn't cut it.
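
As a minimal sketch of that mechanism (illustrative hazard rates only): if each century carries an independent probability h that your forecast of a decision's consequences no longer applies, the weight you can justifiably put on a consequence n centuries out falls off like (1-h)^n, which behaves like an exponential discount even with no pure time preference.

```python
# Compounding per-period uncertainty acts like a discount factor:
# weight on a consequence n centuries out = (1 - h)**n.

def effective_weight(h_per_century, n_centuries):
    return (1 - h_per_century) ** n_centuries

for h in (0.001, 0.01, 0.1):
    print(f"h = {h}: weight on a consequence 100 centuries out = {effective_weight(h, 100):.5f}")
```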

comment by A1987dM (army1987) · 2012-05-29T15:54:19.887Z · LW(p) · GW(p)

The loss to them if they ignited the atmosphere would be another 30 or so years of life. The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life.

This assumes they don't care about their children and grandchildren after their death.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-29T16:09:18.009Z · LW(p) · GW(p)

...and the children and grandchildren of their enemies, and the millions of currently-living people not involved in the war, and etc.

comment by Vladimir_Nesov · 2009-08-06T21:15:36.224Z · LW(p) · GW(p)

You can put monetary value on humanity, just as you can on a person's life.

Replies from: Alicorn
comment by Alicorn · 2009-08-06T21:24:47.848Z · LW(p) · GW(p)

If humanity goes away, who will collect the savings?

Replies from: PhilGoetz, Aurini
comment by PhilGoetz · 2009-08-06T22:17:46.265Z · LW(p) · GW(p)

I see what he's saying, but there's something wrong with it. If you put a monetary value on a life, it means that you could increase utility by trading that life for more than that much money, because you could do things with that money that would increase other people's utility enough to make up for the life. But once you've traded the last life away, you can't use the money.

comment by Aurini · 2009-08-06T22:59:53.201Z · LW(p) · GW(p)

Robert A. Heinlein: An extinct species has no moral behaviour. ~An address to a West Point graduating class

comment by PhilGoetz · 2009-08-06T19:52:18.228Z · LW(p) · GW(p)

Incidentally, I once met Brian Moriarty at a party John Romero threw, where I embarrassed myself in front of or offended all of my childhood heroes, one after another, ending with Steve Wozniak.

I was talking to him about trends in text adventures, and said, "One great thing is that IF authors have gotten away from the idea that every game has to be about saving the world."

He said something like, "Well, I happen to think that saving the world is not such a bad thing," and went off in a bit of a huff. And then I remembered that he was the author of Trinity, which was about the Trinity test and saving the world from nuclear holocaust. (And was a really good game, BTW.)

comment by timtyler · 2009-08-06T19:26:51.881Z · LW(p) · GW(p)

Re: "If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially."

Uh - that doesn't go on forever. Any more than a grain pile allows a rat population to grow forever. Your statement takes the idea of exponential change into an utterly ridiculous realm.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T19:39:03.823Z · LW(p) · GW(p)

You're right. It still has a large impact, though. Even if we get only 3 more doublings, it reduces the time available by a factor of 8.

The nearest other galaxy is 2 million light years away. If we get 6 doublings, that's 128 million subjective light years. That's a worrisome amount.

The nearest other star is 4.24 light years away. If we get 20 doublings, that's over 4 million subjective light years away. Also a worrisome amount.
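
Spelling out that arithmetic (nothing beyond the multiplication above):

```python
# Subjective distance = real distance * 2**doublings: how far a refuge "feels"
# once subjective time has sped up by that many doublings.

def subjective_light_years(real_light_years, doublings):
    return real_light_years * 2 ** doublings

print(subjective_light_years(2_000_000, 6))   # nearest galaxy, 6 doublings: 128,000,000
print(subjective_light_years(4.24, 20))       # nearest star, 20 doublings: ~4.4 million
```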

Replies from: timtyler
comment by timtyler · 2009-08-06T20:33:55.534Z · LW(p) · GW(p)

The observation bears on this statement:

"More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed."

Eventually, compression in subjective time stops, while expansion continues.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T21:13:42.710Z · LW(p) · GW(p)

Yes, that's right. If you can survive long enough to get to that point.

comment by TitaniumDragon · 2014-05-27T22:46:04.717Z · LW(p) · GW(p)

I was directed here from FIMFiction.

Because of https://en.wikipedia.org/wiki/Survivorship_bias we really can't know what the odds are of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant and hundreds of millions of people would die).

Some people thought that the nuclear bomb would ignite the atmosphere... but a lot of people didn't, either, and that three in a million chance... I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure? Indeed, there is good reason to believe that the atmosphere may well have experienced such events before, in the form of impact events; this is why we knew, for instance, that the LHC was safe - we had experienced considerably more energetic events previously. Some people claimed it might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it was going to cause a vacuum collapse the universe would have already been destroyed by such an event elsewhere. Meanwhile, the physics of small black holes means that they're not a threat - they would decay almost instantly, and would lack the gravity necessary to cause any real problems. And thus far, if we actually look at what we've got, the reality is that everything we have tried has had p=0 of destroying civilization in reality (that is the universe we -actually- live in), meaning that that p = 3 x 10^-6 was actually hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean that they're right. In fact, it usually means that they're bullshitting.
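
On the micro-black-hole point specifically, the standard semiclassical evaporation-time estimate, t ≈ 5120πG²M³ / (ħc⁴), is a rough sketch of why tiny holes are harmless; the masses below are my own illustrative picks, and the formula is at best a crude guide at collider scales.

```python
import math

# Semiclassical Hawking evaporation time  t ~ 5120 * pi * G**2 * M**3 / (hbar * c**4).
# Note the M**3 scaling: small holes vanish essentially instantly.
hbar = 1.055e-34   # J*s
c    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2

def evaporation_time_s(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(evaporation_time_s(1.0))       # ~8e-17 s for a 1 kg hole
print(evaporation_time_s(1.8e-23))   # ~5e-85 s for a hole around 10 TeV/c^2
```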

Remember NASA making up its odds of an individual bolt failing as one in 10^8? That's the sort of made-up number we're looking at here.

And that's the sort of made up number I always see in these situations; people simply come up with stuff, then pretend to justify it with math when in reality it is just a guess. Statistics used as a lamppost; for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected that early enough, we could actually prevent such an event from happening.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horrible highly transmissible genetically engineered disease which was deliberately spread by madmen intent on global destruction. Here, there are tons of barriers; the first, and perhaps largest, barrier is the fact that crazy people have trouble doing this sort of thing; it requires a level of organization which tends to be beyond them. Secondly, it requires knowledge we lack, and which indeed, once we obtain it, may or may not make containing the outbreak of such a disease relatively trivial - you speak of offense being easier than defense, but in the end, a lot of technological systems are easier to break than they are to make, and understanding how to make something like this may well require us to understand how to break it in the process (and indeed, may well be derived from us figuring out how to break it). Thirdly, we actually already have measures which require no technology at all - quarantines - which could stop such a thing from wiping out too many people. Even if you did it in a bunch of places simultaneously, you'd still probably fail to wipe out humanity with it just because there are too many people, too spread out, to actually succeed. And fourth, you'd probably need to test it, and that would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likeliest sort of technological disaster.

Of course, if we have sentient non-human intelligences, they'd likely be immune to such nonsense. And given our improvements in automation, controlling plague-swept areas is probably only going to get easier over time; why use soldiers who can potentially get infected when we can patrol with drones?

Replies from: TitaniumDragon, Vaniver, TitaniumDragon
comment by TitaniumDragon · 2014-05-27T22:46:25.348Z · LW(p) · GW(p)

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous advantage which would be conferred by being a grey goo from an evolutionary standpoint, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do this - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario from occurring as well, given the necessity of reshaping whatever material into the self-replicating material, which would surely take more energy than is present in the material to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is just vastly beyond what any sort of planetary civilization is likely capable of doing. And seeing as even supernovas don't destroy everything, it is vastly unlikely that anything WE do will. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikely that our feeble, mere planetary energy levels are going to do any better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both of which are very powerful indeed. And yet, we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. And we see this in the universe, and in the laws of thermodynamics.
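
Putting rough numbers on that sense of scale (order-of-magnitude figures of my own, not anything from the thread):

```python
# Order-of-magnitude comparison only.
supernova_kinetic_J   = 1e44    # kinetic energy of ejecta from a typical core-collapse supernova
world_energy_per_yr_J = 6e20    # rough global annual primary energy consumption
tsar_bomba_J          = 2.1e17  # largest nuclear device ever detonated (~50 Mt TNT)

print(supernova_kinetic_J / world_energy_per_yr_J)  # ~1.7e23 years of current world energy use
print(supernova_kinetic_J / tsar_bomba_J)           # ~5e26 of the largest bombs ever built
```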

It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one have realistic physics? That's another matter entirely. Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry. We don't see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.

The difficult physics of interstellar travel is not to be denied, either - the best we can do with present physics is nuclear pulse propulsion, which is perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we don't have any idea of how to create, and which may well describe situations which are not physically plausible - that is to say, the numbers may work, but there may well be no way to get there, the same as how there's no particular reason going faster than c is impossible, but you can't ever even REACH c, so the fact that there is a "safe space" according to the math on the other side is meaningless. Without FTL, interstellar travel is far too slow for such disasters to really propagate themselves across the galaxy - any sort of plague would die out on the planet it was created on, and even WITH FTL, it is still rather unlikely that you could easily spread something like that. Only if cheap FTL travel existed would spreading the plague be all that viable... but with cheap FTL travel, everyone else can flee it that much more easily.

My conclusion from all of this is that these sorts of estimates are less "estimates" and more "wild guesses which we pretend have some meaning, and which we throw around a lot of fancy math to convince ourselves and others that we have some idea what we're talking about". And that estimates like one in three million, or one in ten, are wild overestimates - and indeed, aren't based on any logic any more sound than the guy on the daily show who said that it would either happen, or it wouldn't, a 50% chance.

We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary level annihilation scenarios are unlikely due to the sheer amount of energy involved. You're looking at biocides and large rocks being diverted from their orbits to hit planets, neither of which are really trivial things to do.

It is basically a case of http://tvtropes.org/pmwiki/pmwiki.php/Main/ScifiWritersHaveNoSenseOfScale, except applied in a much more pessimistic manner.

The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox (https://en.wikipedia.org/wiki/Fermi_paradox) - that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument primarily because we have absolutely no idea whatsoever which side of the Great Filter we are on. That being said, if practical FTL travel exists, I would expect that to pretty much ensure that any civilization which invented it would likely simply never die because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.

Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet of having additional planets colonized - information is the only thing you could expect to really trade over interstellar distances, and even that is questionable given that locals will likely try to develop technology locally and beat you to market, so unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit - games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation... but even there, that assumes that they wouldn't want to explore for the sake of doing so.

Of course, it is also possible we're already on the other side of the Great Filter, and the reason we don't see any other intelligent civilizations colonizing our galaxy is because there aren't any, or the ones which have existed destroyed themselves earlier in their history or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, eternal, unending warfare which prevented progress, or something else.

This is why pushing for having a multiplanetary civilization is, I think, a good thing; if we hit the point where we had 4-5 extrasolar colonies, I think it would be pretty solid evidence in favor of being beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think that it is likely that our main concern about destroying ourselves comes until the point where we expand.

But I digress.

It isn't impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any sort of claims of numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary scale at best. And our lack of any sort of plausible scenarios even for that hurts even that argument. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, but it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren't just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.

Replies from: more_wrong, ChristianKl, TitaniumDragon, TitaniumDragon
comment by more_wrong · 2014-05-28T03:59:55.362Z · LW(p) · GW(p)

"People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life" ... "nothing CAN do this, because nothing HAS done it."

The grey goo scenario isn't really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around due to release of deadly deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios this wouldn't have happened. We have had mini goo scenarios when, for example, microbiota pretty well adapted to one species made the jump to another and, oops, started reproducing rapidly and killing off their new host species rapidly, e.g. Yersinia pestis. Just because we haven't seen a more omnivorous goo sweep over the ecosphere recently (other than Homo sapiens, which is actually a pretty good example of a grey goo - think of the species as a crude mesoscale universal assembler, which is spreading pretty fast and killing off other species at a good clip and chewing up resources quite rapidly) doesn't mean it couldn't happen at the microscale also. Ask the anaerobes if you can find them; they are hiding pretty well still after the chlorophyll incident.

Since the downside is pretty far down, I don't think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.

Remember that the planet spent almost 4 billion years more or less covered in various kinds of goo before the Cambrian Explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic-type scenarios where a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution.

Multicellular life could have started to evolve /thousands of times/ only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single-celled critters that could rebuild their population from a few surviving pockets or spores, how would we even know? Maybe it took billions of years for the Great War Of Goo to end in a Great Compromise that allowed mesoscopic life to begin to evolve; maybe there were great distributed networks of bacterial and viral biochemical computing engines that developed intelligence far beyond our own and eventually developed altruism and peace, deciding to let multicellular life develop.

Or we eukaryotes are the stupid runaway "wet" technology grey goo of prior prokaryote/viral intelligent networks, and we /destroyed/ their networks and intelligence with our runaway reproduction. Maybe the reason we don't see disasters like forests and cities dissolving in swarms of Andromeda Strain-like universal gobblers is that safeguards against that were either engineered in, or outlawed, long ago. Or, more conventionally, evolved.

What we /do/ think we know about the history of life is that the Earth evolved single-celled life, or inherited it via panspermia etc., within about half a billion years of the Earth's coalescence; then some combination of goo more or less ruled the roost on the Earth's surface (as far as biology goes) for over three billion years, especially if you count colonies like stromatolites as gooey. In the middle of this long period was at least one event that looked like a goo apocalypse and remade the Earth profoundly enough that the traces are very obvious (e.g. huge beds of iron ore). But there could have been many more mass extinctions than we know of.

Then, less than a billion years ago, something changed profoundly and multicellulars started to flourish. This era is less than a sixth of the span of life on Earth. So... five-sixths goo-dominated world, one-sixth non-goo-dominated world, is the short history here. This does not fill me with confidence that our world is very stable against a new kind of goo based on non-wet, non-biochemical assemblers.

I do think we are pretty likely not to deploy grey goo, though. Not because humans are not idiots - I am an idiot, and it's the kind of mistake I would make, and I'm demonstrably above average by many measures of intelligence. It's just that I think Eliezer and others will deploy a pre-nanotech Friendly AI before we get to the grey goo tipping point, and that it will be smart enough, altruistic enough, and capable enough to prevent humanity from bletching the planet as badly as the green microbes did back in the day :)

Replies from: TitaniumDragon
comment by TitaniumDragon · 2014-05-28T23:27:36.600Z · LW(p) · GW(p)

You are starting from the premise that gray goo scenarios are likely, and trying to rationalize your belief.

Yes, we can be clever and think of humans as green goo - the ultimate in green goo, really. That isn't what we're talking about and you know it - yes, intelligent life can spread out everywhere, that isn't what we're worried about. We're worried about unintelligent things wiping out intelligent things.

The Great Oxygenation Event is not actually an example of a green goo scenario, though it is an interesting thing to consider - I'm not sure there is even a generalized term for that kind of scenario, as it was essentially slow atmospheric poisoning. It would be more of a generalized biocide scenario: the cyanobacteria which caused the event produced something that was incidentally toxic to other things. That toxicity had nothing to do with their own action and probably didn't even benefit most of them directly (the toxicity of the oxygen they produced probably didn't help them personally), and what actually took over afterwards were things rather different from what came before, many of which were not descended from said cyanobacteria.

It was a major atmospheric change, and it is (theoretically) a danger, though I'm not sure how much of an actual danger it is in the real world. We saw the atmosphere shift to an oxygen-dominated one, but I'm not sure how you'd do it again, as I'm not sure there's anything else toxic that can be freed en masse - oxidizers stronger than oxygen are hard to come by, and by their very nature are rather difficult to liberate from an energy-balance standpoint. It seems likely that our atmosphere is oxygen-based, and not, say, chlorine- or fluorine-based, for a reason arising from the physics of liberating those elements from chemical compounds.

As far as repeated green goo scenarios prior to 600 Mya go - I think that's pretty unlikely, honestly. Looking at microbial diversity and microbial genomes, we see that the domains of life are ridiculously ancient, and that diversity goes back an enormously long way in time. It seems very unlikely that repeated green goo scenarios would spare the amount of diversity we actually see in the real world. Eukaryotic life arose 1.6-2.1 Bya, and as far as multicellular life goes, we have evidence of cyanobacteria which showed signs of multicellularity 3 Bya.

That's a long, long time, and it seems unlikely that repeated green goo scenarios are what kept life simple. It seems more likely that what kept life simple was the fact that complexity is hard - indeed, I suspect the big advancement was actually a major advance in the modularity of life. The more modular life becomes, the easier it is to evolve quickly and adapt to new circumstances, but getting modularity from non-modularity is pretty tough to sort out. Once things did sort it out, though, we saw a massive explosion in diversity. Evolving to be better at evolving is a good strategy for continuing to exist, and I suspect that complex multicellular life only came to exist when things got to the point where this could happen.

If we saw repeated green goo scenarios, we'd expect the various branches of life to be pretty shallow - even if some diversity survived, we'd expect each diverse group to show a major bottleneck dating to whenever the last green goo occurred. But that's not what we actually see. Fungi and animals diverged about 1.5 Bya, for instance, and other eukaryotic diversity arose even prior to that. Animals have been diverging for 1.2 billion years.

It seems unlikely, then, that there have been any green goo scenarios in a very, very long time, if indeed they ever occurred. More likely, life evolved to prevent such scenarios, and did so successfully, which is why none have happened for so long.

Pestilence is not even close to green goo. Yes, introducing a new disease into a new species can be very nasty, but it almost never actually is, as most of the time it just doesn't work at all. Even within the same species, smallpox and other Old World diseases devastated Native American populations, but Native American diseases were not nearly so devastating to the Old Worlders.

Most things which try to jump the species barrier have a great deal of difficulty doing so, and even when they succeed, their virulence tends to drop over time, because being ridiculously fatal is actually bad for their own continued propagation. And humans have become increasingly better at stopping this sort of thing. I did note engineered plagues as the most likely technological threat, but comparing them to gray goo scenarios is very silly - pathogens are enormously easier to control. The trouble with stuff like gray goo is that it just keeps spreading; a pathogen, by contrast, requires a host. There are all sorts of barriers in place against pathogens, and everything has evolved to deal with them, because organisms sometimes have to face even new ones, and those more likely to survive exposure to novel pathogens are more likely to pass on their genes in the long term.

With regard to "intelligent viral networks" - this is just silly. Life on Earth is NOT the result of intelligence. You can tell this from our genomes. There are no signs of engineering ANYWHERE in us; no signs of intelligent design.

Replies from: private_messaging
comment by private_messaging · 2014-05-29T04:23:31.008Z · LW(p) · GW(p)

The gray goo scenario is predicated on the sort of thinking common in bad scifi.

Basically, in scifi the nanotech self-replicators which eat everything in their path are created in one step. Contrast this with a realistic depiction of technological progress, in which the first nanotech replicators have to sit in a batch of special nutrients and be microwaved, or otherwise provided with energy, while being kept perfectly sterile (to keep bacteria from eating your nanotech). Then they'd get gradually improved in a great many steps and find many uses ranging from cancer cures to dishwashers, with corresponding development in goo-control methods. You don't want your dishwasher goo eating your bread.

The levels of metabolic efficiency and sheer universality required for gray goo to be able to eat everything in its path (and that's stuff which hasn't gotten eaten naturally) require a multitude of breakthroughs on top of an incredibly advanced nanotechnology and nano-manufacturing capacity within artificial environments.

How does such an advanced civilization fight the gray goo? I can't know what the best method would be, but a goo equivalent of a bacteriophage is going to be a lot, lot less complicated than the goo itself (as the goo has to be able to metabolize a variety of foods efficiently).

Replies from: David_Gerard
comment by David_Gerard · 2014-06-01T13:19:46.093Z · LW(p) · GW(p)

Please add something like this to the RW nanotech article!

comment by ChristianKl · 2014-05-28T12:33:17.084Z · LW(p) · GW(p)

Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry.

That's a bad argument. We don't know for sure that intelligent life has arisen elsewhere. The fact that we don't see events like that can simply mean that we are the first.

Replies from: TitaniumDragon
comment by TitaniumDragon · 2014-05-28T22:42:05.733Z · LW(p) · GW(p)

That's a pretty weak argument due to the mediocrity principle and the sheer scale of the universe; while we certainly don't know the values for all parts of the Drake Equation, we have a pretty good idea, at this point, that Earth-like planets are probably pretty common, and given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn't hard in an absolute sense.

Most likely, the Great Filter lies somewhere in the latter half of the equation - complex, multicellular life, intelligent life, civilization, or the rapid destruction thereof. But even assuming that intelligent life only occurs in one galaxy out of every thousand, which is incredibly unlikely, that would still give us many opportunities to observe galactic destruction.
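
As a rough order-of-magnitude sketch of that concession (assuming, as a commonly cited estimate rather than anything from this thread, that the observable universe holds on the order of 10^11 to 10^12 galaxies):

```python
# Back-of-envelope check: even granting intelligent life in only one galaxy per
# thousand, the observable universe still offers an enormous number of chances
# to see galaxy-scale destruction. The galaxy count is an approximate, commonly
# cited figure, not a measured input.
galaxies_in_observable_universe = 2e11   # order of 10^11 to 10^12
assumed_rate = 1 / 1000                  # the deliberately pessimistic assumption above

print(galaxies_in_observable_universe * assumed_rate)  # ~2e8 galaxies with intelligent life
```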

It is theoretically possible that we're the only life in the Universe, but that is incredibly unlikely; most Universes in which life exists will have life exist in more than one place.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-29T01:13:34.310Z · LW(p) · GW(p)

given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn't hard in an absolute sense.

We don't even know that it occurred on earth at all. It might have occurred elsewhere in our galaxy and traveled to earth via asteroids.

most Universes in which life exists will have life exist in more than one place.

Why? I don't see any reason why that should be the case. Take, for example, the posts that internet forum users write: most of the time, users who write posts at all only ever write one post.

Replies from: army1987
comment by A1987dM (army1987) · 2014-06-01T08:01:02.129Z · LW(p) · GW(p)

We don't even know that it occurred on earth at all. It might have occurred elsewhere in our galaxy and traveled to earth via asteroids.

That would make it more likely that there's life on other planets, not less likely.

Replies from: ChristianKl
comment by ChristianKl · 2014-06-01T08:55:18.454Z · LW(p) · GW(p)

Most planets and stars in the universe are not in our galaxy. If our galaxy has a bit of unicellular life because some very rare event happened, and is the only galaxy with life, that is consistent with a universe where we are the only intelligent species.

Replies from: army1987
comment by A1987dM (army1987) · 2014-06-01T10:03:00.577Z · LW(p) · GW(p)

It looks like you accidentally submitted your comment before finishing it (or there's a misformatted link or something).

Replies from: ChristianKl
comment by ChristianKl · 2014-06-01T13:00:03.865Z · LW(p) · GW(p)

I corrected it.

comment by TitaniumDragon · 2014-05-27T23:23:31.470Z · LW(p) · GW(p)

After reading through all of the comments, I think I may have failed to address your central point here.

Your central point seems to be "a rational agent should take a risk that might result in universal destruction in exchange for increased utility".

The problem here is that I'm not sure this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but an agent's utility presumably already includes all life NOT being extinguished. In other words, the calculation isn't necessarily meaningful if the alternative to taking the risk itself makes universal annihilation more likely.

Say the Nazis gain an excessive amount of power. What happens then? Well, there's the risk that they make some sort of plague to cleanse humanity, screw it up, and wipe everyone out. That scenario seems MORE likely in a Nazi-run world than in one which isn't. And - let's face it - chances are the Nazis will try to develop nuclear weapons too, so at best you have only bought a few years, and if the wrong people develop them first, you're in a lot of trouble. So the fact of the matter is that the risk is going to be taken regardless, which diminishes the expected cost of taking it yourself - sooner or later, someone is going to do it, and if it isn't you, then it will be someone else who gains whatever benefits there are from it.

The higher-utility choice likely decreases the future odds of universal annihilation. In other words, it is entirely rational to take that risk, simply because the odds of destroying the world NOW are lower than the odds of the world being destroyed further down the line by someone else if you don't make this decision, especially if you can be reasonably certain someone else is going to try it anyway. And given that the odds are incredibly low, it is a lot less meaningful a choice to begin with.

comment by TitaniumDragon · 2014-05-27T22:47:57.918Z · LW(p) · GW(p)

Incidentally, regarding some other things in here:

[quote]They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.[/quote]

There's actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thusly, an area which avoids conflict has an enormous advantage over one that doesn't because it maintains more of its capital. As capital becomes increasingly important, conflict - at least violent, capital-destroying conflict - becomes massively less beneficial to its perpetrator, doubly so when the perpetrator also likely benefits, through trade, from the capital contained in other nations.

And that's ignoring the fact that we've already sort of engineered a global scenario where "The West" (the US, Canada, Japan, South Korea, Taiwan, Australia, New Zealand, and Western Europe, creeping now as far east as Poland) never attack each other, and slowly make everyone else in the world more like them. It is group selection of a sort, and it seems to be working pretty well. These countries defend their capital, and each other's capital, benefit from each other's capital, and engage solely in non-violent conflict with each other. If you threaten them, they crush you and make you more like them; even if you don't, they work to corrupt you to make you more like them. Indeed, even places like China are slowly being corrupted to be more like the West.

The more that sort of thing happens, the less likely violent conflict becomes because it is simply less beneficial, and indeed, there is even some evidence to suggest we are being selected for docility - in "the West" we've seen crime rates and homicide rates decline for 20+ years now.

As a final, random aside:

My favorite thing about the Trinity test was the scientist who was taking side bets on the annihilation of the entire state of New Mexico, right in front of the governor of said state, who I'm sure was absolutely horrified.

Replies from: Lumifer, Vaniver
comment by Lumifer · 2014-05-28T14:28:58.985Z · LW(p) · GW(p)

the fact that capital is vastly easier to destroy than it is to create

Capital is also easier to capture than it is to create. Your argument looks like saying that it's better to avoid wars than to lose them. Well, yeah. But what about winning wars?

we've already sort of engineered a global scenario where "The West" ... never attack each other

In which meaning are you using the word "never"? :-D

Replies from: TitaniumDragon, TheAncientGeek
comment by TitaniumDragon · 2014-05-28T23:42:54.271Z · LW(p) · GW(p)

The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital. Cruise missiles and drones are excellent for winning without any risk at all, but they're not good for actually keeping the capital you are trying to take intact.

Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.

As far as "never" goes - the last time any two "Western" countries were at war was World War II, which was more or less when the "West" came to be in the first place. It isn't the longest of time spans, but over time armed conflict in Europe has greatly diminished and been pushed further and further east.

Replies from: Lumifer
comment by Lumifer · 2014-05-29T16:07:03.902Z · LW(p) · GW(p)

The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital.

The best way to win a war is to have an overwhelming advantage. That sort of situation is much better described by the word "lopsided". Asymmetric warfare is something different.

Example: Iraqi invasion of Kuwait.

Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.

Spying can capture technology, but technology is not the same thing as capital. Neither subversion nor purchasing is a "means of capturing capital" at all. Subversion destroys capital, and purchases are exchanges of assets.

As far as "never" goes - the last time any two "Western" countries were at war was World War II, which was more or less when the "West" came to be in the first place.

That's an unusual idea of the West. It looks to me like it was custom-made to fit your thesis.

Can you provide a definition? One sufficiently precise to be able to allocate countries like Poland, Israel, Chile, British Virgin Islands, Estonia, etc. to either "West" or "not-West".

comment by TheAncientGeek · 2014-05-28T16:52:33.178Z · LW(p) · GW(p)

Depends on the capital. Doesn't work too well for infrastructure and human capital, and the West has plenty of those anyway. What the West is insecure about is energy, and it seems that a combination of diplomacy, threat and proxy warfare is a more efficient way to keep it flowing than all-out capture.

Replies from: Lumifer
comment by Lumifer · 2014-05-28T18:09:55.447Z · LW(p) · GW(p)

Doesn't work too well for infrastructure and human capital

Depends on the human capital. Look at the history of the US space program :-/

is a more efficient way to keep it flowing

At the moment. I'm wary of evolutionary arguments based on a few decades worth of data.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-28T18:17:39.590Z · LW(p) · GW(p)

The example of von Braun and co. crossed my mind. But that was something of a side effect. Fighting a war specifically to capture a smallish number of smart people is fraught with risks.

Replies from: TitaniumDragon
comment by TitaniumDragon · 2014-05-28T23:43:59.337Z · LW(p) · GW(p)

Opportunistic seizure of capital is to be expected in a war fought for any purpose.

comment by Vaniver · 2014-05-27T23:52:11.981Z · LW(p) · GW(p)

Incidentally, you can blockquote paragraphs by putting > in front of them, and you can find other help by clicking the "Show Help" button to the bottom right of the text box. (I have no clue why it's all the way over there; it makes it way less visible.)

There's actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thusly, an area which avoids conflict has an enormous advantage over one that doesn't because it maintains more of its capital.

But, the more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

Replies from: TitaniumDragon
comment by TitaniumDragon · 2014-05-28T23:47:33.256Z · LW(p) · GW(p)

The more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

This is only true if the conflict avoidance is innate and is not instead a form of reciprocal altruism.

Reciprocal altruism is an ESS (evolutionarily stable strategy) where pure altruism is not, because you cannot take advantage of it in this way; if you become belligerent, then everyone else turns on you and you lose. Thus, it is never to your advantage to become belligerent.

Replies from: Vaniver
comment by Vaniver · 2014-05-29T04:33:11.104Z · LW(p) · GW(p)

Agreed. The word 'avoid' and the group selection-y argument made me think it was a good idea to raise that objection and make sure we were discussing reciprocal pacifists, not pure pacifists.

comment by Vaniver · 2014-05-27T23:47:23.345Z · LW(p) · GW(p)

I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure?

Here is a contemporary paper discussing the risk, which doesn't seem to come up with the 3e-6 number, and here are some of Hamming's reflections. An excerpt from the second link:

Shortly before the first field test (you realize that no small scale experiment can be done--either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself!

Compton claims (in an interview with Pearl Buck that I cannot easily find online) that 3e-6 was actually the decision criterion (if the estimate was higher than that, they were going to shut down the project as more dangerous than the Nazis), and the estimate came in lower, and so they went ahead with the project.

In modern reactors, they try to come up with a failure probability by putting distributions on unknown variables during potential events, simulating those events, and then figuring out what portion of the joint input distribution will lead to a catastrophic failure. One could do the same with unknown parameters like the cross-section of nitrogen at various temperatures; "this is what we think it could be, and we only need to be worried if it's over here."
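
A minimal sketch of that kind of analysis, with made-up parameter names, distributions, and failure criterion (nothing below is the Manhattan Project's or any reactor's actual model):

```python
# Toy Monte Carlo version of the procedure described above: put distributions
# on the unknown inputs, sample the joint distribution, and report the fraction
# of draws that land in the "catastrophic" region. All inputs are invented.
import numpy as np

rng = np.random.default_rng(seed=0)
n_samples = 1_000_000

# Hypothetical uncertain physical inputs (made-up units and spreads).
reaction_cross_section = rng.lognormal(mean=-2.0, sigma=0.5, size=n_samples)
energy_loss_rate = rng.normal(loc=1.0, scale=0.2, size=n_samples)

# Hypothetical runaway criterion: energy gain outpaces losses.
runaway = reaction_cross_section > energy_loss_rate

p_catastrophe = runaway.mean()
print(f"Estimated catastrophic-failure probability: {p_catastrophe:.1e}")
# The decision question is then whether this estimate (and its uncertainty)
# clears the chosen threshold, e.g. the 3e-6 criterion Compton reportedly used.
```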

comment by TitaniumDragon · 2014-05-27T22:46:49.655Z · LW(p) · GW(p)

Apparently I don't know how to use this system properly.

comment by Jonathan_Graehl · 2009-08-06T23:20:00.781Z · LW(p) · GW(p)

In saying "our compression of subjective time can be exponential", do you actually mean that the compression rate may keep growing exponentially as a function of real time?

Replies from: PhilGoetz, Jonathan_Graehl
comment by PhilGoetz · 2009-08-06T23:56:00.716Z · LW(p) · GW(p)

Compression attained can be an exponential function of time. That's not the same as saying that compression rate can grow exponentially. I mean, if it's a "rate", it already expresses how compression grows, so "the compression rate grows exponentially" means "the first derivative of compression grows exponentially".

Anyway, compression can't keep increasing indefinitely, due to the Planck constant. Mike Vassar once did some back-of-envelope calculations showing that we have surprisingly few orders of magnitude to go before we hit it in terms of computational power - less than 20 orders of magnitude, IIRC. But two orders of magnitude is enough to kill us, in this scenario. Basically, it will take us something like 5 million years to reach another galaxy, at which point you might consider life safe. If we get just 2 orders of magnitude out of subjective time compression, that's like 500 million subjective years, and our survival to that point seems dubious.

(I went back to clarify this because I realized people don't usually think of 20 orders of magnitude as "surprisingly few".)

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2009-08-07T00:00:26.325Z · LW(p) · GW(p)

By "compression rate" I meant "compression ratio". Sorry for the confusion. But you know that if something grows exponentially, all of its nth derivatives do also, right?

I did know that the actual universe probably has some physical limits on how far you can shrink a computation in space and/or time, thus my question. Actually, I thought you might have used the not-math "exponential" as a way of saying "A LOT!!!"

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-07T00:05:56.728Z · LW(p) · GW(p)

Okay, it is the same thing as saying that compression rate can grow exponentially.

I meant exponential. I don't know if I believe it's exponential, but almost all other futurists say that things are speeding up (time is compressing) exponentially.

comment by Jonathan_Graehl · 2009-08-06T23:28:00.704Z · LW(p) · GW(p)

"if your utility is at risk of going negative; it's not possible that you would not accept a .999% risk". I assume you mean that if my utility is around 0, and things are trending toward worse, I should be happy to accept a 99.9% chance of destroying the universe (assuming I'm the .1% possibility gives me some improvement).

"Is life barely worth living? Buy a lottery ticket, and if you don't win, kill yourself - you win either way!" - probably not the best marketing campaign for the state-run lottery.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T23:49:37.730Z · LW(p) · GW(p)

"if your utility is at risk of going negative; it's not possible that you would not accept a .999% risk"

Look at where the semicolon is. You've combined the end of one clause with the beginning of a different clause.

"Is life barely worth living? Buy a lottery ticket, and if you don't win, kill yourself - you win either way!" - probably not the best marketing campaign for the state-run lottery.

I wrote a post on that.

comment by PhilGoetz · 2009-08-06T22:12:20.939Z · LW(p) · GW(p)

Here's another possible objection:

Much of time-discounting is due to uncertainty. Because you're more and more uncertain what the impact of a decision will be the farther you look into the future, you weigh that impact less and less the farther into the future you go.

But if you can find a way to predict future impacts that is unbiased, then you don't need to time-discount due to uncertainty! Um... do you?

No, I think you still do, because your probability distribution's variance increases the farther you look into the future. Rats.

comment by saturn · 2009-08-06T22:00:44.815Z · LW(p) · GW(p)

It seems like this post exhibits a great deal of omission bias. Refusing to make rational trade-offs with existential risk doesn't make the risk go away.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T23:18:29.687Z · LW(p) · GW(p)

You seem to be saying that we're doomed if we do, and doomed if we don't.

So, take on all these risks, because life is going to be extinguished anyway?

I don't think that's what you mean. I think, if you mean anything coherent, you mean that... no, I can't figure out what you might mean.

If you choose not to take a risk, it makes that risk go away. If you mean that you're going to get hit by some other risk that you didn't think of anyway, you are showing little faith in intelligence. As we learn more, we become aware of more and more risks. There can't be an infinite number of existential risks waiting for us, or we wouldn't be here. Therefore, we can expect to eventually anticipate most existential risks, and deal with them.

Replies from: saturn
comment by saturn · 2009-08-08T07:03:52.407Z · LW(p) · GW(p)

I'm assuming that avoiding risks involves a tradeoff versus other forms of human welfare, and I don't see why a strategy that makes humanity worse off but longer lived is necessarily preferable to one that makes humanity better off but shorter lived.

And yes, we're "doomed" in the sense that, as far as I understand, an infinitely long existence isn't an available option.

comment by Psychohistorian · 2009-08-06T18:29:53.055Z · LW(p) · GW(p)

We're talking about bringing existential threats to chances less than 1 in a million per century. I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.

Under your theory of 3/1M/Century, you'd only need to do better than a 1/3 failure rate to lower chances to 1/1M/C. A 1/3 failure rate seems rather plausible. If the defense had a 1/1M failure rate, you'd have a 3/1,000,000,000,000 chance of eradication per century.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T18:42:39.334Z · LW(p) · GW(p)

Assume that there is at least one attack per century, and that a successful attack will end life. Therefore, you need a failure rate of less than about 1 in a million to have a decent chance of surviving a million centuries.
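
A quick numerical check of that arithmetic, under the stated assumption of exactly one independent, life-ending-if-successful attack per century:

```python
# If every century brings one attack that ends life with probability p, the
# chance of surviving N centuries is (1 - p)**N, roughly exp(-p*N).
def p_survive(p_per_century, n_centuries):
    return (1.0 - p_per_century) ** n_centuries

print(p_survive(1e-6, 1_000_000))  # ~0.37: even a one-in-a-million failure rate
                                   # gives only ~e^-1 odds over a million centuries
print(p_survive(1 / 3, 100))       # ~2.5e-18: a 1/3 failure rate is hopeless
                                   # even over a hundred centuries
```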

Replies from: Psychohistorian, timtyler
comment by Psychohistorian · 2009-08-06T21:26:08.880Z · LW(p) · GW(p)

This assumes that a successful attack will end life with P~=1 and that such an attack will be attempted once per century, which seems, to put it mildly, excessive.

As I understood your original assumption, each century sees one event with P=3/1M of destruction, independent of any intervention. If an intervention has a 1/3 failure rate, and you intervene every time, this would reduce your chance of annihilation/century to 1/1M, which is your goal.

It's quite possible we're thinking of different things when we say failure rate; I mean the failure rate of the defensive measure; I think you mean the failure rate as in the pure odds the world blows up.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T21:58:17.042Z · LW(p) · GW(p)

I wasn't using the Trinity case when I wrote that part. This part assumes that we will develop some technology X capable of destroying life, and that we'll also develop technology to defend against it. Then say each century sees 1 attack using technology X that will destroy life if it succeeds. (This may be, for instance, from a crazy or religious or just very very angry person.) You need to defend successfully every time. It's actually much worse, because there will probably be more than one Technology X.

If you think about existing equilibria between attackers and defenders, such as spammers vs. spam filters, it seems unlikely that, once technology has stopped developing, every dangerous technology X will have such a highly-effective defense Y against it. The priors, I would think, would be that (averaged over possible worlds) you would have something more like a 50% chance of stopping any given attack.
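
A small sketch of what that kind of prior would mean under this model (one attack per century; the defense success probabilities below are illustrative):

```python
# The time to the first successful attack is geometrically distributed, so with
# per-attack defense success probability q the expected survival time is
# 1 / (1 - q) centuries.
def expected_survival_centuries(q):
    return 1.0 / (1.0 - q)

print(expected_survival_centuries(0.5))       # 2 centuries with coin-flip defenses
print(expected_survival_centuries(0.999999))  # ~1e6 centuries only if defenses fail
                                              # about once per million attempts
```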

comment by timtyler · 2009-08-06T20:51:00.001Z · LW(p) · GW(p)

Every organism you see is the result of an unbroken chain of non-extinction that stretches back some 4 billion years. The rate of complete failure for living systems is not known - but it appears to have been extremely low so far.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T21:25:43.875Z · LW(p) · GW(p)
  • Time compression did not start recently. (Well, it did, once you account for time compression.)

  • Bacteria have limited technological capabilities.

comment by topynate · 2009-08-06T18:17:40.013Z · LW(p) · GW(p)

Your argument assumes that the time-horizon of rational utility maximisers never reaches further than their next decision. If I only get one shot to increase my expected utility by 1%, and I'm rational, yes, I'll take any odds better than 99:1 in favour on an all or nothing bet. That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.

Further, the risks of not using nuclear weapons in the Second World War were nothing like those you gave. Japan was in no danger of occupying the United States at the time the decision to initiate the Trinity test was made; as for Germany, it had already surrendered! The anticipated effects of not using the Bomb were rather that of losing perhaps several hundred thousand soldiers in an invasion of Japan, and of course the large economic cost of prolonging the war. As for the calculated risk of atmospheric ignition, the calculated risk at the time of testing was even lower than the 300000:1 you stated (although Fermi offered evens on the morning of the Trinity test).

Replies from: PhilGoetz, Cyan, teageegeepea
comment by PhilGoetz · 2009-08-06T18:44:25.352Z · LW(p) · GW(p)

That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.

It is derived from real-life experiences, which I listed. Yes, it is almost always possible to stake less than your entire utility. Almost. Hence the "once per century", instead of "billions of times per day".

Further, the risks of not using nuclear weapons in the Second World War were nothing like those you gave. Japan was in no danger of occupying the United States at the time the decision to initiate the Trinity test was made; as for Germany, it had already surrendered! The anticipated effects of not using the Bomb were rather that of losing perhaps several hundred thousand soldiers in an invasion of Japan, and of course the large economic cost of prolonging the war.

I think that's right. Do you realize you are only making my case stronger, by showing that the decision was made by somewhat-rational people for even fewer benefits?

If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point. Arguing about what the risks and benefits truly were historically is irrelevant. It doesn't really matter what actual humans did in an actual situation, because we already agree that humans are irrational. What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.

As for the calculated risk of atmospheric ignition, the calculated risk at the time of testing was even lower than the 300000:1 you stated

So what were they?

Replies from: topynate
comment by topynate · 2009-08-06T19:36:50.319Z · LW(p) · GW(p)

I meant that low-existential-risk wagers are almost always available, regardless of the presence or absence of high-existential-risk wagers, and I claimed that those low-risk wagers are preferable even when the cost of not taking the high-risk wager is very high, provided, that is, that you have a long time horizon. The only time you should take a high-existential-risk wager is when your long-term future utility will be permanently and substantially decreased by your not doing so. That doesn't apply to your example of the first nuclear test, as the alternative would not lead to the nightmare invasion scenario, but rather to a bloody but recoverable mess. So you haven't proven the rationality of testing nukes (although only in that scenario, as you point out).

If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point.

It actually still depends on the time horizon and the utility of a surrender or truce. I suppose a 30 year cut-off could be short enough to make it rational. Of course if being overrun by Nazis is assumed to lead to eternal hellish conditions...

What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.

I really don't think we can. If the utility of not-testing is zero at all times after T, the utility of destroying the Earth is zero at all times after T, and the utility of winning the war is greater than or equal to 0 at all times after T, then whatever your time horizon you will test if you are an expected utility maximizer. Where I disagree is that there exists a base rate of anywhere like once per century for taking a 3 in a million chance of destroying the world in any scenarios where those sorts of figures don't hold. I also disagree that rational agents will confront each other with such choices anything like so often as once per century.
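
A tiny illustration of the expected-utility comparison at the start of this comment (the probability and utilities below are placeholders, not anyone's actual estimates):

```python
# With U(not test) = 0, U(destroying the Earth) = 0, and U(winning the war) >= 0,
# testing weakly dominates not testing for any ignition probability in [0, 1].
p_ignite = 3e-6   # placeholder ignition probability
u_win = 1.0       # placeholder; any non-negative value gives the same conclusion
expected_u_test = p_ignite * 0.0 + (1.0 - p_ignite) * u_win
expected_u_not_test = 0.0
print(expected_u_test >= expected_u_not_test)  # True
```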

As for the calculated risk of atmospheric ignition, the calculated risk at the time of testing was even lower than the 300000:1 you stated

So what were they?

They were zero! Proved to be physically impossible, and then no-one seriously questioned the validity of the calculations. As for the best-possible estimate, it must have been higher, of course.

Replies from: PhilGoetz, PhilGoetz, PhilGoetz
comment by PhilGoetz · 2009-08-06T20:00:42.913Z · LW(p) · GW(p)

Where I disagree is that there exists a base rate of anywhere like once per century for taking a 3 in a million chance of destroying the world in any scenarios where those sorts of figures don't hold. I also disagree that rational agents will confront each other with such choices anything like so often as once per century.

If you were to wind the clock back to 1940, and restart World War 2, is there less than a 10% chance of arriving in the nightmare scenario? If not, doesn't this imply the base rate is at least once per thousand years?

comment by PhilGoetz · 2009-08-06T19:44:10.404Z · LW(p) · GW(p)

I think that you're historically correct, but it's enough to posit a hypothetical World War 2 in which the alternative was the nightmare invasion scenario, and show that it would then be rational to conduct the Trinity test.

Replies from: topynate
comment by topynate · 2009-08-06T19:54:57.921Z · LW(p) · GW(p)

I've edited my comment in response to the hypothetical.

comment by PhilGoetz · 2009-08-06T19:58:40.766Z · LW(p) · GW(p)

It actually still depends on the time horizon. I suppose a 30 year cut-off could be short enough to make it rational. Of course if being overrun by Nazis is assumed to lead to eternal hellish conditions...

The time-horizon is very important. One of my points is that I don't see how a rational agent could have a time-horizon on the scale of the life of the universe.

comment by Cyan · 2009-08-06T18:29:41.494Z · LW(p) · GW(p)

although Fermi offered evens on the morning of the Trinity test

I bet I know which side of the bet Fermi was willing to take.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T19:10:54.803Z · LW(p) · GW(p)

It would be rational to offer any odds for that bet.

comment by teageegeepea · 2009-08-06T19:03:29.725Z · LW(p) · GW(p)

Seconded regarding the stakes in WW2. The scientists weren't on the front lines either, so it's highly doubtful they would have been killed.

comment by HopeFox · 2011-05-29T13:02:53.696Z · LW(p) · GW(p)

Assuming rational agents with a reasonable level of altruism (by which I mean, incorporating the needs of other people and future generations into their own utility functions, to a similar degree to what we consider "decent people" to do today)...

If such a person figures that getting rid of the Nazis or the Daleks or whoever the threat of the day is, is worth a tiny risk of bringing about the end of the world, and their reasoning is completely rational and valid and altruistic (I won't say "unselfish" for reasons discussed elsewhere in this thread) and far-sighted (not discounting future generations too much)...

... then they're right, aren't they?

If the guys behind the Trinity test weighed the negative utility of the Axis taking over the world, presumably with the end result of boots stamping on human faces forever, and determined that the 3/1,000,000 chance of ending all human life was worth it to prevent that future from coming to pass, then couldn't Queen Victoria perform the same calculations, and conclude "Good heavens. Nazis, you say? Spreading their horrible fascism in my empire? Never! I do hope those plucky Americans manage to build their bomb in time. Tiny chance of destroying the world? Better they take that risk than let fascism rule the world, I say!"

If the utility calculations performed regarding the Trinity test were rational, altruistic and reasonably far-sighted, then they would have been equally valid if performed at any other time in history. If we apply a future discounting factor of e^-kt, then that factor would apply equally to all elements in the utility calculation. If the net utility of the test were positive in 1945, then it should have been positive at all points in history before then. If President Truman (rationally, altruistically, far-sightedly) approved of the test, then so should Queen Victoria, Julius Caesar and Hammurabi have, given sufficient information. Either the utility calculations for the test were right, or they weren't.

If they were right, then the problem stops being "Oh no, future generations are going to destroy the world even if they're sensible and altruistic!", and starts being "Oh no, a horrible regime might take over the world! Let's hope someone creates a superweapon to stop them, and damn the risk!"

If they were wrong, then the assumption that the ones performing the calculation were rational, altruistic and far-sighted is wrong. Taking these one by one:

1) The world might be destroyed by someone making an irrational decision. No surprises there. All we can do is strive to raise the general level of rationality in the world, at least among people with the power to destroy the world.

2) The world might be destroyed by someone with only his own interests at heart. So basically we might get stuck with Dr Evil. We can't do a lot about that either.

3) The world might be destroyed by someone acting rationally and altruistically for his own generation, but who discounts future generations too much (i.e. his value of k in the discounting factor is much larger than ours). This seems to be the crux of the problem. What is the "proper" value of k? It should probably depend on how much longer humans are going to be around, for reasons unrelated to the question at hand. If the world really is going to end in 2012, then every dollar spent on preventing global warming should have been spent on alleviating short-term suffering all over the world, and the proper value for k is very large. If we really are going to be here for millions of years, then we should be exceptionally careful with every resource (both material and negentropy-based) we consume, and k should be very small. Without this knowledge, of course, it's very difficult to determine what k should be.

That may be the way to avoid a well-meaning scientist wiping out all human life - find out how much longer we have as a species, and then campaign for everyone to live their lives accordingly. Then the only existential risks that would be taken are the ones that are actually, seriously, truly, incontrovertibly, provably worth it.
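
A small numerical illustration of how much work the choice of k in the e^-kt factor does (my own sketch; the rates and horizon are arbitrary):

```python
# Present value of a constant utility stream of 1 per year over a horizon H,
# discounted at rate k, is (1 - exp(-k*H)) / k: it approaches H as k -> 0 and
# about 1/k once k*H is large.
import math

def discounted_value(k, horizon_years):
    if k == 0:
        return float(horizon_years)
    return (1.0 - math.exp(-k * horizon_years)) / k

for k in (0.0, 1e-6, 1e-3, 0.05):
    print(k, discounted_value(k, 1_000_000))
# With k = 0.05 the next million years are worth about 20 "present-value" years;
# with k = 1e-6 they are worth about 632,000. The choice of k drives the answer.
```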

Replies from: PhilGoetz
comment by PhilGoetz · 2014-04-29T14:09:58.639Z · LW(p) · GW(p)

You've sidestepped my argument, which is that just the existential risks that are worth it are enough to guarantee destroying the universe in a cosmologically short time.

comment by timtyler · 2009-08-06T19:24:37.922Z · LW(p) · GW(p)

Re: "If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed."

Unsupported hypothesis. As life spreads out in the universe, it gets harder and harder to destroy all of it - while the technology of destruction will stabilise.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T19:41:07.158Z · LW(p) · GW(p)

Unsupported hypothesis. As life spreads out in the universe, it gets harder and harder to destroy all of it - while the technology of destruction will stabilise.

Did you read the essay before posting? I have a section on life spreading out in the universe, and a section on whether the technology of destruction can stabilize.

Replies from: timtyler
comment by timtyler · 2009-08-06T20:30:31.411Z · LW(p) · GW(p)

Yes, I read the essay. It doesn't make it any more inevitable. It isn't inevitable at all - rather this is speculation on your part, and unsubstantiated speculation that I can see no sensible basis for.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-06T21:14:57.974Z · LW(p) · GW(p)

It would be more helpful if you explained why each of the many reasons I gave is insensible.

Replies from: homunq, timtyler
comment by homunq · 2009-08-07T10:32:01.474Z · LW(p) · GW(p)

When arguing about the future, the imaginable is not all there is. You essentially gave several imaginable futures (some in which risks continue to arise, and others in which they do not) and did some handwaving about which class you considered likely to be larger. There are three ways to dispute this: to dispute your handwaving (e.g., you treat compression of subjective time as a conclusive argument, as if it were inevitable), to propose not-yet-considered classes of future (e.g., technology continues to advance, but some immutable law of the universe means that there are only a finite number of apocalyptic technologies), or to maintain that there are large classes of future which cannot possibly be imagined because they do not clearly fall into any categories we are likely to define in the present. If you take the last line of dispute, arguing about probability is just arguing about which uninformative prior to use.

Replies from: PhilGoetz, torekp
comment by PhilGoetz · 2009-08-08T03:59:34.556Z · LW(p) · GW(p)

I'm not pretending this is an airtight case. If you previously assumed that existential threats converge to zero as rationality increases, or that rationality is always the best policy, or that rationality means expectation maximization, and now you question one of those things, then you've gotten something out of it.

comment by torekp · 2009-08-08T01:36:50.972Z · LW(p) · GW(p)

homunq suggests that there may be immutable laws of the universe that mean there are only a finite number of apocalyptic technologies. Note that even if the probability of such technological limits is small, in order for Phil's argument to work, either that probability would have to be infinitesimal, or some of the doomsday devices would have to continue to be threatening after the various attack/defense strategies reach a very mature level of development. All of the probabilities look finite to me.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-08T04:01:23.184Z · LW(p) · GW(p)

No; that probability about a property of the universe is a one-shot trial. It only has to be false once, out of one trial.

Replies from: torekp
comment by torekp · 2009-08-08T13:07:22.286Z · LW(p) · GW(p)

So your thesis is not that rationality dooms civilization, but only that as far as we know, it might. I get it now.

comment by timtyler · 2009-08-07T17:40:15.342Z · LW(p) · GW(p)

You talk as if you have presented a credible case - but you really haven't.

Instead there is a fantasy about making black holes explode (references, please!), another fantasy about subjective time compression outstripping expansion, and a story about disasters triggering other disasters - which is true, but falls a long way short of a credible argument that civilisation is likely to be wiped out once it has spread out a bit.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-08T03:51:01.694Z · LW(p) · GW(p)

Instead there is a fantasy about making black holes explode (references, please!)

You have me there. We have not yet successfully detonated a black hole.

Small black holes are expected to eventually explode. Large black holes are expected to take longer than the expected life of the universe to evaporate to that point.

Anyway, I'm not a physicist. It's just a handwavy example that maybe there is some technology with solar-scale or galaxy-scale destructive power. When all the humans lived on one island, they didn't imagine they could one day destroy the Earth.

Replies from: DBseeker, timtyler
comment by DBseeker · 2009-08-11T02:52:48.417Z · LW(p) · GW(p)

Anyway, I'm not a physicist. It's just a handwavy example that maybe there is some technology with solar-scale or galaxy-scale destructive power.

Then the example is pointless. A weapon powerful enough to cause galaxy-wide extinction is a very big if. It's unlikely one exists, simply because of the massive distances between stars.

Also, if you base your argument (or part of it, anyway) on such an event, it is equally fair to state "if not". And in the case of "if not" (which I imagine to be far more likely), the argument must end there.

Therefore, it is reasonable to assume that yes, we could outrun our own destructive tendencies.

When all the humans lived on one island, they didn't imagine they could one day destroy the Earth.

At that point in our evolution we had no firm grasp of what "world" even meant, let alone a basic understanding of scale. Now we do. We also have a basic understanding of the universe, and a method to increase our understanding (the ability to postulate theories, run experiments, and collect evidence). When all humans (or, more likely, our ancestors) were confined to one geographic location, none of these things even existed as concepts. There are a few more problems with this comparison, but I'll leave them alone for now, as nothing here turns on them.

comment by timtyler · 2009-08-11T21:18:22.426Z · LW(p) · GW(p)

I wasn't asking for references supporting the idea that we had detonated a black hole. It's an incredible weapon, which seems to have a low probability of existing - based on what we know about physics. The black hole at the center of our galaxy is not going to go away any time soon.

Bizarre future speculations which defy the known laws of physics don't add much to your case.