Disambiguating Doom
post by steven0461 · 2010-03-29T18:14:12.075Z · LW · GW · Legacy · 19 comments
Analysts of humanity's future sometimes use the word "doom" rather loosely. ("Doomsday" has the further problem that it privileges a particular time scale.) But doom sounds like something important; and when something is important, it's important to be clear about what it is.
Some properties that could all qualify an event as doom:
- Gigadeath: Billions of people, or some number roughly comparable to the number of people alive, die.
- Human extinction: No humans survive afterward. (Or, modified: no human-like life survives, or no sentient life survives, or no intelligent life survives.)
- Existential disaster: Some significant fraction, perhaps all, of the future's potential moral value is lost. (Coined by Nick Bostrom, who defines an existential risk as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential", which I interpret to mean the same thing.)
- "Doomsday argument doomsday": The total number of observers (or observer-moments) in existence ends up being small – not much larger than the total that have existed in the past. This is what we should believe if we accept the Doomsday argument.
- Great filter: Earth ends up not colonizing the stars, or doing anything else widely visible. If all species are filtered out, this explains the Fermi paradox.
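The arithmetic behind a "Doomsday argument doomsday" (4) is worth seeing explicitly. A minimal Bayesian sketch, with hypothetical numbers of my own choosing: compare a "doom soon" hypothesis against a "cosmic future" hypothesis, given a birth rank of roughly 100 billion, under the self-sampling assumption that we are a uniform random draw from all observers who will ever exist.

```python
# Doomsday argument, two-hypothesis Bayesian sketch (illustrative numbers,
# not from the post). Under self-sampling, birth rank is uniform on 1..N,
# so the likelihood of any observed rank is 1/N -- which favors small N.
rank = 100e9            # rough number of humans born so far
n_short = 200e9         # "doom soon": 200 billion observers ever
n_long = 200e12         # "cosmic future": 200 trillion observers ever

prior_short = prior_long = 0.5          # indifferent prior over hypotheses
like_short = 1 / n_short                # P(rank | N) = 1/N, since rank <= N
like_long = 1 / n_long

posterior_short = (prior_short * like_short) / (
    prior_short * like_short + prior_long * like_long
)
print(round(posterior_short, 4))  # 0.999: "doom soon" wins at 1000:1 odds
```

The same mechanics generalize: whatever the prior, observing an early rank multiplies the odds in favor of the smaller total by the ratio of the hypothesized population sizes.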
Examples to illustrate that these properties are fundamentally different:
- If billions die (1), humanity may still recover and not go extinct (2), retain most of its potential future value (3), spawn many future observers (4), and colonize the stars (5). (E.g., nuclear war, but also aging.)
- If cockroaches or Klingon colonists build something even cooler afterward, human extinction (2) isn't an existential disaster (3), and conversely, the creation of an eternal dystopia could be an existential disaster (3) without involving human extinction (2).
- Human extinction (2) doesn't imply few future observers (4) if it happens too late, or if we're not alone; and few future observers (4) doesn't imply human extinction (2) if we all live forever childlessly. (It's harder to find an example of few observer-moments without human extinction, short of p-zombie infestations.)
- If we create an AI that converts the galaxy to paperclips, humans go extinct (2) and it's an existential disaster (3), but it isn't part of the great filter (5). (For an example where all intelligence goes extinct, implying few future observers (4) for any definition of "observer", consider physics disasters that expand at light speed.) If our true desire is to transcend inward, that's part of the great filter (5) without human extinction (2) or an existential disaster (3).
- If we leave our reference class of observers for a more exciting reference class, that's a doomsday argument doomsday (4) but not an existential disaster (3). The aforementioned eternal dystopia is an existential disaster (3) but implies many future observers (4).
- Finally, if space travel is impossible, that's a great filter (5) but compatible with many future observers (4).
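The independence claims in these examples can be checked mechanically. A sketch in Python, where each scenario is my hypothetical encoding of the post's examples as truth values for the five properties:

```python
# Each scenario is a tuple of booleans for the five properties. The
# specific truth assignments are my own readings of the post's examples.
GIGADEATH, EXTINCTION, EXISTENTIAL, FEW_OBSERVERS, GREAT_FILTER = range(5)

scenarios = {
    "nuclear war, then recovery":      (True,  False, False, False, False),
    "extinction, cooler successors":   (True,  True,  False, False, False),
    "eternal dystopia":                (False, False, True,  False, False),
    "universal childless immortality": (False, False, True,  True,  True),
    "paperclip AI":                    (True,  True,  True,  False, False),
    "voluntary inward transcendence":  (False, False, False, False, True),
    "exit to another reference class": (False, False, False, True,  True),
}

def comes_apart(a, b):
    """True if some scenario has property a without property b,
    i.e. a does not imply b across these scenarios."""
    return any(s[a] and not s[b] for s in scenarios.values())
```

Under this encoding, `comes_apart(EXTINCTION, EXISTENTIAL)` and its converse both hold, matching the successor-species and dystopia examples; the one implication the table never breaks is extinction without gigadeath, which the post does not claim to separate.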
19 comments
Comments sorted by top scores.
comment by PhilGoetz · 2010-03-29T22:36:29.880Z · LW(p) · GW(p)
> Gigadeath: Billions of people, or some number roughly comparable to the number of people alive, die.
That happened during the 20th century.
> Human extinction: No humans survive afterward. (Or modified slightly: no human-like life survives, or no sentient life survives, or no intelligent life survives.)
Wait wait wait... These are four vastly different things.
> Existential disaster: Some significant fraction, perhaps all, of the future's potential moral value is lost.
Since a whole lotta people here don't believe in morals, or at least not without so many qualifications that the average Joe wouldn't recognize what they were talking about, you need to explain this in a different way.
> "Doomsday argument doomsday": The total number of observers (or observer-moments) in existence ends up being small – not much larger than the total that have existed in the past.
Not all observers are equal; so counting them is not enough. Is a singleton a doomsday?
Replies from: Tyrrell_McAllister, steven0461
↑ comment by Tyrrell_McAllister · 2010-03-30T15:33:35.472Z · LW(p) · GW(p)
> Existential disaster: Some significant fraction, perhaps all, of the future's potential moral value is lost.

> Since a whole lotta people here don't believe in morals, or at least not without so many qualifications that the average Joe wouldn't recognize what they were talking about, you need to explain this in a different way.
It all adds up to normality. The average Joe might not credit our explanations of what morality is. But such explanations are about what morality is "behind the scenes". That is, they are explanations of what stands behind our experience of morally evaluating something. But that experience itself would probably be very familiar to the average Joe.
So, when we talk about a morally value-less future, the experience that we anticipate, were we to know of this future, is just the normal one of moral repugnance that Joe would expect.
↑ comment by steven0461 · 2010-03-29T23:02:12.922Z · LW(p) · GW(p)
> Wait wait wait... These are four vastly different things.
fixed
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-30T04:05:12.445Z · LW(p) · GW(p)
It still appears the same to me.
No humans surviving is a very different set of possible worlds than no human-like life surviving. No humans surviving is my default assumption; the latter is not.
Sentient life could be very un-human. Intelligent life could be non-sentient.
Replies from: steven0461
↑ comment by steven0461 · 2010-03-30T05:50:16.266Z · LW(p) · GW(p)
I know. The original said "slightly modified" because the modifications were only one word. The current version makes no claims about whether the modifications are slight in any sense.
comment by Mitchell_Porter · 2010-03-31T01:58:55.740Z · LW(p) · GW(p)
I am very skeptical about SIA, but I've always respected the doomsday argument, and lately I wonder if Bill Joy luddism is the right response.
If there's a great filter ahead it is far more likely to be involved with the advanced technologies which are meant to make galactic civilization possible in the first place, rather than some unanticipated tripwire in the natural world. So if we interpret the doomsday argument as information about the danger of these advanced technologies - if we do this, we are overwhelmingly likely to die - then isn't the logical action just to fight them down at every opportunity, rather than trying to be lucky by being ultra-smart about how we develop and deploy them? Yes, if we don't go there we forego a future of cosmic expansion, but if such a future is overwhelmingly unlikely, then the rational thing to do may be precisely to stay within our own little bubble here in this solar system.
ETA: One other observation: Those hoping for a really long future lifespan may feel aggrieved by a civilizational strategy which seems to eschew the technologies you would need for radical life extension. In this regard I have noticed one thing. Suppose you had a civilization whose members stopped reproducing but which all lived for a million years. At the very beginning of those million years they might discover the doomsday argument and conclude that no-one would get to live so long. But if you are going to live for a million years, you first have to live for ten years, fifty years, a hundred years, and so on. So it is inevitable that such erroneous ideas would arise early. However, if you not only live for a million years, but plan on expanding into the universe and having lots of descendants who also live that long, then this argument is no longer valid, because the majority of observer-moments should still be in the distant future rather than back here on the planet of origin. Therefore, I see some hope that you can have very long lifespans without risking doom, if your society explicitly stops creating new observers. Though I have to think that the technologies for radical life extension are intrinsically threatening anyway; it would require remarkable discipline to have rejuvenating biotechnology or a solid-state platform for consciousness, and not to develop dangerous forms of nanotechnology and artificial intelligence.
Replies from: Mallah, Jack
↑ comment by Mallah · 2010-03-31T03:59:27.522Z · LW(p) · GW(p)
> I am very skeptical about SIA

Rightly so, since the SIA is false.
The Doomsday argument is correct as far as it goes, though my view is that the most likely filter is environmental degradation plus AI running into problems.
↑ comment by Jack · 2010-03-31T03:00:19.348Z · LW(p) · GW(p)
> So if we interpret the doomsday argument as information about the danger of these advanced technologies - if we do this, we are overwhelmingly likely to die - then isn't the logical action just to fight them down at every opportunity, rather than trying to be lucky by being ultra-smart about how we develop and deploy them?
This would make a lot of sense if there were any way to enforce it. As it stands defecting would be way too easy and the incentives to do so way too high. Further, the people most likely to defect would be those we least want deciding how new technology is deployed.
comment by LoganStrohl (BrienneYudkowsky) · 2013-08-04T04:41:39.240Z · LW(p) · GW(p)
The "doomsday argument" link is broken.
comment by taw · 2010-03-30T05:53:43.162Z · LW(p) · GW(p)
If by "gigadeath" you mean a massive drop in world population levels within a very short timespan, it has already happened once - and I don't understand why it had so little effect on human civilization.
Replies from: RobinZ
↑ comment by RobinZ · 2010-03-30T14:02:22.898Z · LW(p) · GW(p)
Back up - the very Wikipedia page you link says:
> The Black Death is estimated to have killed 30% to 60% of Europe's population, reducing the world's population from an estimated 450 million to between 350 and 375 million in 1400. This has been seen as creating a series of religious, social and economic upheavals which had profound effects on the course of European history. It took 150 years for Europe's population to recover. The plague returned at various times, resulting in a larger number of deaths, until it left Europe in the 19th century. [emphasis added]
Replies from: taw
↑ comment by taw · 2010-03-30T16:46:28.180Z · LW(p) · GW(p)
Effects like what? (A fall in population level is not a result in itself - that's like saying "the Great Depression caused GDP decline".)
Compared to the killing of half the world's population, the effects were ridiculously close to zero. There's no discontinuity in history before and after the late 1340s. No major country fell, let alone an entire civilization. Social structures were what they used to be. No major region changed its religion. Society stayed as it was. Nearly nothing happened.
Replies from: Mass_Driver, knb
↑ comment by Mass_Driver · 2010-03-30T17:55:11.602Z · LW(p) · GW(p)
An increase in wages, and a trend toward the legal emancipation of serfs in England. Without emancipated serfs (i.e. peasants) it's much harder to start an Industrial Revolution, because the surplus food you need for factory workers isn't grown by serfs who have no incentive to increase their productivity, and the surplus labor you need to move to the cities is legally forbidden to move around.
Because "killing half of world population" happened over 300 years in an era with high mortality rates, you wouldn't expect consequences much more serious than that. Even if a village could lose 50% of its population in 4 days, that sort of thing also happened as a result of wars from time to time -- the plague acted like an increase in the frequency of war.
http://en.wikipedia.org/wiki/Black_Death_in_England#Economic.2C_social_and_political_consequences
Replies from: taw
↑ comment by taw · 2010-03-30T22:18:00.633Z · LW(p) · GW(p)
This is a particularly bad example of post-hoc-ergo-propter-hoc thinking - everything that happened between 1000 BCE and 1800 CE anywhere near Britain has a few just-so stories about how it caused the Industrial Revolution.
But the Industrial Revolution happened so much later that this doesn't make any sense; and it didn't look anything like what you're suggesting. The major increase in agricultural productivity only happened in the 1700s - far, far after the Black Death - and agriculture had been capable of local surpluses sustaining large urban populations since pretty much the Neolithic: a single farmer, given enough good land and a decent level of capital, can easily produce far more than he needs. Through taxes, rents, forced labour, or some other mechanism, you can generate as high a food surplus as you desire. So essentially every major urban center in history imported massive amounts of food from rural regions. Without looking far, Athens and Rome are two examples of huge, well-known ancient grain importers for which such imports were a very important part of foreign politics. Serfdom or the lack of it has nothing to do with the capability of creating food surpluses.
Also - England was one of the countries least affected by the Black Death, so if the Black Death caused something in England, it should have had a similar effect in the countries that were more badly affected - and that's very definitely not true. On the other hand, some of the other countries least affected by the Black Death, in Eastern Europe, moved toward greater serfdom (and, contrary to what you imply, these serf-based economies were major food exporters feeding the centers of the Industrial Revolution in Western Europe), so there's no correlation here either.
tl;dr - serfdom and food surpluses are uncorrelated; the increase in food surpluses happened many centuries too late; and you cannot claim the Black Death caused something in a less-affected country like England while mysteriously it didn't in far more affected countries.
↑ comment by knb · 2010-03-30T18:24:00.067Z · LW(p) · GW(p)
I believe the fall of the Byzantine Empire was partially caused by the Black Death.
Replies from: taw
↑ comment by taw · 2010-03-30T22:00:09.860Z · LW(p) · GW(p)
No, the Byzantine Empire was broken by the Battle of Manzikert in 1071, and then by the Fourth Crusade in 1204. After that it was hardly an "Empire" any more - it was a minor country surrounded by much bigger powers.
And the plague story doesn't work at all - Byzantium fell over a century after the plague.
Replies from: knb
comment by Rain · 2010-03-29T18:29:00.277Z · LW(p) · GW(p)
Doom is inevitable. (see: Ultimate fate of the universe)
I will agree that any particular doom event is often misunderstood, overplayed, improperly categorized, etc.