Existential Risk

post by lukeprog · 2011-11-15T14:23:18.220Z · LW · GW · Legacy · 108 comments

This is a "basics" article, intended for introducing people to the concept of existential risk.

On September 26, 1983, Soviet officer Stanislav Petrov saved the world.

Three weeks earlier, Soviet interceptors had shot down a commercial jet, thinking it was on a spy mission. All 269 passengers were killed, including sitting U.S. congressman Lawrence McDonald. President Reagan called the Soviet Union an "evil empire" in response. It was one of the most intense periods of the Cold War.

Just after midnight on September 26, Petrov sat in a secret bunker, monitoring early warning systems. He did this only twice a month, and it wasn’t his usual shift; he was filling in for the shift crew leader.

One after another, five missiles from the USA appeared on the screen. A siren wailed, and the words "ракетное нападение" ("Missile Attack") appeared in red letters. Petrov checked with his crew, who reported that all systems were operating properly. The missiles would reach their targets in the Soviet Union in mere minutes.

Protocol dictated that he press the flashing red button before him to inform his superiors of the attack so they could decide whether to launch a nuclear counterattack. More than 100 crew members stood in silence behind him, awaiting his decision.

"I thought for about a minute," Petrov recalled. "I thought I’d go crazy... It was as if I was sitting on a bed of hot coals."

Petrov broke protocol and went with his gut. He refused to believe what the early warning system was telling him.

His gut was right. Soviet satellites had misinterpreted shiny reflections on the Earth's surface as missile launches. The Soviet Union was not under attack.

If Petrov had pressed the red button, and his superiors had launched a counterattack, the USA would have detected the incoming Soviet missiles and launched their own missiles before they could be destroyed on the ground. Soviet and American missiles would have passed in the night sky over the still, silent Arctic before detonating over hundreds of targets — each detonation more destructive than all the bombs dropped in World War II combined, including the atomic bombs that vaporized Hiroshima and Nagasaki. Most of the Northern Hemisphere would have been destroyed.

Petrov was reprimanded and offered early retirement. To pay his bills, he took jobs as a taxi driver and a security guard. The biggest award he ever received for saving the world was a "World Citizen Award" and $1000 from a small organization based in San Francisco. He spent half the award on a new vacuum cleaner.

During his talk at Singularity Summit 2011 in New York City, hacker Jaan Tallinn drew an important lesson from the story of Stanislav Petrov:

Contrary to our intuition that society is more powerful than any individual or group, it was not society that wrote history on that day... It was Petrov.

...Our future is increasingly determined by individuals and small groups wielding powerful technologies. And society is quite incompetent when it comes to predicting and handling the consequences.

Tallinn knows a thing or two about powerful technologies making a global impact. Kazaa, the file-sharing program he co-developed, was once responsible for half of all Internet traffic. He went on to develop the internet calling program Skype, which in 2010 accounted for 13% of all international calls.

Where could he go from there? After reading dozens of articles about the cognitive science of rationality, Tallinn realized:

In order to maximize your impact in the world, you should behave as a prudent investor. You should look for underappreciated [concerns] with huge potential.

Tallinn found the biggest pool of underappreciated concerns in the domain of "existential risks": things that might go horribly wrong and wipe out our entire species, like nuclear war.

The documentary Countdown to Zero shows how serious the nuclear threat is. At least 8 nations have their own nuclear weapons, and the USA has given nuclear weapons to 5 others. There are enough nuclear weapons around to destroy the world several times over, and the risk of a mistake remains even after the end of the Cold War. In 1995, Russian president Boris Yeltsin had the "nuclear suitcase" — capable of launching a barrage of nuclear missiles — open in front of him. Russian radar had mistaken a weather rocket for a US submarine-launched ballistic missile. Like Petrov before him, Yeltsin disbelieved his equipment and refused to press the red button. Next time we might not be so lucky.

But it's not just nuclear risks we have to worry about. As Sun Microsystems’ co-founder Bill Joy warned in his much-discussed article Why the Future Doesn’t Need Us, emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.

Academics are beginning to accept that humanity lives on a knife’s edge. The famous physicists Martin Rees and John Leslie have written books about existential risk, titled Our Final Hour: A Scientist’s Warning and The End of the World: The Science and Ethics of Human Extinction. In 2008, Oxford University Press published Global Catastrophic Risks, inviting experts to summarize what we know about a variety of existential risks. New research institutes have been formed to investigate the subject, including the Singularity Institute in San Francisco and the Future of Humanity Institute at Oxford University.

Governments, too, are taking notice. In the USA, NASA was given a congressional mandate to catalogue all near-Earth objects that are one kilometer or more in diameter, because an impact with such a large object would be catastrophic. President Bush established the National Nanotechnology Initiative to ensure the safe development of molecule-sized materials and machines. (Precisely self-replicating molecular machines could multiply themselves out of control, consuming resources required for human survival.) Many nations are working to reduce nuclear armaments, which pose the risk of human extinction by global nuclear war.

The public, however, remains mostly unaware of the risks. Existential risk is an unpleasant and scary topic, and may sound too distant or complicated to discuss in the mainstream media. For now, discussion of existential risk remains largely constrained to academia and a few government agencies.

The concern for existential risks may appeal to one other group: analytically-minded "social entrepreneurs" who want to have a positive impact on the world, and are accustomed to making decisions based on calculation. Tallinn fits this description, as does PayPal co-founder Peter Thiel. These two are among the largest donors to Singularity Institute, an organization focused on the reduction of existential risks from artificial intelligence.

What is it about the topic of existential risk that appeals to people who act by calculation? The analytic case for doing good by reducing existential risk was laid out decades ago by moral philosopher Derek Parfit:

The Earth will remain inhabitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.

...Classical Utilitarians... would claim... that the destruction of mankind would be by far the greatest of all conceivable crimes. The badness of this crime would lie in the vast reduction of the possible sum of happiness...

For [others] what matters are... the Sciences, the Arts, and moral progress... The destruction of mankind would prevent further achievements of these three kinds.

Our technology gives us great power. If we can avoid using this power to destroy ourselves, then we can use it to spread throughout the galaxy and create structures and experiences of value on an unprecedented scale.

Reducing existential risk — that is, carefully and thoughtfully preparing to not kill ourselves — may be the greatest moral imperative we have.

108 comments

Comments sorted by top scores.

comment by Gedusa · 2011-11-15T16:04:01.631Z · LW(p) · GW(p)

Whilst I really, really like the last picture - it seems a little odd to include it in the article.

Isn't this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?

Replies from: Nornagest, katydee, gjm, None, Bongo, MichaelAnissimov, juliawise
comment by Nornagest · 2011-11-15T18:51:13.203Z · LW(p) · GW(p)

Actually, both that and the Earth image at the beginning of the article seem a little out of place. At least the latter would fit well into a print article (where you can devote half a page or a page to thematic images and still have plenty of text for your eyes to seek to), but online it forces scrolling on mid-sized windows before you can read comfortably. I think it'd read more smoothly if it was smaller, along the lines of the header images in "Philosophy by Humans" or (as an extreme on the high end) "The Cognitive Science of Rationality".

comment by katydee · 2011-11-15T21:36:33.658Z · LW(p) · GW(p)

Agreed, especially since it is presented with no explanation or context. If the aim was "here's a picture of what we might achieve," I would personally aim for more of a Shock Level 2 image rather than an SL3 one-- presuming, of course, that this is being written for someone around SL1 (which seems likely). That said, I might omit it altogether.

Replies from: Gedusa
comment by Gedusa · 2011-11-15T21:49:39.520Z · LW(p) · GW(p)

I thought this article was for SL0 people - that would give it the widest audience possible, which I thought was the point?

If it's aimed at the SL0's, then we'd be wanting to go for an SL1 image.

Replies from: katydee
comment by katydee · 2011-11-15T23:55:37.118Z · LW(p) · GW(p)

SL0 people think "hacker" refers to a special type of dangerous criminal and don't know or have extremely confused ideas of what synthetic biology, nanotechnology, and artificial intelligence are.

Replies from: Gedusa
comment by Gedusa · 2011-11-16T00:11:47.514Z · LW(p) · GW(p)

Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0's - maybe without mentioning exotic technologies? And would they change their charitable behavior?

I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).

Replies from: katydee
comment by katydee · 2011-11-16T02:51:59.124Z · LW(p) · GW(p)

I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but I think the issue in question is that the risks most targeted by the "x-risks community" are above those levels-- asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand that x-risks are important in an abstract sense but have very limited understanding of what the risks the community is targeting actually represent.

comment by gjm · 2011-11-15T19:06:04.536Z · LW(p) · GW(p)

Not only is the picture slightly sci-fi and weird, it's also wrong. I mean, my thought processes on seeing it went something like this: "Oh, hey, it's a ringworld. Presumably this is meant to hint at the glorious future that might be ahead of us if we don't get wiped out, and therefore the importance of not getting wiped ou ... no, wait a moment, it's kinda like a ringworld but it's really really really small. Much smaller than the earth. What the hell's the point of that?"

Replies from: arundelo, timtyler, dlthomas
comment by arundelo · 2011-11-15T23:39:11.746Z · LW(p) · GW(p)

The picture is of a Stanford torus.

Replies from: gjm
comment by gjm · 2011-11-15T23:43:23.874Z · LW(p) · GW(p)

Don't those have to be fully enclosed?

Replies from: arundelo
comment by arundelo · 2011-11-15T23:50:49.823Z · LW(p) · GW(p)

Yes. The part that looks like a sky in the picture is some transparent material that holds the atmosphere in.

comment by timtyler · 2011-11-16T15:00:42.935Z · LW(p) · GW(p)

it's kinda like a ringworld but it's really really really small. Much smaller than the earth. What the hell's the point of that?

Faster build, reduced cost, not such heavy stresses placed on the materials.

Replies from: gjm
comment by gjm · 2011-11-17T00:10:27.705Z · LW(p) · GW(p)

I meant "what's the point of that, as opposed to not bothering?". Not "what's the point of that, as opposed to building a full-sized ringworld?".

comment by dlthomas · 2011-11-15T19:21:55.454Z · LW(p) · GW(p)

Not much smaller than the earth at all!

With more physics and attention, one could produce better numbers, but as a crude ballpark (using data from wikipedia):

Surface area of the Earth: 510,072,000 km^2

Circumference of ring, if it's placed at 1 AU: 2 * pi AU = 939,951,956 km

So, if the ring is a little over half a kilometer in width, it has the same surface area as the Earth - and could be smaller still, if we just compare habitable area.
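The ballpark above can be checked in a few lines (a sketch in Python; the AU and surface-area figures are the rounded values from Wikipedia that the comment uses):

```python
import math

# Rounded figures from Wikipedia
EARTH_SURFACE_KM2 = 510_072_000   # Earth's total surface area, km^2
AU_KM = 149_598_000               # one astronomical unit, km

# A ring at 1 AU is a thin band, so its area is circumference * width
circumference_km = 2 * math.pi * AU_KM
width_km = EARTH_SURFACE_KM2 / circumference_km

print(f"circumference: {circumference_km:,.0f} km")  # ~939,951,956 km
print(f"width needed:  {width_km:.3f} km")           # ~0.543 km
```

Which confirms the figure: a band roughly 0.54 km wide at 1 AU matches Earth's total surface area.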

Replies from: wnoise
comment by wnoise · 2011-11-15T19:24:20.406Z · LW(p) · GW(p)

The scale of curvature there makes it clear it's not 1 AU in radius.

Replies from: dlthomas
comment by dlthomas · 2011-11-15T19:33:56.153Z · LW(p) · GW(p)

Fair enough, I suppose. But then it's not really a ring world so much as a... what? Space station?

Replies from: wnoise
comment by wnoise · 2011-11-15T20:06:57.320Z · LW(p) · GW(p)

Yeah, pretty much. If it were bigger, I might call it a Culture orbital.

comment by [deleted] · 2011-11-15T17:46:58.064Z · LW(p) · GW(p)

Agreed on this. The ringworld thing comes out of nowhere and doesn't clearly follow from the content of the article.

Unless the point is to wink-wink-nudge-nudge at the idea that we might have to do some weird-looking and weird-sounding things in order to save the world... in which case I still don't like the picture.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-11-15T18:53:55.309Z · LW(p) · GW(p)

I read it as "but there's still hope for a big wonderful future", but this is tentative.

In any case, thanks for the exposure to Richard Fraser's art.

Replies from: gjm
comment by gjm · 2011-11-15T19:06:54.522Z · LW(p) · GW(p)

Or, apparently, a small wonderful future. Look how tiny that ring is!

comment by Bongo · 2011-11-16T12:45:46.355Z · LW(p) · GW(p)

Also, I'd say both of those pictures seem to have the effect of inducing far mode.

comment by MichaelAnissimov · 2011-11-16T14:01:32.634Z · LW(p) · GW(p)

I'm in favor of including the last picture as part of the article, because it shows the possible world we gain by averting existential risk. I don't believe that "context" is necessary; the image is self-explanatory.

Nitpicking on ringworld vs. Stanford torus is not relevant, or interesting. The overall connotations and message are clear.

"Sci-fi" of today becomes "reality" of tomorrow. Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the biggest tasks that we have. Otherwise people are just stuck with what they see right in front of their eyes.

For a big picture issue like existential risk, it fits that one would want to also introduce a vague sketch of the possibilities of the big picture future.

Suggesting that the Earth picture itself doesn't belong in the post shows some kind of general bias against visuals, or something. You think that a picture about saving human life on earth isn't appropriately paired with a picture of the Earth? What image could be more appropriate than that?

Replies from: Grognor, thomblake, thomblake, DSimon, JoshuaZ
comment by Grognor · 2011-11-16T14:14:11.627Z · LW(p) · GW(p)

the image is self-explanatory.

I didn't understand it. It didn't self-explain to me.

Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the big tasks we have.

Woah! That's quite a leap! But hold on a second! This isn't meant to be literature, is it? It doesn't seem to me that an explanation of this kind benefits from having hidden meanings and whatnot, especially ideological ones like that.

Nitpicking on ringworld vs. stanford torus is not relevant, or interesting.

Agreed.

Suggesting that the Earth picture itself doesn't belong in the post shows some kind of general bias against visuals, or something.

This is a Fully General Counterargument that you could use on objections to any image, no matter what the image is, and no matter what the objection is.

As for me, I'm not really Blue or Green on whether to keep the image. It's really pretty, but the relevance is dubious at best and nonexistent at worst.

comment by thomblake · 2011-11-16T15:46:49.069Z · LW(p) · GW(p)

The overall connotations and message are clear.

I'm a genius transhumanist who likes sci-fi, and the connotations and message of the image were not clear to me. I wasn't even sure what it was supposed to be a picture of (my first guess was something from the Halo games, though I couldn't imagine the relevance). Is this more something that would be clear to the general populace and not folks like me, and thus should be included in a post to appeal to the general populace?

Replies from: Randolf
comment by Randolf · 2011-11-17T15:03:57.123Z · LW(p) · GW(p)

Strange enough. After all, while I am a transhumanist to some degree and also enjoy sci-fi, I am far from being a genius. Still, the message of the pictures was immediately obvious to me. This would support what you said: they may be appealing to general people, while not necessarily as appealing to those already very familiar with sci-fi and transhumanism.

Replies from: NickiH
comment by NickiH · 2011-11-22T16:39:26.868Z · LW(p) · GW(p)

I would count myself among "general people". I didn't get it at all. In fact, having read the comments, I'm still not sure I get it. It's a pretty picture and all, but why is it there?

Replies from: Randolf
comment by Randolf · 2011-11-23T14:45:36.707Z · LW(p) · GW(p)

The first picture is a dark image of a planet with a slightly threatening atmosphere. It looks like the upper half of a mushroom cloud, but it could also be seen as the Earth violently torn apart. This is why I think, given the context, that it symbolises the threat of a nuclear war and, more universally, the threat of a dystopia.

The last picture shows a beautiful utopia. I thought it was there to give a message of the type: "If everything goes well, we can still achieve a very good future." That is, while the first picture symbolises the threat of a dystopia, the last one symbolises the hope and possibility of a utopia.

Of course, this is merely my interpretation. There are many ways one can interpret these pictures.

comment by thomblake · 2011-11-16T15:59:38.482Z · LW(p) · GW(p)

Nitpicking on ringworld vs. Stanford torus is not relevant, or interesting. The overall connotations and message are clear.

Note: "interesting", "clear", and perhaps even "relevant" are 2-place words.

comment by DSimon · 2011-11-16T14:42:52.687Z · LW(p) · GW(p)

You think that a picture about saving human life on earth isn't appropriately paired with a picture of the Earth? What image could be more appropriate than that?

Well, how about a picture of human life? Or even a picture of human life being saved; it might not be a bad idea to suggest a similarity between a doctor saving a patient's life and an x-risk-reduction policy saving many peoples' lives.

Well, or something like that but a little more subtle as a metaphor.

comment by JoshuaZ · 2011-11-16T14:37:42.499Z · LW(p) · GW(p)

Needlessly distracting. Most people have enough trouble appreciating the scale of existential risk that their minds often shut down when thinking about it, or just try to change the subject. Adding into it other ideas which are larger scale and even more controversial is not a recipe for getting them to pay attention.

comment by juliawise · 2011-11-16T21:05:52.469Z · LW(p) · GW(p)

The squid-shaped dingbats are pretty bad, too.

comment by timtyler · 2011-11-15T20:05:47.029Z · LW(p) · GW(p)

On September 26, 1983, Soviet officer Stanislav Petrov saved the world.

Allegedly saved the world. It actually seems pretty unlikely that the world was saved by Petrov. For one thing, Wikipedia says:

There are varying reports whether Petrov actually reported the alert to his superiors and questions over the part his decision played in preventing nuclear war, because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.

Replies from: gwern, hairyfigment, taw, youwouldntknowme
comment by gwern · 2014-01-27T04:01:21.663Z · LW(p) · GW(p)

because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.

Given that this is coming from the sort of people who thought that setting up the Dead Hand was a good idea, and given that ass-covering and telling the public less than the truth was standard operating procedure in Russia, and given everything we know about the American government's incompetence, paranoia, greed, destructive experiments & actions (like setting PAL locks to zero, to pick a nuclear example) and that nuclear authority really was delegated to individual officers (this and other scandalous aspects came up recently in the New Yorker, actually: http://www.newyorker.com/online/blogs/newsdesk/2014/01/strangelove-for-real.html )...

I see zero reason to place any credence in their claims. This is fruit of the poisonous tree. They have reason to lie. I have no more reason to disbelieve Petrov than other similar incidents (like the Cuban Missile Crisis's submarine incident).

comment by hairyfigment · 2011-11-20T09:10:41.047Z · LW(p) · GW(p)

Very interesting. But the standard account says that Russian authorities were afraid of American attack at the time, and likely to make the wrong decision regardless of standard procedure. So the parent by itself doesn't address the relevant claim.

Also, the Wikipedia quote made it sound like Petrov might have reported sighting missiles after all (perhaps with a disclaimer). This is neither cited nor credible. If one of his superiors had arguably saved the world by following protocol, it seems highly probable that Putin's people would have mentioned it in their press release.

comment by taw · 2011-11-20T06:04:54.135Z · LW(p) · GW(p)

And that's why I hate Petrov story. It's ridiculous how otherwise sensible people are willing to believe in it.

comment by youwouldntknowme · 2011-11-16T19:54:10.436Z · LW(p) · GW(p)

This seems to raise the question: what is wrong with our established methods for dealing with these risks? The information you posted, if it is credible, would completely change the story that this post tells. Rather than a scary story about how we may be on the brink of annihilation, it becomes a story about how our organizations have changed to recognize the risks posed by technology, in order to avert these risks. In the Cold War, our methods of doing so were crude, but they sufficed, and we no longer have the same problems.

Is x-risk nevertheless an under-appreciated concern? Maybe, but I don't find this article convincing. You could make the argument that, along with the development of a technology, understanding of its risks and how to mitigate them also advances. Then it would not require a dedicated effort to understand these risks in advance. So, why is the best approach to analyse possible future risks, rather than working on projects which solve immediate problems, and dealing with issues as they arise?

Don't get me wrong, I respect what the guys at SIAI do, but I don't know the answer to this question. And it seems quite important.

Replies from: timtyler
comment by timtyler · 2011-11-16T20:03:40.095Z · LW(p) · GW(p)

Presumably, in the long term, extinction risk will decrease, as civilisation spreads out.

Increased risks have been accompanied by increased risk control - and it is not obvious how these things balance out. Pinker suggests going by death by violence in his latest book - and indeed the risk of death by violence is in decline.

Superpowers and world-spanning companies duking it out does not necessarily lead to global security in the short term, though. Most current trends seem positive and probably things will continue to improve - but it is hard to be sure - since technological history is still fairly short.

comment by Fleisch · 2011-11-15T16:11:23.794Z · LW(p) · GW(p)

There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.

EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects permanently cripples the potential of humanity (maybe by extinction), which makes it an existential risk. The point I want to make, which was more clearly made by Pfft in a child post, is that this is still something very different from what Luke's choice of words suggests.

How many people will die is of course somewhat speculative, but I think if the war itself killed 10%, that would be a lot. More links on the subject: The Effects of a Global Thermonuclear War; Nuclear Warfare 101, 102, and 103.

Replies from: kilobug, None
comment by kilobug · 2011-11-15T17:01:18.174Z · LW(p) · GW(p)

"Destroy the world" can mean many things. There aren't nearly enough nuclear weapons to blast Earth itself, the planet will continue to exist, of course.

The raw destructive power of nukes may not be enough to kill most of humanity, yes. Targeted on major cities, they would still kill an enormous number of people: an overwhelming majority of the population in industrialized (i.e., urbanized) countries.

But that's forgetting all the "secondary effects": direct radioactive fallout, radioactive contamination of rivers and water sources, nuclear winter, ... Those are pretty sure to obliterate most of the remaining humanity within the next few years. Maybe not all of us. Maybe a few would survive, on a scorched Earth, without much left of technological civilization. That's pretty much "destroy the world" to me.

Replies from: CarlShulman, Pfft
comment by CarlShulman · 2011-11-15T22:54:27.362Z · LW(p) · GW(p)

This survey's median estimates rate nuclear war as ten times as likely to kill a billion people in the 21st century as to cause human extinction: http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/3854/global-catastrophic-risks-report.pdf

Replies from: steven0461
comment by steven0461 · 2011-11-15T23:29:54.134Z · LW(p) · GW(p)

How many of the respondents had any specific expertise on nuclear wars?

Replies from: CarlShulman
comment by CarlShulman · 2011-11-15T23:33:39.079Z · LW(p) · GW(p)

A handful, who had given presentations to the rest of the group with discussion. Also climate folks.

Replies from: steven0461
comment by steven0461 · 2011-11-15T23:36:12.671Z · LW(p) · GW(p)

Do you know anything about what their estimates were?

Replies from: CarlShulman
comment by CarlShulman · 2011-11-15T23:49:55.434Z · LW(p) · GW(p)

Not broken out.

comment by Pfft · 2011-11-15T22:45:10.370Z · LW(p) · GW(p)

The article says "There are enough nuclear weapons around to destroy the world several times over". That suggests some kind of clear-cut quantitative measure, and does not describe the actual situation.

comment by [deleted] · 2011-11-15T17:15:04.050Z · LW(p) · GW(p)

.

comment by cyphyr · 2012-03-08T16:10:55.623Z · LW(p) · GW(p)

Hi there, I'm the artist whose image you've used to illustrate this article. Good article, and I've learned a thing or two. Thank you for using my image and placing it as a link back to my page; all links are good, etc. I don't have a problem with my work being used, and indeed it's pleasant to come across it like this. In future, however, could you please ask me first and provide a written acknowledgement in the text.

Cheers

Richard

Oh, and for those who were discussing the ring's exactitudes... It's a 200 km diameter torus with a width of 10 km. The atmosphere is "held in" by the strange alien structures looping about the outside of the ring, probably some sort of induced electrostatics. I made this image with the idea of showing a culture that was simultaneously extremely technically advanced and also quite blasé about the existence of the technology. The inhabitants in the towns below may not even glance at the structures above that protect their world. There was no social comment intended; imply what you will :) It was originally intended to be animated. Maybe I'll have another attempt at that, but I think it could do with some finishing work first.
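The artist's stated dimensions settle the earlier "how small is it?" debate with a quick ballpark (a sketch; treating the ring's inner floor as a flat band is my simplifying assumption, and the Earth comparison is my own addition):

```python
import math

# Dimensions as stated by the artist
diameter_km = 200
width_km = 10

# Approximate the habitable inner floor as a flat band:
# circumference (pi * diameter) times width
habitable_km2 = math.pi * diameter_km * width_km

print(f"habitable area: {habitable_km2:,.0f} km^2")  # ~6,283 km^2
# Earth's surface is ~510,072,000 km^2, so this torus offers
# roughly 1/80,000 of Earth's area - tiny, but a plausible habitat.
```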

comment by beoShaffer · 2011-11-15T22:54:28.373Z · LW(p) · GW(p)

These two are among the largest donors to Singularity Institute, an organization focused on the reduction of existential risks from artificial intelligence.

Should this be the Singularity Institute?

Replies from: komponisto, army1987, arundelo
comment by komponisto · 2011-11-15T23:49:23.862Z · LW(p) · GW(p)

Indeed.

It's as if people are being deliberately mischievous by writing both "the SIAI" (which should be "SIAI"), and on the other hand, "Singularity Institute" (which should be "the Singularity Institute").

Luke is probably confused by the fact that the organization is often called "Singinst" by its members. But that expression grammatically functions as a name, like "SIAI" (or, now, "SI"), and thus does not take the definite article.

The full name, however, ("the Singularity Institute") functions grammatically as a description, and thus does take the definite article. Compare: the United Nations, the Brookings Institution, the Institute for Advanced Study, the London School of Economics, the Center for Inquiry, the National Football League.

Abbrevations differ as to whether they function as names or descriptions: IAS, but the UN. SI(AI) is like the former, not the latter.

If the abbreviation is an acronym (i.e. pronounced as a word rather than a string of letter names), then it will function as a name: ACORN, not "the ACORN" (even though, in full, it's "the Association...").

Replies from: steven0461, beoShaffer, army1987
comment by steven0461 · 2011-11-16T00:02:50.272Z · LW(p) · GW(p)

I think Luke may have been trying to take after Singularity University, which doesn't use "the", because that seems to be the convention for universities? But yes, I agree the lack of a definite article here is grating. It creates impression that writer of sentence is Russian.

Replies from: komponisto
comment by komponisto · 2011-11-16T00:23:55.801Z · LW(p) · GW(p)

...Singularity University, which doesn't use "the", because that seems to be the convention for universities?

Specifically, it's the convention for university names following the formula "X University" (as opposed to "University of/for/in X"). These should be thought of as analogous to geographic place-names (which is what they basically are): "Hamilton County", "Bikini Atoll", "Harvard University", etc. ("Singularity University" would be analogous to "Treasure Island".)

There are a few rare exceptions: The George Washington University, The Ohio State University (both articles often "mistakenly" omitted!), the Bering Strait.

Anyway, why in the world would SI want to "take after" SU? The risk of confusion between these two organizations is large enough as it is.

comment by beoShaffer · 2011-11-16T01:41:43.361Z · LW(p) · GW(p)

The main thing was that he used both in the same article. I assumed that the Singularity Institute was correct because I've seen it more frequently, but consistency is the big thing.

comment by A1987dM (army1987) · 2011-11-18T18:15:21.374Z · LW(p) · GW(p)

Things are not always that simple: http://languagelog.ldc.upenn.edu/nll/?p=2172

Replies from: komponisto
comment by komponisto · 2011-11-18T19:03:56.599Z · LW(p) · GW(p)

I didn't make any claim about "simplicity", nor does anything in the link contradict anything I wrote. Indeed, it confirms my point: some things take "the", others don't, and it isn't a matter of on-the-spot whim.

Note that I did not propose any general rule for determining which category something falls into without prior knowledge. My comment about descriptions versus names does not have any predictive implications. I could have talked about "weak" and "strong" instead.

comment by A1987dM (army1987) · 2011-11-18T18:08:22.585Z · LW(p) · GW(p)

There have been quite a few posts on Language Log about which proper names are preceded by the, e.g. http://languagelog.ldc.upenn.edu/nll/?p=2172

Replies from: dlthomas
comment by dlthomas · 2011-11-18T18:17:07.196Z · LW(p) · GW(p)

In the time of John Muir and Theodore Roosevelt, "Yosemite" was apparently "The Yosemite".

I've been curious as to when it dropped its article.

comment by arundelo · 2011-11-15T23:47:01.314Z · LW(p) · GW(p)

The singinst.org About Us/Our Mission page uses the definite article, as do some other places on the site. The Strategic Plan ("UPDATE: AUGUST 2011") consistently uses no article.

Replies from: komponisto
comment by komponisto · 2011-11-15T23:59:09.093Z · LW(p) · GW(p)

I believe the Strategic Plan was authored by Luke, and hence the criticism also applies there.

Replies from: Louie
comment by Louie · 2011-11-18T11:21:15.319Z · LW(p) · GW(p)

Dropping "the" is a conscious, intentional decision by everyone at Singularity Institute as of several months ago and pre-dates Luke's involvement (but post-dates your visit last summer).

Replies from: komponisto, Grognor, rhollerith_dot_com
comment by komponisto · 2011-11-18T19:38:59.692Z · LW(p) · GW(p)

That only changes the target of my criticism (now all of you, instead of just Luke), not the criticism itself, obviously.

The "the" isn't droppable, because it was never part of the name in the first place: it was never "The Singularity Institute"; but rather "the Singularity Institute". That is, the article is a part of the contextual grammar. Attempting to "drop" it would be like me declaring that "komponisto" must always be followed by plural verb forms.

(Some organizations do have "The" in the name itself, e.g. The Heritage Foundation. They could decide to drop the "The", and then their logo would say "Heritage Foundation". But one would still write "at the Heritage Foundation"; one just wouldn't write "at The Heritage Foundation".)

I don't know of any example of an "Institute" where people don't use an article in such a context -- which suggests that any such example that might exist isn't high-status enough for me to have heard of it. Even the one that I thought might be an example -- the Mathematical Sciences Research Institute -- also has a grammatical "the"!

You guys should want to be like IAS and MSRI (after all, you'd rather the people at those places be working for you instead!). I don't understand the rationale for this gratuitous eccentricity.

Replies from: lessdazed, thomblake, Louie
comment by lessdazed · 2011-11-18T20:58:45.916Z · LW(p) · GW(p)

(Some organizations do have "The" in the name itself, e.g. The Heritage Foundation. They could decide to drop the "The", and then their logo would say "Heritage Foundation". But one would still write "at the Heritage Foundation"; one just wouldn't write "at The Heritage Foundation".)

Military units are the only counterexample I can think of, but using "the" is correct for them too, I think. Glancing at Wikipedia, usage is inconsistent even within articles:

During January 1919, the Third Army was engaged in training and preparing the troops under its command for any contingency...Accordingly the Third Army was disbanded on 2 July 1919.

and

Until the buildup of American forces prior to its entry into World War II, Third Army remained largely a paper formation...Mobilization saw Third Army take on the role of training some of the huge numbers of recruits that the draft was bringing into the Armed Forces.

Perhaps Singularity Institute is an aspiring paramilitary force.

comment by thomblake · 2011-11-18T19:46:56.486Z · LW(p) · GW(p)

Indeed - "A dinner at Singularity Institute" would be pronounced "A dinner at ; Singularity Institute", with an awkward pause inserted due to the obviously missing article. Contrast with "A dinner at the Singularity Institute".

comment by Louie · 2011-11-20T09:34:13.741Z · LW(p) · GW(p)

I personally find "the SIAI" sounds ridiculous to me. Do you log on to the Facebook while you read the Less Wrong? How many alumni from the MIT work at the NASA? How about that story on the CNN about the record profits at the PEPSI?

Notice a pattern? Well-marketed concerns all drop "the" from their product or company names. Marketing science has demonstrated again and again that shorter names are more memorable and strictly better. Since English is not airtight on this matter, I say Science > English.

Replies from: komponisto, wedrifid, nshepperd, None
comment by komponisto · 2011-11-20T15:51:59.930Z · LW(p) · GW(p)

Louie, please read my comments again and tell me if you still think your reply makes any sense whatsoever as a response.

Because -- I'm sorry to say -- this represents a total failure of reading comprehension. Quoting myself:

"SIAI" (or, now, "SI")...does not take the definite article.

  • In the same comment:

    Abbreviations differ as to whether they function as names or descriptions: IAS, but the UN. SI(AI) is like the former, not the latter.

  • Here:

The "the" isn't droppable, because it was never part of the name in the first place...the article is a part of ...grammar

Abbreviations are treated separately from the corresponding full names. One doesn't say "the ABC", but one does say "the American Broadcasting Company". Likewise, "SIAI" (not "the SIAI"), but "the Singularity Institute for Artificial Intelligence".

You say:

I personally find "the SIAI" sounds ridiculous to me

Guess what: I agree! Quite a while ago, I pointed out that "the SIAI" was a non-native quirk introduced by XiXiDu (and for some reason picked up by certain native speakers). Maybe it was too subtle for some people, but in that comment I was expressing the fact that "the SIAI" sounds completely wrong to me.

I don't understand why this is so complicated. One says "at the Singularity Institute", but also "at SIAI". This is the default English usage. It's what people have been saying all along. There is nothing weird, complex, or freaky going on here. I'm advocating a return to normalcy!

You ask:

Notice a pattern?

The answer is yes: people consistently underestimate the information content of my comments, and often simply fail to read what they say. This drives me frickin' crazy!

Replies from: Grognor, XiXiDu
comment by Grognor · 2011-11-20T16:19:55.717Z · LW(p) · GW(p)

You have all of my sympathies. What you are saying makes perfect sense (as such, I have nothing to add or take away from it), and I agree with it completely.

That said, I understand why you're getting so hot-blooded. I'm getting hot-blooded over Louie's systematic failure to understand you, and I'm not even the one being misunderstood!

(Keeping in mind, all the while, Wiio's Laws.)

Replies from: thomblake
comment by thomblake · 2011-11-29T17:23:50.297Z · LW(p) · GW(p)

upvoted for the link to Wiio's Laws.

comment by XiXiDu · 2011-12-11T11:27:48.178Z · LW(p) · GW(p)

Guess what: I agree! Quite a while ago, I pointed out that "the SIAI" was a non-native quirk introduced by XiXiDu (and for some reason picked up by certain native speakers).

Please correct me if I am wrong; I haven't read up on language evolution yet. But isn't it the case that natural language evolves? Unlike math, if enough people believe that a certain syntax is correct, then it is correct. So if I start to write "the SIAI" at a time when people think it sounds wrong, but then gradually more and more people adopt that notation and start to perceive it as sounding right, doesn't it become right?

If you like, I will from now on use the syntax you suggest, simply because you seem to care strongly about it while I don't care at all.

comment by wedrifid · 2011-11-20T14:57:10.537Z · LW(p) · GW(p)

work at the NASA?

Work at the National Aeronautics and Space Administration or work at NASA. Refer to Wikipedia if you are in doubt.

Notice a pattern?

Of the list you gave, MIT and NASA are both analogous to SIAI. CNN is an entirely different kind of acronym, and Pepsi isn't an acronym at all, so it doesn't even get all-caps when written in a sentence. Follow the example of MIT and NASA, both of which use "the" before the full name but not before the acronym. Because not doing so looks ridiculous.

Well marketed concerns all drop "the" in their product or company name. Marketing science has demonstrated again and again that shorter names are more memorable and strictly better. Since English is not airtight on this matter, I say Science > English.

Muddled thinking based on poorly understood science < English < Science.

comment by nshepperd · 2011-11-20T13:48:52.015Z · LW(p) · GW(p)

"The SIAI" wasn't suggested. "The Singularity Institute [for Artificial Intelligence]" was.

comment by [deleted] · 2011-11-20T10:07:36.767Z · LW(p) · GW(p)

Colonizing the middle ground between you and komponisto, I think it should be "the Singularity Institute", "the Singularity Institute for Artificial Intelligence", and "SIAI".

I don't think komponisto is even advocating for "the SIAI". His comment says "IAS" and "MSRI" despite saying "the Mathematical Sciences Research Institute".

Replies from: komponisto
comment by komponisto · 2011-11-20T15:19:51.632Z · LW(p) · GW(p)

Colonizing the middle ground between you and komponisto, I think it should be "the Singularity Institute", "the Singularity Institute for Artificial Intelligence", and "SIAI".

That isn't the "middle ground"! That is exactly my position!

Replies from: None
comment by [deleted] · 2011-11-20T18:11:41.380Z · LW(p) · GW(p)

This wasn't for your frame of reference, but for Louie's. That's why he gets the second person and you don't.

Replies from: wedrifid
comment by wedrifid · 2011-11-20T19:57:42.426Z · LW(p) · GW(p)

Whenever you are speaking (or typing) in public you must consider the message conveyed to the audience as much as (or often more than) the message to the person you are addressing. In this case, by claiming the 'middle ground' for yourself you position komponisto outside of that ground. Since you then go on to present what is essentially the only sane position to have on the subject, you are giving a significant slight to komponisto: the casual observer is led to believe that kompo has been saying something silly. Being (apparently) oblivious to what you were doing, you opted to condescend to komponisto rather than politely retract, or ideally just leave kompo's reaffirmation as the final word. This is a mistake and changes how people will interpret the exchange. I, for example, just downvoted your initial mistake as well as the parent. While I noticed the distortion of komponisto's position on my first read, I let it pass because it was a minor part of an otherwise decent comment. When you later tried to reinforce your message with a challenge, it became the salient detail, and one the likes of which I prefer to discourage.

Replies from: None
comment by [deleted] · 2011-11-20T21:43:13.585Z · LW(p) · GW(p)

I didn't consider it condescending; I thought it was amusingly ironic. (Note the juxtaposition: "he gets the second person and you don't.") But you've made it clear my opinion on my tone doesn't matter. So it goes. It's a pity you can't downvote me twice.

comment by Grognor · 2011-11-18T11:27:32.706Z · LW(p) · GW(p)

Would you explain why?

comment by RHollerith (rhollerith_dot_com) · 2011-11-18T17:02:11.944Z · LW(p) · GW(p)

In testimony to Congress about 15 years ago, the director of the CIA used "CIA" without the definite article, which certainly suggests that he preferred it to be referred to that way. How it is referred to by the public, however, is probably not up to the leaders of the CIA but rather up to the media, and maybe bloggers and tweeters. Note that the American Broadcasting Company, the National Broadcasting Company, the Columbia Broadcasting System, and the Public Broadcasting Service are able to decide how they will be referred to by the public (because they have unparalleled access to the public's ear), and they have decided they'd like to be referred to without the definite article.

All that suggests that there is some advantage to being referred to without the definite article. (Perhaps the definite article has the effect of "distancing" the referent in the mind of the listener.)

Replies from: komponisto
comment by komponisto · 2011-11-18T19:51:46.442Z · LW(p) · GW(p)

Did you miss this comment? Abbreviations are treated separately from the corresponding full names. One doesn't say "the ABC", but one does say "the American Broadcasting Company". Et cetera.

Likewise, "SIAI" (not "the SIAI"), but "the Singularity Institute for Artificial Intelligence".

One may be either "at CIA" (especially if you're an insider) or "at the CIA", but as far as I know one is always "at the Central Intelligence Agency".

comment by Vladimir_M · 2011-11-16T07:04:52.465Z · LW(p) · GW(p)

active U.S. senator Lawrence McDonald.

Larry McDonald was a congressman, not a senator.

comment by ChrisHallquist · 2011-11-20T22:46:09.881Z · LW(p) · GW(p)

It's worth noting that "Humanity" /= "Human-like (or better) intelligences that largely share our values" /= "Civilization." This gives us three different kinds of existential risk.

Robin Hanson, as I understand him, seems to expect that only the third will survive, and seems to be okay with that. Many Less Wrongers, on the other hand, seem not so concerned with humanity per se, but would care about the survival of human-like intelligences sharing our values. And someone could care an awful lot about humanity per se, and want to put a lot of effort into making sure humans aren't largely replaced by AIs of any kind.

I'm not a huge reader of blog comment threads, so it's possible these debates have been done to death in comments and I'm not aware of it, but it would be nice to see some OPs on this issue.

comment by simplicio · 2011-11-21T13:27:39.580Z · LW(p) · GW(p)

His screen would have flashed "ракетное нападение." What you wrote is correct but in a grammatical form which suggests it was taken from inside a larger sentence involving words like "about a rocket attack"... Russian words change depending on their use within the sentence.

Replies from: arundelo
comment by arundelo · 2011-11-21T14:42:41.116Z · LW(p) · GW(p)

"Somebody set up us the bomb."*

comment by steven0461 · 2011-11-15T23:50:05.500Z · LW(p) · GW(p)

This is a good intro to human extinction, but Bostrom coined "existential risk" specifically to include more subtle ways of losing the universe's potential value. If you're not going to mention those, might as well not use the term.

comment by Paul Crowley (ciphergoth) · 2011-11-15T16:31:50.397Z · LW(p) · GW(p)

Starting a paragraph with "...and" there is very jarring.

Replies from: lukeprog, None
comment by lukeprog · 2011-11-15T22:28:50.951Z · LW(p) · GW(p)

Fixed.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-11-16T12:22:41.125Z · LW(p) · GW(p)

Thanks! I would have suggested a fix but I couldn't think of one in the time I had to post.

comment by [deleted] · 2011-11-15T18:45:57.786Z · LW(p) · GW(p)

I got caught on that as well.

comment by Kevin · 2011-11-16T17:05:03.664Z · LW(p) · GW(p)

I'd like to point out some lukeprog fatigue here; if anyone else had written this article it would have way more points by now.

Replies from: wedrifid, Kaj_Sotala
comment by wedrifid · 2011-11-18T12:25:35.427Z · LW(p) · GW(p)

I'd like to point out some lukeprog fatigue here; if anyone else had written this article it would have way more points by now.

I doubt it. I'd bet that if someone else had written it, it would have fewer votes.

It's a slightly expanded version of a post that has already been made, re-run, and linked to in yearly posts. It's fine to hear the same old story again, but it doesn't deserve more than half a dozen votes. It mostly scrapes through because luke seems to be writing something of a sequence on the subject, so this would fit more neatly into a link collection.

comment by Kaj_Sotala · 2011-11-17T05:45:12.245Z · LW(p) · GW(p)

Thanks for the pointer - it made me realize my lukeprog fatigue and correct for it by upvoting.

comment by Randolf · 2011-11-15T19:48:48.371Z · LW(p) · GW(p)

If I had been one of those people with the missile warning and the red button, I wouldn't have pressed it even if I had known the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did so to you? It would only make things worse, and certainly wouldn't save anyone. A primitive need for revenge can be extremely dangerous with today's technology.

Replies from: Nornagest, PhilosophyTutor, juliawise, TheOtherDave, TimS, wedrifid
comment by Nornagest · 2011-11-15T20:28:46.852Z · LW(p) · GW(p)

Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me I commit to destroying you and your allies, a larger downside than any gains achievable from first use of nuclear weapons.

With this in mind, it's not clear to me that it'd be wrong (in the decision-theoretic sense, not the moral) to launch on a known-good missile warning. TDT states that we shouldn't differentiate between actions in an actual and a simulated or abstracted world: if we don't make this distinction, following through with a launch on warning functions to screen off counterfactual one-sided nuclear attacks, and ought to ripple back through the causal graph to screen off all nuclear attacks (a world without a nuclear war in it is better along most dimensions than the alternative). It's not a decision I'd enjoy making, but every increment of uncertainty increases the weighting of the unilateral option, and that's something we really really don't want. Revenge needn't enter into it.

(This assumes a no-first-use strategy, which the USSR at Petrov's time claimed to follow; the US claimed a more ambiguous policy leaving open tactical nuclear options following conventional aggression, which can be modeled as a somewhat weaker deterrent against that lesser but still pretty nasty possibility.)

Of course, that all assumes that the parties involved are making a rational cost-benefit analysis with good information. I'm not sure offhand how the various less ideal scenarios would change the weighting, except that they seem to make pure MAD a less safe strategy than it'd otherwise be.
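A toy payoff comparison can make the precommitment logic above concrete. The utility numbers here are invented purely for illustration (they are not from the comment); the point is only that the attacker's best response flips depending on whether retaliation is credible:

```python
# Toy model of MAD as a precommitment, with invented utility numbers:
# total victory = +1, status quo = 0, mutual destruction = -10.

U_VICTORY = 1.0
U_STATUS_QUO = 0.0
U_DESTRUCTION = -10.0

def attacker_strikes(defender_retaliates):
    """Does a rational attacker strike first, given the defender's policy?"""
    payoff_if_strike = U_DESTRUCTION if defender_retaliates else U_VICTORY
    # The attacker strikes only when striking beats the status quo.
    return payoff_if_strike > U_STATUS_QUO

print(attacker_strikes(defender_retaliates=True))   # False: credibly deterred
print(attacker_strikes(defender_retaliates=False))  # True: first strike pays
```

Under these assumed payoffs, a credible commitment to retaliate removes the attacker's incentive entirely, which is the sense in which following through on a launch warning "screens off" counterfactual first strikes.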

comment by PhilosophyTutor · 2011-12-04T00:18:33.804Z · LW(p) · GW(p)

From a game-theoretic perspective, if the other side knew you thought that way then they should launch on your watch.

MAD only works if both sides believe the other is willing to retaliate. If one side is willing to push the button and the other is not willing to retaliate, then the side willing to push the button nukes the other and takes over the world.

If you can be absolutely certain the other side never finds out you aren't willing to retaliate, then yours is the optimal policy.

Replies from: lessdazed
comment by lessdazed · 2011-12-04T01:05:54.660Z · LW(p) · GW(p)

MAD only works if both sides believe the other is willing to retaliate.

"Willing" can be unpacked.

Having the other party believe you are operating under a mixed strategy would be optimal, so long as: a) each side values the other side winning more than mutual destruction, which as humans they probably do, and b) accidental/irrational launches are possible but not significantly more likely when facing a perceived mixed strategy.

If, say, the USSR and the USA were willing to strike first to win, but not willing to incur a 95% risk of mutual destruction for a 5% chance of total victory, the optimal retaliatory strategy is to (have the other believe you will) retaliate based on a roll of 1d20: on a natural one, refrain from retaliating. That way, an accidental launch has a 5% chance of not destroying the world.

In practice, declaring a mixed strategy will probably be seen as setting up an excuse to update one's actions based on the expected payoff considering the circumstances that have happened - i.e. to use CDT rather than TDT. Declaring an updateless strategy is a good way to convey one is operating under a mixed one.
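The 1d20 arithmetic can be sketched numerically. The utility values below are made up for illustration (the comment specifies only the probabilities); the sketch checks that a 19-in-20 retaliation probability still deters a first strike while leaving a 5% chance that an accident doesn't end the world:

```python
# Expected-utility sketch of the 1d20 mixed retaliation strategy.
# Invented utilities: total victory = +1, status quo = 0, mutual destruction = -10.

U_VICTORY = 1.0
U_STATUS_QUO = 0.0
U_DESTRUCTION = -10.0

def first_strike_ev(p_retaliate):
    """Attacker's expected utility of striking first, given the defender
    retaliates with probability p_retaliate."""
    return p_retaliate * U_DESTRUCTION + (1 - p_retaliate) * U_VICTORY

# Retaliate unless the d20 comes up a natural one: p = 19/20.
p = 19 / 20
print(first_strike_ev(p))  # far below the status quo of 0, so the attacker is deterred
print(1 - p)               # the chance an accidental launch does not destroy the world
```

So, under these assumed payoffs, deterrence survives the 5% "mercy" clause by a wide margin, which is the tradeoff the mixed strategy is buying.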

comment by juliawise · 2011-11-16T21:11:14.842Z · LW(p) · GW(p)

This is why you would not have been hired to sit in front of the button, even given the Soviets' dubious hiring techniques. Also, if you had been raised in Soviet Russia, your thoughts on the topic might have been different.

Replies from: wedrifid, Randolf
comment by wedrifid · 2011-11-17T03:22:42.540Z · LW(p) · GW(p)

This is why you would not have been hired to sit in front of the button, even given the Soviets' dubious hiring techniques. Also, if you had been raised in Soviet Russia, your thoughts on the topic might have been different.

I wouldn't say that. Someone who cares about the issues is likely to lie for signalling purposes and do what he or she can to get the role.

Replies from: juliawise
comment by juliawise · 2011-11-17T21:22:16.340Z · LW(p) · GW(p)

But less likely to have had the foresight to have gotten into the right job track at age 15.

comment by Randolf · 2011-11-17T14:47:14.746Z · LW(p) · GW(p)

I could indeed simply lie and play the role of an obedient soldier to get the position I was looking for. However, it is of course true that if I had been born and lived in a country where people are continuously fed nationalist propaganda, I would be less likely to disobey the rules or to think it's wrong to retaliate.

comment by TheOtherDave · 2011-11-16T00:26:06.397Z · LW(p) · GW(p)

Followup question: if someone was about to be placed in front of that red button, would you rather it be someone who had previously expressed the same opinion, or someone who had credibly committed to retaliate in case of a nuclear strike (however useless or foolish such retaliation might be)?

Conversely, if someone were to be placed in front of the corresponding red button of a country your leaders were about to launch a barrage of nuclear weapons against, which category would you prefer they be in?

comment by TimS · 2011-11-15T19:58:34.812Z · LW(p) · GW(p)

Not that I disagree with your conclusion, but there was a significant selection pressure in the process of qualifying to get into the chair in front of the button. Political leaders don't like to give power to subordinates who are not likely to implement leadership's desires.

Having gone through the process and its accompanying ideological training makes Petrov's refusal to risk nuclear armageddon even more impressive. Even though moral courage was [ETA: not] a criterion in selecting him, Petrov showed more than anyone could reasonably expect.

comment by wedrifid · 2011-11-16T07:09:03.072Z · LW(p) · GW(p)

A primitive need for revenge can be extremely dangerous with today's technology.

A primitive need for revenge can be even more vital with today's technology. It is the only thing holding the most powerful players in check.

comment by Peacewise · 2011-11-20T02:05:16.110Z · LW(p) · GW(p)

The "discussion" of existential risk does occur in the mainstream media, sort of; it's mainly blockbuster movies like Independence Day, War of the Worlds, 2012, The Day After Tomorrow, and so on. I am confident that people understand the concept, though probably not the phrase. I respectfully suggest that the author amend the original post to acknowledge that discussion of existential risk does occur, perhaps mentioning that the discussion is often trivial or for entertainment purposes.

There has also been an abundance of existential-risk discussion over the millennia, in the wide variety of Armageddon stories that abound in various religions. I also recall that the M.A.D. principle was taught in high school, revealing that existential risk was a component of educational policy in the '70s, '80s, and '90s.

comment by gaffa · 2011-11-15T15:48:06.357Z · LW(p) · GW(p)

The link for "Countdown to Zero" points to the wrong place (I presume).

Replies from: lukeprog
comment by lukeprog · 2011-11-15T17:19:16.126Z · LW(p) · GW(p)

Fixed, thanks.

comment by lukeprog · 2012-10-25T20:35:11.919Z · LW(p) · GW(p)

A quote relevant to the final section of this post...

The Earth is the cradle of the mind, but one cannot live forever in a cradle.

Konstantin Tsiolkovsky

comment by [deleted] · 2011-11-15T19:33:55.107Z · LW(p) · GW(p)

There are some points that I dislike about this introduction: The first one is the implicit speciesism resulting from the focus on extinction of Homo sapiens as a species. It would have made sense to use Bostrom's definition of existential risk, which focuses on earth-originating intelligent life instead. Replacement of humans by posthumans is not existential risk. Transhumanism usually advocates the well-being of all sentience, not just humans. This can refer to both non-human animals (e.g. in natural ecosystems) and posthumans spreading into space.

Maybe more seriously, it is assumed without further justification that preventing existential risk is an ethical good, because colonizing the galaxy would create positive value structures on a great scale. This is, of course, incomplete without taking into consideration that it can also create negative value structures on a great scale. Currently, the galaxy isn't filled with involuntarily existing suffering entities, except for planet earth (as far as we know). In the future, that may change, and it may partially be Stanislav Petrov's fault.

We'd better get this right, because it really is important. Leaving out half of the equation in an introduction article like this doesn't further that goal.

Replies from: timtyler
comment by timtyler · 2011-11-15T20:20:46.738Z · LW(p) · GW(p)

It would have made sense to use Bostrom's definition of existential risk, which focuses on earth-originating intelligent life instead. Replacement of humans by posthumans is not existential risk.

Some people hereabouts are concerned about some types of posthuman and "earth-originating intelligent life".

comment by CharlesR · 2011-11-15T16:28:46.344Z · LW(p) · GW(p)

If you are writing for a general audience, I think you lose most of that audience here:

But it's not just nuclear risks we have to worry about. As Sun Microsystems’ co-founder Bill Joy warned in his much-discussed article Why the Future Doesn’t Need Us, emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.

Replies from: None
comment by [deleted] · 2011-11-15T17:46:20.459Z · LW(p) · GW(p)

Or at the very least, explain why self-replicating things are more dangerous.