Existential risks open thread
post by John_Maxwell (John_Maxwell_IV) · 2013-03-31T00:52:46.589Z · LW · GW · Legacy · 47 comments
We talk about a wide variety of stuff on LW, but we don't spend much time trying to identify the very highest-utility topics to discuss, or promoting additional discussion of them. This thread is a stab at that. Since it's just comments, you can feel more comfortable bringing up ideas that might be wrong or unoriginal (but nevertheless have relatively high expected value, since existential risks are such an important topic).
Comments sorted by top scores.
comment by John_Maxwell (John_Maxwell_IV) · 2013-03-31T00:55:18.867Z · LW(p) · GW(p)
The naive approach to existential risk reduction seems to be: make a big list of all the existential risks, then identify interventions for reducing each risk, order the interventions by cost-effectiveness, and work on the most cost-effective ones. Has anyone done this? Any thoughts on whether it would be worth doing?
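In code, the loop might look something like this toy sketch (every risk, probability reduction, and cost figure below is an invented placeholder, not an estimate):

```python
# Toy sketch of the naive prioritization loop described above.
# All probabilities and costs are invented placeholders.

# (risk, intervention, assumed reduction in extinction probability, assumed cost in $)
interventions = [
    ("engineered pandemic", "biosecurity advocacy", 0.005, 10e6),
    ("unfriendly AI", "safety research", 0.010, 20e6),
    ("asteroid impact", "detection survey", 0.0005, 5e6),
]

def cost_effectiveness(reduction, cost):
    """Extinction probability averted per dollar spent."""
    return reduction / cost

# Order interventions by cost-effectiveness and work down the list.
ranked = sorted(interventions,
                key=lambda i: cost_effectiveness(i[2], i[3]),
                reverse=True)

for risk, name, reduction, cost in ranked:
    print(f"{risk} / {name}: {cost_effectiveness(reduction, cost):.2e} per $")
```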
↑ comment by [deleted] · 2013-03-31T03:12:03.143Z · LW(p) · GW(p)
Bostrom's book 'Global Catastrophic Risks' covers the first two items on your list. The other two are harder. One issue is lack of information about the organisations currently working in this space. If I remember correctly, Nick Beckstead at Rutgers is compiling a list. Another is the interrelationships between risks - the GCR Institute is doing work on this aspect.
Yet another issue is that a lot of existential risks are difficult to solve with 'interventions' as we might understand the term in, say, extreme poverty reduction. While one can donate money to AMF and send out antimalarial bednets, it seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases. Indeed, many of these problems can only be tackled by government action, either because they require regulation or because of the cost of the prevention device (e.g. an asteroid deflector). However, it's no secret that the cost-effectiveness of political advocacy is really hard to measure, which is perhaps why it's been underanalysed in the Effective Altruism community.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-03-31T10:54:52.210Z · LW(p) · GW(p)
Thanks for reminding me about GCR.
it seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases.
OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Feels like we would ideally get someone who knows biology (and has some level of respect in the biology community) to do this.
However, it's no secret that the cost-effectiveness of political advocacy is really hard to measure, which is perhaps why it's been underanalysed in the Effective Altruism community.
Does anyone reading LW know stuff about political advocacy and lobbying? Is there a Web 2.0 "lobbyist as a service" company yet? ;)
Are there ways we can craft memes to co-opt existing political factions? I doubt we'd be able to infect, say, most of the US Democratic Party with the entire effective altruism memeplex, but perhaps a single meme could make a splash with good timing and a clever, sticky message.
Is there any risk of "poisoning the well" with an amateurish lobbying effort? If we can get Nick Bostrom or similar to present to legislators on a topic, they'll probably be taken seriously, but a half-hearted attempt from no-names might not be.
↑ comment by Kaj_Sotala · 2013-03-31T15:59:07.821Z · LW(p) · GW(p)
Is there any risk of "poisoning the well" with an amateurish lobbying effort?
E.g. annoyance at overenthusiastic amateurs wasting the time of researchers who know the field and its issues better than they do seems plausible. Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate the field, which could weaken the norms of precaution-taking in the field overall.
↑ comment by evand · 2013-03-31T16:56:13.603Z · LW(p) · GW(p)
Low-quality or otherwise low-investment attempts at convincing people to make major life changes seem to me to run a strong risk of setting up later attempts for the "one argument against an army" failure mode. Remember that the people you're trying to convince aren't perfect rationalists.
(And I'm not sure that convincing a few researchers would be an improvement, let alone a large one.)
↑ comment by timtyler · 2013-04-01T01:07:35.880Z · LW(p) · GW(p)
Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate the field, which could weaken the norms of precaution-taking in the field overall.
Only if they buy the argument in the first place. Have any "synthetic biology" researchers ever been convinced by such arguments?
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-03-31T23:55:56.974Z · LW(p) · GW(p)
Were there any relatively uninformed amateurs who played a role in convincing EY that AI friendliness was an issue?
↑ comment by satt · 2013-04-01T15:07:39.733Z · LW(p) · GW(p)
OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Feels like we would ideally get someone who knows biology (and has some level of respect in the biology community) to do this.
Systematically emailing researchers runs the risk of being pattern-matched to crank spam. If I were a respected biologist, a better plan might be to:
1. Write a short (500-1500 word) editorial that communicates the strongest arguments with the least inferential distance, and sign it.
2. Get other recognized scientists to sign it.
3. Contact the editors of Science, Nature, and PNAS and ask whether they'd like to publish it.
4. If step 3 works, try to get an interview or segment on those journals' podcasts (all three have podcasts), and try putting out a press release.
5. If step 3 fails, try getting a more specialized journal like Cell or Nature Genetics to publish it.
Some of these steps could of course be expanded or reordered (for example, it might be quicker to get a less famous journal to publish the editorial first, and then use that as a stepping stone into Science/Nature/PNAS). I'm also ignoring the possibility that synthetic biologists have already considered the risks of their work, and would react badly to being nagged (however professionally) about it.
Edit: Martin Rees got an editorial into Science about catastrophic risk just a few weeks ago, which is minor evidence that this kind of approach can work.
↑ comment by A1987dM (army1987) · 2013-03-31T11:44:16.642Z · LW(p) · GW(p)
OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field?
That might convince a few on the margin, but I doubt it would convince the bulk of them -- especially not the most dangerous ones, I guess.
↑ comment by [deleted] · 2013-03-31T17:41:56.095Z · LW(p) · GW(p)
People like Bostrom and Martin Rees are certainly engaged in raising public awareness through the media. There's extensive lobbying on some risks, like global warming, nuclear weapons and asteroid defence. In relation to bio/nano/AI the most important thing to do at the moment is research - lobbying should wait until it's clearer what should be done. Although perhaps not - look at the mess over flu research.
↑ comment by timtyler · 2013-04-01T01:02:11.401Z · LW(p) · GW(p)
It seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases.
OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field?
One of the last serious attempts to prevent large-scale memetic engineering was the Unabomber.
The effort apparently failed - the memes have continued their march unabated.
↑ comment by lukeprog · 2013-03-31T06:32:03.930Z · LW(p) · GW(p)
It's worth doing but very hard. GCR is a first stab at this, but really it's going to take 20 times that amount of effort to make a first pass at the project you describe, and there just aren't that many researchers seriously trying to do this kind of thing. Even if CSER takes off and MIRI and FHI both expand their research programs, I'd expect it to be at least another decade before that much work has been done.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-03-31T23:08:51.923Z · LW(p) · GW(p)
It feels like more research on this issue would gradually improve the clarity of the existential risk picture. Do you think the current picture is sufficiently unclear that most potential interventions might backfire? Given limited resources, perhaps the best path is to do targeted investigation of what appear to be the most promising interventions and stop as soon as one that seems highly unlikely to backfire is identified, or something like that.
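A toy sketch of that stopping rule (the candidate structure, its fields, and the threshold are all hypothetical):

```python
# Toy sketch of the "investigate until one looks safe enough" rule above.
# The candidate structure, fields, and threshold are all hypothetical.

BACKFIRE_THRESHOLD = 0.05  # assumed acceptable probability of backfiring

def investigate(candidate):
    """Placeholder for a targeted investigation; returns an estimated
    probability that the intervention backfires."""
    return candidate["estimated_backfire_risk"]

def pick_intervention(candidates):
    # Examine candidates in order of apparent promise, stopping at the
    # first one whose estimated backfire risk is acceptably low.
    for candidate in sorted(candidates, key=lambda c: c["promise"], reverse=True):
        if investigate(candidate) < BACKFIRE_THRESHOLD:
            return candidate
    return None  # nothing looks safe enough yet; keep researching
```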
What level of clarity is represented by a "first pass"?
↑ comment by Shmi (shminux) · 2013-03-31T00:59:26.062Z · LW(p) · GW(p)
Any thoughts on whether it would be worth doing?
It would be worth doing, and has been done, to some degree.
make a big list of all the existential risks, then identify interventions for reducing each risk
There are a few steps missing in between, such as identifying the causes of the risks, rating them by likelihood and by their odds of wiping out the human species, etc.
comment by turchin · 2013-04-01T08:57:36.625Z · LW(p) · GW(p)
I wrote a book about existential risks in Russian and translated it into English. In Russian it got maybe 100,000 clicks from different sites and generally had a positive reaction. I translated it into English myself, so the translation is readable but not good. In English I got maybe a couple of negative comments. I attribute the difference to the fact that there are far fewer books on existential risks in Russian, and also to the bad translation.
Structure of the global catastrophe. Risks of human extinction in the 21st century. http://ru.scribd.com/doc/6250354/STRUCTURE-OF-THE-GLOBAL-CATASTROPHE-Risks-of-human-extinction-in-the-XXI-century-
↑ comment by RomeoStevens · 2013-04-01T10:54:19.086Z · LW(p) · GW(p)
Is there a way to crowdsource editing this into better English? I mean, I've never seen a book-length wiki.
↑ comment by turchin · 2013-04-01T11:21:37.740Z · LW(p) · GW(p)
It is a good idea. But the book-length format is obsolete anyway - that is why the Sequences on LW are more popular than the earlier book-length Creating Friendly AI. So it would be better to cut the book into wiki pages of chapter length. Another problem is lack of interest - I could create such a wiki, but most likely nobody would visit it. Still, maybe I should create it and invite people to try it and add information.
↑ comment by RomeoStevens · 2013-04-01T11:45:38.205Z · LW(p) · GW(p)
What about editing and releasing chapters as posts on LW for discussion as each chapter is completed?
↑ comment by turchin · 2013-04-01T12:08:08.872Z · LW(p) · GW(p)
I tried to do this in 2011, when I suggested the sequence «moresafe», but there were some mistakes in grammar and formatting and it was extensively downvoted. Also, people claimed that existential risk is not the right theme for LW, or simply disagreed with my point of view on some topics. Downvoting as an instrument of communication hurts me emotionally and makes me feel less encouraged to post again. So I decided to post rarely, and only when I have a high-quality text.
comment by Paul Crowley (ciphergoth) · 2013-03-31T07:39:32.913Z · LW(p) · GW(p)
A question Katja Grace posed at a CFAR minicamp (wording mine):
Are there things we can do that aren't targeted to specific x-risks but mitigate a great many x-risks at once?
↑ comment by Paul Crowley (ciphergoth) · 2013-03-31T07:44:14.415Z · LW(p) · GW(p)
So trying to increase the number of people who think about and work on x-risk and see it as a high priority would be one. Efforts to raise general rationality would be another. MIRI does sort of represent a general strategy against existential risk, since if they are successful the problem will likely be taken care of.
↑ comment by Qiaochu_Yuan · 2013-03-31T08:03:39.558Z · LW(p) · GW(p)
I hope that SPARC will end up being one of these things.
↑ comment by Dr_Manhattan · 2013-03-31T21:19:22.956Z · LW(p) · GW(p)
"not you regular math camp" I gather
↑ comment by SWIM · 2013-03-31T10:35:19.073Z · LW(p) · GW(p)
In discussions about AI risks, the possibility of a dangerous arms race between the US and China sometimes comes up. It seems like this kind of arms race could happen with other dangerous techs like nano and bio. Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.
This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.
How could we push for regime change? Since the cost of living in China is lower than in the US, funding dissidents who are already working towards democracy seems like a solid option. Cyberattacks seem like another... how hard would it be to neuter the Great Firewall of China?
↑ comment by DanArmak · 2013-03-31T10:51:33.403Z · LW(p) · GW(p)
So pushing for more democratic governments in states like Russia and China
Do you expect democratic governments to engage less in arms races? Or to be less capable of engaging in them (because they might have less domestic/economic/military power)? Or to be less willing to actually deploy the produced arms? Or to be less willing to compete with the US specifically? Or to cause some other change that is desirable? And why?
I ask because "democracy" is an applause light that is often co-opted when people mean something else entirely that is mentally associated with it, such as low corruption, or personal freedom, or an alliance with Western nations.
↑ comment by SWIM · 2013-03-31T23:59:47.836Z · LW(p) · GW(p)
Or to be less willing to compete with the US specifically?
This is what I had in mind. I'd guess that the fact that the US is democratic and China is not ends up indirectly causing a lot of US/China friction. Same is probably true for Russia.
↑ comment by Larks · 2013-03-31T11:47:27.459Z · LW(p) · GW(p)
Pushing for more democratic governments in states like Russia and China
That sounds like the sort of aggression which would lead to an arms race. How would America react if China tried to achieve regime change here?
Cyberattacks
...thereby encouraging them to invest in intelligent tech defence
↑ comment by [deleted] · 2013-03-31T13:39:22.744Z · LW(p) · GW(p)
I agree that if Russia and China became more democratic the world would be a safer place. Liberal democracies are generally better at cooperation, and almost never go to war with one another [see the extensive literature on Democratic Peace Theory].
However, like Larks, I think this is a baaaaaad idea. Foreign interference would either have no effect or provoke harsh countermeasures.
↑ comment by Emile · 2013-03-31T13:57:22.522Z · LW(p) · GW(p)
Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.
Most Chinese people I talked to really disliked Japan, and seemed in favour of China invading Taiwan to "get it back". And that's from a sample that was more educated and Western-friendly than the general population. I'm really not sure giving everybody the vote would decrease the chances of nuclear war. It's not as if democratic elections in Iran and Egypt (and maybe Libya?) were making those countries more stable.
if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.
Sure, a civil war in a highly militarized country that has The Bomb, what could go wrong?
↑ comment by gwern · 2013-03-31T19:29:11.874Z · LW(p) · GW(p)
Sure, a civil war in a highly militarized country that has The Bomb, what could go wrong?
Keep in mind that a potential consequence of letting NK run amok (remember that they have already shelled South Korean territory and attacked its military over the last few years, killing dozens of South Koreans) is South Korea and Japan going nuclear. (Implausible? No: SK already had an active nuke program in the 1980s due to fear of NK.)
↑ comment by Emile · 2013-04-01T11:41:46.916Z · LW(p) · GW(p)
I agree that North Korea keeping up its current behavior is dangerous; it's just far from clear whether a regime collapse would make things better or worse. The safest solution might be something like a soft collapse, where the Kims and their friends are offered a safe retirement in China in exchange for stepping down and letting South Korea and/or China take over (which is unlikely unless China threatens military action otherwise - and since China doesn't want Japan to go nuclear, it has an incentive to find some way to calm down the Kims).
↑ comment by Elithrion · 2013-03-31T17:32:10.171Z · LW(p) · GW(p)
This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.
I think the civil war that would result combined with extreme proximity between Chinese and US troops (the latter supporting South Korea and trying to contain nuclear weapons) is probably an abysmal thing from an x-risk reduction standpoint.
↑ comment by TitaniumDragon · 2013-04-06T11:24:56.964Z · LW(p) · GW(p)
China has privately told the US that it would support the US in extending South Korean control over the entire Korean peninsula, per the diplomatic cables leak. The Chinese would probably be happy if the US rolled in and flattened the entire country, as long as they didn't have to let too many refugees into China. And really, given the way North Korea is acting at this point, China is probably willing to take the risk: the North Koreans seem eager to cause trouble, and there's no guarantee it won't happen in a worse way later on.
Honestly, I think that crushing the North Korean government and military completely would pretty much end it. North Korea has a ton of propaganda about the country's superiority over the rest of the world; without the tight control the present government has, I don't think that vision of superiority would last very long.
Not to say that they'd be terribly happy with us, but the US rolled into Japan after WWII and it worked out quite well. Given the country's present-day poverty, really all you'd have to do to win is wait for a bad famine to hit and roll in then; showing the people that you care about them with food is a dirty but probably effective way to make them distrust you less, especially if you have the South Koreans move in and the US move out as much as possible. Though of course other options exist.
It would be a mess, but I think it would probably be significantly less messy than Afghanistan, given that rather than having twenty different angry groups, you really just have the government and that's about it.
↑ comment by Sarokrae · 2013-04-06T12:46:13.646Z · LW(p) · GW(p)
Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.
How sure are you?
- Acts of military aggression by the PRC since 1949: about 5.
- Acts of military aggression by the USSR/Russia in the same period: about 5.
- Acts of military aggression by the USA in the same period: about 7.
(I've tried to be upwardly biased on the numbers for all three, since it's obviously hard to decide who the aggressors in a conflict are.)
- Wars that the PRC has participated in that were not part of domestic territorial disputes since 1949: 2.
- Likewise for Russia: 5.
- Likewise for the USA: 17.
(For the USA and USSR figures I'm counting all of the Cold War as one conflict, and likewise all of the War on Terror.)
Edit: What happened to my formatting? I've had this problem before but I've never been able to fix it.
↑ comment by SWIM · 2013-04-07T00:47:18.570Z · LW(p) · GW(p)
Good point. I think ideally your sample size would be larger; I'm not sure the US is representative of democratic countries.
Re: formatting. Try putting a blank line between bullets.
↑ comment by Sarokrae · 2013-04-07T18:26:51.567Z · LW(p) · GW(p)
Re: formatting. Try putting a blank line between bullets.
Tried, doesn't work. Anyone got any ideas?
comment by John_Maxwell (John_Maxwell_IV) · 2013-03-31T01:05:10.586Z · LW(p) · GW(p)
After his cryonics hour with Robin Hanson, orthonormal wrote:
Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario—kind of like the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.
This is something I've been vaguely wondering myself. CFAR or similar seems like it might be one way to do this, but right now their methodology doesn't look very scalable (in-person workshops run by a small number of highly trained employees; contrast with LW or HPMoR). I'd be interested to hear if they have any plans to scale their operations up and if so what those plans look like. I'm also curious if they're trying to get leading psychologists like Keith Stanovich or Daniel Kahneman involved--this seems like it would be useful for a bunch of reasons.
Another idea is to try to spread the "politics is the mind-killer" or nonviolent communication memes more strongly... in other words, try to accelerate the historical trend towards decreased violence, as discussed by Steven Pinker and others. I've heard rumors that Middle Easterners' aggression may be caused by zinc deficiency from eating unleavened bread; I don't know how true or useful that is.
↑ comment by Dr_Manhattan · 2013-03-31T21:22:28.145Z · LW(p) · GW(p)
the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.
Also, see Taleb's Antifragile.
I've heard rumors that Middle Easterners' aggression may be caused by zinc deficiency from eating unleavened bread; don't know how true/useful that is.
I suspect some history/culture is a better explanation... But why not drop some zinc on them just in case? Go Team America!
comment by buybuydandavis · 2013-03-31T12:57:00.550Z · LW(p) · GW(p)
Is anyone systematically working on the other side of the pancake: existential opportunities?
People are working on particular opportunities, but I haven't heard of anyone doing the depth-first search for opportunities.
↑ comment by [deleted] · 2013-03-31T13:32:09.277Z · LW(p) · GW(p)
What do you mean by 'existential opportunities'?
↑ comment by buybuydandavis · 2013-04-01T01:00:46.281Z · LW(p) · GW(p)
What would be game-changing in terms of avoiding threats, and in terms of our resilience to threats?
↑ comment by UngnsCobra · 2013-03-31T15:50:54.553Z · LW(p) · GW(p)
(Not sure this is the right answer.) Potentially the FHI prize competition could be seen as an attempt to pursue that end? (http://www.fhi.ox.ac.uk/prize - it's closed now.)
↑ comment by buybuydandavis · 2013-04-01T01:09:11.815Z · LW(p) · GW(p)
Interesting.
They seem more focused on general problem-solving than on game-changers, but just raising the questions they do is game-changing in a way. The blue team is Us, and what would harm us is Them. Getting people to increasingly view everyone else as Us would be game-changing.
One of the things that cheers me about Death: it's a common bond with everyone else. Well, those living today for an afterlife tomorrow probably aren't so much part of that common bond, but maybe they'll come around someday.
comment by Arbitrary · 2015-06-30T21:01:09.467Z · LW(p) · GW(p)
Has anyone tried advertising existential risk?
Bostroms "End of Humanity" talk for instance.
It costs about $0.20 per view for a video ad on YouTube, so if 0.2% of viewers gave an average of $100, it would break even. Hopefully people would give more than that.
By the way, you can target ads at groups likely to give more, like the highly educated.
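A quick sanity check of that break-even arithmetic, using the figures assumed above:

```python
# Quick sanity check of the break-even arithmetic above.
cost_per_view = 0.20   # $ per YouTube video-ad view (figure assumed above)
avg_donation = 100.0   # $ assumed average gift per donating viewer

# Fraction of viewers who must donate for the ad to break even:
break_even_rate = cost_per_view / avg_donation
print(f"break-even donation rate: {break_even_rate:.1%}")  # -> 0.2%
```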