Breakdown of existential risks
post by Stuart_Armstrong · 2012-11-23T14:12:03.994Z · LW · GW · Legacy · 21 comments
Due to my colleague, Anders Sandberg:
21 comments
Comments sorted by top scores.
comment by JoshuaZ · 2012-11-23T17:38:55.970Z · LW(p) · GW(p)
Can you expand on what "dynamical laws" and "deterministic dynamics" mean? Also, how is one checking whether there is a research community? My understanding is that a fair number of geologists study Yellowstone, for example, yet supervolcanoes have a "no" next to them. Also, what does "obsolete" mean in the third column?
↑ comment by Stuart_Armstrong · 2012-11-23T19:57:50.489Z · LW(p) · GW(p)
You may be correct for super-volcanoes; I'm not sure about that.
"Obsolete" probably means that all the data we have on past global computer failures is already out of date as far as predicting future ones is concerned.
↑ comment by prase · 2012-11-24T01:26:55.021Z · LW(p) · GW(p)
And deterministic dynamics?
↑ comment by Stuart_Armstrong · 2012-11-26T10:30:07.070Z · LW(p) · GW(p)
Do they obey known deterministic laws?
comment by JonathanLivengood · 2012-11-23T20:20:47.209Z · LW(p) · GW(p)
I really don't understand the row for climate change. What exactly is meant by "inference" in the data column? I don't know what you want to count as data, but it seems to me that the data with respect to climate change include increasingly good direct measurements of temperature and greenhouse gas concentrations over the last hundred years or so, whatever goes into the basis of relevant physical and chemical theories (like theories of heat transfer, cloud formation, solar dynamics, and so forth), and measurements of proxies for temperature and greenhouse gas concentrations in the distant past (maybe this is what "inference" is supposed to mean?).
I also don't understand the "?" under probability distribution. Are the probability distributions at stake here distributions over credences? If so, then they can be estimated for almost any scientist, at least. Are the distributions over frequencies? Then frequencies of what? I suspect we could estimate distributions for lots of climate-related things, like severe storms or droughts or record high temperatures, and I would be somewhat surprised if such distributions have not already been estimated by climate scientists.

Or is the issue about calibration? Then the answer seems to be a qualified yes. Groups like the IPCC give probabilistic statements based on their climate models, and the models could be checked at least on past predictions, e.g. by looking at what the models from 2000 predicted for the period 2001-2011. We might not get a very good sense of how well calibrated the models are, but if the average temperature for each month, say, is a separate datum, then we could check the models by seeing how many of the months fall into the claimed 95% confidence bands. (And just putting down confidence bands in the models should tell you that the climate scientists think that the distribution can be estimated for some sense of probability.)
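A minimal sketch of that coverage check, assuming hypothetical model output and observations (real numbers would come from archived model runs and the temperature record):

```python
import numpy as np

# Toy calibration check: what fraction of observed monthly temperatures
# fall inside a model's claimed 95% confidence bands?
# Every number below is hypothetical, for illustration only.
rng = np.random.default_rng(0)

months = 132                                          # 2001-2011: 11 years * 12 months
predicted = 14.0 + 0.002 * np.arange(months)          # hypothetical model means (deg C)
half_width = 0.5                                      # hypothetical 95%-band half-width
observed = predicted + rng.normal(0.0, 0.3, months)   # hypothetical observations

coverage = np.mean(np.abs(observed - predicted) <= half_width)
print(f"Empirical coverage: {coverage:.1%} (a well-calibrated model gives ~95%)")
```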
↑ comment by Stuart_Armstrong · 2012-11-23T20:40:55.508Z · LW(p) · GW(p)
I also don't understand the "?" under probability distribution
The uncertainties within the models are swamped by uncertainties outside the models, i.e. whether feedbacks are properly accounted for or not.
I agree that "inference" on its own is very odd. I would have put "inference and observations (delayed feedback)".
↑ comment by JonathanLivengood · 2012-11-23T21:17:00.078Z · LW(p) · GW(p)
That's an interesting point. How precise do you think we have to be with respect to feedbacks in the climate system if we are interested in an existential risk question? And do you have other uncertainties in mind or just uncertainties about feedbacks?
The first thing I thought on reading your reply was that insofar as the evidence supports positive feedbacks, the evidence also supports the claim that there is existential risk from climate change. But then I thought maybe we need to know more about how far away the next equilibrium is -- assuming there is one. If we are in or might reach a region where temperature feedback is net positive and we run away to a new equilibrium, how far away will the equilibrium be? Is that the sort of uncertainty you had in mind?
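As a toy illustration of that last question (a back-of-the-envelope sketch, not a climate model; the gain values are made up): if an initial forcing-driven warming is re-amplified by a net feedback gain g, total warming is the geometric series dT0 / (1 - g), so the distance to the new equilibrium blows up as g approaches 1, and there is no finite equilibrium at all for g >= 1.

```python
# Toy linearized feedback: initial warming dT0 is re-amplified by a net
# feedback gain g each round, so total warming is the geometric series
# dT0 * (1 + g + g^2 + ...) = dT0 / (1 - g) for g < 1. Illustrative only.
def equilibrium_warming(dT0: float, g: float) -> float:
    if g >= 1.0:
        return float("inf")  # net gain >= 1: runaway, no finite equilibrium
    return dT0 / (1.0 - g)

for g in (0.3, 0.6, 0.9, 0.99):
    print(f"gain {g}: equilibrium warming = {equilibrium_warming(1.0, g):.1f} deg C")
```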
comment by Curiouskid · 2012-11-23T22:28:46.393Z · LW(p) · GW(p)
Good work!
Perhaps you could put this into a Google Doc so that people can comment on each of the cells.
comment by MinibearRex · 2012-12-01T01:33:26.410Z · LW(p) · GW(p)
Why does the table indicate that we haven't observed pandemics the same way we've observed wars, famines, and earth impactors?
↑ comment by Stuart_Armstrong · 2012-12-03T12:14:12.494Z · LW(p) · GW(p)
We can observe past pandemics and past meteor impacts. But we can also observe current and future meteors, predict their trajectories, and see if they're going to be a threat. We can't really do this with pandemics.
I.e. with meteors, we can use the past events and the present observations to predict the future; for pandemics, we can (to a large extent) only use past events.
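A minimal sketch of that "past events only" mode (all counts hypothetical): estimate a base rate from historical pandemics and convert it to a probability with a Poisson model. There is no analogue of a meteor catalogue to refine it with.

```python
import math

# "Past events only": estimate a pandemic base rate from history.
# Both numbers are hypothetical, for illustration only.
events = 3       # severe pandemics observed
centuries = 5    # length of the observation window
rate = events / centuries                # events per century
p_at_least_one = 1 - math.exp(-rate)     # Poisson: P(>= 1 event next century)
print(f"P(at least one severe pandemic next century) ~= {p_at_least_one:.0%}")
```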
comment by Shmi (shminux) · 2012-11-23T15:51:12.771Z · LW(p) · GW(p)
Why does superintelligence require global coordination? Apparently all one needs to do is to develop an FAI, and the rest will take care of itself.
↑ comment by Kaj_Sotala · 2012-11-23T16:11:25.182Z · LW(p) · GW(p)
E.g. AI regulation (like most technology regulation) is only effective if you get the whole world on board, and without global coordination there's the potential for arms races.
"Only develop an FAI" also presumes a hard takeoff, and it's not exactly established beyond all doubt that we'll have one.
↑ comment by Stuart_Armstrong · 2012-11-23T19:59:01.344Z · LW(p) · GW(p)
Preventing UFAI, dealing safely with Oracles, or using reduced-impact AIs requires global coordination. Only the "FAI in a basement" approach doesn't.
↑ comment by MattMahoney · 2012-11-23T18:26:17.113Z · LW(p) · GW(p)
Because FAI is a hard problem. If it were easy then we would not still be paying people $70 trillion per year worldwide to do work that machines aren't smart enough to do yet.
comment by [deleted] · 2015-06-22T00:14:09.645Z · LW(p) · GW(p)
Is there an updated version of this table?
comment by twanvl · 2012-11-24T11:17:43.636Z · LW(p) · GW(p)
How are "global computer failures" an existential risk? Sure, it would suck, but it wouldn't be the end of the world.
And what are "physics threats"?
I would also like to see a column with strategies for mitigating the threat, beyond "requires global coordination". For example, the solution against bioweapons would be regulation and maybe countermeasure research, while against supernovae there isn't much we can do.
↑ comment by fubarobfusco · 2012-11-24T14:02:49.273Z · LW(p) · GW(p)
How are "global computer failures" an existential risk? Sure, it would suck, but it wouldn't be the end of the world.
Global trade depends on computers these days, and the human population depends on global trade for food, medicine, building materials, technology parts, etc. Even if a global computer failure did not instantly kill all humans, it could stall or stop humanity's expansion.
And what are "physics threats"?
Vacuum metastability event, for instance?
↑ comment by evand · 2012-11-25T22:02:57.547Z · LW(p) · GW(p)
I can see a global computer catastrophe rising to the level of civilization-ending, with a 90-99% fatality rate, if I squint hard enough. I could see the fatality rate being even higher if it happens farther in the future. But I'm having trouble seeing it as an existential risk that literally kills enough people that no viable population remains anywhere. Even in the case of a computer catastrophe as a malicious event, I'm having trouble envisioning an existential risk that doesn't also include one of the other options.
Are there papers that make the case for computer catastrophe as X-risk?
↑ comment by fubarobfusco · 2012-11-26T01:48:57.084Z · LW(p) · GW(p)
Rather than considering it in terms of fatality rate, consider it in terms of curtailing humanity's possible expansion into the universe. The Industrial Revolution was possible because of abundant coal, and the 20th century's expansion of technology was possible because of petroleum. The easy-access coal and oil are used up; the resources being used today would not be accessible to a preindustrial or newly industrial civilization. So if our civilization falls and humanity reverts to preindustrial conditions, it stays there.