Skoll World Forum: Catastrophic Risk and Threats to the Global Commons
post by XiXiDu · 2012-04-05T09:44:06.382Z
More: Skoll Global Threats Fund | To Safeguard Humanity from Global Threats
The panel surfaced a number of issues that have so far prevented serious strides on global challenges, including income inequality, failures of governance and a lack of leadership. It also explored some deeper issues around psyche and society – people’s inability to convert information into wisdom, the loss of a sense of self, the challenges of hyperconnectivity, and questions about the economic models and motivations that have long underpinned concepts of growth and wellbeing. The session was filmed, and we’ll make that link public once the file is available. In the meantime, here are some of the more memorable quotes (which may not be verbatim, but this is how I wrote them down):
“When people say something is impossible, that just means it’s hard.”
“Inequality is becoming an existential threat.”
“We’re at a crossroads. We can make progress against these big issues or we can kill ourselves.”
“We need inclusive globalization, to give everyone a stake in the future.”
“Fatalism is our most deadly adversary.”
“What we’re lacking is not IQ, but wisdom.”
“We need to tap into the timeless to solve the urgent.”
What we mean by global threats
Global threats have the potential to kill or debilitate very large numbers of people or cause significant economic or social dislocation or paralysis throughout the world. Global threats cannot be solved by any one country; they require some sort of a collective response. Global threats are often non-linear, and are likely to become exponentially more difficult to manage if we don’t begin making serious strides in the right direction in the next 5-10 years.
More on existential risks: wiki.lesswrong.com/wiki/Existential_risk
Organisations
A list of organisations and charities concerned with existential risk research.
- Singularity Institute
- The Future of Humanity Institute
- The Oxford Martin Programme on the Impacts of Future Technology
- Global Catastrophic Risk Institute
- Saving Humanity from Homo Sapiens
- Skoll Global Threats Fund (To Safeguard Humanity from Global Threats)
- Foresight Institute
- Defusing the Nuclear Threat
- Leverage Research
- The Lifeboat Foundation
Resources
Comments (sorted by top scores)
comment by timtyler · 2012-04-05T14:59:09.444Z
I generally approve of the apparent move away from the "existential risks" terminology and towards "global catastrophic risks".
reply by TheOtherDave · 2012-04-05T15:53:22.442Z
I'm not sure I endorse it as a terminological shift, insofar as catastrophic risks and existential risks are different things and using one label to refer to the other entity creates a lot of confusion. But I definitely endorse being concerned with catastrophic risks rather than solely with existential risks, and endorse properly labeling the object of my concern.
comment by Shmi (shminux) · 2012-04-05T19:36:05.102Z
From your link, the top 5 threats identified by Skoll are:
- Climate change
- Water security
- Pandemics
- Nuclear proliferation
- Middle East conflict
Singling out the last one is silly, as it is only globally dangerous as a trigger of nuclear proliferation. Pandemics are nothing new, and water security is at most a limiting factor on further population explosion. The consequences of global climate change are still unclear, as they are a topic of current research, so any meaningful action is unlikely.
Thus the only truly global risk worth addressing at this time is nuclear proliferation.
reply by faul_sname · 2012-04-06T16:55:39.955Z
Just because pandemics aren't new doesn't mean we shouldn't address them. Especially since we seem to have just been lucky and had no severe pandemics since the 1918 flu.
comment by Viliam_Bur · 2012-04-05T10:43:39.428Z
“Inequality is becoming an existential threat.”
Please be more specific: What kinds of inequality are how much of a threat?
reply by Oligopsony · 2012-04-05T13:59:20.128Z
I think there are defensible ways you could spin this, like:
"Inequality is likely to lead to more distributional conflicts that could metasize into existentially risky conflicts" or
"Unchecked rich countries can take more risks with the rest of the world's population (as with e.g. global warming, or other potentially more existentially risky tradeoffs) that they could not if there were more international parity" or most plausibly (of things I can think of off the top of my head)
"The coexistence of intensive economic growth (in new technologies), extensive economic growth (in greater total population and urban concentration of such), and poverty greatly increase the probability of a global pandemic."
But it seems pretty clear from all the other phrases written down that these are just meant to be applause lights, so maybe this is an exercise in excessive charity. (Not that applause lights might not have their place; for instance, in a speech meant to convince layfolk that existential risk is important.)
reply by Viliam_Bur · 2012-04-05T15:17:49.994Z
I have attacked this specific applause light because it seems to me dangerous with regard to some existential risks.
Imagine for example that a giant meteor is falling toward Earth... but we have divided all resources evenly among the 7,000,000,000 inhabitants of the planet, and there is no way to finance a defense against it: you would need too many people to agree to pool their resources, and you can't get that many to agree. Game over. With more inequality, this specific existential risk could have been avoided.
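(A minimal sketch of the free-rider logic in this scenario, as a toy model I invented for illustration: every number, the `pledge_frac` and `pivotal_share` parameters, and the `raised` helper are hypothetical, not anything from the thread. The assumption is that an agent only bothers to contribute if its feasible pledge is a noticeable share of the cost; otherwise it free-rides.)

```python
# Toy free-rider model (illustrative assumptions only): an agent
# pledges a fixed fraction of its wealth toward the defense, but
# only bothers if that pledge is a noticeable share of the cost.

def raised(wealths, cost, pledge_frac=0.1, pivotal_share=0.01):
    """Sum the pledges of agents whose pledge is >= pivotal_share of cost."""
    return sum(w * pledge_frac for w in wealths
               if w * pledge_frac >= cost * pivotal_share)

N, total = 1_000_000, 1e9                 # a scaled-down world
cost = total * 0.01                       # defense costs 1% of all wealth
equal = [total / N] * N                   # perfect equality: 1,000 each
skewed = [total * 0.2] + [total * 0.8 / (N - 1)] * (N - 1)  # one rich agent

print(raised(equal, cost) >= cost)        # False: every pledge is negligible
print(raised(skewed, cost) >= cost)       # True: the rich agent covers it alone
```

Under these assumed parameters the equal distribution raises nothing, because every individual pledge is negligible, while a single wealthy agent can cover the cost alone. Real coordination mechanisms (taxation, assurance contracts) could of course change the outcome.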
Sure, for any decision you can invent a "what if" scenario where that specific decision appears wrong. But dividing all resources equally would probably create more problems than it would solve. Starting with: many people would soon waste their resources, so a new redistribution would be necessary every week.
(This is not an argument for right-wing politics, or at least I am trying not to make it one. I am perfectly OK with equality, as long as it can work well. Unfortunately, just as with constructing a superhuman AI, a random solution is probably very bad. We need to think hard to find a good solution. And I wouldn't trust someone with too many applause lights to do this correctly.)
reply by Oligopsony · 2012-04-05T16:55:23.774Z
I agree that there are potential upsides and downsides to almost everything. But it does seem unlikely on its face that a high degree of equality will ever be achieved without institutions that are very effective at financing public goods (leaving aside the possibility of resources becoming so abundant that wealth becomes meaningless, in which case this particular problem is also solved).
reply by Viliam_Bur · 2012-04-09T19:59:24.829Z
A high degree of equality could also be achieved by a hypothetical institution powerful enough to redistribute everything, yet very ineffective at financing public goods.
When everyone has nothing, that too is equality.