Absent coordination, future technology will cause human extinction

post by Jeffrey Ladish (jeff-ladish) · 2020-02-03T21:52:55.764Z · LW · GW · 12 comments

(Crossposted from Medium and my blog)

Nick Bostrom, of the Future of Humanity Institute, uses an evocative metaphor to describe the future of humanity’s technological development:

One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent possible ideas, discoveries, technological inventions. Over the course of history, we have extracted a great many balls — mostly white (beneficial) but also various shades of grey (moderately harmful ones and mixed blessings).
What we haven’t extracted, so far, is a black ball: a technology that invariably or by default destroys the civilization that invents it. The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky.

The atom bomb, together with the long range bomber, marked the first time a small group of people could destroy dozens of cities in a matter of hours. The physicists who worked on the bomb knew that this invention had the capacity to threaten human civilization with unprecedented destruction. They built it out of fear that if they did not, an enemy state like Nazi Germany would build it first. With this development, the destructive power of humanity increased by several orders of magnitude.

History barely had time to catch its breath before many of these same physicists created a new type of nuclear bomb, the hydrogen bomb, that was hundreds of times more powerful than the atom bomb. They did it for the same reasons, out of fear that a rival state would build it first. The Soviet Union used the same justification to build their biological weapons program during the Cold War, producing large quantities of anthrax, plague, smallpox, and other biological weaponry. As far as we know, the United States did not have a comparably large program, but the fear that the US might have one was sufficient to motivate the Soviet leadership. Examples like this are not exceptions; they are the norm.

It’s clear from the history of warfare that the fear of a rival getting a technology first is sufficient to motivate the creation of purely destructive technologies, including those that risk massive blowback from radiation, disease, or direct retaliation. This desire to get there first is not the only incentive to develop civilization-threatening technology, but it is the one that seems to drive people to take the most risks at a civilizational level.

Even when there is no perceived threat, the other motivations for technological innovation — profit, prestige, altruism, etc. — drive us to create new things. For most technologies this is good, and has enabled most of the progress of human civilization. The problem only arises when our technology becomes powerful enough to threaten civilization itself. Innovation is hard, but anticipating potentially dangerous innovations and preventing their creation is harder still, not least because there is little personal incentive to do so. We all know the names of famous inventors, but have you ever heard of a famous risk analyst who successfully prevented the development of a dangerous technology? I doubt it.

Still, while long term trends favor aggressive tech development, there are controls in place which slow the development of known dangerous technologies. The Non-Proliferation Treaty, the Biological Weapons Convention, and other efforts put pressure on states not to build new nuclear, chemical, or biological weapons, with variable success. Within their own borders, most countries create and enforce laws forbidding private citizens from researching or building weapons of mass destruction.

Some disincentives for dangerous technology development are cultural. The Asilomar Conference on Recombinant DNA in 1975 was an impressive effort by biologists to make sure their field did not create dangerous new kinds of organisms. A strong safety culture can lead to a reduction in accidents and an inclination towards safe exploration, though it’s not always clear how to create such a culture.

In the private sector, companies balance the benefits of “moving fast and breaking things” with the negative PR that comes from developing safety-critical tech without adequate safeguards. After one of Uber’s driverless cars killed a woman in 2018, the released details revealed the company’s poor safety practices. We can hope that the backlash from such accidents will incentivize safer exploration of these technologies in the future. Other AI companies appear more inclined to invest in safety and robustness measures upfront. Both DeepMind and OpenAI, two leaders in deep learning research, have dedicated safety teams that focus on minimizing negative externalities from the technology. Whether such measures will be sufficient to curtail dangerous exploration of powerful AI systems remains to be seen.

A limitation of current regulations is that they focus on technologies known to pose a risk to human life. Entirely new innovations or obscure technologies receive far less attention. If a particle accelerator experiment might inadvertently set off a chain reaction that destroys all life, only the lab’s internal safety processes and self-preservation instincts prevent it from taking risks on behalf of everyone. There is no international process for ensuring a new technology won’t end up being a black marble.

Avoiding black marbles is both a coordination problem and an uncertainty problem. In the long term, it’s not enough to get 90% of creators to refrain from exploring dangerous territory. Absent strong coordination mechanisms, future technological development suffers from the unilateralist’s curse. Even if no individual creator has ill intent, those who believe their technologies present little risk will forge ahead, biasing the overall population of creators towards unsafe exploration.
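To make the unilateralist’s curse concrete, here is a minimal Monte Carlo sketch (my own toy model, not taken from the post or from Bostrom and Sandberg’s paper): each actor forms an independent, noisy estimate of a technology’s net harm and unilaterally deploys it if its own estimate says the technology is beneficial. Even when the true harm is positive and most actors judge correctly, the chance that at least one actor deploys grows quickly with the number of actors.

```python
import random

def p_any_deploys(n_actors, true_harm=1.0, noise_sd=1.0, trials=100_000):
    """Toy model: probability that at least one of n_actors unilaterally
    deploys a technology whose true net harm is positive, when each actor
    acts only on its own noisy estimate of that harm."""
    count = 0
    for _ in range(trials):
        # Each actor's estimate = true harm + independent Gaussian noise;
        # an actor deploys if its estimate says the tech is net-beneficial.
        if any(random.gauss(true_harm, noise_sd) < 0 for _ in range(n_actors)):
            count += 1
    return count / trials

for n in (1, 5, 20, 100):
    print(f"{n:3d} actors -> P(at least one deploys) ~ {p_any_deploys(n):.2f}")
```

With these (arbitrary) parameters a single actor deploys about 16% of the time, but with twenty independent actors the chance that someone deploys is already around 97%: the population as a whole behaves far less cautiously than any typical member.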

While international coordination is necessary to prevent black marble scenarios, it is not sufficient. In some cases, it will not be easy to tell in advance which technologies will prove dangerous. Even if every country in the world agreed to share intelligence about black-marble technological threats and enforce international laws about their use, there is no guarantee a black marble would not be pulled out by accident.

There are no off-the-shelf solutions to international governance problems. We’re in new territory, and new social technology is required. It’s not clear how to design institutions which can incentivize rigorous risk analysis, the right kinds of caution, and quick responses to potentially dangerous developments. Nor is it clear how world powers can be brought to the negotiating table with the mandate to create the necessary frameworks. What is clear is that something new is necessary.

In 1946, physicists from the Manhattan Project brought together a group of prominent scientists and wrote a collection of essays about the implications of nuclear technology for the future. The volume was called One World or None. In the foreword, Niels Bohr wrote the following, and though much has changed in the intervening decades, the message is as relevant in 2020 as it was in 1946. For context, Bohr is writing about the need for international governance to prevent nuclear war.

Such measures will, of course, demand the abolition of barriers hitherto considered necessary to safeguard national interests but now standing in the way of common security against unprecedented dangers. Certainly the handling of the precarious situation will demand the goodwill of all nations, but it must be recognized that we are dealing with what is potentially a deadly challenge to civilization itself. A better background for meeting such a situation could hardly be imagined than the earnest desire to seek a firm foundation for world security, so unanimously expressed by all those nations which only through united efforts have been able to defend elementary human rights. The extent of the contribution that agreement on this vital matter would make to the removal of obstacles to mutual confidence, and to the promotion of a harmonious relationship between nations can hardly be exaggerated.

Bohr believed that the threat posed by nuclear weapons was a game changer, and that strong international cooperation was the only solution. Without international control of the bomb, Bohr and other scientists believed that a global nuclear war was inevitable. Fortunately, this hasn’t come to pass, and while this is partially due to luck, it’s also due to the degree of coordination that has occurred between world powers. Efforts like the Non-Proliferation Treaty have been more successful than many believed possible. It’s been 75 years since Hiroshima, and only nine countries possess nuclear weapons.

The success we’ve had curtailing catastrophic threats has bought us time, but we shouldn’t mistake limited forms of cooperation like the UN Security Council for a global framework capable of addressing existential threats. Many scientists of the 1940s recognized that unprecedented forms of international coordination were necessary to address existentially-threatening technologies. 75 years later, we’ve all but lost this ambition, but the threats haven’t gone away. On the contrary, we continue to develop new technologies, and if the process continues we will find a black marble. Absent coordination, future technology will cause human extinction.
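The inevitability claim rests on a simple accumulation-of-risk argument: if each new round of technological development carries even a small, roughly constant chance of producing a black marble, then the probability of avoiding one forever falls toward zero as the draws continue. A back-of-the-envelope sketch (the 1% per-draw chance below is purely illustrative, not an estimate from the post):

```python
# Probability of at least one "black marble" after n independent draws,
# assuming a constant, purely illustrative per-draw probability p.
p = 0.01  # hypothetical 1% chance per major technological "draw"

for n in (10, 100, 500, 1000):
    p_black = 1 - (1 - p) ** n
    print(f"{n:4d} draws -> P(at least one black marble) = {p_black:.2f}")
```

Coordination matters because it is the only term in this picture we can change: it can lower the per-draw probability, or stop certain draws altogether.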

On the other hand, with better international systems of cooperation, we can anticipate and avert existential threats. Despite the fears of the Cold War and many close calls, the world came through without a single nuclear weapon being used against another nation. Designing international frameworks up to the tasks of coordination and global risk analysis will be difficult. It will be even more difficult to get existing leaders and stakeholders on board. It’s tempting to throw up our hands and call this kind of effort impossible, but I believe this would be a mistake. Despite the difficulty, real international coordination is still the best chance humanity has to avoid extinction.

12 comments


comment by Jeffrey Ladish (jeff-ladish) · 2020-02-03T22:00:10.976Z · LW(p) · GW(p)

Exchange from my Facebook between Robin Hanson and myself:

Robin Hanson "Will" is WAY too strong a claim.

Jeffrey Ladish The key assumption is that tech development will continue in key areas, like computing and biotech. I grant that if this assumption is false, the conclusion does not follow.

Jeffrey Ladish On short-medium (<100-500 years) timescales, I could see scenarios where tech development does not reach "black marble" levels of dangerous. I'd be quite surprised if on long time scales 1k - 100k years we did not reach that level of development. This is why I feel okay making the strong claim, though I am also working on a post about why this might be wrong.

Robin Hanson You are assuming something much stronger than merely that tech improves.

Jeffrey Ladish However, I think we may have different cruxes here. I think you may believe that there can be fast tech development (i.e. Age of Em), without centralized coordination of some sort (I think of markets as kinds of decentralized coordination), without extinction.

Jeffrey Ladish I'm assuming that if tech improves, humans will discover some autopoietic process that will result in human extinction. This could be an intelligence explosion, it could be synthetic biotech ("green goo"), it could be some kind of vacuum decay, etc. I recognize this is a strong claim.

Robin Hanson Jeffrey, a strong assumption quite out of line with our prior experience with tech.

Jeffrey Ladish That's right.

Jeffrey Ladish Not out of line with our prior experience of evolution though.

Robin Hanson Species tend to improve, but they don't tend to destroy themselves via one such improvement.

Jeffrey Ladish They do tend to destroy themselves via many improvements. Specialists evolve then go extinct.
Though I think humans are different because we can engineer new species / technologies / processes. I'm pointing at reference classes like biotic replacement events: https://eukaryotewritesblog.com/2017/08/14/evolutionary-innovation/

Jeffrey Ladish I'm working on a longform argument about this, and will look forward to your criticism / feedback on it.

Robin Hanson The risk of increasing specialization creating more fragility is not at all what you are talking about in the above discussion.

Jeffrey Ladish Yes, that was sort of a pedantic point. I do think it's related but not very directly. But the second point, about the biotic replacement reference class, is the main one.

comment by Wei Dai (Wei_Dai) · 2020-02-08T11:57:22.144Z · LW(p) · GW(p)

I'm really happy to see more people explore in this direction. I'm curious if you have any thoughts on global coordination with regard to climate change. What is preventing effective coordination on that issue (in terms of root causes / social dynamics), and do you think coordination on future x-risks will be easier or harder than on climate change?

Replies from: jeff-ladish
comment by Jeffrey Ladish (jeff-ladish) · 2020-02-10T03:26:15.831Z · LW(p) · GW(p)

I haven't yet formed clear hypotheses around what is preventing effective coordination around climate change. My current approach is to examine what led to the fairly successful nuclear arms control treaties and what is causing them to fail now. I have found Thomas Schelling's work quite useful for thinking about international cooperation, but I'm missing a lot of models around internal state politics that enables or prevents those states from being able to negotiate effectively.

One area I'm quite interested in, in regards to climate coordination / conflict, is geoengineering. Several high-impact geoengineering methods seem economically feasible to do unilaterally at scale. This seems like a complicated mixed-motive conflict. I'm not clear where the Schelling Points will be, but I am going to try to figure this out. I'd love to see other people do their own analyses here!

comment by jmh · 2020-02-04T13:52:58.751Z · LW(p) · GW(p)

Two things are nagging the back of my mind with this post but I'm not sure about one of them.

First, I am not at all sure history shows international coordination has ever done anything about limiting war. WWI could be seen as occurring due to the presence of the existing international agreements and coordination of that time. The League of Nations did little to prevent WWII, and the international coordination that produced the Treaty of Versailles, and particularly the structure for managing reparations and other war debt, has been seen as the cause of WWII. The United Nations has not really stopped conflict. I think it would be hard to demonstrate that the UN can take credit for the USA and USSR not going to war, for Russia and China not going to war, or for the USA and China not going to war (with the exception of the USA-USSR case, the other conflicts have in fact occurred, just in limited form).

The critical question here is how the institution manages factions and mitigates factional disputes effectively. I'm not sure we get that done well at the international level. If we don't, putting everyone in the same institutional straitjacket seems like the problem that produced the USA Civil War (which resulted in a complete shift from a collection of equal states into a single nation in fact, if not in design or Constitution). Perhaps a decentralized, more flexible framework might be better.

In other words, greater international coordination may actually increase the risk of nuclear war, or other major technology risks.

The second aspect, which I'm not as sure about, is the metaphor of the colored balls and giant urn. I think research and technology is much more targeted than the random process suggested. I think that will have some impact on just how the technology is both implemented and made available to the world. The random black ball version seems to suggest we will be too surprised and unprepared for the risks -- like opening Pandora's box.

I will concede there are some elements of those problems, but I'm not as sure that this is either a significant aspect of the risk or uncontrollable. Who could build a hydrogen bomb, a new biological agent of mass destruction, or some AI that will kill us all in their basement, without that activity setting off some alarms via material purchases or energy consumption?

If the metaphor used to frame the question is not fairly accurate, how will that influence the conclusion?

Replies from: jeff-ladish
comment by Jeffrey Ladish (jeff-ladish) · 2020-02-07T09:32:22.134Z · LW(p) · GW(p)
First, I am not at all sure history shows international coordination has ever done anything about limiting war.

I think there's a decent case that the Peace of Westphalia is an example of this. It wasn't strong centralized coordination, but it was a case of major powers getting together and engineering a peace that lasted for a long time. I agree that both the League of Nations and the UN have not been successful at the large-scale peacekeeping that their founders hoped for. I do think there are some arguments that the post-WWII US + allies prevented large scale wars. Obviously nuclear deterrence was a big part of that, but it doesn't seem like the only part. I wouldn't call this a big win for explicit international cooperation, but it is an example of a kind of prevention. I recognize that the kind of coordination I'm calling for is unprecedented, and it's unclear whether it's possible.

What I like about the urn metaphor is the recognition that the process is ongoing and it's very hard to model the effects of technologies before we invent them. It's very simplified, but it illustrates that particular point well. We don't know what innovation might lead to an intelligence explosion. We don't know if existentially-threatening biotech is possible, and if so what that might look like. I think the metaphor doesn't capture the whole landscape of existential threats, but does illustrate one class of them.

comment by Jeffrey Ladish (jeff-ladish) · 2020-02-03T21:56:58.989Z · LW(p) · GW(p)

I didn't really write this in "lesswrong style", but I think it's still appropriate to put this here. There are a number of assumptions implicit in this post that I don't spell out, but plan to with future posts.

comment by Donald Hobson (donald-hobson) · 2020-02-04T17:32:22.963Z · LW(p) · GW(p)

Consider these 5 states

1) FAI

2) UFAI

3) Tech progress fails. No one is doing tech research.

4) We coordinate to avoid UFAI, and don't know how to make FAI.

5) No coordination to avoid UFAI, no one has made one yet. (State we are currently in)

In the first 3 scenarios, humanity won't be wiped out by some other tech. If we can coordinate around AI, I would suspect that we would manage to coordinate around other black balls. (AI tech seems unusually hard to coordinate around, as we don't know where the dangerous regions are, tech near the dangerous regions is likely to be very profitable, and it is an object entirely of information, thus easily copied and hidden.) In state 5, it is possible for some other black ball to wipe out humanity.

So conditional on some black ball tech other than UFAI wiping out humanity, the most likely scenario is that it came sooner than UFAI could. I would be surprised if humanity stayed in state 5 for the next 100 years. (I would be most worried about grey goo here)

The other thread of possibility is that humanity coordinates around stopping UFAI being developed, and then gets wiped out by something else. This requires an impressive amount of coordination. It also requires that FAI isn't developed (or is stopped by the coordination to avoid UFAI). Given this happens, I would expect that humans had got better at coordinating, that people who cared about X-risk were in positions of power, and that standards and precedents had been set. Anything that wipes out a humanity that well coordinated would have to be really hard to coordinate around.

Replies from: jeff-ladish
comment by Jeffrey Ladish (jeff-ladish) · 2020-02-07T09:19:39.354Z · LW(p) · GW(p)

This sounds roughly right to me. There is the FAI/UFAI threshold of technological development, and after humanity passes that threshold, it's unlikely that coordination will be a key bottleneck in humanity's future. I think many would disagree with this take, who think multi-polar worlds are more likely and that AGI systems may not cooperate well, but I think the view is roughly correct.

The main thing I'm pointing at in my post is 5) and 3)-transition-to-5). It seems quite possible to me that SAI will be out of reach for a while due to hardware development slowing, and that the application of other technologies could threaten humanity in the meantime.

comment by RedMan · 2020-02-04T09:08:23.352Z · LW(p) · GW(p)

If the current statistic of one Chernobyl/Fukushima/Mayak-level disaster every fifty years holds, we already drew a black ball.

If business as usual with carbon dioxide pollution continues unabated until earth is uninhabitable in 500 years, we also already drew a black ball.

If the time it takes for a black ball to kill us is more than a few generations it's really hard to plan around fixing it.

Replies from: donald-hobson, jeff-ladish
comment by Donald Hobson (donald-hobson) · 2020-02-04T17:45:38.567Z · LW(p) · GW(p)

Quote from wikipedia on fukushima

Deaths: 1 cancer death attributed to radiation exposure by government panel.[4][5]
Non-fatal injuries: 16 with physical injuries due to hydrogen explosions,[6] 2 workers taken to hospital with possible radiation burns[7]

I think this puts the incident squarely in the class of minor accidents that the media had a panic about. Unless you think it had a 50 % chance of wiping out japan and we were just lucky, it is irrelevant to the discussion of X-risk.

With CO2, it depends what you mean by business as usual. We don't have 500 years of fossil fuels left, and we are already switching to renewables. I don't think that the earth will become uninhabitable to technologically advanced human life. In a scenario where humans are using air conditioners and desalinators to survive the 80C Norwegian deserts, the world is still "habitable". (I don't think it will get that bad, but I think humans would survive if it did.)

If the time it takes for a black ball to kill us is more than a few generations it's really hard to plan around fixing it.

No, those are the ones that are really easy to plan around; you have plenty of time to fix them. It's the ones that kill you instantly that are hard to plan around.

comment by Jeffrey Ladish (jeff-ladish) · 2020-02-04T09:17:18.529Z · LW(p) · GW(p)

I'd be surprised if a chernobyl/fukushima/mayak level disaster every fifty years led to human extinction over 500 years. Why do you think that is the case?

Replies from: RedMan
comment by RedMan · 2020-02-21T01:35:56.118Z · LW(p) · GW(p)

Separate paragraphs, intended to be separate issues.

A 7 on the INES every fifty years means an accident that requires an exclusion zone and long term containment. The Chernobyl sarcophagus needs to be maintained, and the accident is not 'over'. Humans have committed to managing a problem (radioactive waste) that will be around longer than the human race has existed so far (current radwaste will remain a hazard 100,000 years into the future). We are doing fine so far; whether that holds remains to be seen.

I read somewhere that there is enough 'fossil carbon' that if all of it is burned, it will be enough to cause a runaway, Venus-like greenhouse effect that destroys the biosphere and renders the earth uninhabitable. The timeframe for this I saw is '500ish years'. Stephen Hawking said something similar and was panned for it: https://www.livescience.com/59693-could-earth-turn-into-venus.html

There's an anthropic bias here. 'We are not dead, so therefore we have not already drawn a black ball'. If we had, we would not be around to discuss it, so therefore, we are unlikely to ever be in a position where we look backwards and can say unambiguously 'yep, that was definitely a black ball, we are irreparably screwed'.