Artificial Intelligence as exit strategy from the age of acute existential risk
post by Arturo Macias (arturo-macias) · 2023-04-12T14:48:35.771Z · LW · GW · 15 comments
This article argues that, given the baseline level of existential risk implied by nuclear weapons, the development of Artificial Intelligence (AI) probably implies a net reduction in existential risk. So-called Artificial General Intelligence (AGI) could replace the human political system and solve the worst alignment problem: the one that human groups have with respect to each other.
The Age of Existential Risk
If we had to describe our historical moment in a few words, not from the perspective of years or decades but from that of our existence as a species, it should be called the age of acute existential risk.
In the last two hundred years, Humanity has experienced an immense expansion of its material capabilities that has intensified its ecological domination and has taken us out of the Malthusian demographic regime in which all other living species are trapped.
On August 6th, 1945, with the first use of a nuclear weapon on a real target, Humanity became aware that its material capabilities now encompassed the possibility of self-extinction. The following decades saw a steady increase in the destructive capacity of nuclear arsenals and several incidents where an escalation of political tension or a technical failure threatened to bring down the sword of Damocles.
An important feature of nuclear war is that it is a funnel for many other sub-existential risks. Financial, ecological, and geopolitical crises, while threatening neither human civilization nor its survival, substantially increase the risk of war, and wars can escalate into a nuclear exchange. Absent the possibility of nuclear war, the risks of a more populous, hotter world with growing problems of political legitimacy are partly mitigated by technology and economic interconnection. But the risk of nuclear war amplifies the other purely historical and environmental risks and turns them into existential risks.
If the nuclear war risk does not decline over time, an accident, whether technical or political, will happen sooner or later. Each of these 77 years after Hiroshima and Nagasaki is a miracle and a tribute to human reason and self-restraint. But without an exit strategy, sustained levels of nuclear war risk doom our technological and post-Malthusian civilization to be an ephemeral phenomenon.
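As a back-of-the-envelope illustration of this compounding (a sketch in which the annual probabilities are placeholders chosen for the arithmetic, not estimates): if the annual probability of nuclear war stays constant at p, the probability of getting through n years without a war is (1 − p)^n.

```python
# Illustrative only: the annual probabilities below are placeholders, not estimates.
def survival_probability(annual_war_probability: float, years: int) -> float:
    """Probability of passing `years` years with no nuclear war, assuming a
    constant and independent annual probability of war."""
    return (1 - annual_war_probability) ** years

for p in (0.005, 0.01, 0.02):  # hypothetical annual probabilities
    print(f"p = {p:.3f}: 77 years -> {survival_probability(p, 77):.2f}, "
          f"200 years -> {survival_probability(p, 200):.2f}")
```

Even at half a percent per year, the chance of getting through two centuries unscathed is only about 37%; this is the arithmetic behind calling a civilization with a constant nuclear hazard rate ephemeral.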
In my opinion, we can classify nuclear war risk exit strategies into two types: i) organic stabilization and ii) technological deus ex machina.
Organic stabilization refers to a set of social processes, linked to human development, that naturally reduce the risk of nuclear war. In the first place, in industrial societies the activities with the highest added value are linked to human capital. Consequently, the incentives for conquest and war are drastically reduced in a world where wealth is made by work, education, and technology, compared to a world where land is the main source of wealth. Additionally, economic development implies a lower propensity for violence, both at the individual and at the group level. Economic interdependence and inter-elite permeability (which have increased over the last two centuries) are also a necessary condition for definitive pacification.
The other way out of acute existential risk is some technological deus ex machina. In my view, AI can be that game changer.
In the next section I outline my subjective view of how the risk of nuclear conflict has evolved during my lifetime, and in the following one I argue that the gains from organic stabilization have proven limited and fragile, and that it is necessary to promote every form of technological progress, and especially AI, to get out of this stage of acute existential risk as soon as possible.
Organic exit: nuclear war risk in my lifetime
I was born in 1977, and therefore I was six years old in the year of the highest risk of nuclear war in history: in 1983, the KGB launched the most extensive operation in Soviet intelligence history to assess the probability that NATO was preparing a preventive nuclear war; in September of that year the Petrov incident took place, and in November, Moscow seriously considered that the NATO winter maneuvers (Able Archer 83) were a covert preparation for an all-out war with the USSR. Nuclear tension lasted for several more years amid an intensification of the Cold War that included: i) the Strategic Defense Initiative, ii) the dismantling of Soviet technological intelligence (Vetrov's leak), iii) the subsequent sabotage of the Trans-Siberian gas pipeline, and iv) the shooting down of the Korean airliner KAL 007.
During the 1990s, after the end of the Socialist Bloc, nuclear war disappeared from the collective consciousness, although political instability and ultra-nationalist and neo-communist currents in Russia suggest that in the final decade of the twentieth century nuclear war risk remained high. With the election of Vladimir Putin, an apparently authoritarian modernizer with an agenda of internal consolidation, the Russian-origin nuclear war risk seemed to fade.
In parallel, from the mid-eighties China established a system of collective leadership, then successfully carried out a transition to a market economy, and finally the Communist Party appeared to open up to the homegrown moneyed class: in a few decades China seemed to have traveled the road from an absolute communist monarchy to a census-based bourgeois republic. Beyond the Middle East issues (which never implied nuclear risk), between 1991 and the mid-2010s the global trends were: i) universal economic convergence and interdependence, ii) consociational (not necessarily democratic) governance in the great powers, and iii) increasing permeability among the national elites of the different countries. For twenty-five years (those that go from my adolescence to my middle age) I saw the consolidation of a post-Malthusian, technocratic, and post-national world, where wealth depended above all on capital and technology. Of course, I was vaguely aware of what is obvious today: organic stabilization of existential risk is a soft solution to a hard problem, but everything looked so well behaved that Hegelian complacency (the opium of elites) looked like a sensible position.
The general trends thus described were real, and they are more structural than the post-Ukrainian shock leads us to believe. However, since the mid-2010s Xi Jinping has succeeded in replacing collective governance with his absolute power, and in Russia the "authoritarian modernizer" has become a "totalitarian warmonger". Additionally, the leaks in the hull of the nuclear non-proliferation regime have widened with the development of North Korea's nuclear arsenal and, probably in a few years, that of Iran.
The lesson of these decades of success is bleak: human institutions have an error margin clearly above what is tolerable for the risks of the Nuclear Age. Nuclear war can mean billions of deaths and the fall "into the abyss of a new Dark Age made more sinister, and perhaps more protracted, by the lights of perverted science" (Churchill's famous description of the consequences of a Nazi victory fits a post-nuclear world perfectly).
Even if we overcome the Russian-Ukrainian war, these years have shown that the probability of democratic regression is high even in developed countries. Only open societies have been able to generate the kind of international ties that can definitively lower international tensions. Autocracies may temporarily ally, but their elites are nationalized and do not have the systems of reciprocal social influence and commitment on which the "Pax Democratica" is based. Apart from their intrinsic flaws (which technology is going to sharpen), autocracies have no natural pathway for nuclear war risk reduction, and their resilience means that the trust that can be placed in organic social progress to overcome the era of acute existential risk is limited.
Of course, economic stabilization, institutional innovation, and democratizing and internationalist activism are not useless: they are the only way in which the vast majority of Humanity can participate in the task of surviving the nuclear age. The organic path is not totally impracticable even as a definitive solution (social science is developing and can offer new forecasting and governance mechanisms safe enough for a nuclear world). Furthermore, each day that we survive by opportunistic means is one more day of life, and one more day to find a definitive solution.
But looking back at these four and a half decades, and given the regression towards autocracy and international chaos in less than ten years, my opinion is that Nuclear War is among the most likely causes of death for a person my age in the Northern Hemisphere.
Deus ex machina: the technological exit
That is why it makes no sense to fear the great technological transformations ahead. Humanity is on the verge of a universal catastrophe, so in reality, accelerationism is the only prudent strategy.
I have serious doubts that Artificial General Intelligence (AGI) is close: cars are still unable to drive autonomously in big cities, and the spectacular results in robotics from Boston Dynamics are not yet being seen in civilian life or on the battlefield. It is very easy to point to major successes (like ChatGPT), but the failures are prominent as well.
An additional argument for considering AI risk remote is the Fermi Paradox. Unlike nuclear war, AI risk is not a Fermi Paradox explanation. If an alien civilization is destroyed by the development of AI, the genocidal AI would still be in place to expand (even faster than the original alien species) across the Universe. So, while nuclear war is a very likely alien killer, AI is only an alien replacer. The vast galactic silence we observe suggests a substantially higher nuclear risk than AI risk. Probably, the period between the first nuclear detonation and the development of AGI is simply too risky for the majority of intelligent species, and we have been extremely lucky so far (or we live in a low-probability Everett branch where 77 years of nuclear war risk have not materialized).
In any case, a super-human intelligence is the definitive governance tool: it would be capable of proposing social and political solutions superior to those that the human mind can develop, it would have a wide predictive superiority, and since it is not human it would not have particularistic incentives. All of this would give an AGI immense political legitimacy: a government-oriented AGI would give countries that follow its lead a decisive advantage. During the Cold War, Asimov already saw AI (see the short story "The Evitable Conflict") as a possible way to achieve a "convergence of systems" that would overcome the ideological confrontation through an ideal technocracy.
Despite fears about the alignment of interests between AI and Humanity, in reality what we know for sure is that the most intractable problem is the alignment among humans, and that problem with nuclear weapons is also existential. Technological progress has already given us the tools of our own destruction. The safest way out for Mankind is forward, upward and ever faster, because from this height, the fall is already deadly.
Apart from AGI, there are several technologies for nuclear war risk mitigation. Decentralized manufacturing and mini-nuclear power plants could lead to a world without large concentrations of population and with moderate economic interdependence, that is, without the large bottlenecks that would be the main military targets in the event of a strategic nuclear war. Cheap rockets (like those developed by SpaceX) could allow the development of anti-missile shields, leading to a viable Strategic Defense Initiative. Should the worst happen, artificial food synthesis could allow survival through a nuclear winter. This portfolio of nuclear resilience technologies should be developed in parallel with the paths of organic mitigation of existential risk. AI can accelerate them even if we do not succeed in producing the AGI that can solve the human alignment problem.
The risks that AGI implies for Humanity are serious, but they should not be assessed without considering that it is the most promising path out of the age of acute existential risk. Those who support a ban on this technology should at least propose their own alternative exit strategy.
In my view, we are already on the brink of destruction, so we should recklessly gamble for resurrection.
15 comments
Comments sorted by top scores.
comment by RHollerith (rhollerith_dot_com) · 2023-04-12T18:41:22.174Z · LW(p) · GW(p)
The public discourse on nuclear war is in a deplorable state.
For example, a large fraction of the participants in the discourse uncritically cite the "fact" that the nations of the world have enough nuclear weapons to kill every person on earth 5 times over. That "fact" is based on a calculation that assumes that the population of earth would obligingly arrange themselves, packed shoulder to shoulder, in circles of just the right size directly under the detonations, whereas of course in reality people are spread out over the earth (and inside buildings that offer enough protection that even some of the people right under the detonation of an air burst would survive, especially in dense urban areas). I.e., it is a useless calculation, but the conclusion of the calculation is repeated by many authors.
My guess (since you do not show any signs of realizing that the existential-riskiness of nuclear war needs defending or explaining) is that you have uncritically accepted the general public discourse on nuclear war. In reality, nuclear war is a very minor existential risk compared to continuing AI research. ("Existential risk" means a risk that the human population goes to zero.)
↑ comment by Arturo Macias (arturo-macias) · 2023-04-12T20:33:15.408Z · LW(p) · GW(p)
Well, after a complete NATO-Russia exchange, direct deaths would be in the tens of millions in the first week, and the electromagnetic pulse would leave power production systems and the majority of electronics destroyed.
On top of that, you have nuclear winter, which puts the deaths in the billions (see link in the text).
And then what social system is left? A second wave of wars would be inevitable, and inevitably nuclear.
↑ comment by RHollerith (rhollerith_dot_com) · 2023-04-13T00:12:35.041Z · LW(p) · GW(p)
And then what social system is left? A second wave of wars would be inevitable, and inevitably nuclear.
Ah, yes, societal collapse. Where was the societal collapse in Europe during WWI (which unluckily coincided with a flu pandemic that killed a lot of society's most productive members, namely people in their 20s) or WWII? Where was the societal collapse in China during its civil war (the 20th Century's third most deadly war) and then to top it off, during the civil war, Japan invaded China?
When you were writing this -- let us be specific: when you were writing, "Despite fears about the alignment of interests between AI and Humanity, in reality what we know for sure is that the most intractable problem is the alignment among humans, and that [the] problem with nuclear weapons is also existential," did you know that "existential risk" is usually used to mean risk of the human population becoming zero? I.e., literally no human left alive whatsoever? If so, you haven't explained how the nuclear war would bring that about.
Suppose a nuclear war kills 80% of the human population (which I consider barely possible, but at the extreme tail of the distribution of outcomes of a nuclear war, entailing some vulnerability in human civilization that I am probably currently completely unaware of). What is the mechanism by which all of the remaining 20% die? If for example there is widespread societal collapse (which again I consider very unlikely) how would that bring about the death of literally everyone? Why wouldn't for example some people survive as hunters, gatherers and small-scale farmers?
I think that there is a lot of uncertainty over the effects of EMP on electronics because electronics has changed drastically since the end of the Cold War, and when the Cold War ended, the number of very competent and committed people studying nuclear war decreased drastically. But even with this uncertainty, I think we can say that your "the majority of electronics destroyed" is very unlikely: the wikipedia page on nuclear EMP asserts that most vehicles and cell phones would survive nuclear EMP. The power grid I will concede will be damaged by EMP, but not as damaged as it will be by the other effects of the nuclear detonations (fires, overpressure strong enough to knock down most of the buildings). Again, how does that bring the human population of the world to zero? If the entire power grid became non-operational and stayed that way for 6 months or 3 years, what prevents the survivors of the nuclear war from temporarily adopting a social organization that feeds everyone and performs a few other socially-necessary functions until the electrical grid is restored? Or suppose that is unachievable, and only 50% of the survivors get enough food to survive (unlikely because for example the US has right now about 3 years worth of food mostly stored in grain elevators and intended to be fed to cows and pigs and such, but which could be diverted in an emergency for human use): how does that prevent the survivors of the starvation from painfully rebuilding post-industrial civilization?
I think that our civilization (and my country, the US) should drastically increase its efforts to prevent and to prepare for nuclear war. It's just the notion of turning to AGI research of all things to do so that I disagree with.
↑ comment by Arturo Macias (arturo-macias) · 2023-04-13T06:38:10.574Z · LW(p) · GW(p)
We would not be destroyed in the first large nuclear war (though the effects of radioactivity on food chains are, in my view, under-researched). But not a single open society would survive.
A world of Malthusian masses and a military aristocracy desperately trying to keep as much firepower as possible is the natural post-nuclear-war outcome. Then history happens again, and in 2,000 years, if we are lucky, we are back to deciding whether to allow AGI to be developed. What is the point?
The baseline is that our governance systems are completely inadequate for nuclear weapons, even in this fortunate age of hegemonic republics.
We need to solve the human alignment problem. Do you have any better suggestion than AGI?
↑ comment by RHollerith (rhollerith_dot_com) · 2023-04-13T15:01:34.951Z · LW(p) · GW(p)
I find it frustrating to correspond with you. You have become attached to an argument for what we should do. To support this argument, you send out many "soldiers": nuclear war is a potent existential risk; electromagnetic pulse would destroy most electronic devices; not a single open society would survive a nuclear war; nuclear war inevitably leads to more nuclear war; nuclear war will cause widespread societal collapse. And now we have a new soldier, namely, the effect of radiation on the food chain.
Each of these soldiers seems plausible if one's epistemology consists mostly in noticing how often something is repeated in the press and online. But I haven't seen a single attempt by you to support any of these assertions / soldiers. When I say that Wikipedia says that most vehicles and cellphones would continue to operate after the electromagnetic pulses of a nuclear war, you ignore that. I offer you an opening to explain why you believe that nuclear war will lead to societal collapse whereas WWI, WWII and the Chinese civil war did not; you decline to engage on that. I still do not know whether you accept the conventional definition of "existential risk" (even after I asked you a direct question): when you wrote that nuclear war is a potent existential risk, maybe you thought that the possibility that half of the human population might die constitutes an existential risk. I.e., maybe you have been using an unconventional definition of existential risk. Your readers (including me) still do not know.
If I continue corresponding with you, I expect you would send out a few more soldiers, but it takes me a lot more work to explain why a soldier does not in fact support your argument than it takes you to find the next soldier and to send it out.
Have you ever tried to learn about the effects of radiation on the food chain, e.g., by typing the phrase into a search engine and spending 5 minutes (as measured by an actual clock or timer) looking at the results? Science knows much about the subject. The radiation from an accident at a nuclear power plant is very different from the radiation from a nuclear weapon, so you'd have to be careful not to generalize from the first case to the second. (A much higher fraction of the radiation in the first case comes from long-half-life isotopes.)
↑ comment by Arturo Macias (arturo-macias) · 2023-04-14T06:26:53.734Z · LW(p) · GW(p)
Nuclear winter:
https://climate.envsci.rutgers.edu/pdf/WiresClimateChangeNW.pdf
Electromagnetic pulse would destroy most electronic devices:
https://doh.wa.gov/sites/default/files/legacy/Documents/Pubs/320-090_elecpuls_fs.pdf
"Commercial computer equipment is particularly vulnerable to EMP effects. Computers used in data processing systems, communications systems, displays, industrial control applications, including road and rail signaling, and those embedded in military equipment, such as signal processors, electronic flight controls and digital engine control systems, are all potentially vulnerable to the EMP effect. Other electronic devices and electrical equipment may also be destroyed by the EMP effect. Telecommunications equipment can be highly vulnerable and receivers of all varieties are particularly sensitive to EMP. Therefore radar and electronic warfare equipment, satellite, microwave, UHF, VHF, HF and low band communications equipment and television equipment are all potentially vulnerable to the EMP effect. Cars with electronic ignition systems/ and ignition chips are also vulnerable."
not a single open society would survive a nuclear war; nuclear war inevitably leads to more nuclear war; nuclear war will cause widespread societal collapse
Do you expect a paper? I have this one:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/
:-) After total economic disruption, no electricity nor electronics, and a few billion deaths... it's like the parachute randomized trial. Too obvious to be argued.
Now let's compare with peer-reviewed literature on AGI:
https://marginalrevolution.com/marginalrevolution/2023/04/from-the-comments-on-ai-safety.html
"The only peer-reviewed paper making the case for AI risk that I know of is: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064. Though note that my paper (the second you linked) is currently under review at a top ML conference."
comment by Seth Herd · 2023-04-12T15:08:11.107Z · LW(p) · GW(p)
I think this is a powerful point, and one that is raised too rarely. To bring it home, we need a decent estimate range of nuclear war risk per year, estimates of alignment risk reduction per year of alignment work, and a calculator spreadsheet to help our feeble monkey brains. Anyone?
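A minimal sketch of what such a calculator might look like (every number below is a placeholder to be replaced by one's own estimates, not a claim about the actual magnitude of either risk):

```python
def p_nuclear_war(annual_risk: float, years: float) -> float:
    """Cumulative probability of nuclear war over `years`, assuming a constant annual risk."""
    return 1 - (1 - annual_risk) ** years

def total_catastrophe_risk(annual_nuclear_risk: float,
                           years_until_agi: float,
                           p_agi_misaligned: float) -> float:
    """Probability of catastrophe: nuclear war before AGI arrives, or a misaligned
    AGI at deployment time (treated here as a single independent event)."""
    p_war = p_nuclear_war(annual_nuclear_risk, years_until_agi)
    return p_war + (1 - p_war) * p_agi_misaligned

# Placeholder comparison: rushing (AGI in 10 years, higher misalignment risk)
# versus waiting (AGI in 40 years, lower misalignment risk), at a 1%/year nuclear risk.
print(total_catastrophe_risk(0.01, years_until_agi=10, p_agi_misaligned=0.30))
print(total_catastrophe_risk(0.01, years_until_agi=40, p_agi_misaligned=0.10))
```

A real version would want distributions rather than point estimates, and a model of how alignment risk falls with each additional year of alignment work rather than two fixed scenarios.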
↑ comment by jkraybill · 2023-04-12T16:26:09.073Z · LW(p) · GW(p)
The Doomsday Clock is at 23:58:30, but maybe that's not what you meant? I think they were way off in the Cuban Missile Crisis era, but these days it seems more accurate and maybe more optimistic than I would give it. They do accommodate x-risk of various types.
comment by Arturo Macias (arturo-macias) · 2023-04-14T06:30:02.110Z · LW(p) · GW(p)
Dear all,
I will keep any remaining discussion on the EA Forum. That is the version of the article that was commented on in Marginal Revolution (point 6, second link):
https://marginalrevolution.com/marginalrevolution/2023/04/thursday-assorted-links-400.html
comment by Seth Herd · 2023-04-12T15:18:06.057Z · LW(p) · GW(p)
The other factor here is that our AGI risk choices could affect other intelligent species. If we create an unaligned maximizer, it's likely to wipe out everything in its light cone. To be fair, soft maximizers are looking more likely, and I don't know how much such a thing would spread. Nuclear war only gets most of the species on this planet. So making this point has always felt a bit species-centric to me.
There's also the possibility that a nuclear war wouldn't wipe out the human race. It seems to be unknown even by experts, I think. I'm thinking that building an AGI in the ashes of a civilization fallen to hubris might make our second round attempts more cautious.
I sure don't want to die and let everyone I know die when we could've tried to get it right and extend our lives indefinitely. But I realize I'm biased. I don't want to be so selfish as to kill an unimaginably large and perhaps bright future.
↑ comment by Arturo Macias (arturo-macias) · 2023-04-13T06:42:01.637Z · LW(p) · GW(p)
Why do you think AGI would necessarily be worse than us? I think we really don't know.
↑ comment by Seth Herd · 2023-04-13T07:10:17.385Z · LW(p) · GW(p)
If it wiped us out, it will probably wipe them out too.
↑ comment by Arturo Macias (arturo-macias) · 2023-04-13T07:43:54.016Z · LW(p) · GW(p)
But what is the probability that AGI wipes us out? Why would AGI be more aggressive than humans? Especially if we carefully nurture her to be our Queen!
↑ comment by Seth Herd · 2023-04-14T18:03:25.036Z · LW(p) · GW(p)
That's the alignment problem, the primary topic of this site. Opinions vary and arguments are plentiful. The general consensus is that there are tons of reasons it might wipe us out, with the informed average being something like a 50% estimate, and overconfidence usually comes from ignorance of those arguments. I won't try to restate them all, and I don't know of a place they're all collected, but they're all over this site.