Great Power Conflict

post by Zach Stein-Perlman · 2021-09-17T15:00:17.039Z · LW · GW · 6 comments

Contents

  I. Effects
  II. Causes
  III. Risk factors

Crossposted from the EA Forum [EA · GW].

No longer endorsed.

Imagine it's 2030 or 2040 and there's a catastrophic great power conflict. What caused it? Probably AI and emerging technology, directly or indirectly. But how?

I've found almost nothing written on this. In particular, the relevant 80K and EA Forum [? · GW] pages don't seem to have relevant links. If you know of work on how AI might cause great power conflict, please let me know. For now, I'll start brainstorming. Specifically:

  1. How could great power conflict affect the long-term future? (I am very uncertain.)
  2. What could cause great power conflict? (I list some possible scenarios.[1])
  3. What factors increase the risk of those scenarios? (I list some plausible factors.)

Epistemic status: brainstorm; not sure about framing or details.

 

I. Effects

Alternative formulations are encouraged; thinking about risks from different perspectives can help highlight different aspects of those risks. But here's how I think of this risk:

Emerging technology enables one or more powerful actors (presumably states) to produce civilization-devastating harms, and they do so (either because they are incentivized to or because their decisionmaking processes fail to respond to their incentives).[2]

Significant (in expectation) effects of great power conflict on the long-term future include human extinction, civilizational collapse, and shifts in relative power. Human extinction would be bad. Civilizational collapse would be prima facie bad, but its long-term consequences are very unclear. Effects on relative power are difficult to evaluate in advance. Overall, the long-term consequences of great power conflict are difficult to evaluate because it is unclear what technological progress and AI safety look like in a post-collapse world or in a post-conflict, no-collapse world.

Current military capabilities don't seem [LW · GW] to pose a direct existential risk. More concerning for the long-term future are future military technologies and side effects of conflict, such as on AI development.

 

II. Causes

How could AI and the technology it enables lead to great power conflict? Here are the scenarios that I imagine, for great powers called "Albania" and "Botswana":

 

III. Risk factors

Great power conflict is generally bad, and we can list high-level scenarios to avoid, such as those in the previous section. But what can we do more specifically to prevent great power conflict?

Off the top of my head, risk factors for the above scenarios include:

It also matters what and how regular people and political elites think about AI and emerging technology. Spreading better memes may be generally more tractable than reducing the risk factors above, because it's pulling the rope sideways, although the benefits of better memes are limited.

 

Finally, the same forces from emerging technology, international relations, and beliefs and modes of thinking about AI that affect great power conflict will also affect:

Interventions affecting the probability and nature of great power conflict will also have implications for these variables.

 

Please comment on what should be added or changed, and please alert me to any relevant sources you've found useful. Thanks!


  1. My analysis is abstract. Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here. ↩︎

  2. Adapted from Nick Bostrom's Vulnerable World Hypothesis, section "Type-2a." My definition includes scenarios in which a single actor chooses to devastate civilization; while this may not technically be great power conflict, I believe it is sufficiently similar that its inclusion is analytically prudent. ↩︎

  3. Eliezer Yudkowsky's Cognitive Biases Potentially Affecting Judgment of Global Risks. ↩︎

  4. Future weapons will likely be on hair trigger for the same reasons that nukes have been: swifter second strike capabilities could help states counterattack and thus defend themselves better in some circumstances, it makes others less likely to attack since the decision to use hair trigger is somewhat transparent, and there is emotional/psychological/political pressure to take them down with us. ↩︎

  5. Currently the world doesn't include large, powerful groups, coordinated at the state level, that totally despise and want to destroy each other. If it ever does, devastation occurs by default. ↩︎

  6. Another potential desideratum is differential technological progress [? · GW]. Avoiding military development is infeasible to do unilaterally, but perhaps we can avoid some particularly dangerous capabilities or do multilateral arms control. Unfortunately, this is unlikely: avoiding certain technologies is costly because you don't know what you'll find, and effective multilateral arms control is really hard. ↩︎

6 comments

Comments sorted by top scores.

comment by steven0461 · 2021-09-17T20:14:02.460Z · LW(p) · GW(p)

Preemptive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win—or accepting a large chance of devastation rather than simply letting Botswana get ahead—Albania attacks preemptively.

FWIW, many distinguish between preemptive and preventive war, where the scenario you described falls under "preventive", and "preemptive" implies an imminent attack from the other side.

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2021-09-17T20:17:53.775Z · LW(p) · GW(p)

Ha, I took intro IR last semester so I should have caught this. Fixed, thanks.

comment by ChristianKl · 2021-09-18T22:04:54.615Z · LW(p) · GW(p)

There's no clear line between war and peace. We live in a world that's already in constant cyberwar. AI is deployed in this existing cyberwar and likely will be more so in the future.

It's unclear how strongly the individual actors are controlled by their respective governments. Arkhipov's submarine didn't get attacked because anyone up the chain ordered it. Attribution of attacks is hard.

The countries that are players are all different, so you lose insight when you talk about Albania and Botswana instead of the real players.

Given Russia tolerating all the ransomware attacks being launched from their soil, it could be that one US president says "Enough, if Russia doesn't do anything against attacks from their soil on the West, let's decriminalize hacking Russian targets".

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2021-09-18T23:00:42.529Z · LW(p) · GW(p)

Thanks for your comment.

It's unclear how strongly the individual actors are controlled by their respective governments.

Good point. If I understand right, this is an additional risk factor: there's a risk of violence that neither state wants due to imperfect internal coordination, and this risk generally increases with international tension, number of humans in a position to choose to act hostile or attack, general confusion, and perhaps the speed at which conflict occurs. Please let me know if you were thinking something else.

The countries that are players are all different, so you lose insight when you talk about Albania and Botswana instead of the real players.

Of course. I did acknowledge this: "Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here." I think we can usefully think about conflict without considering specific states. Focusing on, say, US-China conflict might obscure more general conclusions.

Given Russia tolerating all the ransomware attacks being launched from their soil, it could be that one US president says "Enough, if Russia doesn't do anything against attacks from their soil on the West, let's decriminalize hacking Russian targets".

Hmm, I haven't heard this suggested before. This would greatly surprise me (indeed, I'm not familiar with domestic or international law for cyber stuff, but I would be surprised to learn that US criminal law was the thing stopping cyberattacks on Russian organizations from US hackers or organizations). And I'm not sure how this would change the conflict landscape.

Replies from: ChristianKl
comment by ChristianKl · 2021-09-19T07:59:59.300Z · LW(p) · GW(p)

Speaking about states wanting things obscures a lot. 

I expect there's a good chance that Microsoft, Amazon, Facebook, Google, IBM, Cisco, Palantir, and maybe a few other private entities have strong offensive capabilities.

Then there are a bunch of different three-letter agencies that likely have offensive capabilities as well.

This would greatly surprise me (indeed, I'm not familiar with domestic or international law for cyber stuff, but I would be surprised to learn that US criminal law was the thing stopping cyberattacks on Russian organizations from US hackers or organizations)

The US government of course hacks Russian targets, but sophisticated private actors won't simply attack Russia and demand ransom be paid to them. There are plenty of people who currently mainly do penetration testing for companies, and who are very capable at actually attacking, who might consider it worthwhile to attack Russian targets for money if that were possible without legal repercussions.

US-government-sponsored attacks aren't about causing damage in the way attacks targeted at getting ransom are.

And I'm not sure how this would change the conflict landscape.

It would get more serious private players involved in attacking who are outside of government control. Take someone like https://www.fortalicesolutions.com/services . Are those people currently going to attack Russian targets outside of retaliation? Likely not.

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2021-09-19T12:00:50.339Z · LW(p) · GW(p)

Oh, interesting.

Speaking about states wanting things obscures a lot.

So I assume you would frame states as less agenty and frame the source of conflict as decentralized — arising from the complex interactions of many humans, which are less predictable than "what states want" but still predictably affected by factors like bilateral tension/hostility, general chaos, and various technologies in various ways?

comment by lukeprog · 2021-09-17T19:59:40.840Z · LW(p) · GW(p)