Thoughts on "The Offense-Defense Balance Rarely Changes"

post by Cullen (Cullen_OKeefe) · 2024-02-12T03:26:50.662Z · LW · GW · 4 comments


Comments sorted by top scores.

comment by Dagon · 2024-02-12T04:33:28.690Z · LW(p) · GW(p)

This (whether and how much AI can advance offensive tech before humans learn to defend against it) shouldn't be a huge crux, IMO. The offensive technology to make the planet unsuitable for current human civilization ALREADY exists - the defense so far has consisted of convincing people not to use it.

We just can't learn much from human-human conflict, where at almost any scale, the victor hopes to have a hospitable environment remaining afterward. We might be able to extrapolate from human-rainforest or human-buffalo or human-rat conflict, which doesn't really fit the offense/defense frame but is more a matter of resource competition vs. adaptability in shared environments.

Replies from: Cullen_OKeefe
comment by Cullen (Cullen_OKeefe) · 2024-02-12T17:58:11.166Z · LW(p) · GW(p)

Thanks!

The offensive technology to make the planet unsuitable for current human civilization ALREADY exists - the defense so far has consisted of convincing people not to use it.

I think this is true in the limit (assuming you're referring primarily to nukes). But I think offense-defense reasoning is still very relevant here: for example, to know when/how much to worry about AIs using nuclear technology to cause human extinction, you would want to ask under what circumstances humans can defend command and control of nuclear weapons from AIs that want to seize them.

We just can't learn much from human-human conflict, where at almost any scale, the victor hopes to have a hospitable environment remaining afterward.

I agree that the calculus changes dramatically if you assume that the AI does not need or want the earth to remain habitable by humans. I also agree that, in the limit, interspecies interactions are plausibly a better model than human-human conflicts. But I don't agree that either of these implies that offense-defense logic is totally irrelevant.

Humans, as incumbents, inherently occupy the position of defenders as against the misaligned AIs in these scenarios, at least if we're aware of the conflict (which I grant we might not be). The worry is that AIs will try to gain control in certain ways. Offense-defense thinking is important if we ask questions like:

  1. Can we predict how AIs might try to seize control? I.e., what does control consist in from their perspective, and how might they achieve it given the parties' starting positions?
  2. If we have some model of how AIs try to seize control, what does that imply about humanity's ability to defend itself?
comment by Ben (ben-lang) · 2024-02-13T13:13:32.573Z · LW(p) · GW(p)

Really interesting post. One detail I thought might be off:

"Conflicts that have killed more than 20 people per 100,000 appear to have become steadily more common since 1600. This is what you’d predict if there was a long-run increase in the destructiveness ... of military technology"

I would draw a different conclusion. I think it's related to the fact that four political units now control 30% of the landmass. I imagine that any given war fought in the 1600s was just as destructive for the states involved in that war, but by 1914 a war involves much more of the planet. For example, in the 1600s there was the English Civil War; Google says that 4.5% of the UK population was killed. But that was not particularly significant to global deaths by war, because it was just the UK. The world wars, however, each involved a sizable fraction of the entire planet, so even with a death rate for the states involved much lower than 4.5%, the world was noticeably depopulated. I think the main shift is from a world where each area fights destructive local wars in an uncorrelated way to one where everyone does their fighting at the same time.
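To make the arithmetic behind this point concrete, here is a minimal back-of-the-envelope sketch. The 4.5% figure is the one quoted in the comment above; everything else (the population shares, the 2% world-war death rate) is a round illustrative assumption, not a historical estimate.

```python
def global_death_share(belligerent_share, belligerent_death_rate):
    """Deaths as a fraction of *world* population, given the share of the
    world living in the warring states and the death rate inside them."""
    return belligerent_share * belligerent_death_rate


# A 17th-century local war: one mid-sized state (assumed ~1% of world
# population) loses 4.5% of its people (the figure quoted above).
local = global_death_share(belligerent_share=0.01, belligerent_death_rate=0.045)

# A 20th-century world war: states holding an assumed ~50% of world population
# each lose a *smaller* fraction of their people, say 2% (illustrative only).
world_war = global_death_share(belligerent_share=0.50, belligerent_death_rate=0.02)

print(f"local war: {local * 100_000:.0f} deaths per 100,000 of world population")
print(f"world war: {world_war * 100_000:.0f} deaths per 100,000 of world population")
# local war: 45 deaths per 100,000 of world population
# world war: 1000 deaths per 100,000 of world population
```

The takeaway is just that deaths as a share of world population scale linearly with how much of the world's population is at war at once, so the same (or a lower) per-state death rate produces far larger global figures when wars are world-spanning rather than local.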

Replies from: Cullen_OKeefe
comment by Cullen (Cullen_OKeefe) · 2024-02-13T15:33:46.627Z · LW(p) · GW(p)

Good point!