post by [deleted]

Comments sorted by top scores.

comment by AnthonyC · 2025-02-05T14:43:45.924Z

I very much agree with the value of not expecting a silver bullet, not accelerating arms race dynamics, fostering cooperation, and recognizing the ways in which AGI realism represents a stark break from the impacts of typical technological advances. The kind of world you're describing is a possibility, maybe a strong one, and we don't want to repeat arrogant past mistakes or get caught flat-footed.

That said, I think this chain of logic hinges on just what "…at least for a while" means in practice, yes? If one side has enough of an AI lead to increase its general technological advantage over adversaries by what would be centuries of effort at the adversaries' then-current capability levels, that's very different from the leader being only a few minutes or months ahead. We should be planning for many eventualities, but as long as the former scenario is a possibility, I'm not sure how we can plan for it effectively without also trying to be first. As you note, technological advantage has rarely been necessary or sufficient, but not never. I don't like it one bit, but I'm not sure what to actually do about it.

The reason I say that is that if AGI -> ASI really does turn out to be very fast and to enable extremely rapid technological advancement, I'm not sure how the rest of the dynamics end up playing a role in that timeline. In that world, military action against an adversary could easily look like: "Every attempt anyone else makes to increase their own AI capabilities any further gets pre-emptively and remotely shut down, or just mysteriously fails. If the ASI decides to act offensively, then near-immediately their every government and military official simultaneously falls unconscious, while every weapon system, vehicle, or computer they have is inoperable or no longer under their control. They no longer have a functioning electric grid or other infrastructure, either." In such a world, the political will to wage war no longer centers on a need to expend money, time, or lives. There's nothing Homo habilis can do to take down an F-35.

Again, I agree with you that no one should just assume the world will look like that to the exclusion of other paths. But if we want to avoid arms race dynamics, and that world is a plausible path, I don't think any proposed approach I've seen or heard of works convincingly enough that it could or should sway government and military strategy.

Replies from: [deleted]
comment by [deleted] · 2025-02-05T16:12:18.738Z

I think this is very fair! In a world where (i) AGI -> ASI is super fast; (ii) the military diffusion of ASI is exceptionally quick; and (iii) the marginal cost of scaling offensive capability is extremely low, any sense of a limited/total war distinction does indeed break down, and ASI will be the defining factor of military capability much, much sooner than we'd expect.

I think I'm instinctively sceptical of (iii) for at least a couple of years after the advent of ASI, though (the critical juncture for this strategy). In that period, I think the modal outcome still looks like ASIs engaging in routine cyberoperations all the time, being autonomously responsible for handling aerial warfare, and being fundamental to military operations/planning - but it still being really costly to engage in a total war scenario aimed at completely crippling a state such as China. Those costs could play out as the need to engineer tons of drones/UAVs, the extremely costly development of a superweapon, the need to secure every datacentre, etc. Within the period where we have to reckon with the effects of ASI, my guess is that the modal war - even with China - is still more a function of commitment than military advantage (which makes AGI realist rhetoric a risk amplifier).

I wouldn't say I'm hugely confident here, though, and I definitely don't feel very calibrated on just how likely this world is - the one where the rapid diffusion of ASI also means a very low marginal cost of scaling offensive capabilities. Frankly, in that world, I don't think we avoid war at all unless there happen to be strong norms and sentiments against this kind of deployment. I guess the "maximise our ability to deploy ASI offensively" approach makes sense if the strategy is "we must win the eventual war with China", built on relatively high credences that we're in this rapid-diffusion, low-marginal-cost world. But given the uncertainty about whether we're in this world; the potentially catastrophic consequences of war; and the fact that maintaining a competitive advantage isn't mutually exclusive with also attempting strong norm-forming against war - the AGI realist rhetoric still makes me uneasy.

But I at least share the sense that no other proposed approach seems great. I'm just conscious that not enough people in the relevant circles are even thinking about other approaches, because they've already bought into a frame I think will only worsen the chances of catastrophe.

Replies from: AnthonyC
comment by AnthonyC · 2025-02-05T21:21:14.570Z

I'd say I agree with just about all of that, and I'm glad to see it laid out so clearly!

I just also wouldn't be hugely surprised if it turns out that something like designing and building remote-controllable, self-replicating, globally-deployable nanotech (as one example) is in some sense fundamentally "easy" for even an early ASI or modestly superhuman AGI. Say that's the case: we build a few units for the ASI, and then we distribute them across the world in a matter of weeks. They do what controlled self-replicating nanobots do. Then after a few months the ASI already has an off switch or sleep-mode button buried in everyone's brain. My guess is that then none of those hard steps of a war with China come into play.

To be clear, I don't think this story is likely. But in a broad sense, I am generally of the opinion that most people greatly overestimate how much new data we need to answer new questions or create (some kinds of) new things, and underestimate what can be done with clever use of existing data, even among humans, let alone as we approach the limits of cleverness. 

comment by noggin-scratcher · 2025-02-02T21:32:52.883Z

Looks like #6 in the TL;DRs section is accidentally duplicated (with the repeat numbered as #7).

Replies from: [deleted]
comment by [deleted] · 2025-02-03T06:09:12.234Z

Thank you!