Comment by abukeki on Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment · 2021-12-13T16:56:27.915Z · LW · GW

MIRI gave a strategic explanation in their 2017 fundraiser post which I found very insightful, introducing the concept of the "acute risk period".

Comment by abukeki on I currently translate AGI-related texts to Russian. Is that useful? · 2021-11-27T22:21:59.595Z · LW · GW

Yes, but I think it might be much more useful for someone to do this for Chinese.

Comment by abukeki on First Strike and Second Strike · 2021-11-25T22:28:35.167Z · LW · GW

Those 3 new silo fields are the most visible, but I'd guess China is expanding the mobile arm of its land-based DF-41 force (TELs) by a similar amount; you just don't see that on satellite images. The infrastructure enabling Launch on Warning is also being implemented, which will make those silos much more survivable, though of course this also greatly increases the risk of accidental nuclear war. I'd argue that those silo fields are destabilizing, especially if China decides to deploy the majority of their land-based force that way, because even with a Launch on Warning posture there will be at least some use-it-or-lose-it pressure during a conflict, while the mobile and sea-based deterrents are stabilizing because they for the most part lack that issue. Similarly, hypersonic weapons, including the much-discussed recent tests, are stabilizing because they shatter US delusions of any protection offered by its BMD system, now and in the future. There are no practical differences from regular ICBM warheads besides the ability to better penetrate defenses; they're in fact slower.

The issue with China's current SSBN (the Type 094) is twofold: it is noisier, and the SLBM it carries has relatively short range, so it has to venture farther into the Pacific to hit much of the US mainland. Both factors make it more vulnerable to detection. The upcoming Type 096 solves this, both by being quieter and by allowing it to fire from a protected "bastion" in Chinese coastal waters.

I'm willing to bet the Pentagon's projection that China will have 700 warheads by 2027 and 1,000 by 2030 will be revised upward again next year, and some in the US military seem to agree with me. In light of this, I'd strongly suggest those in the community working on nuclear risks (e.g. Rethink) shift their main focus from the US-Russia scenario to China, especially with how hard everyone in the West is dying to go to war with China these days, haha.

Comment by abukeki on Postmodern Warfare · 2021-10-25T23:46:01.060Z · LW · GW

Can you give some examples of who in the "rationalist-adjacent spheres" are discussing it?

Comment by abukeki on Good AI alignment online class? · 2021-10-11T20:24:21.238Z · LW · GW

A bunch of links here and here.

Comment by abukeki on How to think about and deal with OpenAI · 2021-10-10T19:58:26.721Z · LW · GW

I'm aware. I'm just saying a new effort is still needed: judging by all his recent public comments on the topic and what he's trying to do with Neuralink etc., his thoughts on alignment/AI risk are still clearly very misguided, so someone really needs to reach out and set him straight.

Comment by abukeki on How to think about and deal with OpenAI · 2021-10-09T14:21:33.910Z · LW · GW

Agree that we should reach out to him, and the community is connected enough to do so. If he's concerned about AI risk but is either misguided or doing harm (see e.g. here/here and here), then someone should just... talk to him about it? The richest man in the world can do a lot either way. (Especially someone as addicted to launching things as he is; who knows what detrimental thing he might do next if we're not more proactive.)

I get the impression the folks at FLI are closest to him, so maybe they're the best ones to do that.