Posts

Nuclear Espionage and AI Governance 2021-10-04T23:04:14.253Z

Comments

Comment by Guive (GAA) on Habryka's Shortform Feed · 2024-12-08T00:19:00.381Z · LW · GW

This is good. Please consider making it a top level post. 

Comment by Guive (GAA) on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-11-12T00:56:48.943Z · LW · GW

Not the main point here, but the US was not the only country with nuclear weapons during the Korean War. The Soviet Union tested its first nuclear weapon on 29 August 1949, and the Korean War began on 25 June 1950.

Comment by Guive (GAA) on The case for turning glowfic into Sequences · 2022-04-27T23:47:59.995Z · LW · GW

Perhaps this is a stupid suggestion, but if trolls in the comments annoy him, can he post somewhere where no comments are allowed? You can turn off comments on WordPress, for example.

Comment by Guive (GAA) on Nuclear Espionage and AI Governance · 2021-10-05T13:08:00.937Z · LW · GW

Here is an unpaywalled version of the first model.

Also, it seems like there's a bit of a contradiction between the idea that a clear leader may feel it has breathing room to work on safety, and the idea of restricting information about the state of play. If there were secrecy and no effective spying, then how would you know whether you were the leader? Without information about what the other side was actually up to, the conservative assumption would be that they were at least as far along as you were, so you should make the minimum supportable investment in safety, and at the same time consider dramatic "outside the game" actions.

In the first model, the effect of a close race increasing risk through corner cutting only happens when projects know how they are doing relative to their competitors. I think it is useful to distinguish two different kinds of secrecy: a project's achievements can be secret, its techniques can be secret, or both. In the Manhattan Project case, the existence of the project and the techniques for building nuclear bombs were both secret. But you can easily imagine an AI arms race where techniques are secret but the existence of competing projects, and their general level of capabilities, is not. In such a situation you can know the size of leads without espionage. Adding espionage could then decrease the size of leads and increase enmity, making a bad situation worse.

I think the "outside the game" criticism is interesting. I'm not sure whether it is correct or not, and I'm not sure if these models should be modified to account for it, but I will think about it.

I've seen private sector actors get pretty incensed about industrial espionage... but I'm not sure it changed their actual level of competition very much. On the government side, there's a whole ritual of talking about being upset when you find a spy, but it seems like it's basically just that.

I don't think it's fair to say that governments getting upset about spies is just talk. Governments assume they are being spied on most of the time, so when they find spies they have already priced in, they don't really react. But discovering a hitherto unsuspected spy in an especially sensitive role probably increases enmity a lot (though the amount will vary based on the nature of the government doing the discovering, the strategic situation, and the details of the case).

Comment by Guive (GAA) on [Book Review] "The Vital Question" by Nick Lane · 2021-09-28T21:46:42.450Z · LW · GW

Thanks for this review. I particularly appreciated the explanation of why the transition from primordial soup to cell is hard to explain. Do you know how Lane's book has been received by other biochemists?