The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating
post by Corin Katzke (corin-katzke), GideonF · 2025-01-21T16:57:00.998Z · LW · GW · 6 comments
This is a link post for https://www.convergenceanalysis.org/research/the-manhattan-trap-why-a-race-to-artificial-superintelligence-is-self-defeating
comment by t14n (tommy-nguyen-1) · 2025-01-22T01:09:02.131Z · LW(p) · GW(p)
re: 1b (development likely impossible to keep secret given scale required)
I'm reminded of Dylan Patel's (SemiAnalysis) comments on a recent episode of the Dwarkesh Podcast, which go something like:
if you're Xi Jinping and you're scaling pilled, you can just centralize all the compute and build all the substations for it. You can just hide it inside one of the factories you already have that's drawing power for steel production and re-purpose it as a data center.
Given the success we've seen in training SOTA models with constrained GPU resources (DeepSeek), I don't think it's far-fetched to think you could hide bleeding-edge development. It turns out all you need is a few hundred of the smartest people in your country and a few thousand GPUs.
Hrm...sounds like the size of the Manhattan Project.
comment by cousin_it · 2025-01-22T08:49:06.505Z · LW(p) · GW(p)
An ASI project would be highly distinguishable from civilian AI applications and not integrated with a state’s economy
Why? I think there's a smooth ramp from economically useful AI to superintelligence: AIs gradually become better at many tasks, and these tasks help more and more with improving AI in turn.
comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-22T13:45:14.825Z · LW(p) · GW(p)
This argument neglects the option of racing-with-plausible-deniability. I would argue that both the US and China are already doing this. We haven't gone to war yet.
Is 'Stargate' not racing?
Many have argued that nationalizing the major AI companies would substantially slow down progress because of bureaucratic overhead, reorganization costs, loss of immigrant personnel, and stifling of creativity.
If this were my working model of the world, and I wanted to help the US win the race I might:
Invest in on-shoring the means of production for the AI vertical (in progress)
Invest in expanding supporting infrastructure (in progress)
Place plausibly state-disconnected but trusted people in positions of power, like board seats (some done, some in progress).
Award lucrative military contracts for not-explicitly-offensive-purposes. (Some done, probably more in progress).
Bring the heads of the major labs to the head of government for secret meetings, and establish close working relationships (done and ongoing).
Place secret operatives and surveillance tech within the key companies (unobservable unless caught).
Place restrictions on export of key materials and technological information (done and ongoing).
Create a national government org explicitly to monitor progress in AI (done).
Purchase large amounts of computer equipment under false pretenses, hide it in secret facilities, and refuse to discuss the location or purpose if confronted. (Done.) https://www.lesswrong.com/posts/grvJay8Cv3TBhXz3a/secret-us-natsec-project-with-intel-revealed [LW · GW]
Seems like a lot of evidence in favor of a plausibly deniable soft-nationalization race. Can you present any counter-evidence?
comment by momom2 (amaury-lorin) · 2025-01-21T22:19:09.868Z · LW(p) · GW(p)
Contra 2:
ASI might provide a strategic advantage of a kind which doesn't negatively impact the losers of the race, e.g. it increases GDP by x10 and locks competitors out of having an ASI.
Then, losing the race would lock the US out of ASI without posing an existential risk to it.
I think it's quite likely this is what some policymakers have in mind: some sort of innovation which will make everything better for the country by providing a lot of cheap labor and generally improving productivity, the way we see AI applications do right now but on a bigger scale.
Comment on 3:
Not sure who your target audience is; I assume it would be policymakers, in which case I'm not sure how much weight that kind of argument carries. I'm not a US citizen, but from international news I got the impression that current US officials would rather relish the option to undermine the liberal democracy they purport to defend.
comment by Convolutions (adam-nelles-boulton) · 2025-01-22T06:54:30.860Z · LW(p) · GW(p)
Very informative piece that does a lot in the right direction. Articles like this can have a real impact on policy by demonstrating that "there be dragons".
A criticism would be that it doesn't account for the state of the board in reality: the trust dilemma fails when domestic commercial incentives overwhelm international cooperative concerns, which collapses the situation into a prisoner's dilemma, unfortunately. I hope there are trust-based solutions and I'm mistaken.
comment by Corin Katzke (corin-katzke) · 2025-01-21T17:34:36.144Z · LW(p) · GW(p)
Note: coauthored by Gideon Futerman.