An AI Manhattan Project is Not Inevitable
post by Maxwell Tabarrok (maxwell-tabarrok) · 2024-07-06T16:42:35.920Z · LW · GW · 25 comments
This is a link post for https://www.maximum-progress.com/p/an-ai-manhattan-project-is-not-inevitable
Early last month, Leopold Aschenbrenner released a long essay and podcast outlining his projections for the future of AI. Both of these sources are full of interesting arguments and evidence; for a comprehensive summary, see Zvi’s post here. Rather than going point by point, I will instead accept the major premises of Leopold’s essay but contest some of his conclusions.
So what are the major premises of his piece?
- There will be several orders of magnitude increase in investment into AI. 100x more spending, 100x more compute, 100x more efficient algorithms, and an order of magnitude or two gains from some form of “learning by doing” or “unhobbling” on top.
- This investment scale up will be sufficient to achieve AGI. This means the models on the other side of the predicted compute scale up will be able to automate all cognitive jobs with vast scale and speed.
- These capabilities will be essential to international military competition.
All of these premises are believable to me and well-argued for in Leopold’s piece.
Leopold contends that these premises imply that the national security state will take over AI research and the major data centers, locking down national secrets in a race against China, akin to the Manhattan project.
“Ultimately, my main claim here is descriptive: whether we like it or not, superintelligence won’t look like an SF startup, and in some way will be primarily in the domain of national security.”
“By late 26/27/28 … the core AGI research team (a few hundred researchers) will move to a secure location; the trillion-dollar cluster will be built in record-speed; The Project will be on.”
The main problem is that Leopold’s premises apply just as well to other technologies, implying that they too would inevitably lead to a Manhattan project, yet those projects never arrived. Consider electricity. It is an incredibly powerful technology with rapid scale-up, sufficient to empower those who have it far beyond those who don’t, and it is essential to military competition. Every tank and missile, and all the tech to manufacture them, relies on electricity. But there was never a Manhattan project for this technology. Its initial invention and spread were private and decentralized. The current sources of production and use are mostly private.
This is true of most other technologies with military uses: explosives, steel, computing, the internet, etc. All of these technologies are essential to the government’s monopoly on violence and its ability to exert power over other nations and prevent coups from internal actors. But the government remains a mere customer of these technologies, and often not even the largest one.
Why is this? Large-scale nationalization is costly and unnecessary for maintaining national secrets and technological superiority. Electricity and jet engines are essential for B-2 bombers, but if you don’t have the particular engineers and blueprints, you can’t build one. So the government doesn’t need to worry about locking down the secrets of electricity production and sending all of the engineers to Los Alamos. It can keep the first several steps of the production process completely open and mix the outputs with a final few steps that are easier to keep secret.
To be clear, I am confident that governments and militaries will be extremely interested in AI. They will be important customers for many AI firms, they will create internal AI tools, and AI will become an important input into every major military. But this does not mean that most or all of the AI supply chain, from semiconductors to data centers to AI research, must be controlled by governments.
Nuclear weapons are outliers among weapons technology in terms of the proportion of the supply chain and final demand directly overseen by governments. Most military technologies rely on an open industrial base mixed with some secret knowledge in the final few production steps.
So should we expect AGI to be more like nuclear weapons or like a new form of industrial capacity? This depends on how much extra scaffolding you need on top of the base model computation that’s piped out of data centers to achieve militarily relevant goals.
Leopold’s unhobbling story supports a view where the intelligence produced by massive datacenters is more like the raw input of electricity, which needs to be combined with other materials and processes to make a weapon, than like a nuclear bomb, which is a weapon and only a weapon right out of the box.
Leopold on base models says:
“out of the box, they’re hobbled: they’re using their incredible internal representations merely to predict the next token in random internet text, rather than applying them in the best way to actually try to solve your problem.”
“On SWE-Bench (a benchmark of solving real-world software engineering tasks), GPT4 can only solve ~2% correctly, while with Devin’s agent scaffolding it jumps to 14-23%. (Unlocking agency is only in its infancy though)”
Devin can be a product without a proprietary model because it has a scaffold. Its makers can safely contract for the raw resource, pipe in a base model, and put it through a production process to get something uniquely tooled out the other end, without needing to in-house and lock down the base model to maintain a unique product.
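To make the division of labor concrete, here is a minimal sketch of that pattern, assuming a generic scaffold: the base model is rented intelligence behind an API (the call_base_model function is a hypothetical stand-in, not any vendor's actual interface), while the private context and tools that make the product distinctive stay on the customer's side.

```python
# A minimal sketch of the "intelligence on tap" pattern: the base model is a
# rented commodity behind an API, while the scarce ingredients (private
# context, proprietary tools, and the control loop) stay with the customer.
# call_base_model is a hypothetical stand-in, not a real vendor API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Scaffold:
    call_base_model: Callable[[str], str]      # generic intelligence, piped in
    private_context: str                       # the lockable, scarce ingredient
    tools: dict[str, Callable[[str], str]]     # proprietary integrations

    def run(self, task: str, max_steps: int = 5) -> str:
        transcript = f"Context:\n{self.private_context}\n\nTask: {task}"
        for _ in range(max_steps):
            reply = self.call_base_model(transcript)
            if reply.startswith("TOOL:"):      # e.g. "TOOL:search some query"
                name, _, arg = reply[5:].partition(" ")
                observation = self.tools.get(name, lambda a: "unknown tool")(arg)
                transcript += f"\n{reply}\nObservation: {observation}"
            else:
                return reply                   # final answer
        return "step limit reached"
```

The point of the sketch is only the separation of concerns: the model call is interchangeable, while the context and tools are the parts a customer, including a government, could lock down without touching the model supplier.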
Of current chatbots Leopold says:
“They’re mostly not personalized to you or your application (just a generic chatbot with a short prompt, rather than having all the relevant background on your company and your work)”
Context is that which is scarce! The national security state needn’t lock down the base models if the models are hobbled without the context of their secret applications. That context is something they’re already extremely skilled at locking down, and it doesn’t require enlisting an entire industry.
“In a few years, it will be clear that the AGI secrets are the United States’ most important national defense secrets—deserving treatment on par with B-21 bomber or Columbia-class submarine blueprints”
Leopold is predicting the nationalization of an entire industrial base based on an analogy to submarines and bombers, but a large fraction of the supply chain for these vehicles is private and open. It’s not clear why he thinks military applications of AGI can’t be similarly protected without control over the majority of the supply chain and final demand.
If you imagine AGI as this single, powerful oracle and army that can complete any and all tasks on command, then Leopold is right: governments will fight hard to lock everyone else out. If instead, AGI is a sort of “intelligence on tap” which is an input to thousands of different production processes where it’s mixed with different infrastructure, context, and tools to create lots of different products, then governments don’t need to control the entire industrial base producing this intelligence to keep their secrets. Leopold leans hard on the Manhattan project as a close analogy to the first situation, but most military technologies are in the second camp.
25 comments
comment by RogerDearnaley (roger-d-1) · 2024-07-06T22:23:30.254Z · LW(p) · GW(p)
Leopold's thesis is that there is, or soon will be, a race between the US and China. Certainly there has been talk about this in Washington. The question then is, if the US government wants to keep the Chinese government from having access to AGI tech, what's the hard step we can block them from reproducing? Cognitive scaffolding doesn't seem like a good candidate: so far it's been relatively cheap and easy (the AI startup I work for built pretty effective cognitive scaffolding with a few engineers working for a few months). Making the chips and building an O($100 billion) cluster of them to train the model is Leopold's candidate for that, and current US GPU and chip-making technology export controls suggest the US government already agrees. If that's the case, then an adversary stealing the weights to the AGI model we trained becomes a huge national security concern, so security precautions against that become vital, thus the government gets closely involved (perhaps starting by arranging that the ex-head of the NSA joins the board of one leading lab).
Leopold also thinks that the algorithmic efficiency advances, as divisors on the cost of the training cluster required, will become vital national security secrets too — thus the researchers whose heads they're in will become part of this secretive Manhattan-like project.
To me, this all sounds rather plausible, and distinctly different from the situation for other industries like electricity or aerospace or the Internet that you cite, which all lack such a "hard part" choke-point. The analogy with nuclear weapons, where the hard/monitorable part is the isotope enrichment, seems pretty valid to me. And there are at least early signs that the US government is taking AI seriously, and looking at the situation through the lenses of some combination of AI risk and international competition.
So I see Leopold's suggestion as coming in two parts: a) the military/intelligence services will get involved (which I see as inevitable, and for which I already see evidence), and b) the expense involved, the degree of secrecy required, and the degree of power that ASI represents mean their response will look more like nationalization followed by a Manhattan Project than like a more typical part of the military-industrial complex. That I view as also fairly plausible, but not indisputable. However, Leopold's essay has certainly moved the Overton Window on this.
comment by Vladimir_Nesov · 2024-07-06T17:29:24.263Z · LW(p) · GW(p)
I think the main reason governments may fail to take control (no comment on keeping it) is that TAI might be both the first effective wakeup call and the point when it's too late to take control. It can be too late if there is already too much proliferation, sufficient-if-not-optimal code and theory and models already widely available, sufficient compute to compete with potential government projects already abundant and impossible to sufficiently take down. So even if the first provider of TAI is taken down, in a year everyone has TAI, and the government fails to take sufficient advantage of its year of lead time to dissuade the rest of the world.
The alternative where government control is more plausible is first making a long-horizon task capable AI that can do many jobs, but can't itself do research or design AIs, and a little bit of further scaling or development isn't sufficient to get there. The economic impact then acts as a wakeup call, but the AI itself isn't yet a crucial advantage, can be somewhat safely used by all sides, and doesn't inevitably lead to ASI a few years later. At this point governments might get themselves a monopoly on serious compute, so that any TAI projects would need to go through them.
Replies from: Seth Herd, nathan-helm-burger
↑ comment by Seth Herd · 2024-07-09T05:42:21.880Z · LW(p) · GW(p)
I agree that this might happen too fast to develop a Manhattan project, but do you really see a way the government fails to even seize effective control of AGI once it's developed? It's pretty much their job to manage huge security concerns like the one AGI presents, even if they keep their hands off the immense economic potential. The scenarios in which the government just politely stands aside or doesn't notice even when they see human-level systems with their own eyes seem highly unlikely to me.
Seizing control of a project while it's taking off from roughly the human to superhuman level isn't as good as taking control of the compute to build it, but it's better than nothing, and it feels like the type of move governments often make. They don't even need to be public about it, just show up and say "hey, let's work together so nobody needs to discuss laws around sharing security-breaking technology with our enemies."
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2024-07-09T15:39:37.484Z · LW(p) · GW(p)
It depends on how much time there is between the first impactful demonstration of long-horizon task capabilities (doing many jobs) and commoditization of research capable TAI, even with governments waking up during this interval and working to extend it. It might be that by default this is already at least a few years [LW(p) · GW(p)], and if the bulk of compute is seized, it extends even further. This seems to require long-horizon task capabilities to be found at the limits of scaling, and TAI significantly further.
But we don't know until it's tried if even a $3 billion training run won't already enable long-horizon task capabilities (with appropriate post-training, even if it arrives a bit later), and we don't know if the first long-horizon task capable AI won't immediately be capable of research, with no need for further scaling (even if it helps). And if it's not immediately obvious how to elicit these capabilities with post-training, there will be an overhang of sufficient compute and sufficiently strong base models in many places before the alarm is sounded. If enough of such things align, there won't be time for anyone to prevent prompt commoditization of research capable TAI. And then there's ASI 1-2 years later, with the least possible time for anyone to steer any of this.
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-07-06T23:13:04.555Z · LW(p) · GW(p)
long-horizon task capable AI that can do many jobs, but can't itself do research or design AIs, and a little bit of further scaling or development isn't sufficient to get there
This seems like something very unlikely to be possible. You didn't say it was likely, but I think it's worth pointing out that I have a hard time imagining this existing. I think a long-horizon task capable AI with otherwise similar capabilities as current LLMs would be quite capable of researching and creating stronger AIs. I think we are very close indeed to the threshold of capability beyond which recursive improvement will be possible.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2024-07-07T00:21:04.740Z · LW(p) · GW(p)
The relevant distinction is between compute that proliferated before there were long-horizon task capable AIs, and compute that's necessary to train autonomous researcher AIs. A lot of compute might even be needed to maintain their ability to keep working on novel problems, since an AI trained on data that didn't include the very recent progress might be unable to make further progress, and continued training isn't necessarily helpful enough compared to full retraining, so that stolen weights would be relatively useless for getting researcher AIs to do deep work.
There are only 2-3 OOMs of compute scaling left to explore [LW · GW] if capabilities of AIs don't dramatically improve, and LLMs at current scale robustly fail at long-horizon tasks. If AIs don't become very useful at something, there won't be further OOMs until many years pass and there are larger datacenters, possibly well-tested scalable asynchronous distributed training algorithms, more energy-efficient AI accelerators, more efficient training, ways of generating more high quality data. Now imagine if long-horizon task capable AIs were developed just before or even during this regime of stalled scaling, it took more than a year, $100 billion, and 8 gigawatts to train one, and it's barely working well enough to unlock the extreme value of there being cheap and fast autonomous digital workers capable of routine jobs, going through long sequences of unreliable or meandering reasoning but eventually catching the systematic problems in a particular train of thought, recovering well enough to do their thing. And further scaling resulting from a new investment boom still fails to produce a researcher AI, as it might take another 2-3 OOMs and we are all out of AI accelerators and gigawatts for the time being.
In this scenario, which seems somewhat plausible, the governments both finally actually notice the astronomical power of AI, and have multiple years to get all large quantities of compute under control, so that the compute available for arbitrary non-government use gets somewhat lower than what it takes to train even a barely long-horizon task capable AI that's not at all a researcher. Research-capable TAI then by default won't appear in all these years, and after the transition to centralized control over compute is done, future progress towards such AI can only happen under government control.
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-07-11T07:21:23.256Z · LW(p) · GW(p)
By Leopold's detailed analysis of the ongoing rate of advance in training-run effective compute, ~40% is coming from increases in willingness to invest more money, ~20% from Moore's Law, and ~40% from algorithmic improvements. As you correctly point out, the current size of the economy (before TAI-caused growth spikes) provides a pretty clear upper bound on how long the first factor can continue, probably not very long after 2027. Moore's Law has fairly visibly been slowing for a while (admittedly perhaps less so for GPUs than CPUs, as they're more parallelizable): likely it will continue to gradually slow, at least until there is some major technological leap. Algorithmic improvements must eventually hit diminishing returns, but recent progress suggests to Leopold (and me) that there's still plenty of low-hanging fruit. If one or two of those three contributing factors stops dead in the next few years, any remaining AGI timeline at that point moves out by roughly a factor of two (unless the only one left is Moore's Law, in which case it moves out five-fold, but that seems the least plausible combination to me). So, for example, suppose Leopold is wrong about GPT-6 being AGI and it's actually GPT-7 (a fairly plausible inference from extrapolating on his own graph with straight lines and bands on it), so that at steady effective-compute growth rates we would hit that in 2029 rather than 2027 as he suggests, but we run out of willingness or capacity to invest more money in 2028. Then that factor-of-two slowdown only pushes AGI out a year, to 2030 (not a difference that anyone with a high P(DOOM) is going to be very relieved by).
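A quick back-of-the-envelope version of that arithmetic, as a sketch only: assume the three contributions are shares of the yearly growth in effective-compute OOMs, so that when a factor stops dead, the time needed to cover any remaining effective-compute gap stretches by the reciprocal of the shares that remain. The numbers are the rough shares quoted above, not Leopold's exact figures.

```python
# Sketch of the slowdown arithmetic (round shares, not Leopold's exact figures).
# Treat each factor as a share of the OOMs of effective compute gained per year;
# if some factors stop dead, covering a fixed remaining gap takes 1/remaining
# times as long.

shares = {"investment": 0.4, "moores_law": 0.2, "algorithms": 0.4}

def stretch_factor(stopped: set[str]) -> float:
    """How much the remaining timeline stretches once the named factors stop."""
    remaining = sum(v for k, v in shares.items() if k not in stopped)
    return 1.0 / remaining

print(stretch_factor({"investment"}))                  # ~1.7x: investment stalls
print(stretch_factor({"investment", "moores_law"}))    # 2.5x: only algorithms left
print(stretch_factor({"investment", "algorithms"}))    # 5.0x: only Moore's Law left
```

With these round shares, losing one or two factors stretches the remaining timeline by roughly 1.7x to 2.5x, which is where the "roughly a factor of two" above comes from, and the Moore's-Law-only case gives the five-fold figure.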
[I think seeing how much closer GPT-5 feels to AGI compared to GPT-4 may be very informative here: I'd hope to be able to get an impression fairly fast after it comes out on whether it feels like we're now half-way there, or only a third of the way. Of course, that won't include a couple of years' worth of scaffolding improvements or other things in the category Leopold calls "unhobbling", so our initial estimate may be an underestimate, but then we may also be underestimating difficulties around more complex/abstract but important things like long-term planning ability and the tradeoff-considerations involved in doing the experimental design part of the scientific method — the former is something that GPT-4 is so bad at that I rather suspect we're going to need an unhobbling here. Planning ability seems plausible as something evolution might have rather heavily optimized humans for.]
You appear to be assuming either that increasing investment is the only factor driving OOM increases in effective compute, or that all three factors will stop at the same time.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2024-07-11T16:04:29.212Z · LW(p) · GW(p)
The question is if research capable TAI can lag behind government-alarming long-horizon task capable AI (that does many jobs and so even Robin Hanson starts paying attention). These are two different thresholds that might both be called "AGI", so it's worth making a careful distinction. Even if it turns out that in practice they coincide and the same system becomes the first to qualify for both, for now we don't know if that's the case, and conceptually they are different.
If this lag is sufficient, governments might be able to succeed in locking down enough compute to prevent independent development of research capable TAI for many more years. This includes stopping or even reversing improvements in AI accelerators. If governments only become alarmed once there is a research capable TAI, that gives the other possibility [LW(p) · GW(p)] where TAI is developed by everyone very quickly and the opportunity to do it more carefully is lost.
Increasing investment is the crucial consideration in the sense that if research capable TAI is possible with modest investment, then there is no preventing its independent development. But if the necessary investment turns out to be sufficiently outrageous, controlling development of TAI by controlling hardware becomes feasible. Advancements in hardware are easy to control if most governments are alarmed, the supply chains are large, the datacenters are large. And algorithmic improvements have a sufficiently low ceiling to keep what would otherwise be $10 trillion training runs infeasible for independent actors even if done with better methods. The hypothetical I was describing [LW(p) · GW(p)] has research capable TAI 2-3 OOMs above the $100 billion necessary for long-horizon task capable AI, which as a barrier for feasibility can survive some algorithmic improvements.
I also think the improvements themselves are probably running out. There's only about 5x improvement in all these years for the dense transformer, a significant improvement from MoE, possibly some improvement from Mixture of Depths. All attention alternatives remain in the ballpark despite having very different architectures. Something significantly non-transformer-like is probably necessary to get more OOMs of algorithmic progress, which is also the case if LLMs can't be scaled to research capable TAI at all.
(Recent unusually fast improvement in hardware was mostly driven by moving to lower precision, first BF16, then FP8 with H100s, and now Microscaling (FP4, FP6) with Blackwell. This process is also at an end, lower-level hardware improvement will be slower. But unlike algorithmic improvements, this point is irrelevant to the argument, since improvement in hardware available to independent actors can be stopped or reversed by governments, unlike algorithmic improvements.)
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-07-11T22:37:36.614Z · LW(p) · GW(p)
I also think the improvements themselves are probably running out.
I disagree, though this is based on some guesswork (and Leopold's analysis, as a recently-ex-insider). I don't know exactly how they're doing it (improvements in training data filtering is probably part of it), but the foundation model companies have all been putting out models with lower inference costs and latencies for the same capability level (OpenAI: GPT-4 Turbo and GPT-4o vs. GPT-4; Anthropic: Claude 3.5 Sonnet vs. the Claude 3 generation; Google: Gemini 1.5 vs. 1). I am assuming that the reason for this performance improvement is that the newer models actually had lower parameter counts (which is supported by some rumored parameter count numbers), and I'm then also assuming that means these had lower total compute to train. (The latter assumption would be false for smaller models trained via distillation from a larger model, as some of the smaller Google models almost certainly are, or heavily overtrained by Chinchilla standards, as has recently become popular for models that are not the largest member of a model family.)
Things like the effectiveness of model pruning methods suggest that there are a lot of wasted parameters inside current models, which would suggest there's still a lot of room for performance improvements. The huge context lengths that foundation model companies are now advertising without huge cost differentials also rather suggest something architectural has happened there, which isn't just full attention quadratic-cost classical transformers. What combination of the techniques from the academic literature, or ones not in the academic literature, that's based on is unclear, but clearly something improved there.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2024-07-11T23:21:27.101Z · LW(p) · GW(p)
Algorithmic improvements relevant to my argument are those that happen after long-horizon task capable AIs are demonstrated, in particular it doesn't matter how much progress is happening now, other than as evidence about what happens later.
heavily overtrained by Chinchilla standards
This is necessarily part of it. It involves using more compute, not less, which is natural given that new training environments are getting online, and doesn't need any algorithmic improvements at all to produce models that are both cheaper for inference and smarter. You can take a Chinchilla optimal model, make it 3x smaller and train it on 9x data, expending 3x more compute, and get approximately the same result. If you up the compute and data a bit more, the model will become more capable. Some current improvements are probably due to better use of pre-training data, but these things won't survive significant further scaling intact. There are also improvements in post-training, but they are even less relevant to my argument, assuming they are not lagging behind too badly in unlocking the key thresholds of capability.
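For concreteness, a sketch of the arithmetic behind that claim, using the standard training-compute approximation C ≈ 6·N·D and the loss fit reported in the Chinchilla paper (Hoffmann et al. 2022); the parameter and token counts below are round illustrative numbers, not any particular model's.

```python
# Sketch of the overtraining arithmetic: a 3x smaller model trained on 9x the
# data uses 3x the training compute (C ~ 6*N*D) and, per the Chinchilla loss
# fit from Hoffmann et al. (2022), reaches a comparable loss while being ~3x
# cheaper to run at inference. N and D below are round illustrative numbers.

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28   # Chinchilla fit constants

def loss(N: float, D: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

def train_compute(N: float, D: float) -> float:
    """Approximate training FLOPs."""
    return 6 * N * D

N, D = 70e9, 1.4e12            # a roughly Chinchilla-optimal pairing
N2, D2 = N / 3, 9 * D          # 3x smaller model, 9x more data

print(train_compute(N2, D2) / train_compute(N, D))   # 3.0 -> 3x training compute
print(round(loss(N, D), 2), round(loss(N2, D2), 2))  # ~1.94 vs ~1.90 -> comparable
```

Under these fitted constants the smaller, overtrained model actually comes out slightly ahead on predicted loss, which is roughly what you would expect given that it used 3x the compute.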
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-07-12T00:46:09.529Z · LW(p) · GW(p)
Algorithmic improvements relevant to my argument are those that happen after long-horizon task capable AIs are demonstrated, in particular it doesn't matter how much progress is happening now, other than as evidence about what happens later
My apologies, you're right, I had misunderstood you, and thus we've been talking at cross-purposes. You were discussing
…if research capable TAI can lag behind government-alarming long-horizon task capable AI (that does many jobs and so even Robin Hanson starts paying attention)
while I was instead talking about how likely it was that running out of additional money to invest would slow reaching either of these forms of AGI (which I personally view as being likely to happen quite close together, as Leopold also assumes) by enough to make more than a year-or-two's difference.
comment by Seth Herd · 2024-07-09T05:36:32.872Z · LW(p) · GW(p)
It's not at all inevitable. I don't think Aschenbrenner painted it as inevitable. He was more arguing that the government should do this than that they'd agree with him or act in time.
Let me just point out that it doesn't need to be a Manhattan project for the government to take control of AGI if it's developed at an American company. Once the national security implications are clear, I don't think they'd even need to pass new laws or even executive orders. And there are lots of arrangements like "the government is going to help you with network security, in exchange for a seat at the table when decisions are made". Or alternately, they'll help with nothing and just threaten harsh consequences if their advice isn't followed in how this potential game-changer is used.
I see very little chance that governments don't seize control of AGI. The only question is when, which governments, and where they are relative to rival powers' development of AGI at that point.
comment by mako yass (MakoYass) · 2024-07-07T00:00:55.467Z · LW(p) · GW(p)
I think I agree that this is possible, and it's closely connected to the reasons I think making alignable ASI open source definitely wouldn't lead to egalitarian outcomes:
Artificial Superintelligence, lacking the equalizing human frailties of internal opacity, corruption (principal-agent problems), and senescence, is going to be even more prone to rich-get-richer monopoly effects than corporate agency was.
You can give your opponent last-gen ASI, but they will have only a fraction of the hardware to run it on, and only a fraction of the advanced manufacturing armatures. Being behind on the superexponential curve of recursively accelerating progress leaves them with roughly zero part of the pie.
(Remember that the megacorporations who advocate for open source AI in the name of democratization while holding enduring capital advantages in reserve all realize this.)
comment by RogerDearnaley (roger-d-1) · 2024-07-06T21:59:03.236Z · LW(p) · GW(p)
…there was never a Manhattan project for [electricity]…
This is true of most other technologies with military uses: explosives, steel, computing, the internet, etc.
The Internet started as a military-sponsored project, funded by DARPA (the Defense Advanced Research Projects Agency). It used to be called ARPANet, and .mil is a top-level domain, along with .edu and .gov. The actual work on it was mostly done by universities, with Defense funding. The original aim was to automate the routing of electronic messages through a network in a way that could survive massive damage, for example in the case of nuclear war. The resulting routing protocols proved quite resilient and flexible. But yes, at no point did it look anything like the Manhattan Project: it wasn't anything like that expensive or secretive.
I suspect a great deal of research on explosives was also conducted or sponsored by various militaries, but I'm less familiar with the history there. As for aerospace, that's a huge chunk of the military-industrial complex. But yes, again, they generally don't look much like the Manhattan Project.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2024-07-07T00:37:33.155Z · LW(p) · GW(p)
Humorously, in retrospect, it turns out we actually did need a much larger government project to realize the potential of the internet. Licklider's visions of new infrastructures for trade and collective intelligence were all totally realistic; those things could have been built, and the internet would have been very different and much more useful if the standards had been designed and the distributed database design problems had been solved, but no one did that work! There was no way to turn it into a business! So now the internet is mostly just two or three entertainment/advertising tubes.
comment by Anders Lindström (anders-lindstroem) · 2024-07-07T10:45:47.522Z · LW(p) · GW(p)
To be clear, I am confident that governments and militaries will be extremely interested in AI.
It makes perfect sense that it will turn into a Manhattan project, and it probably (p>0.999999...) already has. The idea that the government, military, and intelligence agencies have not yet received the memo about AI/AGI/ASI is beyond naive.
Just like the extreme advantages of being the first to develop a nuclear bomb, being the first to achieve AGI might carry the same EXTREME advantages.
↑ comment by Seth Herd · 2024-07-09T05:31:47.315Z · LW(p) · GW(p)
We're pretty sure it hasn't already become a Manhattan project because top-tier researchers haven't visibly left the public sphere. Also, the government really seems quite clueless about this. Congressional hearings have the appearance of everyone grappling with these ideas for the first time, and getting a lot of it quite wrong.
Replies from: anders-lindstroem
↑ comment by Anders Lindström (anders-lindstroem) · 2024-07-09T15:33:36.619Z · LW(p) · GW(p)
Well, how many in Congress and the Senate heard about the Manhattan Project?
"Keeping 120,000 people quiet would be impossible; therefore only a small privileged cadre of inner scientists and officials knew about the atomic bomb's development. In fact, Vice-President Truman had never heard of the Manhattan Project until he became President Truman."
https://www.ushistory.org/us/51f.asp
When it comes to the scientists, we have no idea if the work they do in "private" companies is part of a bigger government-led effort. Which would be the most efficient way, I suppose.
I don't really understand why some people seem to get so upset about the idea that the government/military is involved in developing cutting-edge technology. As if AI is something that governments/militaries are not allowed to touch? The military-industrial complex has been and will always be involved in these kinds of endeavors.
↑ comment by Seth Herd · 2024-07-09T21:16:03.441Z · LW(p) · GW(p)
These are good points. I fully agree that the military will be involved. I think it's almost inevitable that the government and national security apparatus will see the potential before AGI is smart enough to totally outmaneuver their attempts to take control. I still don't think they are very involved yet. Every possible sign says no. The addition of a former NSA director to OpenAI's board may signal the start of government involvement. But the board position is not actually a good spot to keep tabs on what's happening, let alone direct it (according to the breakdown of the last board incident). So I'd guess that's the first involvement. But you're correct that we can't know.
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-07-11T07:26:19.898Z · LW(p) · GW(p)
It's entirely clear from the Chinese government's actions and investments that they regard developing the capacity to make better GPUs for AI-training/inference purposes as a high priority. That doesn't make it clear that they're yet thinking seriously about AGI or ASI.
Replies from: gwern
↑ comment by gwern · 2024-07-11T22:58:47.775Z · LW(p) · GW(p)
I don't think that's clear at all. What investments have been made into GPUs specifically have been fairly minor, discussion at the state level has been general and as focused on other kinds of chips (eg. avoiding the Russian shortages) in order to gain general economic & military resilience to Taiwanese sanctions & ensure high tempo high-tech warfare, with chips being but one of many advanced technologies that Xi has designated as priorities (which means they're not really priorities) and the US GPU embargo has been as focused on sabotaging weapons development like hypersonic missiles as it is on AI (you can do other things with supercomputers, you know, and historically, that's what they have been doing).
↑ comment by RedMan · 2024-07-09T07:32:11.579Z · LW(p) · GW(p)
What extreme advantages were those? What nuclear age conquests are comparable to the era immediately before?
Replies from: anders-lindstroem
↑ comment by Anders Lindström (anders-lindstroem) · 2024-07-09T15:54:59.024Z · LW(p) · GW(p)
For starters, it could be used as a diplomatic tool with tremendous bargaining power, as well as a deterrent to anyone who wanted to challenge US post-war dominance in all fields.
Now imagine a machine that is better at solving any problem in all of science than all the smartest people and scientists in the world. Would not this machine give its owners EXTREME advantages in all things related to government/military/intelligence?!
↑ comment by RedMan · 2024-07-09T21:04:45.413Z · LW(p) · GW(p)
States that have nuclear weapons are generally less able to successfully make compellent threats than states that do not. Citation: https://uva.theopenscholar.com/todd-sechser/publications/militarized-compellent-threats-1918%E2%80%932001
The USA was the dominant industrial power in the post-war world; was this obvious and massive advantage 'extremely' enhanced by its possession of nuclear weapons? As a reminder, these weapons were not decisive (or even useful) in any of the wars the USA actually fought, and the USA has been repeatedly and continuously challenged by non-nuclear regional powers.
Sure, AI might provide an extreme advantage, but I'm not clear on why nuclear weapons do.
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-07-11T07:29:36.709Z · LW(p) · GW(p)
No one has ever seriously considered invading the US since 1945. The Viet Cong merely succeeded in making the Americans leave, once the cost to the Americans of continuing the war exceeded the loss of face from losing it. Likewise for the Afghans defeating the Russians.
However, I agree, nuclear weapons are in some sense a defensive technology, not an offensive one: the consequences (geopolitical and environmental) of using one are so bad that no one since WW2 has been willing to use one as part of a war of conquest, even when nuclear powers were fighting non-nuclear powers.
One strongly suspects that the same will not be true of ASI, and that it will unlock many technologies, offensive, defensive, and perhaps also persuasive, probably including some much more subtle than nuclear weapons (which are monumentally unsubtle).