Issues with uneven AI resource distribution

post by User_Luke · 2022-12-24T01:18:00.295Z · LW · GW · 9 comments

This is a link post for https://temporal.substack.com/p/issues-with-uneven-ai-resource-distribution

Contents

  Uneven resource distribution:
    International security and political economy competition:
  Geography of AI innovation:
  Security and information hazards
  First strike to restrain AGI
  Summary:

Uneven resource distribution:

The uneven distribution of the resources needed to produce and use AI in a state-based system is a long-term challenge to developing international AI policy and raises international security risks. 

Resources include the skills, knowledge, compute, industry, people, education, and other factors of production used to build or develop AI systems, as well as access to, and the ability to use, AI systems themselves. 

This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focussing on the endpoint is the assumption that it would create an event horizon where the state that develops AGI achieves runaway power over its rivals economically, culturally and militarily. But many points before this could be equally valid depending on circumstances within the international system. 

The lack of resource distribution has a twofold problem:

  • The desire to access the economic and military benefits of AI will drive competition between states. Even if the benefits of AI development were evenly distributed, the places holding the greater share of AI resources will accrue disproportionate power over other geographies, particularly as AI moves toward the level of general intelligence. 

  • The resources needed to develop and use AI cannot be evenly distributed in a zero-sum international system with layers of economic and security competition mixed with technical and research disparities. Attempting to evenly distribute resources may even have a negative impact on the development of AI if it makes research and innovation less effective. 

International security and political economy competition:

There are some basic tenets to the concept of international security, drawing from the wider field of international relations. 

Briefly: 

Given its anarchic structure, the international system is often seen as adversarial by nature.

The analysis of the balance of power by states feeds into the way they develop military capabilities. These will depend on a state's industrial and technological capabilities, as well as those of its allies.

States are also in economic competition with each other. Each state has a competing national system of innovation, and therefore competencies, specialisms and comparative advantages are unevenly spread. The dual-use nature of AI will see competition for its development across both the economic and security domains.

Under a state-based system, individual states will seek to maintain a state of readiness based on the perceived threats from other actors. Even with international cooperation around the development of AI, many states may at the very least want to attain a latent capacity and industrial-skills base to develop AI systems for their own security. A related concept is nuclear latency.

These factors are drivers of AI innovation and competition, as well as of AI proliferation through commercial means. 

The development and diffusion of technologies like AI have become more salient topics in recent years through a focus on great power competition. This changes the relationship between state and private power. For example, banning the export of computer chips may make for good security policy, but conflicts with economic interests. It is extremely difficult to imagine a situation where a nuclear power agrees to share all the resources and know-how needed to build a nuclear weapon with a state it regards as an adversary. Understanding why AI resource distribution would be different from existing international security concerns, such as nuclear weapons, under the current adversarial structure of the global system should be a priority. 

Geography of AI innovation:

Research and innovation are often place-based, with geographical concentrations of connected businesses and organisations known as clusters. This is true of AI, where top-end research is clustered across a small number of places. These clusters of expertise are not evenly spread across the world, nor are the benefits they produce. This is particularly noticeable at the high end of the field. While the number of people with AI skills continues to grow globally, the most advanced research takes place within a select few universities and companies. It is possible that the development of ever more powerful AI systems would increase the relative power of these clusters.

One caveat to this argument is that the nature of science and innovation could radically change into something more distributed and decentralised. Factors that favour this include the increased use of open source, distributed teams and platforms such as GitHub, and decentralised organisations and compute, such as decentralised autonomous organisations. However, states will have an incentive to limit the way this takes place, in a way that skews the benefits toward them as a geographic entity. For example, they could enforce legal mechanisms over the ownership of research, ban exports and limit access to people and skills. Therefore, there is a good chance that the concentration of AI researchers and research clusters will not spread more evenly over time, even while the adoption and diffusion of AI increases. 

Security and information hazards

The closer actors get to the goal of AGI, the less willing they may be to share information about the development of AI, lest it cause an economic or security risk. In addition to this, there are information hazards around sharing AI progress. 

Bostrom defines an information hazard as: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” The act of sharing information can constitute a security threat, hence the need for secrecy around certain forms of government information. Sharing AI resources may enable some agents to cause harm or increase risk to other participants in the international system. Therefore, states nearing the capacity for AGI will have an imperative to restrict the amount of information they are willing to share with agents they deem likely to cause harm. This is likely to increase just at the point at which the need to share information to lessen any negative impacts of AGI also increases. 

First strike to restrain AGI

If state A achieves AGI, and an adversary, state B, thinks it will give state A an advantage which it won't be able to overcome, then it is logical for state B to prevent this outcome or be subject to a world order dominated by state A.

If state B has no chance of achieving AGI, then the closer A gets to achieving AGI, the more it makes sense for B to build up its military capacity. This would allow state B to strike state A, or to attempt to offset its AGI advantage through non-AGI means. 

Conversely, there could be a need for state A to hide the development of AGI from state B, to prevent state B from pre-emptively striking if it fears state A has developed AGI, or even just developed AI that gives it an advantage state B cannot catch up with. This reduced transparency may reduce the overall capacity to prevent negative outcomes of AGI.

Without pre-agreed mechanisms to share AI resources, discontinuous and rapid advances could shorten any window of time that states could use to assess the relative security implications of AI advances. It could also make differential technology development impossible.
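
As a minimal sketch of the logic above: the toy expected-utility comparison below shows how state B's choice between waiting and striking pre-emptively can flip as its estimate of state A reaching AGI rises. All payoffs, probabilities and function names here are illustrative assumptions made for this post, not estimates of any real capability.

```python
# Toy expected-utility sketch of state B's first-strike decision (illustrative only).
# All numbers are assumed, arbitrary payoffs; nothing here is an empirical estimate.

def value_of_waiting(p_agi: float, dominated: float, status_quo: float) -> float:
    """Expected value to state B of not striking, given its estimate that A reaches AGI first."""
    return p_agi * dominated + (1 - p_agi) * status_quo

def value_of_striking(war_cost: float, success_prob: float, status_quo: float, dominated: float) -> float:
    """Expected value to state B of a pre-emptive strike that succeeds with some probability."""
    return success_prob * (status_quo - war_cost) + (1 - success_prob) * (dominated - war_cost)

if __name__ == "__main__":
    # Assumed payoffs: being dominated is very costly, war has a fixed cost, a strike may fail.
    dominated, status_quo, war_cost, success_prob = -100.0, 0.0, 30.0, 0.6
    for p_agi in (0.1, 0.3, 0.5, 0.7, 0.9):
        wait = value_of_waiting(p_agi, dominated, status_quo)
        strike = value_of_striking(war_cost, success_prob, status_quo, dominated)
        choice = "strike" if strike > wait else "wait"
        print(f"P(A reaches AGI)={p_agi:.1f}  wait={wait:7.1f}  strike={strike:7.1f}  -> {choice}")
```

Under these made-up numbers the strike option becomes attractive only once state B's estimate of A reaching AGI is high, which is the point the post makes about windows of time and the value of pre-agreed sharing mechanisms.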

Summary:

AI development in a state-based system is a zero-sum game. Therefore, cooperation without a fair distribution of AI resources is likely to result in one party owning, or at least controlling access to, AI resources within a certain geographical boundary - most likely a state, given the chance that the current state-based system continues for the foreseeable future. I see this mostly continuing to be the case if 1) the global system is competitive by nature, 2) there remains a security dilemma, and 3) unbridgeable differences exist between states in the international system (Washington vs Beijing consensus, etc.). 

AI resources tend to be geographically clustered and therefore unevenly distributed. Along a path toward AGI, those with access to these resources are likely to accumulate the most power within the international system. 

Working toward AGI without a pre-agreed distribution of AI resources will be a destabilising event for a global order consisting of a state-based system of governance. The only counterpoint I can think of to this is along the lines of the Waltz argument that all states should have nuclear weapons because it will reduce the risk of them being used, but this defaults back to increasing risks around information hazards and the misuse of AI. 

States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first strike capability. This could increase non-AGI risks.

(Small edit to clarify risk of AGI ruin)


Posted for the defunct Future Fund Worldview Prize. Crossposted from: https://temporal.substack.com/

9 comments


comment by jimrandomh · 2022-12-24T01:20:38.969Z · LW(p) · GW(p)

Not everyone agrees with the AGI Ruin [LW · GW] arguments. That's fine. But... I think in order to write a post like this, you need to at least be aware of the argument that AI will kill literally everyone. Because if you accept that thesis, or if you assign it even moderate probability, then arguing for equal distribution of AI capabilities seems like a dumb idea, since this increases the number of actors who may accidentally destroy the world.

Replies from: User_Luke
comment by User_Luke · 2022-12-24T09:56:38.013Z · LW(p) · GW(p)

I've covered that, did you read it? 

"The lack of resource distribution has a twofold problem:

  • There is a need for agreement on the distribution of AI resources. However, a wider diffusion of AI could increase the risk of misuse leading to a possible reduction in diffusion."

Your argument is that only certain states should develop AGI, and while that makes sense on the one hand, you're not accounting for how others will react to the non-diffusion of AI. I'm not arguing for the wider distribution of AI; rather, I'm pointing out how others will react to being disadvantaged. Which is also a dumb idea, since it will just cause increased competition between states and less regulation at the international level, therefore increasing the risks from AGI and AI in general. 

Replies from: jimrandomh
comment by jimrandomh · 2022-12-24T10:41:36.737Z · LW(p) · GW(p)

The AGI Ruin argument isn't about misuse. The problem isn't misuse, it's uncontrollability. Basically, each incremental step along the way towards AGI benefits whoever builds it, except that after a certain point AGI takes a treacherous turn and kills literally everyone. So the claim isn't that only certain states should develop AGI, it's that no one should develop AGI (until alignment research is considerably further along).

Replies from: User_Luke
comment by User_Luke · 2022-12-24T11:00:27.049Z · LW(p) · GW(p)

I'll take the point about misuse not being clear, and I've made a 3 word edit to the text to cover your point. 

However, I do also state prior to this that: 

"This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI." 


If anything, your post above bolsters my argument. If states do not share resources, they'll be in competition with each other to work toward AGI, and everything before it, creating risks. If they do share resources, they create risks. However, it is logical for disadvantaged states to increase other security risks, through restoring a balance of power, in response to the AGI ones. If state A manages to nail alignment research and develop AGI, my framework is still valid: state B may respond defensively if it doesn't have access to resources. 

comment by [deleted] · 2023-01-09T18:46:05.417Z · LW(p) · GW(p)

Note that the scenario you mentioned is unrealistic. A state with the resources to develop AGI ahead of others has vast technical and economic resources. There are only 3 powers that have a reasonable hope of this and probably only ever will be prior to successful AGI.

States with vast technical and economic resources can and do afford vast armies and a nuclear arsenal. A conventional attack is not feasible - it would rapidly turn into mutually assured destruction.

I don't know how AI progress will be destabilizing because it looks like endgame geopolitics if AGI is controllable. By endgame I mean a single winner could likely conquer the rest by force. And might be compelled to do so.

Replies from: User_Luke
comment by User_Luke · 2023-01-10T15:23:58.433Z · LW(p) · GW(p)

The scenario only needs two states in competition with each other to work. The entire Cold War and its associated nuclear risks were driven by a bipolar world order. Therefore, by your own metric of three powers capable of this, the scenario is realistic. By three powers, I am assuming you mean China, the US and the UK? Or were you perhaps thinking of China, the US and the EU? The latter doesn't have nuclear weapons because it doesn't have an army, unless you were including the French nuclear arsenal in your calculation?

"By endgame I mean a single winner could likely conquer the rest by force. And might be compelled to do so." How would a winner conquer the rest by force if my scenario is unrealistic because of mutually assured destruction? 

It would seem as if a part of your line of thinking is just reiterating my entire post. If there is disagreement about a single state ruling over everyone via the development of AGI, then other states will respond in kind; it is a classic security dilemma. States would seek to increase their security, even if MAD prevents certain forms of response. My final paragraph summarises this: "States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first strike capability. This could increase non-AGI risks." 

Replies from: None
comment by [deleted] · 2023-01-10T16:59:19.927Z · LW(p) · GW(p)

By three powers, I am assuming you mean China the US and the UK? Or were you perhaps thinking of China, the US and the EU? 

The latter

"States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first strike capability. This could increase non-AGI risks." 

It wouldn't make any difference.  That was my point.  Buying more ICBMs that humans still have to hand build is a loser move if your rivals can make the counter weapons exponentially.  Missile defense is possible if you have something like a 10-100 times advantage in resources over your opponent.

A major power could start developing AGI and robotic infrastructure.  They would not be deterred by anyone else building conventional arms.   Once they begin to hit the exponential ramp phase they could gain a runaway advantage.  Theoretically, with lunar manufacturing, or just truly using the full resources of a major land mass (the entire land mass completely covered in underground factories), the one country could have orders of magnitude, thousands or millions of times, the effective weapons-manufacturing throughput of the rest of the world combined.

Conquering the planet is a relatively simple problem if you have exponential resources.  You just need an overwhelming number of defense weapons for every nuclear delivery vehicle in the hands of the combination of everyone else, and you need a sufficient number of robotic occupying troops to occupy all the land and I guess suicide drones to eliminate all the conventional armies, all at once.

My other thought is that suppose one power gets that kind of exponential advantage.  Do they attack or let their rivals clone the tech and catch up?

In scenarios where the advantage is large but temporary, I am not sure there is a correct move other than attack.  By letting the rivals catch up you are choosing to let their values possibly control the future.  And their values include a bunch of things you don't like.

This is true from the perspective of all 3 powers, though the US/EU are close enough in value space they might not attack each other.

Replies from: User_Luke
comment by User_Luke · 2023-01-10T18:31:30.849Z · LW(p) · GW(p)

A reminder that the post articulates that: 

"This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focussing on the end point is the assumption that it would create an event horizon where the state that develops AGI archives runaway power over its rivals economically, culturally and militarily. But many points before this could be equally as valid depending on circumstances within the international system."

This being said, I agree that you are entirely right that at some point after AGI the scale of advantage would become overwhelmingly large and possibly undefeatable, but that doesn't negate any of the points raised in the post. States will respond to the scenario, and doing so will increase non-AGI risks. 

"Conquering the planet is a relatively simple problem if you have exponential resources.  You just need an overwhelming number of defense weapons for every nuclear delivery vehicle in the hands of the combination of everyone else, and you need a sufficient number of robotic occupying troops to occupy all the land and I guess suicide drones to eliminate all the conventional armies, all at once.

This capacity is not likely to appear overnight, is it? And bearing in mind this could potentially mean having to defeat more than one state at the same time. During the interregnum between AGI and building up the capacity to do this, other states will react in their own self-interest. Yes, there is a possible scenario where a state pre-builds a certain capacity to overwhelm every other state once it achieves AGI, but in doing so it will create a security dilemma and other states will respond; this will also increase non-AGI risks. 

"I would assume in scenarios where the advantage is large but temporary I am not sure there is a correct move other than attack.  By letting the rivals catch up you are choosing to let their values possibly control the future.  And their values include a bunch of things you don't like."

Indeed, but the same logic also applies to states worried about being attacked. How do you think they'll respond to this threat? 

Replies from: None
comment by [deleted] · 2023-01-10T19:39:37.074Z · LW(p) · GW(p)

It's another world war no matter how you analyze it.

Even EY's proposal of a strong pivotal act - where some power builds a controllable AI (an 'aligned' one, though real alignment may be a set of methodical rules to define tightly how the system is trained and what its goals are) - and then acts to shut everyone else out from their own AI.

There's only one way to do that.