[Linkpost] Hawkish nationalism vs international AI power and benefit sharing

post by jakub_krys (kryjak), Naci Cankaya · 2024-10-18T18:13:19.425Z · LW · GW · 5 comments

This is a link post for https://nacicankaya.substack.com/p/hawkish-nationalism-vs-international

TLDR: In response to Leopold Aschenbrenner’s ‘Situational Awareness’ and its accelerationist national ambitions, we argue against the claim that artificial superintelligence will inevitably be weaponised and turn its country of origin into an untouchable hegemon. Not only do we see this narrative as extremely dangerous, but we also expect that the grandest AI challenges call for global coordination between rivalling nations and companies. We lay out ways in which sharing the benefits of – and even some power over – the most capable AI systems can help to build positive-sum partnerships with the best chance of maximally good outcomes of the AI revolution. Finally, we present the multitude of challenges associated with such collaboration and discuss possible solutions and reasons for optimism.

 

We want to thank @Jan [LW · GW], @TheManxLoiner [LW · GW], @Jordan Taylor [LW · GW] and others who prefer to stay anonymous for their insightful feedback on an early draft of this article. It helped us identify and plug gaps in our knowledge and add some important considerations.
Feedback was given in a personal capacity and does not represent any company or organisation.

5 comments

Comments sorted by top scores.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-20T03:25:43.292Z · LW(p) · GW(p)

> In the scope of this article, we will consider Transformative Artificial Intelligence (TAI) according to a common (albeit vague) definition where TAI is ‘an AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution’. We will not consider the impact of artificial superintelligence (ASI), which achieves performance at a level far above humans. The enormous power of ASI means that social interactions, economic mechanisms and geopolitical dynamics are likely to be transformed in ways so significant that current forecasting is marred by uncertainty. In particular, if one actor obtains a ‘decisive strategic advantage’, the decisions made by that actor plausibly make the outcome very hard to predict. We believe that the relevance of this article increases with longer TAI timelines and longer transition periods between TAI and ASI. This is because slower development speeds increase the probability of multipolar outcomes and give more time for governance and collaboration to come into effect. While in a fast takeoff scenario the systemic and misuse risks will certainly be severe, the potential misalignment risks stemming from a hastily developed ASI might dwarf them.

I like your article; it has some pleasant forecasts and nice recommendations for what to do if our near future turns out to be serendipitously comfortable. It makes me feel good to read it and imagine such a future in store for us.

I also like that you are explicit about the fact that the recommendations in this article apply only to a particular subset of possible futures.

One where we get to transformative AI, but no further. Either humanity comes together on a global pause, or the technology does not allow for rapid advancement. The TAI in this possible future does not unlock the ability of defectors-from-the-pause to defeat the entire rest of the world combined.

Also, the possible weirdness is kept in check. Autonomous seafloor or subterranean robots don't kickstart a new industrial base. Local space isn't colonized by autonomous robotic probes. Humanity has immense world-changing power at its fingertips, and chooses to proceed slowly and carefully, in peaceful cooperation.

I'm working on a similar article, which aims to address a wider set of future possibilities. I described the path that you outlined here as "The Gentle Path". I'll cite your article as an example of this hopeful option.

I think someone more pessimistic than me might describe this article with an analogy along the lines of: "How we recommend arranging our new garden furniture after we win the lottery."

That would be a bit unfair. I'd give this space of future possibilities better than a 1:1000 chance. Still less than 1%, but worth keeping in mind.

Replies from: Naci Cankaya, kryjak
comment by Naci Cankaya · 2024-10-20T17:50:09.702Z · LW(p) · GW(p)

Hello Nathan, thank you for your comment! Really looking forward to your post and how it expands on the idea.

Regarding your estimates: I personally do not think that assigning probabilities to preferable outcomes is very useful. On the contrary, one can argue that the worldviews held by influential people can become self-fulfilling prophecies. That is especially applicable to prisoner's dilemmas: one can either believe the dilemma is inevitable and therefore choose to defect, or instead see the situation itself as the problem, not the other prisoner. That was the point we were trying to make.

Rather than suggesting what to do under the assumption that things turn out well, we were arguing for a shift in perspective and for measures that could help increase the chances of a good outcome from the advent of TAI. Fixing the game, instead of accepting the rules and hoping for the best.

In the disclaimer, when we said that the relevance of the article increases with longer timelines, we did not mean to narrow its applicability to a scenario of an agreed-upon AI pause. The purpose was mainly to avoid appearing overconfident in our assumptions, which may yet be proven wrong by the technology's unpredictable impacts.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-21T20:46:44.849Z · LW(p) · GW(p)

Thanks Naci, that's helpful clarification. An active call to change the odds, rather than a passive hoping that things will go well, does seem like a more robust plan.

comment by jakub_krys (kryjak) · 2024-10-20T18:24:37.851Z · LW(p) · GW(p)

Thanks for the comments, I'm looking forward to reading your article. Is 'The Gentle Path' a reference to 'The Narrow Path' or just a naming coincidence?

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-10-25T17:29:07.841Z · LW(p) · GW(p)

Both are talking about paths, so in that sense it's a correlated coincidence. I wrote my piece before 'The Narrow Path' was published, but since there has recently been a rash of essays about what humanity should do about AI, I've been rewriting to try to take some of these other viewpoints into account.