Avoiding perpetual risk from TAI

post by scasper · 2022-12-26T22:34:48.565Z · LW · GW · 6 comments

Contents

  TL;DR
  Intro
  Five requirements for avoiding perpetual risk
      TAI regime = whatever set of actors controls TAI
      Exclusivity
      Benevolence
      Stability
      Alignment
      Resilience
  Each of these five may be difficult 
      Exclusivity
      Benevolence
      Stability
      Alignment
      Resilience
  What might this mean for AI safety work?
    Non-alignment aspects of AI safety are key.

Stephen Casper, scasper@mit.edu. Thanks to Rose Hadshar and Daniel Dewey for some discussions and feedback. 

The goal of this post is to sort through some questions involving the difficulty of avoiding perpetual risk from transformative AI. Feedback in the comments is welcome!

TL;DR

Getting AI to go well means that at some point in time, the acute period of risk posed by the onset of transformative AI must end. Ending that period will require establishing a regime for transformative AI that is exclusive, benevolent, stable, resilient, and successful at alignment. This post argues that this type of regime may be very difficult to establish and that this will largely be an AI governance problem. If so, this gives an argument for emphasizing work on AI safety challenges other than just alignment.

Intro

Often–at least in conversation–we talk about how we need “a solution” to AI safety. And maybe AI safety is a problem that has a once-and-for-all solution. If we can avoid doom, maybe we can use aligned AI to solve our key challenges, avoid all other X-risks, and set ourselves on a sustainable course for the future. This seems possible, and it would be great. Thinking in these terms dates back at least to Nick Bostrom’s Superintelligence. Bostrom discusses the possibility that in the same way that extinction is an attractor state, meeting our cosmic endowment may be as well. If we can get AI right just once, maybe that’s the last key challenge we need to solve.

However, reaching such a state would require a lot of things to go right. Establishing a regime that avoids perpetual risk will be complex, and it will involve far more than just figuring out how to align AI systems with our goals.

Five requirements for avoiding perpetual risk

Suppose that highly transformative AI (TAI) technologies are someday developed and that they are powerful enough to do catastrophically dangerous things. I will refer to whatever set of institutions controls the TAI and could cause catastrophic risks with it as the TAI regime. The regime could consist of human institutions, AI institutions, or both:

TAI regime = whatever set of actors controls TAI

To avoid perpetual AI risk, five things need to be true of the TAI regime, and to the extent that any are false, the probability of perpetual risk increases substantially. Note that these conditions only need to hold within our cosmic sphere of influence.

Exclusivity

There will almost certainly exist bad actors who would cause major risks if they could create and deploy TAI without being stopped at some point in the process. (The same is true of nuclear weapons, for example.) To avoid this, the regime needs to be effectively closed to those who would cause havoc if they joined it.

Benevolence

The TAI regime needs to be one that will try to lock in a stable and good future instead of pursuing more evil or myopic goals.

Stability

The regime needs to be stable over time and not be overthrown or degenerate into an unaligned one.  

Alignment

The TAI regime needs to solve outer alignment and inner alignment sufficiently well to ensure that its TAI does not itself cause major risks.

Resilience 

The regime also needs the ability to respond to and survive disasters if/when risks become reality.

Each of these five may be difficult 

Exclusivity

Developing TAI may offer substantial first-mover advantages that give the first members of the regime a great deal of influence. And there is some precedent: certain sectors, such as the search engine industry, are very hard for new actors to break into and compete in. But exclusivity still seems hard.

Benevolence

Extreme power might make one more inclined to be altruistic, but this seems to be a tenuous hope at best. 

Stability

Structures such as constitutions that create checks and balances for regimes seem good and probably necessary for a tenable TAI regime. But things could go wrong even if the TAI regime is well-structured. 

Alignment

The difficulties here are familiar: alignment remains an open problem.

Hopefully though, containment, corrigibility, and off-switches can temper the risks of failures here. 

Resilience

Resilience can be considered hard because it is such a large and complex problem, involving politics, governments, logistics, bio-risk, cybersecurity, etc.

What might this mean for AI safety work?

Consider a taxonomy of AI safety strategies that groups them into three types.

Strategy 1: Making it easier to make safer AI. The whole field of AI alignment is the key example of this. But this also includes governance strategies that promote safer work or establish healthy norms in the research and development ecosystems.

Strategy 2: Making it harder to make less safe AI. Examples include establishing regulatory agencies, auditing companies, auditing models, creating painful bureaucracy around building risky AI systems, influencing hardware supply chains to slow things down, and avoiding arms races.

Strategy 3: Responding to problems as they arise. There might be a decent amount of time to act between the creation of an X-risky AI and extinction from it. This is especially true if extinction happens via a cascade of globally destabilizing events. It would probably be hard for TAI systems to gain influence over the world for some of the same reasons it’s hard for people, companies, or countries to do the same; the world seems too big and too able to adapt and fight back for the path to extinction to be very short. Given this, some strategies for making us more resilient might be very useful, including giving governments powers to rapidly detect and respond to firms doing risky things with TAI, killswitches for global finance or the internet, cybersecurity, and generally being more resilient to catastrophes as a global community.

Note that we could also consider a fourth category: meta work that aims to build good paradigms and healthy institutions. But this isn’t direct work, so I'll only mention it here on the side.

Non-alignment aspects of AI safety are key.

Strategy 1 only addresses ensuring that the TAI regime is successful at alignment. Strategy 2 is key for exclusivity and benevolence. And Strategy 3 is useful for stability and resilience against disasters if we end up in a regime of significant risk.

I think this is important to bear in mind because the most common and most interesting types of AI safety work, the ones that are easiest to nerd-snipe researchers with, seem to fall into Strategy 1. But Strategies 2 and 3 may be at least as important to work on, if not more so. As someone who works on problems in Strategy 1, one thing I am currently thinking about is whether I should work more toward 2 and 3. These seem to be mostly (but not entirely) governance-related problems that are relatively neglected. I’d appreciate feedback and discussion in the comments.


 

6 comments


comment by paulfchristiano · 2022-12-26T23:56:12.290Z · LW(p) · GW(p)

I'm a bit skeptical about calling this an "AI governance" problem. This sounds more like "governance" or maybe "existential risk governance"---if future technologies make irreversible destruction increasingly easy, how can we govern the world to avoid certain eventual doom?

Handling that involves political challenges, fundamental tradeoffs, institutional design problems, etc., but I don't think it's distinctive to risks posed by AI, don't think that a solution necessarily involves AI, don't think it's right to view "access to TAI" as the only or primary lever of political power to prevent destructive acts, and I'm not convinced that this problem should be addressed by a community focused on AI in particular.

It seems good for people to think about the general long-term challenge as well as to think about the concrete possible destructive technologies on the horizon, in case there is narrower work that can help mitigate the risks they pose and thereby delay the need to implement a general solution. But in some sense this is just "delaying the inevitable."

I wrote some of my thoughts on this relationship in Handling destructive technology.

One potential difference is that I don't see TAI as automatically posing a catastrophic risk. Alignment itself could pose a catastrophic risk. But if we resolve that, then I think we get some (unknown) amount of subjective time until the next thing goes wrong, which might be AI enabling access to destructive physical technology or might be something more conceptually gnarly. The further off that next risk is, the more political change is likely to happen in the interim.

comment by scasper · 2022-12-27T02:46:55.729Z · LW(p) · GW(p)

This is an interesting point. But I'm not convinced, at least immediately, that this isn't likely to be largely a matter of AI governance. 

There is a long list of governance strategies that aren't specific to AI that can help us handle perpetual risk. But there is also a long list of strategies that are. I think that all of the things I mentioned under strategy 2 have AI specific examples:

establishing regulatory agencies, auditing companies, auditing models, creating painful bureaucracy around building risky AI systems, influencing hardware supply chains to slow things down, and avoiding arms races.

And I think that some of the things I mentioned for strategy 3 do too:

giving governments powers to rapidly detect and respond to firms doing risky things with TAI, hitting killswitches involving global finance or the internet, cybersecurity, and generally being more resilient to catastrophes as a global community.

So ultimately, I won't make claims about whether avoiding perpetual risk is mostly an AI governance problem or mostly a more general governance problem, but certainly there are a bunch of AI specific things in this domain. I also think they might be a bit neglected relative to some of the strategy 1 stuff. 

comment by Slider · 2022-12-27T01:43:17.373Z · LW(p) · GW(p)

The opposites of those four requirements also sound pretty good.

Exclusivity - Corrigibility

Humans that are being harmed should be able to effectively steer the AI to cease hurting them.

Benevolence - Servitude

The AI should serve humans and not put its own goals ahead of others.

Stability - Responsitivity

The AI should stay relevant and answer challenges to its existence. It should keep up with the world and not become out of distribution by turning into a relic.

Success at alignment - Fallibility

A minor mistake should not spell doom to the world. The setup should fail gracefully and accept fixes.

comment by scasper · 2022-12-27T02:32:33.891Z · LW(p) · GW(p)

I'm not sure I understand what you mean. As I understand it, this comment seems a bit non sequitur to the post. First, I don't agree that any of the four pairs you mentioned are opposites. Second, it seems to me like you're talking about a specific AI system, and not a TAI regime like I am.

comment by Noosphere89 (sharmake-farah) · 2022-12-26T23:22:02.849Z · LW(p) · GW(p)

I'm going to say that while strategy 1 isn't going to solve all the problems, it might also solve benevolence, primarily because I think the AI alignment problem is far, far more general than, say, the problem of aligning states with their citizens. It's much more like the problem of humans aligning to animals, and there the evidence is mostly depressing, with the exception of pets, more or less.

Compared to states being aligned to citizens, where we actually have mechanisms that work imperfectly, in human-to-animal alignment, there aren't mechanisms that work at all, short of pets.

I think several factors contribute to the problem:

  1. A much more capable party can mostly ignore restraints like laws or contracts, so outcomes depend on its own goals, which are usually misaligned.

  2. We depend on the fact that there aren't that many differences in behavior, intelligence, and so on, and if you break that, things get bad fast. This is also known as the IID distribution on capabilities assumption.

Thus, success on strategy 1, especially if it can be extended to arbitrarily large inequalities in capabilities like intelligence, can essentially solve many of the special cases of alignment problems like states aligned to citizens.

comment by scasper · 2022-12-27T02:38:57.192Z · LW(p) · GW(p)

I see the point that making it easier to build safer AI can help solve the benevolence problem by making benevolent agents more competitive, thus lowering the effective alignment tax. This is a good point.

But I would note that this only applies to the extent that one's approach to strategy 1 means helping the people working on safer AI do so more effectively, rather than advancing alignment capabilities indiscriminately. Ultimately, if a terrorist has a powerful AI system that is well-aligned with their goals, that's very bad.