Survey on AI existential risk scenarios

post by Sam Clarke, Alexis Carlier (alexis-carlier), Jonas Schuett · 2021-06-08T17:12:42.026Z · LW · GW · 10 comments

Contents

  Summary
  Motivation
  The survey
  Key results
    There was considerable disagreement among researchers about which risk scenarios are most likely
    Researchers are uncertain about which risk scenarios are most likely
    Researchers put substantial credence on “other scenarios”
  Key takeaway
  Caveats
  Other notable results
  Full version
  Acknowledgements

Cross-posted to the EA forum.

Summary

Motivation

It has been argued that AI could pose an existential risk. The original risk scenarios were described by Nick Bostrom and Eliezer Yudkowsky. More recently, these have been criticised, and a number of alternative scenarios have been proposed. There has been some useful work exploring these alternative scenarios, but much of it is informal. Most pieces are presented only as blog posts, with neither the detail of a book nor the rigour of a peer-reviewed publication. For further discussion of this dynamic, see work by Ben Garfinkel, Richard Ngo and Tom Adamczewski.

The result is that it is no longer clear which AI risk scenarios experts find most plausible. We think this state of affairs is unsatisfactory for at least two reasons. First, since many of the proposed scenarios seem underdeveloped, there is room for further work analyzing them in more detail. But this is time-consuming and there are a wide range of scenarios that could be analysed, so knowing which scenarios leading experts find most plausible is useful for prioritising this work. Second, since the views of top researchers will influence the views of the broader AI safety and governance community, it is important to make the full spectrum of views more widely available. The survey is intended to be a first step in this direction.

The survey

We asked researchers to estimate the probability of five AI risk scenarios, conditional on an existential catastrophe due to AI having occurred. There was also a catch-all “other scenarios” option.

These were the five scenarios we asked about, and the descriptions we gave in the survey:

We chose these five scenarios because they have been most prominent in previous discussions about different AI risk scenarios. For more details about the survey, you can find a copy of it at this link.

Key results

There was considerable disagreement among researchers about which risk scenarios are most likely

Researchers are uncertain about which risk scenarios are most likely

The median self-reported confidence level given by respondents was 2, on a seven-point Likert scale from 0 to 6, where:

Researchers put substantial credence on “other scenarios”

The “other scenarios” option had the highest median probability, at 20%. Some researchers left free-form comments describing these other scenarios. Most of them have seen no public write-up, and the others have been explored in less detail than the five scenarios we asked about.

Key takeaway

Together, these three results suggest that there is a lot of value in exploring the likelihood of different risk scenarios in more detail. This could look like:

Recent “failure stories” by Andrew Critch and Paul Christiano - which seem to have been well-received - also suggest that there is value in exploring different risk scenarios in more detail. Likewise, Rohin Shah advocates for this kind of work, and AI Impacts has recently compiled a collection of stories to clarify, explore or appreciate possible future AI scenarios.

Caveats

One important caveat is the tractability of exploring the likelihood of different AI risk scenarios in more detail. The existence of considerable disagreement, despite there having been some attempts to clarify and discuss these issues, could suggest that making progress on this is difficult. However, we think there has been relatively little effort towards this kind of work so far, and that there is still a lot of low-hanging fruit.

Additionally, there were a number of limitations in the survey design, which are summarised in this document. If we were to run the survey again, we would do many things differently. Whilst we think that our main findings stand up to these limitations, we nonetheless advise taking them cautiously, and as just one piece of evidence - among many - about researchers’ views on AI risk.

Other notable results

Most of this community’s discussion about existential risk from AI focuses on scenarios involving one or more powerful, misaligned AI systems that take control of the future. This kind of concern is articulated most prominently in “Superintelligence” and “What failure looks like”, corresponding to three scenarios in our survey (the “Superintelligence” scenario, part 1 and part 2 of “What failure looks like”). The median respondent’s total (conditional) probability on these three scenarios was 50%, suggesting that this kind of concern about AI risk is still prevalent, but far from the only kind of risk that researchers are concerned about today.

69% of respondents reported that they have lowered their probability estimate of the “Superintelligence” scenario (as described above)[7] since the first year they were involved in AI safety/governance. This may be because they now assign relatively higher probabilities to other risk scenarios happening first, and not necessarily because they think that fast takeoff or other premises of the “Superintelligence” scenario are less plausible than they originally did.

Full version

At this time, we are only publishing this abbreviated version of the results. We have a version of the full results that we may publish at a later date. Please contact one of us if you would like access to this, and include a sentence on why the results would be helpful or what you intend to use them for.

Acknowledgements

We would like to thank all researchers who participated in the survey. We are also grateful for valuable comments and feedback from JJ Hepburn, Richard Ngo, Ben Garfinkel, Max Daniel, Rohin Shah, Jess Whittlestone, Rafe Kennedy, Spencer Greenberg, Linda Linsefors, David Manheim, Ross Gruetzemacher, Adam Shimi, Markus Anderljung, Chris McDonald, David Krueger, Paolo Bova, Vael Gates, Michael Aird, Lewis Hammond, Alex Holness-Tofts, Nicholas Goldowsky-Dill, the GovAI team, the AI:FAR group, and anyone else we ought to have mentioned here. This project grew out of AISC and FHI SRF. All errors are our own.


  1. We will not look at any responses from now on; this is intended just to show what questions were asked, and in case any readers are interested in thinking through their own responses. ↩︎

  2. AI existential risk scenarios are sometimes called threat models. ↩︎

  3. Bostrom describes many scenarios in the book “Superintelligence”. We think that this scenario is the one that most people remember from the book, but nonetheless, we think it was probably a mistake to refer to this particular scenario by this name. ↩︎

  4. Likewise, the mean responses for the five given scenarios are all between 15% and 18%, and the mean response for “other scenarios” was 25%. ↩︎

  5. Other similar results: 77% of respondents assigned ≤5% (conditional) probability to at least one scenario; 51% of respondents assigned ≤5% (conditional) probability to at least two scenarios. ↩︎

  6. For another way of interpreting this, consider that if respondents were evenly split into six completely “polarised” camps, each of which put 100% probability on one option and 0% on the others, then the mean absolute deviation for each scenario would be ~28%. ↩︎
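The ~28% figure in this footnote can be checked directly; here is a minimal sketch, assuming the hypothetical six-camp setup the footnote describes (six options, respondents split evenly, each camp assigning 100% to one option):

```python
# Sanity check for the "six polarised camps" figure in footnote 6:
# respondents split evenly into six camps, each putting 100% probability
# on one option and 0% on the other five.
n = 6
camps = [[100.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

for j in range(n):
    column = [camp[j] for camp in camps]
    mean = sum(column) / n                        # 100/6, about 16.7%
    mad = sum(abs(x - mean) for x in column) / n  # mean absolute deviation
    print(f"option {j}: mean = {mean:.1f}%, MAD = {mad:.1f}%")
```

Each option's MAD works out to 250/9 ≈ 27.8%, matching the ~28% quoted above.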

  7. As per footnote 3, the particular scenario we are referring to here is not the only scenario described in “Superintelligence”. ↩︎

10 comments

Comments sorted by top scores.

comment by steven0461 · 2021-06-08T21:06:23.158Z · LW(p) · GW(p)

Define an existential catastrophe due to AI as an existential catastrophe that could have been avoided had humanity's development, deployment or governance of AI been otherwise. This includes cases where:

AI directly causes the catastrophe.

AI is a significant risk factor in the catastrophe, such that no catastrophe would have occurred without the involvement of AI.

Humanity survives but its suboptimal use of AI means that we fall permanently and drastically short of our full potential.

This technically seems to include cases like: AGI is not developed by 2050, and a nuclear war in the year 2050 causes an existential catastrophe, but if an aligned AGI had been developed by then, it would have prevented the nuclear war. I don't know if respondents interpreted it that way.

Replies from: Sam Clarke, Jonas Schuett, DanielFilan, Ericf
comment by Sam Clarke · 2021-06-11T13:03:48.713Z · LW(p) · GW(p)

Thanks for pointing this out. We did intend for cases like this to be included, but I agree that it's unclear if respondents interpreted it that way. We should have clarified this in the survey instructions.

comment by Jonas Schuett · 2021-06-11T07:52:09.508Z · LW(p) · GW(p)

Thanks for your comment! I think your critique is justified.

My best guess is that this consideration was not salient for most participants and probably didn't distort the results in meaningful ways, but it's of course hard to tell and DanielFilan's comment suggests that it was not irrelevant.

We are aware of a number of other limitations, especially with regards to the mutual exclusivity of different scenarios. We've summarized these limitations here.

Overall, you should take the results with a grain of salt. They should only be seen as signposts indicating which scenarios people find most plausible.

comment by DanielFilan · 2021-06-08T21:31:38.314Z · LW(p) · GW(p)

As a respondent, I remember being unsure whether I should include those catastrophes.

comment by Ericf · 2021-06-09T00:18:57.946Z · LW(p) · GW(p)

That seems like a really bad conflation? Is one question combining the risk of "too much" AI use and "too little" AI use?

That's even worse than the already widely smashed distinctions between "can we?", "should we?", and "will we?"

Replies from: Sam Clarke
comment by Sam Clarke · 2021-06-11T13:00:36.743Z · LW(p) · GW(p)

Is one question combining the risk of "too much" AI use and "too little" AI use?

Yes, it is. Combining these cases seems reasonable to me, though we definitely should have clarified this in the survey instructions. They're both cases where humanity could have avoided an existential catastrophe by making different decisions with respect to AI.

Replies from: Ericf
comment by Ericf · 2021-06-11T15:32:11.458Z · LW(p) · GW(p)

But the action needed to avoid/mitigate in those cases is very different, so it doesn't seem useful to get a feeling for "how far off of ideal are we likely to be" when that is composed of:
1. What is the possible range of AI functionality (as constrained by physics)? - ie what can we do?

2. What is the range of desirable outcomes within that range? - ie what should we do?

3. How will politics, incumbent interests, etc. play out? - ie what will we actually do?

Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances. It could be "attempt to shut down all AI research" or "put more funding into AI research" or "it doesn't matter because the two majority cases are "General AI is impossible - 40%" and "General AI is inevitable and will wreck us - 50%""

Replies from: Sam Clarke
comment by Sam Clarke · 2021-06-14T08:53:25.944Z · LW(p) · GW(p)

Thanks for the reply - a couple of responses:

it doesn't seem useful to get a feeling for "how far off of ideal are we likely to be" when that is composed of: 1. What is the possible range of AI functionality (as constrained by physics)? - ie what can we do?

No, these cases aren't included. The definition is: "an existential catastrophe that could have been avoided had humanity's development, deployment or governance of AI been otherwise". Physics cannot be changed by humanity's development/deployment/governance decisions. (I agree that cases 2 and 3 are included).

Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances.

That's correct. The survey wasn't intended to understand respondents' views on interventions. It was only intended to understand: if something goes wrong, what do respondents think that was? Someone could run another survey that asks about interventions (in fact, this other recent survey does that). For the reasons given in the Motivation section of this post, we chose to limit our scope to threat models, rather than interventions.

comment by rohinmshah · 2021-06-13T21:20:19.143Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

While the previous survey asked respondents about the overall probability of existential catastrophe, this survey seeks to find which particular risk scenarios respondents find more likely. The survey was sent to 135 researchers, of which 75 responded. The survey presented five scenarios along with an “other”, and asked people to allocate probabilities across them (effectively, conditioning on an AI-caused existential catastrophe, and then asking which scenario happened).

The headline result is that all of the scenarios were roughly equally likely, even though individual researchers were opinionated (i.e. they didn’t just give uniform probabilities over all scenarios). Thus, there is quite a lot of disagreement over which risk scenarios are most likely (which is yet another reason not to take the results of the previous survey too seriously).

comment by evhub · 2021-06-09T02:03:16.268Z · LW(p) · GW(p)

(Moderation note: added to the Alignment Forum from LessWrong.)