Takeaways from safety by default interviews

post by AI Impacts, abergal · 2020-04-03T17:20:02.163Z · LW · GW · 2 comments

Contents

  Relative optimism in AI often comes from the belief that AGI will be developed gradually, and problems will be fixed as they are found rather than neglected.
  Many of the arguments I heard around relative optimism weren’t based on inside-view technical arguments.
  There are lots of calls for individuals with views around AI risk to engage with each other and understand the reasoning behind fundamental disagreements.
2 comments

Last year, several researchers at AI Impacts (primarily Robert Long and I) interviewed prominent researchers inside and outside of the AI safety field who are relatively optimistic about advanced AI being developed safely. These interviews were originally intended to focus narrowly on reasons for optimism, but we ended up covering a variety of topics, including AGI timelines, the likelihood of current techniques leading to AGI, and what the right things to do in AI safety are right now. 

We talked to Ernest Davis, Paul Christiano, Rohin Shah, Adam Gleave, and Robin Hanson.

Here are some more general things I personally found noteworthy while conducting these interviews. For interview-specific summaries, check out our Interviews Page.

Relative optimism in AI often comes from the belief that AGI will be developed gradually, and problems will be fixed as they are found rather than neglected.

All of the researchers we talked to seemed to believe in non-discontinuous takeoff.1 Rohin gave ‘problems will likely be fixed as they come up’ as his primary reason for optimism,2 and Adam3 and Paul4 both mentioned it as a reason.

Relatedly, both Rohin5 and Paul6 said that one thing that could update their views would be gaining information about how institutions relevant to AI will handle AI safety problems, potentially by seeing them solve relevant problems, or by looking at historical examples.

I think this is a pretty big crux around the optimism view; my impression is that MIRI researchers generally think that 1) the development of human-level AI will likely be fast and potentially discontinuous, and 2) people will be incentivized to hack around and redeploy AI when they encounter problems. See Likelihood of discontinuous progress around the development of AGI for more on 1). I think 2) could be a fruitful avenue for research; in particular, it might be interesting to look at recent examples of people in technology, particularly ML, correcting software issues, perhaps in cases where doing so went against their short-term profit incentives. Adam said he thought the AI research community wasn't paying enough attention to building safe, reliable systems.7

Many of the arguments I heard around relative optimism weren’t based on inside-view technical arguments.

This isn’t that surprising in hindsight, but it seems interesting to me that, though we interviewed largely technical researchers, a lot of their reasoning wasn’t particularly based on inside-view technical knowledge of the safety problems. See the interviews for more evidence of this, but here’s a small sample of the not-particularly-technical claims made by interviewees:

My instinct when thinking about AGI is to defer largely to safety researchers, but these claims felt noteworthy to me in that they seemed like questions that were perhaps better answered by economists or sociologists (or, in the latter case, neuroscientists) than by safety researchers. I really appreciated Robin’s efforts to operationalize and analyze the second claim above.

(Of course, many of the claims were also more specific to machine learning and AI safety.)

There are lots of calls for individuals with views around AI risk to engage with each other and understand the reasoning behind fundamental disagreements.

This is especially true of the views held by MIRI, which many optimistic researchers reported not having a good understanding of.

This isn’t particularly surprising, but there was a strong, universal, and unprompted theme that there wasn’t enough engagement around AI safety arguments. Adam and Rohin both said they had a much worse understanding than they would like of others’ viewpoints.12 Robin13 and Paul14 both pointed to existing, meaningful, but unfinished debates in the space.

By Asya Bergal

2 comments


comment by Zack_M_Davis · 2020-04-04T22:46:39.912Z · LW(p) · GW(p)

"AI researchers are likely to stop and correct broken systems rather than hack around and redeploy them."

Ordinary computer programmers don't do this. (As it is written, "move fast and break things.") What will spur AI developers to greater caution?

comment by riceissa · 2020-04-04T01:23:42.211Z · LW(p) · GW(p)

What is the plan going forward for interviews? Are you planning to interview people who are more pessimistic?