Where are people thinking and talking about global coordination for AI safety?

post by Wei_Dai · 2019-05-22T06:24:02.425Z · score: 93 (25 votes) · LW · GW · 9 comments

This is a question post.

Contents

  Answers
    18 Davidmanheim
    17 ofer
    15 Wei_Dai
    13 G Gordon Worley III
    6 TheWakalix

Many AI safety researchers these days are not aiming for a full solution to AI safety (e.g., the classic Friendly AI). Instead, they are trying to find partial solutions good enough to buy time for, or otherwise help improve, global coordination on AI research (which would in turn buy more time for AI safety work), or partial solutions that would only make a difference if the world had a higher level of global coordination than it does today.

My question is, who is thinking directly about how to achieve such coordination (aside from FHI's Center for the Governance of AI, which I'm aware of) and where are they talking about it? I personally have a bunch of questions related to this topic (see below) and I'm not sure where would be a good place to ask them. If there's no existing online forum, it seems like a good idea to start thinking about building one (which could perhaps be modeled after the AI Alignment Forum, or follow some other model).

  1. What are the implications of the current US-China trade war?
  2. Human coordination ability seems within an order of magnitude of what's needed for AI safety. Why the coincidence? (Why isn’t it much higher or lower?)
  3. When humans made advances in coordination ability in the past, how was that accomplished? What are the best places to apply leverage today?
  4. Information technology has massively increased certain kinds of coordination (e.g., email, eBay, Facebook, Uber), but at the international relations level, IT seems to have made very little impact. Why?
  5. Certain kinds of AI safety work could seemingly make global coordination harder, by reducing perceived risks or increasing perceived gains from non-cooperation. Is this a realistic concern?
  6. What are the best intellectual tools for thinking about this stuff? Just study massive amounts of history and let one's brain's learning algorithms build what models it can?

Answers

answer by Davidmanheim · 2019-05-22T09:14:52.364Z · score: 18 (8 votes) · LW · GW

I want to focus on your second question: "Human coordination ability seems within an order of magnitude of what's needed for AI safety. Why the coincidence? (Why isn’t it much higher or lower?)"

Bottom line up front: Humanity has faced a few potentially existential crises in the past: world wars, nuclear standoffs, and the threat of biological warfare. The fact that we survived those, plus selection bias, seems like a sufficient explanation of why we are near the threshold for our current crises.

I think this is a straightforward argument. At the same time, I'm not going to get deep into the anthropic reasoning, which is critical here but which I'm not clear enough on to discuss properly. (Side note: Stuart Armstrong recently mentioned to me that there are reasons, which I'm not yet familiar with, why anthropic shadows aren't large; the model below assumes this.)

If we assume that large-scale risks are distributed in some manner, such as draws from Bostrom's urn of technologies (see his paper "The Vulnerable World Hypothesis"), we should expect that the attributes of the problems, including the coordination needed to withstand or avoid them, are distributed with some mean and variance. Whatever that mean and variance are, we should expect more "easy" risks (near or below the mean) than "hard" ones. Unless the tail is very, very fat, this means that we are likely to see several moderate risks before we see more extreme ones.

For a toy model, let's assume risks show up at random yearly and follow a standard normal distribution in terms of the capability needed to handle them. If we had capability in the low single digits, we would very likely have been wiped out already. Given that we've come worryingly close, however, it seems clear that we aren't in the high double digits either.
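
Here is a minimal simulation sketch of that toy model (my own illustration, not from the original answer): one risk per year drawn from a standard normal, a civilization that survives a risk only if its fixed coordination capability exceeds it, and results conditioned on survival to capture the selection effect. With one risk per year, the probability of surviving T years at capability c is Φ(c)^T. The specific capabilities, the 200-year horizon, and the run count are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_world(capability, years=200):
        """Draw one risk per year (the capability needed to survive it) from a
        standard normal; return whether the world survived and its closest margin."""
        risks = rng.standard_normal(years)
        survived = bool(np.all(risks <= capability))
        closest_margin = capability - risks.max()  # smallest gap between ability and need
        return survived, closest_margin

    # Sweep over capability levels and condition on survival -- only surviving
    # worlds get to ask "why are we so close to the threshold?"
    for capability in [1.0, 2.0, 3.0, 5.0]:
        results = [simulate_world(capability) for _ in range(20_000)]
        margins = [m for ok, m in results if ok]
        p_survive = len(margins) / len(results)
        if margins:
            print(f"capability={capability:.1f}  "
                  f"P(survive 200y)={p_survive:.3f}  "
                  f"median closest margin among survivors={np.median(margins):.2f}")

In this sketch, worlds with capability around 2-3 survive a couple of centuries only some of the time, and the survivors typically report having come close to the threshold; worlds with much higher capability survive easily but essentially never have a "worryingly close" call to report. Conditioning on both survival and a near-miss concentrates observers near the threshold, which is the selection-bias point above.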

Given all of that, plus the selection bias of asking the question when faced with larger risks, I think it's a posteriori likely that most salient risks we face are close to our level of ability to overcome them.

answer by ofer · 2019-05-22T10:39:53.186Z · score: 17 (9 votes) · LW · GW

  "My question is, who is thinking directly about how to achieve such coordination (aside from FHI's Center for the Governance of AI, which I'm aware of) and where are they talking about it?"

OpenAI has a policy team (this 80,000 Hours podcast episode is an interview with three people from that team), and I think their research areas include models for coordination between top AI labs and improving publication norms in AI (e.g., maybe striving for norms more like those in computer security, where people are expected to follow a responsible disclosure process when publishing about new vulnerabilities). For example, the way OpenAI is releasing its new language model GPT-2 seems like a useful way to learn about the usefulness/feasibility of new publication norms in AI (see the "Release Strategy" section here).

I think related work is also being done at the Centre for the Study of Existential Risk (CSER).

answer by Wei_Dai · 2019-06-17T04:26:20.198Z · score: 15 (5 votes) · LW · GW

A source tells me there's a fair bit of non-public discussion of AGI-safety-relevant strategy/policy/governance issues, but it often takes a while for those discussions to cohere into a form that is released publicly (e.g. in a book or paper), and some of it is kept under wraps due to worries about infohazards (and worries about the unilateralist's curse w.r.t. infohazards).

answer by G Gordon Worley III · 2019-05-22T21:17:40.297Z · score: 13 (7 votes) · LW · GW

Last year there was a prize for papers on this subject, and the authors spoke about it on a panel at HLAI 2018.

comment by Wei_Dai · 2019-05-23T04:03:12.768Z · score: 5 (2 votes) · LW · GW

Oh interesting, I wasn't aware of this prize. Where are these papers being discussed? It seems like it's mostly in person, at conferences, and through published papers? Are you aware of an online forum similar to LW/AF where such papers and ideas are being discussed?

ETA: Are the papers being discussed, or are people just publishing their own papers and not really commenting on each other's ideas?

comment by G Gordon Worley III (gworley) · 2019-05-23T17:41:32.641Z · score: 5 (2 votes) · LW · GW

There was some in-person conversation about the papers among us, but that's about it. I've not seen a strong community develop around this so far; mostly people just publish things one-off and then they go into a void where no one builds on each other's work. I think this mostly reflects the early stage of the field and the lack of anyone very dedicated to it, though, since I got the impression that most of us were just dabbling in this topic because it was near things we were already interested in and we had some ideas about it.

comment by Wei_Dai · 2019-05-24T01:31:07.292Z · score: 5 (2 votes) · LW · GW

Ok, that's what I was afraid of, and what I'm hoping to see change. Since you seem to have thought about this for longer than I have, do you have any suggestions about what to do?

comment by G Gordon Worley III (gworley) · 2019-06-24T19:02:14.782Z · score: 4 (2 votes) · LW · GW

Having just come back from EA Global in SF, I will say I have a much stronger sense that there are a decent number of people hoping to start thinking and talking about coordination for AI safety; a significant number of people at the conference (maybe as many as 30) were talking to each other about it. I'd now update my answer to say I am more confident that there is some dedicated effort happening in this direction, including from the Center for Emerging Technologies, the Global Catastrophic Risk Initiative, and others spread out over multiple organizations.

answer by TheWakalix · 2019-05-27T23:55:00.558Z · score: 6 (3 votes) · LW · GW

For question 2, I think the human-initiated nature of AI risk could partially explain the small distance between ability and need. If we were completely incapable of working as a civilization, other civilizations might be a threat, but we wouldn’t have any AIs of our own, let alone general AIs.
