Ask AI companies about what they are doing for AI safety?

post by mic (michael-chen) · 2022-03-09


Cross-posted from the EA Forum.

Today, I had the pleasure of attending a talk by Jeff Dean at my university (Georgia Tech). Titled “Five Exciting Trends in Machine Learning,” it was a fascinating, engaging presentation. Midway through the talk, I started to idly wonder: if there ended up being a Q&A afterward, could I ask a question about AI safety? I started drafting a question on my phone.

After the talk, there was in fact time for questions. The moderator took one question from a person sitting a few rows in front of me, and I wished I had raised my hand earlier and sat closer to the front. Then the moderator read aloud two questions from people watching the livestream and asked a question of their own. Jeff Dean was still available for questions after the talk, however. I ended up asking him something like the following:

“One major focus of DeepMind is aligning ML models to follow what humans want, rather than narrowly pursuing objectives that are easy to specify. You mentioned the trend of AI becoming increasingly capable and general. If this continues, and if we had a highly advanced general AI and wanted it to cure cancer, one solution to that objective would be to kill everyone, but that would be pretty bad. So it seems important to figure out how to specify the objectives that we actually want, and some exciting approaches to this include reward modeling and iterated amplification. Is this a problem that Google AI is working on or plans to work on, like its sister company DeepMind?”

I don’t think that was the optimal way of asking about AI alignment, but that’s what I ended up asking. (If anyone has suggestions on how to talk about long-term AI risk in a better way, please leave a comment!)

His response was essentially that Google AI is doing some work on AI safety. Google AI focuses more on near-term issues, while DeepMind starts from the perspective of thinking about super AGI. He's more optimistic that we'll be able to solve these issues as we go and that we'll have the constraints necessary to prevent an AI from killing everyone, though he does appreciate that some people approach this from a more paranoid perspective.

I thanked him for his thoughts. In total, I think Jeff Dean was asked only around ten questions.

I remember reading a quote that went something like, “At every town hall, ask your representatives what they are doing to address the climate crisis.” I don’t know how often Jeff Dean visits universities to give talks, but if, every time, just two or three students from the local EA group asked him a polite, thoughtful question about AI alignment, I think he might pay closer attention to it.

More generally, EAs in a local area may want to make a coordinated effort to ask speakers (carefully considered) questions about topics relevant to EA. Many social movements have gained traction and created change from just a handful of people raising awareness of an issue. Though there are many pitfalls we would want to avoid – appearing uneducated, out-of-touch, aggressive, or polemical, for example – I think we could do more to adopt helpful strategies from traditional social movements.
