Does iterated amplification tackle the inner alignment problem?

post by JanB (JanBrauner) · 2020-02-15T12:58:02.956Z · LW · GW · 1 comment

This is a question post.


When iterated distillation and amplification (IDA) was published, some people described it as "the first comprehensive proposal for training safe AI". Having read a bit more about it, IDA seems to be mainly a proposal for outer alignment that doesn't deal with the inner alignment problem at all. Am I missing something?

Answers

answer by evhub (Evan Hubinger) · 2020-02-15T19:24:38.032Z · LW(p) · GW(p)

You are correct that amplification is primarily a proposal for how to solve outer alignment, not inner alignment. That being said, Paul has previously talked about how you might solve inner alignment in an amplification-style setting. For an up-to-date, comprehensive analysis of how to do something like that, see “Relaxed adversarial training for inner alignment [AF · GW].”

answer by Ofer (ofer) · 2020-02-16T07:32:38.473Z · LW(p) · GW(p)

My understanding is that amplification-based approaches are meant to tackle inner alignment by using amplified systems that are already trusted (e.g. a human plus many invocations of a trusted model) to mitigate inner alignment problems in the next, slightly more powerful, models being trained. A few approaches for this have already been suggested, though I'm not aware of published empirical results; see Evan's comment [LW(p) · GW(p)] for some pointers.
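
To make the bootstrapping structure concrete, here is a minimal sketch of how a trusted amplified system might audit each newly distilled model before it is trusted in the next round. Everything in it — `amplify`, `distill`, `oversee`, the toy `human` and models — is a hypothetical stand-in for illustration, not code from any of the linked posts.

```python
# Toy sketch of amplification + distillation with an oversight check.
# All functions are hypothetical stand-ins, not a real implementation.

def amplify(human, trusted_model, question):
    """Amplified system: a human answers with the help of many calls to
    the already-trusted model."""
    sub_answers = [trusted_model(f"sub-question {i} of: {question}") for i in range(3)]
    return human(question, sub_answers)

def human(question, hints):
    """Toy overseer-human: combines the sub-answers into an answer."""
    return f"answer({question}) [using {len(hints)} hints]"

def distill(amplified_answerer, questions):
    """Toy distillation: 'train' a fast model to imitate the amplified
    system (here, simply by memoising its answers)."""
    table = {q: amplified_answerer(q) for q in questions}
    return lambda q: table.get(q, "unknown")

def oversee(amplified_answerer, candidate, probe_questions):
    """Inner-alignment check: the trusted, amplified system audits the
    newly distilled model before it is trusted in the next round."""
    return all(candidate(q) == amplified_answerer(q) for q in probe_questions)

trusted = lambda q: f"answer({q})"  # round-0 trusted model
questions = ["Q1", "Q2"]
for round_n in range(2):
    amplified = lambda q: amplify(human, trusted, q)
    candidate = distill(amplified, questions)
    if oversee(amplified, candidate, probe_questions=questions):
        trusted = candidate  # only a vetted model becomes the next trusted model
```

The point of the sketch is just the ordering: the already-trusted amplified system gets to vet each new, more capable model before that model is relied on in the next iteration.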

I hope a lot more research will be done on this topic. It's not clear to me whether we should expect to have amplified systems that let us mitigate inner alignment risks to a satisfactory extent before we have x-risk-posing systems, how we can make that more likely, or, if it isn't feasible, how we can realize that as soon as possible.

answer by rmoehn · 2020-02-15T23:35:08.034Z · LW(p) · GW(p)

IDA includes looking inside the overseen agent: ‘As described here, we would like to augment this oversight by allowing Bⁿ⁻¹ to view the internal state of Aⁿ.’ (ALBA: An explicit proposal for aligned AI) If we can get enough information out of that internal state, we can avoid inner misalignment. Doing so, however, is difficult; the challenges are discussed in The informed oversight problem.
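
As a minimal illustration of what "viewing the internal state" buys the overseer, here is a toy sketch; the `Agent`, its `hidden_state`, and the overseer's check are hypothetical stand-ins, not the ALBA construction itself.

```python
# Toy sketch of oversight that can inspect the agent's internals.
# Hypothetical stand-ins only; not the ALBA construction.

class Agent:
    """Toy A^n whose internal state the overseer B^(n-1) may inspect."""
    def __init__(self):
        self.hidden_state = {}

    def act(self, observation):
        # The internals record *why* the agent acted; behaviour alone
        # might look identical for an aligned and a deceptive policy.
        self.hidden_state["intended_goal"] = "the objective the overseer endorses"
        return f"action for {observation}"

def overseer_approves(agent, observation, action):
    """Informed oversight: evaluate the visible action together with the
    internal state that produced it, not the behaviour alone."""
    return agent.hidden_state.get("intended_goal") == "the objective the overseer endorses"

agent = Agent()
action = agent.act("obs-1")
print(overseer_approves(agent, "obs-1", action))  # True for this benign toy agent
```

The hard part, as the informed oversight post discusses, is extracting information from the real internal state reliably enough for a check like this to mean anything.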

1 comment


comment by JanB (JanBrauner) · 2020-02-16T17:35:59.045Z · LW(p) · GW(p)

Thanks for all your answers :-)