What are the relative speeds of AI capabilities and AI safety?

post by NunoSempere (Radamantis) · 2020-04-24T18:21:58.528Z · LW · GW · No comments

This is a question post.

If you want to solve AI safety before AI capabilities become too great, then it seems that AI safety must have some of the following: a head start over capabilities research, a faster rate of progress, or both.

Is this likely to be the case? Why? Another way to ask this question is: under which scenarios does aligning an AI not add development time?
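One way to make the question concrete is a toy linear race model. This is a minimal sketch under strong assumptions (constant research rates, a single "danger threshold"); the function, parameter names, and numbers below are all illustrative, not anything from the post:

```python
def safety_finishes_first(head_start: float,
                          safety_rate: float,
                          capabilities_rate: float,
                          safety_work: float,
                          capabilities_work: float) -> bool:
    """True if safety research completes before capabilities reach
    the danger threshold, assuming constant linear progress."""
    # Safety started `head_start` years earlier, so less calendar
    # time remains on its workload.
    t_safety = safety_work / safety_rate - head_start
    t_capabilities = capabilities_work / capabilities_rate
    return t_safety <= t_capabilities

# Made-up numbers: safety progresses at half the rate of capabilities
# but has a 5-year head start on a smaller remaining workload.
print(safety_finishes_first(head_start=5, safety_rate=1.0,
                            capabilities_rate=2.0,
                            safety_work=8.0, capabilities_work=10.0))
# -> True: safety needs 3 more years, capabilities need 5.
```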

Answers

answer by technicalities · 2020-04-24T22:20:12.205Z · LW(p) · GW(p)

Some more ways:

If it turns out that capabilities and safety are not so dichotomous, so that robustness / interpretability / safe exploration / maybe even impact regularisation get solved by the capabilities lot.

If early success with a date-competitive, performance-competitive safety programme (e.g. IDA, iterated distillation and amplification) puts capabilities research onto a safe path.

answer by William Walker · 2020-04-28T22:38:44.714Z · LW(p) · GW(p)

Let's just save time by jumping to the place where the AI in charge of AI Safety goes Foom and takes us back to the Stone Age "for safety" ;)
