What are the relative speeds of AI capabilities and AI safety?

post by NunoSempere (Radamantis) · 2020-04-24T18:21:58.528Z · LW · GW · 2 comments
This is a question post.
If you want to solve AI safety before AI capabilities become too great, then it seems that AI safety must have some of the following:
- More researchers
- Better researchers
- Fewer necessary insights
- Easier necessary insights
- A greater ability to borrow insights from AI capabilities research than the reverse.
Is this likely to be the case? Why? Another way to ask this question is: under which scenarios does alignment not add extra time?