Responses to Christiano on takeoff speeds?
post by Richard_Ngo (ricraz) · 2020-10-30T15:16:02.898Z · LW · GW · 4 comments
This is a question post.
It's been over two and a half years since Paul put this blog post on takeoff speeds online. In particular, it argues that the "fast takeoff" undergone by humans is not very strong evidence that AIs will also undergo a fast takeoff, because evolution wasn't "optimising for" humans taking over the world.
I think this argument has been fairly influential - possibly disproportionately influential, given its brevity. I find it moderately persuasive, but not entirely so, and I'm currently working on a post explaining why. What I'm wondering is: have there been other critiques or responses to this argument? Because it currently seems to me like there's been very little public engagement with it.
Answers
There was "My Thoughts on Takeoff Speeds [LW · GW]" by tristanm.
Thanks for asking; I just read the post and was also curious about other people's thoughts.
My thoughts while reading:
- Is the emergence of humans really a good example of a significantly discontinuous jump? My guess is that the first humans didn't actually perform much better than other apes, and that it took a long period of cultural development before humans started clearly dominating by using their increased strategizing/planning/coordinating capabilities.
- Paul seemed unconvinced of the potential for major insights (or a "secret sauce") about how to design discontinuously superior AIs. He wondered about analogous examples where major insights led to significant technological advances. This is probably covered well by the AI Impacts project on discontinuous technological developments, which found 10 relatively clear instances; for example, the bridge-length discontinuity was "based on a new theory of bridge design".
- Regarding his argument that recursive self-improvement doesn't lead to fast takeoff: "Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement." I had the thought that there might be a "capability overhang" regarding self-improvement, because ML might currently underrate the progress that can be had here and instead spend its effort on other applications. I also find it plausible that a stable recursively self-improving architecture is a candidate for a major insight that somebody might have someday.
The AISafety.com Reading Group discussed this blog post when it was posted. There is a fair bit of commentary here: https://youtu.be/7ogJuXNmAIw
4 comments
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T18:56:21.157Z · LW(p) · GW(p)
I agree; the argument has been surprisingly influential & there has been surprisingly little critique/pushback, at least in public. I intended to write a critique myself but never got around to it; now it's climbing the ranks in my list of priorities because of exchanges like this. [LW(p) · GW(p)] I'd love to give feedback on your version if you want! Could even collaborate.
comment by Ofer (ofer) · 2020-10-30T19:55:26.829Z · LW(p) · GW(p)
I'd love to give feedback on your version if you want! Could even collaborate.
Ditto for me!