Responses to Christiano on takeoff speeds?

post by Richard_Ngo (ricraz) · 2020-10-30T15:16:02.898Z · LW · GW · 4 comments

This is a question post.

It's been over two and a half years since Paul Christiano published his blog post on takeoff speeds. In particular, it argues that the "fast takeoff" undergone by humans is not very strong evidence that AIs will also undergo a fast takeoff, because evolution wasn't "optimising for" humans taking over the world.

I think this argument has been fairly influential - possibly disproportionately influential, given its brevity. I find it moderately persuasive, but not entirely so, and I'm currently working on a post explaining why. What I'm wondering is: have there been other critiques of or responses to this argument? So far there seems to have been very little public engagement with it.

Answers

answer by riceissa · 2020-10-30T22:41:52.691Z · LW(p) · GW(p)

There was "My Thoughts on Takeoff Speeds [LW · GW]" by tristanm.

answer by Max Ra (meerpirat) · 2021-11-16T19:13:36.729Z · LW(p) · GW(p)

Thanks for asking; I just read the post and was also interested in other people's thoughts.

My thoughts while reading:

  1. Is the emergence of humans really a good example of a significantly discontinuous jump? My initial thought was that the first humans didn't actually perform much better than other apes, and that it took a long period of cultural development before humans started clearly dominating through their increased strategizing/planning/coordinating capabilities.
  2. Paul seemed unconvinced of the potential for major insights (or a "secret sauce") about how to design discontinuously superior AIs. He wondered about analogous examples where major insights led to significant technological advances. This is probably covered well by the AI Impacts project on discontinuous technological developments, which found 10 relatively clear instances; the bridge-length discontinuity, for example, was "based on a new theory of bridge design".
  3. Regarding his argument for why recursive self-improvement doesn't lead to fast takeoff ("Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement."): I had the thought that there might be a "capability overhang" regarding self-improvement, because the ML field might currently underrate the progress that could be made there, spending its effort on other applications instead. I also personally find it plausible that a stable recursively self-improving architecture is a candidate for a major insight that somebody might have someday.

answer by Søren Elverlin · 2020-11-03T09:07:32.240Z · LW(p) · GW(p)

The AISafety.com Reading Group discussed this blog post when it was posted. There is a fair bit of commentary here: https://youtu.be/7ogJuXNmAIw

4 comments

Comments sorted by top scores.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-30T18:56:21.157Z · LW(p) · GW(p)

I agree; the argument has been surprisingly influential & there has been surprisingly little critique/pushback, at least in public. I intended to write a critique myself but never got around to it; now it's climbing the ranks in my list of priorities because of exchanges like this. [LW(p) · GW(p)] I'd love to give feedback on your version if you want! Could even collaborate.

Replies from: ofer
comment by Ofer (ofer) · 2020-10-30T19:55:26.829Z · LW(p) · GW(p)

> I'd love to give feedback on your version if you want! Could even collaborate.

Ditto for me!

Replies from: riceissa
comment by riceissa · 2020-10-30T22:46:53.153Z · LW(p) · GW(p)

I am also interested in this.

comment by algon33 · 2020-10-30T16:30:00.453Z · LW(p) · GW(p)