Takes on Takeoff

post by atharva · 2025-03-25T00:20:07.915Z

Contents

  What will AGI look like?
  Who will make it?
  When will they make it?
  Takeoff itself
  What comes next?

Epistemic Status: Exploratory

I wrote this as part of an application for the Chicago Symposium on Transformative AI, where I try to sketch out what takeoff might look like. I'm making a lot of claims across a range of domains, so I expect to be wrong in many places. But on the whole, I hope this is more well-thought-out than not.[1]

Many thanks to Nikola Jurković and Tao Burga for thoughtful comments on this writeup. Any errors are, of course, my own.

Prompt: Please outline an AGI takeoff scenario in detail, noting your key uncertainties

What will AGI look like?

Who will make it?

When will they make it?

Takeoff itself

What comes next?

  1. First time posting here – any feedback is appreciated!

  2. Or at least, that's the default outcome with our progress as it stands today. I think that's what folks are worried about with the sharp left turn.

  3. Or at least, there's some moderate amount of time before we go from labor-automators to paperclip-maximizers.

  4. This isn't a claim about when this would happen / how much compute would be required / etc. But it will likely be unexpected, whenever it may be.

  5. I think this post raises some good points.

  6. I think Gwern had a good take on this.

  7. I'm uncertain about this claim. I don't have in-depth knowledge, but this is my impression.

  8. Labs still have to run evals / implement safety measures / etc. – i.e., perform tasks that need contact time with the system.

  9. I don't have many plausible examples of these, but they seem pretty likely on my inner sim. Not very confident about this point, and would love to hear other thoughts.
