List of concrete hypotheticals for AI takeover?

post by Yitz (yitz) · 2022-04-07T16:54:12.934Z · LW · GW · 1 comment

This is a question post.

Contents

  Answers
    8 Daniel Kokotajlo
    7 Josephm
    7 Dagon
1 comment

I was wondering if anyone's compiled a list of posts which give a concrete description (with scenario-specific details) of a hypothetical future where humanity suffers an existential (X-risk) or suffering (S-risk) scale disaster due to AGI takeover. If such a list does not already exist (or does exist but needs to be updated), please put links to specific posts of this kind in the comments!

Answers

answer by Joseph Miller (Josephm) · 2022-04-07T20:09:25.786Z · LW(p) · GW(p)

Gwern recently had a popular post that was exactly that kind of thing: https://www.gwern.net/Clippy

answer by Dagon · 2022-04-07T19:56:46.245Z · LW(p) · GW(p)

Amusingly, I was writing https://www.lesswrong.com/posts/BkHRpF2cafyaoWxaT/believable-near-term-ai-disaster [LW · GW] at the same time as you were posting the question, based on an earlier brainstorming exploration at https://www.lesswrong.com/posts/KTbGuLTnycA6wKBza/ [? · GW] .

comment by Yitz (yitz) · 2022-04-07T20:18:42.119Z · LW(p) · GW(p)

Funny, I think we're both coming from similar sources of inspiration :)

1 comment

Comments sorted by top scores.

comment by Joseph Miller (Josephm) · 2022-04-07T20:33:02.094Z · LW(p) · GW(p)

All the stories I've read, even Gwern's recent one, feel surprisingly abstract. To me the obvious, very concrete story for an intelligence explosion looks like this:

  1. Run a program that does the following, in an endless loop:
    1. Run Codex on its own source with the prompt: "Improve the performance and efficiency of this coding model."
    2. Train a new version of Codex using the modified source code.
    3. Run tests and benchmarks to check it is actually better. If so, update your local version of Codex.
  2. Wait until it is amazing / you are dead.

Obviously Codex isn't nearly good enough to do this, and you would need the benchmarks to include very difficult tasks so that, as it starts to take off, it still has room for improvement. But I don't see why it would require a different kind of model.
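The control flow of the loop above can be sketched as a toy simulation. Everything here is a stand-in: `propose_improvement` and `benchmark` are hypothetical functions (Codex exposes no such API), and a model's "skill" is reduced to a single number, just to illustrate the keep-the-change-only-if-benchmarks-improve logic.

```python
import random

def benchmark(skill: float) -> float:
    """Score a model: its true skill plus benchmark noise (hypothetical)."""
    return skill + random.uniform(-0.05, 0.05)

def propose_improvement(skill: float) -> float:
    """Stand-in for 'run Codex on its own source': a noisy tweak that
    sometimes helps and sometimes hurts (hypothetical)."""
    return skill + random.uniform(-0.1, 0.2)

def self_improve(initial_skill: float, steps: int) -> float:
    random.seed(0)  # reproducible demo
    current = initial_skill
    best_score = benchmark(current)
    for _ in range(steps):
        candidate = propose_improvement(current)  # step 1: modify the source
        score = benchmark(candidate)              # step 3: test the new version
        if score > best_score:                    # keep only measured improvements
            current, best_score = candidate, score
    return current

print(self_improve(1.0, 100))
```

The point of the sketch is the ratchet in the `if`: because a candidate replaces the current model only when its benchmark score beats the best seen so far, the loop can only climb (up to benchmark noise), which is the whole mechanism the comment relies on.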