Linkpost: A Contra AI FOOM Reading List

post by DavidW (david-wheaton) · 2023-03-13T14:45:57.695Z

This is a link post for https://magnusvinding.com/2017/12/16/a-contra-ai-foom-reading-list/

This is a linkpost to a list of skeptical takes on AI FOOM. I haven't read them all and probably disagree with some of them, but it's valuable to have these arguments in one place.

4 comments

comment by Jeffrey Ladish (jeff-ladish) · 2023-03-14T05:39:02.817Z

It seems nice to have these in one place, but I'd love it if someone highlighted a top 10 or something.

comment by mishka · 2023-03-14T01:06:18.664Z

Thanks, this is quite useful.

Still, it is rather difficult to imagine that they can be right. The standard argument seems to be quite compact.

Consider an ecosystem of human-equivalent artificial software engineers and artificial AI researchers. Take a population of those and set them to work on producing a better, faster, more competent next generation of artificial software engineers and artificial AI researchers. Repeat with the better, faster, more competent population, and so on... If this saturates, it would probably saturate very far above human level...
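As a toy sketch of why such a loop would tend to saturate far above its starting point (a minimal model only; the 50% per-generation gain, the linear damping, and the CEILING value are all hypothetical assumptions, not anything taken from the list or this comment):

```python
# Toy model of the recursive-improvement loop sketched above.
# All numbers are hypothetical illustrations, not estimates: the 50%
# per-generation gain, the damping form, and CEILING are assumptions
# chosen only to show the shape of the dynamic.

HUMAN_LEVEL = 1.0   # capability of a human-equivalent researcher
CEILING = 1000.0    # hypothetical level at which returns vanish

def next_generation(capability: float) -> float:
    """One iteration: researchers at `capability` build their successors.

    The per-generation gain starts at 50% and shrinks linearly to zero
    as capability approaches CEILING, so the loop saturates there.
    """
    gain = 0.5 * max(0.0, 1.0 - capability / CEILING)
    return capability * (1.0 + gain)

capability = HUMAN_LEVEL
for generation in range(1, 51):
    capability = next_generation(capability)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {capability:7.1f}x human level")
```

The point is only the shape: even when returns diminish all the way to zero, the loop's fixed point sits wherever the returns vanish, not anywhere near the starting level.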

(Of course, if people still believe that human-equivalent artificial software engineers and human-equivalent artificial AI researchers are a tall order, then skepticism is quite justified. But it's getting more and more difficult to believe that...) 

comment by Nora Belrose (nora-belrose) · 2023-11-04T16:13:38.721Z

> If this saturates, it would probably saturate very far above human level...

Foom is a much stronger claim than this. It's saying that there will be an incredibly fast, localized intelligence explosion involving a single AI system improving itself. Your scenario of an "ecosystem" of independent AI researchers working together sounds more like the "slow" takeoff of Christiano or Hanson than EY-style fast takeoff.

comment by mishka · 2023-11-05T02:08:04.505Z

That depends on the dynamics, not on whether it is localized or distributed. E.g. if it includes a takeover of a large part of the Internet, it will end up very distributed, so presumably a successful foom will get more distributed as it unfolds... But initially a company will have it on its own local cluster, so it might be fairly localized for a while, depending on how they structure it...

(Monolithic abstractions, like a "singleton", are very questionable. Even a single human is fruitfully decomposed into a "society of mind", following Minsky. It might look "monolithic" or like a "singleton" from the outside, but it will have all kinds of non-trivial internal dynamics, internal discourse, internal disagreements, and so on; this rich internal structure might be somewhat observable from the outside, or might be hidden.)

The real uncertainty is time: what timeframe for an "intelligence explosion" are people ready to call "foom" rather than slow takeoff? https://www.lesswrong.com/tag/ai-takeoff puts the boundary between months and years:

> A soft takeoff refers to an AGI that would self-improve over a period of years or decades.

> A hard takeoff (or an AI going "FOOM") refers to AGI expansion in a matter of minutes, days, or months.

I certainly don't think the scheme I described would work in minutes; I am less certain about days, and I am mostly thinking in terms of weeks (months do feel a bit too long to me, although who knows).