Five Worlds of AI (by Scott Aaronson and Boaz Barak)
post by mishka · 2023-05-02T13:23:41.544Z
This is a link post for https://scottaaronson.blog/?p=7266
A relatively short April 27, 2023 post attempting to classify AI outcomes into five main cases (the branching structure is sketched in code after the list):
- AI-Fizzle
- Not-AI-Fizzle
  - Civilization recognizably continues
    - Futurama
    - AI-Dystopia
  - Civilization does not recognizably continue
    - Singularia
    - Paperclipalypse
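A minimal sketch of that branching structure as a decision tree (an editorial encoding; the function and parameter names are hypothetical, not from the post):

```python
def five_worlds(ai_transformative: bool,
                civilization_recognizable: bool,
                outcome_good: bool) -> str:
    """Encode the taxonomy above as three yes/no questions.

    A sketch, not code from the post: the first question separates
    AI-Fizzle from the rest, the second asks whether civilization
    recognizably continues, and the third whether the outcome is good.
    """
    if not ai_transformative:
        return "AI-Fizzle"
    if civilization_recognizable:
        return "Futurama" if outcome_good else "AI-Dystopia"
    return "Singularia" if outcome_good else "Paperclipalypse"
```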
In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995, when he defined the five worlds Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania, corresponding to five possible resolutions of the P vs. NP problem along with the central unsolved problems of cryptography.
The authors elaborate:
Like in Impagliazzo’s 1995 paper on the five potential worlds of the difficulty of NP problems, we will not try to be exhaustive but rather concentrate on extreme cases. It’s possible that we’ll end up in a mixture of worlds or a situation not described by any of the worlds. Indeed, one crucial difference between our setting and Impagliazzo’s, is that in the complexity case, the worlds corresponded to concrete (and mutually exclusive) mathematical conjectures. So in some sense, the question wasn’t “which world will we live in?” but “which world have we Platonically always lived in, without knowing it?” In contrast, the impact of AI will be a complex mix of mathematical bounds, computational capabilities, human discoveries, and social and legal issues. Hence, the worlds we describe depend on more than just the fundamental capabilities and limitations of artificial intelligence, and humanity could also shift from one of these worlds to another over time.
There is extensive discussion in the comments on the original post. Eliezer writes a comment there, and some of the replies to that comment are quite informative.
5 comments
comment by Daniel Paleka · 2023-05-02T19:13:44.867Z
I don't think this framework is good, and overall I expected much more given the title. The name "five worlds" is associated with a seminal paper that materialized and gave names to important concepts in the latent space... and this is just a list of outcomes of AI development, with that categorization by itself providing very little insight for actual work on AI.
Repeating my comment from Shtetl-Optimized, to which they didn't reply:
It appears that you’re taking collections of worlds and categorizing them based on the “outcome” projection, labeling the categories according to what you believe is the modal representative underlying world of those categories.
Selecting the representative worlds to be “far away” from each other gives the impression that these categories of worlds are clearly well-separated. But we do not have any guarantees that the outcome map is robust at all! The “decision boundary” is complex, and two worlds that are very similar (say, they differ in a single decision made by a single human somewhere) might map to very different outcomes.
The classification describes *outcomes* rather than the actual worlds these outcomes come from.
Some classifications of the possible worlds would make sense if we could condition on them to make decisions; but this classification doesn’t provide any actionable information.
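To make the non-robustness point concrete, here is a toy sketch (an editorial illustration, not part of the original comment) of an outcome map under which two worlds differing in a single decision always land in different categories:

```python
# Toy illustration (hypothetical, not from the comment): an outcome map
# from fine-grained world descriptions to coarse outcome labels need not
# be robust anywhere near its decision boundary.

OUTCOMES = ["Futurama", "AI-Dystopia", "Singularia", "Paperclipalypse"]

def outcome(world: tuple) -> str:
    """Project a world (a tuple of binary decisions) onto an outcome label.

    This particular map depends on the sum of all decisions, so flipping
    any single decision always changes the label.
    """
    return OUTCOMES[sum(world) % len(OUTCOMES)]

w1 = (1, 0, 1, 1, 0)  # one possible history of key decisions
w2 = (1, 0, 1, 1, 1)  # identical except for a single decision
print(outcome(w1))    # Paperclipalypse
print(outcome(w2))    # Futurama
```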
↑ comment by Amalthea (nikolas-kuhn) · 2023-05-02T19:38:47.457Z
I agree that it seems like a pretty low-value addition to the discourse: it neither provides any additional insight, nor do their categories structure the problem in a particularly helpful way. That may be exaggerated, but it feels like a plug to insert yourself into a conversation where you have nothing to contribute otherwise.
↑ comment by Daniel Paleka · 2023-05-02T19:59:08.876Z
I didn't mean to go there, as I believe there are many reasons to think both authors are well-intentioned and that they wanted to describe something genuinely useful.
It's just that this contribution fails to live up to its title or to sentences like "In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995...". My original comment would be the same if it were an anonymous post.
comment by mishka · 2023-05-05T21:01:45.193Z
Zvi discusses this in detail in Section 16, "Potential Future Scenario Naming", of his May 4, 2023 post AI #10: Code Interpreter and Geoff Hinton.
comment by mishka · 2023-05-03T12:48:14.465Z
I found it interesting to compare their map of AI outcomes with a very differently structured map (seven broad bins, linearly ordered by outcome quality) shared by Nate Soares on Oct 31, 2022 in the following post:
Superintelligent AI is necessary for an amazing future, but far from sufficient