Realistic near-future scenarios of AI doom understandable for non-techy people?

post by RomanS · 2023-04-28T14:45:16.791Z · LW · GW · 4 comments

This is a question post.

I would like to compile a list of AI doom scenarios that most people (especially politicians) will understand, and will agree are realistic and fact-based. A few examples:

What are some other such scenarios?

Answers

4 comments

Comments sorted by top scores.

comment by Dagon · 2023-04-28T16:08:56.188Z · LW(p) · GW(p)

It's worth realizing that a lot of "this doesn't seem like a problem" reaction from politicians and "the public" is actually cover for "I don't want this to be a problem, and I see lots of visible and immediate harm from proposed solutions".  That second part is the True Objection - without a relatively painless and/or near-guaranteed solution, there's no incentive to acknowledge the problem.

comment by shminux · 2023-04-28T17:19:35.766Z · LW(p) · GW(p)

I think https://scottaaronson.blog/?p=7266 might be a good overview. It is kind of pointless to focus on specific ways everyone dies, since it is easy to argue with each specific one. The whole point is that Doom is disjunctive. It is like trying to dam or plug a single channel of a river's delta: there will always be another stream flowing somewhat differently to flood the basin, and the process is adversarial and anti-inductive.

Replies from: faul_sname
comment by faul_sname · 2023-04-28T19:05:34.015Z · LW(p) · GW(p)

I don't think it is pointless to focus on specific ways everyone dies, unless there is a single strategy that addresses every possible way everyone dies.

If FOOM isn't likely but something like this [AF · GW] is likely, it seems really unlikely to me that the approach of "continue to focus on strategies that rely on a single agent having a high level of control over the world" is still optimal (or, more accurately, it's probably still a good idea for some people to work on that, just not all of them).

comment by irving (judith) · 2023-04-28T23:30:55.802Z · LW(p) · GW(p)

I just started a writing contest for detailed scenarios of how we get from our current situation to AI ending the world. I want to compile the results on a website, so we have an easily shareable link with more scenarios than can be dismissed ad hoc. Individual scenarios taken from a huge list are easy to argue against, and thus to discredit the list, but a critical mass of them presented at once defeats this effect. If anyone has good examples, I'll add them to the website.