Ratios's Shortform
post by Ratios · 2024-03-19T09:49:11.796Z · LW · GW · 4 comments
comment by Ratios · 2024-03-19T09:49:12.057Z · LW(p) · GW(p)
S-risks are barely discussed on LW. Is that because:
- People think they're so improbable that they're not worth mentioning.
- People are scared to discuss them.
- People want to avoid creating hyperstitious textual attractors.
- Other reasons?
↑ comment by ChristianKl · 2024-03-19T23:02:17.749Z · LW(p) · GW(p)
See https://web.archive.org/web/20230505191204/https://www.lesswrong.com/posts/5Jmhdun9crJGAJGyy/why-are-we-so-complacent-about-ai-hell for a longer previous discussion of the topic.
↑ comment by Nate Showell · 2024-03-21T03:24:12.667Z · LW(p) · GW(p)
Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so an AI would be unlikely to keep them around if that argument ends up being true.
↑ comment by Dagon · 2024-03-19T15:53:32.044Z · LW(p) · GW(p)
- There's a wide variance in how "suffering" is perceived, weighted, and (dis)valued, and no known resolution to different intuitions about it.
- There's no real agreement on what S-risks even are, or on whether they're anything more than a tiny subset of other X-risks.
- Many people care less about (others') suffering than they do about positive-valence experience (of others). This may or may not be related to the fact that suffering is generally low-status while satisfaction/meaning is high-status.