post by [deleted] · GW

This is a link post for

Comments sorted by top scores.

comment by Richard_Ngo (ricraz) · 2023-05-06T16:33:20.733Z · LW(p) · GW(p)

Flagging that Diffractor's work on threat-resistant bargaining [AF · GW] feels like the most important s-risk-related work I've ever seen, but I also haven't thoroughly evaluated it so I'd love for someone to do so and write up their thoughts.

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2023-05-06T19:16:25.685Z · LW(p) · GW(p)

Woah, thanks! I hadn’t seen it!

comment by UHMWPE-UwU (abukeki) · 2023-05-05T00:43:27.818Z · LW(p) · GW(p)

There's a new forum for this that seeks to increase discussion & coordination: reddit.com/r/sufferingrisk.

comment by Dagon · 2023-05-04T18:23:12.853Z · LW(p) · GW(p)

Not really core to any of those communities, so I don't have specific answers.  But I note that complacency is the human default for ANYTHING that doesn't have direct, obvious, immediate impact on an individual and their loved ones.

From nuclear war risks to repeated financial crises to massive money and power differentials, "why are we so complacent about X" is a common and valid question, rarely answered.

I'd recommend instead you frame it as a recommendation for specific action, not a question about attitude.  "you, dear reader, should do Y next week to reduce expected {average, total, median, whatever} future suffering" would go a lot further than asking why they're not obsessing over the topic.

I will note, though, for myself, I tend to focus on magnitude of positive experience-moments (with some declining marginal value for both intensity and quantity) rather than suffering in isolation, so I think about s-risks only when they're so universal as to effectively be x-risks.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-05-05T00:30:53.811Z · LW(p) · GW(p)

I’d recommend instead you frame it as a recommendation for specific action, not a question about attitude. “you, dear reader, should do Y next week to reduce expected {average, total, median, whatever} future suffering” would go a lot further than asking why they’re not obsessing over the topic.

This would seem to be at odds with “aim to inform, not persuade”. (Is that still a rule? I seem to recall it being a rule, but now I can’t easily find it anywhere…)

Replies from: Dagon, Dacyn
comment by Dagon · 2023-05-05T17:10:09.805Z · LW(p) · GW(p)

It's never been a rule, more of a recommendation, and it's more about avoiding "arguments as soldiers" than a literal formation.  There are lots of exceptions, and I'd argue that it really should be "aim to learn" more than "aim to inform", though they're related.

In any case, obfuscating advocacy in the form of a somewhat rhetorical question seems strictly worse than EITHER informing or persuading.  It doesn't seem like anyone's trying to answer it literally; they're answering related questions about the implied motivation of getting people to do something about s-risk.

comment by Dacyn · 2023-05-05T17:00:37.326Z · LW(p) · GW(p)

It's part of the "frontpage comment guidelines" that show up every time you make a comment. They don't appear on GreaterWrong though, which is why I guess you can't see them...

comment by gabo96 · 2023-05-06T15:07:40.754Z · LW(p) · GW(p)

I'd like to add another question: 

Why aren't we more concerned about s-risk than x-risk? 

Given that virtually everyone would prefer dying to facing an indefinite amount of suffering for an indefinite amount of time, I don't understand why more people aren't asking this question.

Replies from: elityre
comment by Eli Tyre (elityre) · 2023-05-08T15:25:19.414Z · LW(p) · GW(p)

There are actually pretty large differences of perspective on this claim.

comment by Algon · 2023-05-04T21:04:19.646Z · LW(p) · GW(p)

Personally, I have some deep psychological trauma related to pain, and thinking about the topic is ... unproductive for me. Prolonged thinking about s-risks scares me, and I might not be able to think clearly about the topic. But maybe I could. The fear is what keeps me away. This is a flaw, and I'm unsure if it extends to other rationalists/EAs, but I'd guess people in these groups are unusually likely to have such scars because the LW memeplex is attractive to the walking wounded. I wouldn't be surprised if a few alignment researchers avoid s-risks for similar reasons.

comment by Mitchell_Porter · 2023-05-05T02:03:36.411Z · LW(p) · GW(p)

Averting s-risks mostly means preventing zero-sum AI conflict. If we find a way (or many ways) to do that, every somewhat rational AI will voluntarily adopt them, because who wants to lose out on gains from trade.

You're hoping to come up with an argument for human value, that will be accepted by any AI, no matter what its value system?

Replies from: Telofy
comment by Dawn Drescher (Telofy) · 2023-05-06T19:14:11.377Z · LW(p) · GW(p)

No, just a value-neutral financial instrument such as escrow. If two people can either fight or trade, but they can't trade because they don't trust each other, they'll fight. That loses out on gains from trade, and one of them ends up dead. But once you invent escrow, there's suddenly, in many cases, an option to do the trade after all, and both can live!
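
A minimal sketch of that logic, with purely illustrative payoff numbers (none of these figures come from the discussion itself): without an enforcement mechanism, the expected value of trading can fall below the value of fighting, and escrow removes the downside that makes trade unattractive.

```python
# Toy model of the escrow argument above. All payoff numbers are
# illustrative assumptions, not figures from the discussion.

def expected_payoff(action, p_defect, escrow=False):
    """Expected payoff for one agent choosing to 'fight' or 'trade'."""
    FIGHT = 0            # costly conflict: no gains from trade
    TRADE_HONORED = 5    # both sides keep their bargain
    TRADE_CHEATED = -10  # you deliver, the counterpart defects

    if action == "fight":
        return FIGHT
    if escrow:
        # Escrow releases payment only when both sides deliver,
        # so the "cheated" branch is no longer reachable.
        return TRADE_HONORED
    return (1 - p_defect) * TRADE_HONORED + p_defect * TRADE_CHEATED

p = 0.5  # mutual distrust: each side expects the other to defect half the time
print(expected_payoff("trade", p))               # -2.5: fighting (0) looks better
print(expected_payoff("trade", p, escrow=True))  #  5.0: trading now dominates
```

The specific numbers don't matter; the point is that a trusted third-party mechanism changes which option is rational without requiring either side to share the other's values.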