MIRI's "Death with Dignity" in 60 seconds.

post by Cleo Nardo (strawberry calm) · 2022-12-06T17:18:58.387Z · LW · GW · 4 comments

Suppose that, like Yudkowsky, you really care about humanity surviving this century, but you think that nothing you can do has a decent chance of achieving that.

It's an unfortunate fact of human psychology that, when faced with this kind of situation, people will often do nothing at all instead of the thing which has the highest chance of achieving their goal. Hence, you might give up on alignment research entirely, and either lie in bed all day with paralysing depression, or convert your FAANG income into short-term pleasures. How can we avoid this trap?

It seems we have three options:

1. Keep both your beliefs and your goal, and risk falling into the trap above.
2. Change your beliefs, e.g. convince yourself that survival is likely after all.
3. Change your goal, e.g. from "humanity survives" to "humanity dies with dignity".

Of course, it's risky to change either your beliefs or your goals, because you might face a situation where the optimal policy after the change differs from the optimal policy before the change. But Yudkowsky thinks that (3) is less optimal-policy-corrupting than (2).

Why's that? Well, if you force yourself to believe something unlikely (e.g. "there's something I can do which makes survival likely"), then the inaccuracy can leak into your other beliefs, because your beliefs are connected by a web of inferences. You'll start making poor predictions about AI, and you'll start making silly decisions too.

On the other hand, changing your goal from "survival" to "dignity" is like Trying to Try [LW · GW] rather than trying — it's relatively less optimal-policy-corrupting.

4 comments

Comments sorted by top scores.

comment by jacquesthibs (jacques-thibodeau) · 2022-12-07T12:04:16.825Z · LW(p) · GW(p)

“Oh, and btw, while you are trying to increase the log-odds that humanity survives this century, don’t do anything stupid and rash that is way out of the distribution of normal actions. You are not some God who can do the full utilitarian calculus. If an action you are thinking about is far out-of-distribution and looks bad to a lot of people, that’s likely because it is. In other words, don’t naively take rash actions thinking it’s for the good of humanity. Default to 3/4 utilitarian.”

Connor Leahy’s opinion on the post (55:33): 

Replies from: strawberry calm
comment by Cleo Nardo (strawberry calm) · 2022-12-07T23:06:16.636Z · LW(p) · GW(p)

Yeah I mostly agree with Connor's interpretation of Death with Dignity.

I know a lot of the community thought it was a bad post, and some thought it was downright infohazardous, but the concept of "death with dignity" is pretty lindy actually. When a group of soldiers are fighting a battle with awful odds, they don't change their belief to "a miracle will save us"; they change their goal to "I'll fight till my last breath".

If people find the mindset harmful, then they won't use it. If people find the mindset helpful, then they will use it. But I think everyone should try out the mindset for an hour or two.

comment by DragonGod · 2022-12-27T01:30:58.578Z · LW(p) · GW(p)

Strongly upvoted. I unironically think it's a pretty good distillation (I listened to the original post in the background).