Really Extreme Altruism

post by CronoDAS · 2009-03-15T06:51:34.773Z · LW · GW · Legacy · 98 comments

In secret, an unemployed man with poor job prospects uses his savings to buy a large term life insurance policy, and designates a charity as the beneficiary. Two years after the policy is purchased, it will pay out in the event of suicide. The man waits the required two years, and then kills himself, much to the dismay of his surviving relatives. The charity receives the money and saves the lives of many people who would otherwise have died.

Are the actions of this man admirable or shameful?

98 comments

Comments sorted by top scores.

comment by AnnaSalamon · 2011-05-04T06:23:41.495Z · LW(p) · GW(p)

One thing to note is that the man would probably harm, not help, his chosen charity (in expectation).

If it was thought that the charity had encouraged the "really extreme altruism", or if it was simply thought that the charity was the sort of thing that fanatics like that liked, the charity would have serious problems attracting others' work or donations, since most people fear fanatical and suicidal mental states. It would need to refuse the money, and refusing the money wouldn't be enough to prevent serious damage.

Replies from: wedrifid, Mestroyer
comment by wedrifid · 2011-05-04T08:08:02.123Z · LW(p) · GW(p)

One thing to note is that the man would probably harm, not help, his chosen charity (in expectation).

One would hope that in the two years between signing up for the insurance policy and offing himself he took the time to figure out how to make the donation suitably indirect and manage appearances. All it would take is one person you can trust.

Replies from: khafra
comment by khafra · 2011-05-04T15:05:57.389Z · LW(p) · GW(p)

I don't know if "trust" is a sufficiently boolean property for this. One would need an executor trustworthy to

  • Handle large amounts of money with no oversight

  • Deal with the legal system

  • Maintain absolute discretion on the subject, basically forever

  • Deal with the knowledge that a close, trusting friend is going to commit suicide for unconventional reasons

A good lawyer fits some of those criteria, but not all; and is difficult for the unemployed to retain. Frankly, I think that most people who could inspire that kind of loyalty in others could do more good alive.

Replies from: wedrifid
comment by wedrifid · 2011-05-05T03:24:32.106Z · LW(p) · GW(p)

Deal with the knowledge that a close, trusting friend is going to commit suicide for unconventional reasons

They do not need to know this. Their role is to execute your will. That is all.

Frankly, I think that most people who could inspire that kind of loyalty in others could do more good alive.

Will the money to someone else who is obsessed with the cause. In that case you don't need personal trust. Just game theory.

Saying "this will do more harm than good" sounds wise and sends the desired message of 'suicide is bad and I do not encourage it' but isn't actually accurate under examination.

Replies from: rwallace
comment by rwallace · 2011-05-05T14:02:32.370Z · LW(p) · GW(p)

"This will do more harm than good" may not be accurate under examination, but I think it is accurate in reality.

What you're talking about is a flimsy, elaborate plan that requires some people to do exactly what they are supposed to do and nobody else to seriously interfere. The probability of such a plan working the first time is small enough to be ignored. Something will go wrong that you didn't think of.

In many contexts, that's not a showstopper: you wait until something does go wrong, then you fix it. But if step two of the plan was "you die", it's going to be a bit hard to fix what goes wrong in step three.

Replies from: wedrifid
comment by wedrifid · 2011-05-05T16:52:54.828Z · LW(p) · GW(p)

I disagree. Especially with the way 'flimsy', 'elaborate' and 'reality' are used (or misused) and the straightforward complications of will-execution raised as though this is some sort of special case.

I would consider an argument of the form "This is a f@$%@ing terrible idea because if you kill yourself you DIE" far more persuasive than anything that relied on technical difficulties. Flip. This is two years worth of preparation time. How long does it take to google "suicide look like accident"? The technical problem is utterly trivial. It is just one that you are better off not implementing. On account of life being better than death.

Replies from: rwallace
comment by rwallace · 2011-05-05T18:51:47.093Z · LW(p) · GW(p)

Well I agree with you that "if you kill yourself you die" is a sufficient and primary argument against the proposal. I was merely following the implied "what if somebody is in a suicidal mood and therefore not convinced by the primary argument, what arguments are there against the feasibility of the proposal on its own terms" of this subthread.

comment by Mestroyer · 2012-06-26T15:31:15.817Z · LW(p) · GW(p)

You could just split the money among a whole bunch of different charities. That way no one in particular is shamed by the news stories that result.

comment by MichaelVassar · 2009-03-23T15:29:10.037Z · LW(p) · GW(p)

Shame? Is that the issue? Shame sounds like something that he can't feel, because he's dead, but that his relatives could feel regarding him, because his actions indicate (indeed, constitute) their lack of selective fitness. His actions aren't generally admirable, because human preferences aren't set up to admire that sort of altruism.
His actions are generally "good" in that they lead to a better rank-ordering of the world, by his criteria, than non-action would, but they are probably sub-optimal: at the cost of his life he could probably have produced an even better world rank-ordering. (I certainly hope he managed to at least donate all his organs, but unless the recipients are radical altruists too, he's still probably nowhere near optimal.)

Replies from: TimFreeman
comment by TimFreeman · 2011-05-02T22:10:23.067Z · LW(p) · GW(p)

You can't usefully donate organs if you commit suicide. Suicide leads to autopsy leads to unusable organs.

This covers donating brains too, so to a first approximation, cryonics won't work for you if you suicide.

With that said, I agree that if we assume for the purposes of argument he could have donated his organs, and he cared enough about others to donate his life insurance to charity, he would probably want to donate his organs too.

Replies from: christopherj
comment by christopherj · 2013-10-10T02:13:29.356Z · LW(p) · GW(p)

I wonder how long before an insurance company decides to test cryonics as an excuse. "We respect his belief that he is not dead, but rather in suspended animation."

Replies from: DanielH
comment by DanielH · 2014-06-13T05:41:33.476Z · LW(p) · GW(p)

That would probably be a good thing. I think that the company says they pay out in the event of legal death, so this would mean that they'd have to try to get the person declared "not dead". By extension, all cryonics patients (or at least all future cryonics patients with similar-quality preservations) would be not dead. If I were in charge of the cryonics organization this argument was used against, I would float the costs of the preservation and try to get my lawyers working on the same side as those of the insurance company. If they succeed, cryonics patients aren't legally dead and have more rights, which is well worth the cost of one guy's preservation + legal fees. If they fail, I get the insurance money anyway, so I'm only out the legal fees.

At least most cryonics patients have negligible income, so the IRS isn't likely to get very interested.

comment by 110phil · 2009-03-16T15:23:25.732Z · LW(p) · GW(p)

The man has done nothing shameful: (a) his life is his own; and (b) the insurance company bet, with its eyes open, that enough suicide-intenders would back down from their plans within two years for the policies to remain profitable. It lost its bet, but it was a reasonable bet.

The man has done nothing admirable, either; he has taken money from the shareholders of the insurance company, and given it to charity. Presumably this is something the shareholders could have done themselves, if they chose to. So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do. Even though he did this through "voluntary" means.

However, I can see that if you're of the opinion that it's a good thing to take money from shareholders (who presumably are wealthier than average) and use it to save lives, then I can see how you would think this to be an admirable act.

You could also argue that the insurance company isn't stupid: it may have sold a thousand policies to intended-suiciders, and this was the only one who went through with it. In that case, the insurance company made a profit, and this man actually had a 99.9% probability of being one of the mind-changers. Unless he had strong reason to believe that he'd be the exception, he should have realized that there was a large probability that, like the others, he was irrationally believing his chance of going through with it was higher than 0.1%.
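To make the expected-value argument concrete, here is a minimal sketch in Python. Every number in it is an assumption: the thousand policies and the 0.1% follow-through rate are the hypotheticals from the paragraph above, and the $600 annual premium is taken from CronoDAS's estimate further down the thread.

    # Illustrative sketch of the insurer's bet; every number here is assumed.
    policies_sold = 1_000        # hypothetical number of intended-suiciders insured
    payout = 1_000_000           # assumed policy face value, in dollars
    annual_premium = 600         # assumed premium (CronoDAS's estimate below)
    years_paid = 2               # required waiting period before suicide is covered
    follow_through_rate = 0.001  # hypothetical 1-in-1,000 who actually go through

    premium_income = policies_sold * annual_premium * years_paid
    expected_payouts = policies_sold * follow_through_rate * payout

    print(premium_income)    # 1200000
    print(expected_payouts)  # 1000000
    # On these assumptions the insurer profits in expectation even after the
    # one completed suicide, which is the point of the argument above.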

What he should have done was contingently committed to selling his organs on the black market before committing suicide. Then, there would have been a net benefit to his death, instead of it being zero-sum, and his actions would have been admirable.

Replies from: Annoyance, John_Maxwell_IV, Eliezer_Yudkowsky
comment by Annoyance · 2009-03-23T15:37:35.750Z · LW(p) · GW(p)

"So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do."

No, he didn't. They wanted to offer a life insurance policy. I'm confident that they're not thrilled about having to pay out, but they're not being forced to do anything against their will - only to keep to the obligations they freely entered into.

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T04:11:32.420Z · LW(p) · GW(p)

The man has done nothing admirable, either; he has taken money from the shareholders of the insurance company, and given it to charity. Presumably this is something the shareholders could have done themselves, if they chose to. So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do. Even though he did this through "voluntary" means.

This paragraph indicates that you believe that forcing people to do something they don't want to do is wrong.

What he should have done was contingently committed to selling his organs on the black market before committing suicide. Then, there would have been a net benefit to his death, instead of it being zero-sum, and his actions would have been admirable.

This paragraph indicates that you believe it is morally beneficial to save lives--in this case, by donating organs.

Why is it that when these two moral principles contradict, you let the first one win?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-02T19:10:45.181Z · LW(p) · GW(p)

What he should have done was contingently committed to selling his organs on the black market before committing suicide. Then, there would have been a net benefit to his death, instead of it being zero-sum, and his actions would have been admirable.

Does not follow - the breakup value of your organs is not necessarily greater than your organs working together. Just because someone gets paid doesn't mean that game is positive-sum.

Replies from: 110phil
comment by 110phil · 2011-05-26T02:34:19.241Z · LW(p) · GW(p)

Yes, I assumed that the breakup value of the organs was higher. That seems reasonable to me: two kidneys save two lives, one liver saves a third life, and so on. And only one life is lost, and that one voluntarily.

Also, my argument was not contingent on anyone being paid ... donating organs on the black market works too.

Replies from: wedrifid
comment by wedrifid · 2011-05-26T11:07:18.480Z · LW(p) · GW(p)

That seems reasonable to me: two kidneys save two lives, one liver saves a third life, and so on.

House MD doesn't seem to get that sort of conversion rate from organs to lives saved. Am I generalising from fictional evidence, or is your life-saving equation absurdly optimistic? Ok, I admit: both.

Replies from: 110phil
comment by 110phil · 2011-05-26T19:11:58.067Z · LW(p) · GW(p)

I guess it's an empirical question. A death creates two kidneys. Are there usually two people on a waiting list who need the kidneys and would otherwise die? If not, then perhaps I am indeed being too optimistic.

Replies from: wedrifid
comment by wedrifid · 2011-05-26T20:23:24.371Z · LW(p) · GW(p)

I guess it's an empirical question.

Yes.

A death creates two kidneys. Are there usually two people on a waiting list who need the kidneys and would otherwise die?

Humans aren't lego. Yes, we can transplant organs, but they don't always work and they don't always last indefinitely. We also don't just use them to flip a nice integer 'lives saved' counter up by one. It's ok if the spare organ just increases someone's chances. Or extends a life for a while. Or drastically improves the quality of life for someone who was scraping by with other measures.

If I recall correctly, kidneys are actually the easiest organ to transplant - the least likely to cause rejection. With the right donors, success rates get up into the 90s (%). But translating that into lives saved or 'years added to life' is a little tricky, especially when the patients also happen to require transfusions of donor blood throughout the process. We like to say the blood transfusions are 'saving a life'. There are only so many times you can count a life as 'saved' in a given period of time.

Replies from: 110phil
comment by 110phil · 2011-05-28T00:21:57.773Z · LW(p) · GW(p)

OK, fair enough.

It sounds to me, though, like it should be possible to somehow quantify the benefit of donating a kidney, on some scale, at least. Or do you think the benefit is so small, relative to one suicide, that my original argument doesn't hold?

Replies from: michaelkeenan
comment by michaelkeenan · 2012-01-29T21:43:59.449Z · LW(p) · GW(p)

it should be possible to somehow quantify the benefit of donating a kidney, on some scale, at least.

From Wikipedia:

Kidney transplantation is a life-extending procedure.[24] The typical patient will live 10 to 15 years longer with a kidney transplant than if kept on dialysis.[25] The increase in longevity is greater for younger patients, but even 75-year-old recipients (the oldest group for which there is data) gain an average four more years of life. People generally have more energy, a less restricted diet, and fewer complications with a kidney transplant than if they stay on conventional dialysis.

comment by Kevin · 2010-03-23T17:51:46.867Z · LW(p) · GW(p)

I think this post would count as a public statement that would invalidate your life insurance policy upon suicide. Insurance companies are in the business of not actually paying out their benefits.

However, I think we could do some advocacy related to this on the usenet hardcore suicide newsgroups. We might convince some people to delay their suicides long enough to not actually kill themselves, as this meme sounds different than most other memes trying to convince truly suicidal people to not do it.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-08T04:42:17.201Z · LW(p) · GW(p)

I think this post would count as a public statement that would invalidate your life insurance policy upon suicide. Insurance companies are in the business of not actually paying out their benefits.

Under U.S. law, after two years, life insurance policies can't be revoked for any reason except non-payment of premiums. If they don't cancel the policy in those two years, they have to pay out regardless of how big a liar you were.

However, I think we could do some advocacy related to this on the usenet hardcore suicide newsgroups. We might convince some people to delay their suicides long enough to not actually kill themselves, as this meme sounds different than most other memes trying to convince truly suicidal people to not do it.

And if they kill themselves anyway, after the two years are over, at least they saved a lot of other lives. Do you know of a way to reach actual suicidal people?

comment by abigailgem · 2009-03-15T20:18:01.845Z · LW(p) · GW(p)

I am not sure I can be rational about this at all, because I find suicide repulsive. Yet my society admires the bravery of a soldier who, say, throws himself on a grenade so that it will not kill the others in his dugout. I might see a tincture of dishonesty in the man's actions, and yet he enters a contract, with a free contracting party, and performs his part of the contract.

So. Something to practice Rationality on. To consider the value of an emotional response. Thank you. I am afraid I still have the emotional response: shameful. I cannot, now, see it as admirable.

Replies from: AlexanderRM
comment by AlexanderRM · 2015-09-02T20:00:02.008Z · LW(p) · GW(p)

I was about to give the exact same example of the soldier throwing himself on a grenade. I don't know where the idea of his actions being "shameful" even comes up.

The one thing I realize from your comment is that there's the dishonesty of his actions: if lots of people did this, insurance companies would start catching on, it would stop working, and it would make life insurance that much harder to operate. But it didn't sound like the original post was talking about that with "shameful"; it sounds like they were suggesting (or assuming people would think) that there was something inherently wrong with the man's altruism. At least, that's what's implied by the title, "really extreme altruism".

Edit: I didn't catch the "Two years after the policy is purchased, it will pay out in the event of suicide." bit until reading others' comments - so, indeed, he's not being dishonest; he made a bet with the insurance company (over whether he would still intend suicide two years later), and the insurance company lost. I don't know how many insurance companies have clauses like that, though.

comment by Rain · 2011-05-02T19:25:06.366Z · LW(p) · GW(p)

Why does it matter if the man is admired or shamed?

Do generic charities accept and process suicide insurance payments or estates?

Are you planning to do this?

Note the recent movie Seven Pounds.

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T04:05:47.956Z · LW(p) · GW(p)

Admirable, presuming that he expects the lives saved to be happy ones.

comment by AllanCrossman · 2009-03-15T18:21:18.337Z · LW(p) · GW(p)

I'll just come out and say that - if we're allowed to ignore poorly foreseen consequences like insurance premiums going up - then yes, the action is admirable.

Roko: "If I knew someone was capable of this, I wouldn't want them as a friend or partner."

All the more reason for the man to go through with it, since he's so unappreciated and unwelcome.

Marshall: "He needs [...] a real problem to work with."

People dying preventable deaths is not a real problem?

comment by Roko · 2009-03-15T14:11:03.508Z · LW(p) · GW(p)

I like this post, because it nails down my moral preferences quite nicely. I would not, under any circumstances, do this. What does that tell me about my goals in life? It tells me that I place a very high priority upon my continued existence, and that even the donation of £10^6 to a very worthy charity, which might save a thousand lives, is not worth dying for.

Replies from: CronoDAS, John_Maxwell_IV
comment by CronoDAS · 2009-03-15T14:51:42.611Z · LW(p) · GW(p)

Yes, but would you object to someone else attempting this?

Replies from: Roko
comment by Roko · 2009-03-15T14:57:07.886Z · LW(p) · GW(p)

No, if a random person wants to sacrifice their life for the greater good, then I have no objection.

I would, however, suggest that they are lacking somewhat in humanity. There is such a thing as being altruistic beyond the human norm, and this is an example of it.

If I knew someone was capable of this, I wouldn't want them as a friend or partner. Who knows when they might make one utilitarian calculation too many and kill us both?

Perhaps I am paranoid about this because... I used to be like that.

Replies from: gwern, Nebu
comment by gwern · 2009-03-15T21:32:37.825Z · LW(p) · GW(p)

I would, however, suggest that they are lacking somewhat in humanity. There is such a thing as being altruistic beyond the human norm, and this is an example of it.

Reminds me of one of the 101 Zen Stories http://www.101zenstories.com/index.php?story=13 :

"Hello, brother," Tanzan greeted him. "Won't you have a drink?"

"I never drink!" exclaimed Unsho solemnly.

"One who does not drink is not even human," said Tanzan.

"Do you mean to call me inhuman just because I do not indulge in intoxicating liquids!" exclaimed Unsho in anger. "Then if I am not human, what am I?"

"A Buddha," answered Tanzan.

comment by Nebu · 2009-03-17T16:42:55.668Z · LW(p) · GW(p)

If I knew someone was capable of this, I wouldn't want them as a friend or partner. Who knows when they might make one utilitarian calculation too many and kill us both?

What if the friend shared the same core values as you? If my friend had the same core value as me (e.g. it is worth killing two people to save a billion people from eternal torture), and were utilitarian, then perhaps I'd be "ok"[1] with my friend making "one utilitarian calculation too many" and killing both of us.

1: By "ok", I guess I mean I'd probably be very upset during those final moments where I'm dying, and then my consciousness would cease, my final thoughts to be damning my friend. But if I allow myself to imagine an after-life, I could see eventually (weeks after my death? months?) eventually grudgingly coming to accept that his/her choice was probably the rational one, and agreeing that (s)he "did the right thing".

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T04:13:21.946Z · LW(p) · GW(p)

You're not answering the question of whether the man did something admirable or shameful.

comment by AllanCrossman · 2009-03-17T19:04:39.584Z · LW(p) · GW(p)

A significant recurring theme in the comments is that the man is essentially forcing a redistribution of wealth.

Speaking for myself, I have no in-principle problem with that. I broadly support capitalism because it is probably the system that gives the best overall result. But I'm perfectly happy to support redistribution if the benefits genuinely outweigh the costs.

"So from a libertarian standpoint, this is not an admirable act -- he forced the shareholders to do something they didn't want to do."

But he also saved many people from having to do something they didn't want to do, namely, die. The balance is still in his favour: he chooses the lesser evil.

comment by [deleted] · 2012-12-24T08:31:41.701Z · LW(p) · GW(p)

Would a middle ground option such as "permissible but not morally required" (i.e. neither admirable nor shameful) be valid?

comment by MinibearRex · 2011-05-02T19:28:39.916Z · LW(p) · GW(p)

Simple answer: Is the charity going to do more benefit with that money than he caused his family and friends? If so, then his actions were at least a net positive from a utilitarian standpoint. It doesn't necessarily follow that it was the best action, though. Could he have raised a comparable amount of money on his own to help people with, without resorting to killing himself? If so, then I am more inclined to believe that he simply had decided to kill himself, and took advantage of it in order to try to cause some benefit for the world, which I suppose I can give (limited) support to.

comment by Nick_Tarleton · 2009-03-15T14:30:48.476Z · LW(p) · GW(p)

I would be concerned with the charity refusing to take 'blood money', or getting bad press if it does so.

Replies from: Roko
comment by Roko · 2009-03-15T14:38:34.690Z · LW(p) · GW(p)

Not the least convenient possible world, but true.

comment by timtyler · 2009-03-15T09:15:59.646Z · LW(p) · GW(p)

Offering insurance against suicide seems pretty stupid to me. Like offering insurance against someone burning their own house down. So, presumably, this story is fictional.

Replies from: CronoDAS, AllanCrossman
comment by CronoDAS · 2009-03-15T09:42:07.699Z · LW(p) · GW(p)

I don't know of anyone who has actually done this, but it is indeed possible. At least in the United States, life insurance does cover death by suicide, as long as the policy was purchased two years before the suicide took place. Of course, the person purchasing the policy does have to disclose his medical history, including any past or ongoing treatment for depression, which insurers take into account when deciding how much to charge for a policy (or whether to offer one at all).

Yes, it's morbid, but I actually did the research on this; an otherwise healthy young man might be able to get a 10 year term life insurance policy with a payout of $1,000,000 for an annual premium of around $600 (and a $10 million policy for $6000).
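As a rough sanity check on those quoted figures, here is a minimal back-of-the-envelope sketch in Python. It ignores interest, expenses, lapse rates, and the insurer's profit margin, all of which real pricing would include.

    # Back-of-the-envelope check on the quoted premium (illustrative only).
    payout = 1_000_000    # policy face value from the comment above, in dollars
    annual_premium = 600  # quoted annual premium, in dollars

    # Break-even annual death probability implied by these numbers:
    implied_annual_mortality = annual_premium / payout
    print(implied_annual_mortality)  # 0.0006, about 6 deaths per 10,000 per year

    # That is roughly the order of magnitude of annual mortality for a healthy
    # young man, which is why term policies this cheap are plausible.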

Replies from: Mario
comment by Mario · 2009-03-15T10:58:33.485Z · LW(p) · GW(p)

I think, then, that the harm associated with this man's suicide would have to take into account the rise in premiums he would be forcing on people in similar situations. His death may increase the amount a similar man would have to pay, decreasing the likelihood that he could afford insurance and increasing the harm that man's death would cause his dependents. Over time, those effects could swamp any short-term benefit to the charity.

Replies from: Eliezer_Yudkowsky, Nebu, jimmy
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-15T16:50:22.153Z · LW(p) · GW(p)

Or, if the behavior became common, insurance companies could simply decline to cover suicide. The problems would arise if, say, a car accident were accused of being a covert suicide (but wouldn't we have this same problem before the 2-year limit?) Perhaps that's why insurance companies cover suicides - for peace of mind, so that you know they won't accuse your corpse of having done it on purpose.

comment by Nebu · 2009-03-17T16:46:09.600Z · LW(p) · GW(p)

I think we can consider the harm associated with this man's suicide causing a rise in premiums to be relatively negligible, seeing as people have committed suicide while insured in the past, and it hasn't made prices so high as to stop insurance companies from selling similar policies today.

comment by jimmy · 2009-03-15T19:02:46.971Z · LW(p) · GW(p)

Not only that, but he never generated the wealth in the first place. His savings were his, sure, but the rest of the money was essentially conned from the insurance company.

He did not make the world richer by sacrificing himself, he sacrificed himself to (dishonestly) reallocate resources.

I'd say support his actions iff you would support stealing to give to charity.

Replies from: Nebu, MichaelVassar, John_Maxwell_IV
comment by Nebu · 2009-03-17T16:50:27.810Z · LW(p) · GW(p)

the money was essentially conned from the insurance company.

I don't see it as "conned" (or perhaps I'm inferring some connotations that you don't intend to imply by that word?): The man took "suicide-insurance". That is to say, he signed a contract with the insurance company saying something along the lines of "I'll pay you $X per month for the rest of my life. If I don't commit suicide for 2 years, but then commit suicide after that, then you have to give me 1 million dollars."

I'm sure the insurance company fully understood the terms of the contract (in fact, it is practically certain that the insurance company itself wrote the contract). The insurance company fully understood the terms of the deal and agreed to it. They employ actuaries and lawyers to go over the drafts of their contracts to ensure they mean exactly what they think they mean. No party was misled or misunderstood the terms. So how is that a con?

Replies from: brazil84
comment by brazil84 · 2011-05-04T10:23:12.224Z · LW(p) · GW(p)

I agree, I don't think it's a con. It only seems like a con because you are betting with the insurance company about the contents of your brain and most people naturally assume that they understand the contents of their own brain better than some outside agency.

However, I think that assumption is pretty clearly false. It seems that institutions have the benefit of a lot of past experience and can use that experience to understand people better (and predict their behavior better) than they understand or could predict themselves.

comment by MichaelVassar · 2009-03-23T15:33:19.883Z · LW(p) · GW(p)

Most people could acquire much more near-term wealth via insurance than via work, but could not acquire more near-term wealth via theft (in expected value) than via work.

comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T04:16:30.969Z · LW(p) · GW(p)

he sacrificed himself to (dishonestly) reallocate resources.

How was he dishonest?

Replies from: MichaelHoward, MichaelHoward
comment by MichaelHoward · 2009-03-17T19:25:46.484Z · LW(p) · GW(p)

Because he didn't disclose to the insurance company that he was planning to commit suicide at the time he took out the policy(!)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-03-17T23:02:11.695Z · LW(p) · GW(p)

So? Not revealing info != dishonesty. Unless he signed a contract that stated that he had no intent to commit suicide, I don't think he ever lied.

Let's say I am proficient at counting cards while playing blackjack. I go to the casino to gamble and walk away richer--consistently. This case is actually very similar to the insurance one, in that in both cases I am making a bet with some sort of large organization, and I know more about the nature of the bet than the large organization does.

Anyway, is the card counter dishonest? And if not, how is the man who commits suicide different?

Replies from: JGWeissman, MichaelHoward
comment by JGWeissman · 2011-05-02T20:07:41.698Z · LW(p) · GW(p)

Not revealing info != dishonesty.

Optimizing your decisions so that other people will form less accurate beliefs is dishonesty. Making literally false statements you expect other people to believe is just a special case of this.

If you decide not to reveal info because you predict that info will enable another person to accurately predict your behavior and decline to enter an agreement with you, you are being dishonest.

Replies from: John_Maxwell_IV, Nornagest, rhollerith_dot_com, wedrifid
comment by John_Maxwell (John_Maxwell_IV) · 2011-05-04T00:32:32.183Z · LW(p) · GW(p)

Hm, I wrote that comment two years ago. My new view is that it's not much worth arguing over the definition of "dishonesty" so figuring out whether the guy is "dishonest" or not is just a word game--we should figure out if others having correct beliefs is a terminal value to us, and if so, how it trades off against other terminal values. (Or perhaps individually not acting in ways that give others incorrect beliefs is a terminal value.)

As a consequentialist, I mostly say the ends justify the means. I am a little cautious due to the issues Eliezer discusses in this post, but I don't think I'm as cautious as Eliezer is--I have a fair amount of confidence in my ability to notice when my brain is going into a failure mode like he describes.

comment by Nornagest · 2011-05-02T21:17:33.450Z · LW(p) · GW(p)

I'm not entirely comfortable with this line of thinking. Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can't help but think any heuristic that eliminates the distinction is missing something important. It all has to reduce to normality, after all.

That said, biases do exist, and if we can come up with a plausible mechanism by which it'd be psychologically important without being consequentially important then I think I'd be happier with the conclusion. It might just come down to how difficult it is to prove.

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2011-05-02T22:57:54.606Z · LW(p) · GW(p)

Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can't help but think any heuristic that eliminates the distinction is missing something important.

The pragmatic distinction is that lies are easier to catch (or make common knowledge), so the lying must be done more carefully than mere withholding of relevant information. Seeing withholding of information as a moral right is a self-delusion, part of normal hypocritical reasoning. Breaking it will make you a less effective hypocrite, all else equal.

Replies from: wedrifid
comment by wedrifid · 2011-05-04T04:44:34.006Z · LW(p) · GW(p)

Seeing withholding of information as a moral right is a self-delusion, part of normal hypocritical reasoning.

I assert that moral right overtly, embracing all relevant underlying connotations. I am in no way deluding myself regarding the basis for that assertion and it is not relevant to any hypocrisy that I may have.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-04T09:20:08.747Z · LW(p) · GW(p)

You haven't unpacked anything, black box disagreements don't particularly help to change anyone's mind. We are probably even talking about different things (the idea of "moral right" seems confused to me more generally, maybe you have a better interpretation).

Replies from: wedrifid
comment by wedrifid · 2011-05-04T09:47:08.436Z · LW(p) · GW(p)

You haven't unpacked anything, black box disagreements

It seems to be your black box. I just claim the right to withhold information - and am not thereby deluded or hypocritical. (I am deluded and hypocritical in completely different ways.)

the idea of "moral right" seems confused to me more generally, maybe you have a better interpretation

It isn't language I use by preference, even if I am occasionally willing to go along with it when others are using it. I presented my rejection as a personal assertion for that reason. While I don't personally place much stock in objectively phrased morality I can certainly go along with the game of claiming social rights.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-04T13:10:31.894Z · LW(p) · GW(p)

I just claim the right to withhold information - and am not thereby deluded or hypocritical.

Should people in general withhold relevant information more or less? There is only hypocrisy here (bad conduct given a commons problem) if less is better and you act in a way that promotes more, and self-delusion if you also believe this behavior good.

Replies from: wedrifid
comment by wedrifid · 2011-05-05T03:47:24.691Z · LW(p) · GW(p)

Should people in general withhold relevant information more or less? There is only hypocrisy here (bad conduct given a commons problem) if less is better and you act in a way that promotes more, and self-delusion if you also believe this behavior good.

It is no coincidence that one of the most effective solutions to a commons problem is the assignment of individual rights.

People in general should not be obliged to share all relevant information with me, nor I with them. In the same way they should not be obliged to give me their stuff whenever I want it. Because that kind of social structure is unstable and has a predictable failure mode of extreme hypocrisy.

No, my asserted right, if adhered to consistently (and I certainly encourage others to assert the same right for themselves), reduces the need for hypocrisy. This is in contrast to the advocacy of superficially 'nice'-sounding social rules to be supported by penalty of shaming and labeling - that is where the self-delusion lies. I prefer to support conventions that might actually work and that don't unduly penalize those who abide by them.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-05T09:09:02.174Z · LW(p) · GW(p)

Agreed that it's practical.

comment by [deleted] · 2011-05-02T21:39:20.988Z · LW(p) · GW(p)

I'm not entirely comfortable with this line of thinking. Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can't help but think any heuristic that eliminates the distinction is missing something important.

I agree that a distinction should be drawn but I disagree about where. I think the morally important distinction is not between withholding information and providing false information, but why and in what context you are misleading the other person. If he's trying to violate your rights, for example, or if he's prying into something that's none of his business, then lie away. If you are trying to screw him over by misleading him, then you are getting into a moral gray area, or possibly worse.

Replies from: Nornagest
comment by Nornagest · 2011-05-02T21:44:55.783Z · LW(p) · GW(p)

Nah, that's just standard deontological vs. consequential thinking. If dishonesty is approached in consequential terms then it becomes just another act of (fully generalized) aggression -- something you don't want to do to someone except in self-defense or unless you'd also slash their tires, to borrow an Eliezer phrase, but not something that's forbidden in all cases. It only becomes problematic in general if there's a deontological prohibition against it.

Looking at it that way doesn't clarify the distinction between lying by commission vs. lying by omission, though. There's something else going on there.

Replies from: None
comment by [deleted] · 2011-05-02T21:54:06.402Z · LW(p) · GW(p)

I don't know what you just said. For example, you wrote: "that's just standard deontological vs. consequential thinking." What does that mean? Does it mean that I have, in a single comment, articulated both deontological and consequentialist thinking and set them at odds, simultaneously arguing both sides? Or are you saying I articulated one of these? If so, which one?

For my part, I don't think my comment takes either side. Whether your view is deontological or consequentialist, you should agree on the basics, which include that you have a right to self-defense. That is the context I am talking about in deciding whether the deception is moral. So I am not saying anything consequentialist here, if that's your point. A deontologist should agree on the right to self-defense, unless his moral axioms are badly chosen.

Replies from: Nornagest
comment by Nornagest · 2011-05-02T21:56:32.456Z · LW(p) · GW(p)

I think your comment describes a consequentialist take on the subject of dishonesty and implicitly argues that the deontological version is incorrect. I agree with that conclusion, but I don't think it says anything unusual on the subject of dishonesty in particular.

Replies from: None
comment by [deleted] · 2011-05-02T21:58:16.668Z · LW(p) · GW(p)

You think the right to self defense is consequentialist? That's the first I've heard about that.

Replies from: Nornagest
comment by Nornagest · 2011-05-02T22:00:21.332Z · LW(p) · GW(p)

In this context, and as a heuristic rather than a defining feature. Most systems of deontological ethics I've ever heard of don't allow for lying in self-defense; it's possible in principle to come up with one that does, but I've never seen a well-defined one in the wild.

I was really looking more at the structure of your comment than at the specific example of self-defense, though: you described some examples of dishonesty aimed at minimizing harm and contrasted them with unambiguously negative-sum examples, which is a style of argument I associate (pretty strongly) with a pragmatic/consequential approach to ethics. My mistake if that's a bad assumption.

Replies from: None
comment by [deleted] · 2011-05-02T22:16:28.855Z · LW(p) · GW(p)

Most systems of deontological ethics I've ever heard of don't allow for lying in self-defense

It's no different in principle from killing in self defense. If these systems don't allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense.

Anyway, the fact that my point triggered a memory in you of a consequentialist versus deontological dispute does not change my point. If we delete everything you said about deontologists versus consequentialists, have you actually said something to deflect my point?

Replies from: wedrifid, thomblake, Nornagest
comment by wedrifid · 2011-05-04T04:57:12.220Z · LW(p) · GW(p)

It's no different in principle from killing in self defense. If these systems don't allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense.

I don't think that follows. These are deontologists we are talking about. They are in the business of making up a set of arbitrary rules and saying that's what people should do. Remembering to include a rule about being allowed to defend yourself physically doesn't mean they will remember to also allow people to lie in self defense.

We can't assume deontologists are sane or reasonable. They are humans talking about morality!

Replies from: Peterdjones
comment by Peterdjones · 2011-05-04T12:33:45.373Z · LW(p) · GW(p)

These are deontologists we are talking about. They are in the business of making up a set of arbitrary rules and saying that's what people should do. Remembering to include a rule about being allowed to defend yourself physically doesn't mean they will remember to also allow people to lie in self defense.

Well, that wasn't a caricature...!

Replies from: wedrifid, shokwave
comment by wedrifid · 2011-05-06T01:16:07.833Z · LW(p) · GW(p)

Well, that wasn't a caricature...!

I don't think it was. Just a fairly simple and non-technical description. A similar simplified description of consequentialist moralizing would not read all that much differently.

The key sentence in the comment in terms of conveying perspective was "They are humans talking about morality!" I actually suggest the description errs on the side of a positive idealized spin. Morality just isn't that nice.

comment by shokwave · 2011-05-04T12:56:44.092Z · LW(p) · GW(p)

That is actually how deontologists work, though. It's not a caricature when the people you're talking about say this is okay because it's Right and this isn't because it's Wrong and when you ask them why some things are Right and other things are Wrong, they try to conjure up the inherent Rightness and Wrongness of actions from nowhere. Seriously!

Replies from: Alicorn
comment by Alicorn · 2011-05-05T22:44:05.592Z · LW(p) · GW(p)

No.

Replies from: shokwave
comment by shokwave · 2011-05-06T04:02:00.665Z · LW(p) · GW(p)

I have discussed this point with a few people, and the two who self-identified as non-religious deontologists explicitly assigned objective rightness and wrongness to actions.

"Murder was wrong before there were human beings, and murder will be wrong after there are human beings. Murder would be wrong even if the universe didn't contain any human beings".

The kind of people who are using this word "deontologist" to refer to themselves actually are doing this.

Replies from: Alicorn, wedrifid
comment by Alicorn · 2011-05-06T04:34:08.642Z · LW(p) · GW(p)

I use the word "deontologist" to refer to myself. I do assign objective rightness and wrongness to things (technically intentions, not actions, though I will talk loosely of actions). There is no meaningful sense in which murder could be wrong in a universe that did not contain any people (humans per se are not called for) because there would be no moral agents to commit wrong acts or be the victims of rights violations. In such an uninhabited universe, it would remain counterfactually wrong for any people to murder any other people if people were to come into existence. ("Counterfactually wrong" in much the same way that it would be wrong for me to steal my roommate's diamond tiara, if she had a diamond tiara, but since she doesn't it's a pointless statement.)

Replies from: Peterdjones
comment by Peterdjones · 2011-05-06T13:30:37.072Z · LW(p) · GW(p)

"Deontologist" and "Moral Objectivist" are not synonyms. Most deontologists are nonetheless objectivists. The reverse does not hold since, for instance, consequentiailists are not deontologists but are subjectivists.

It is still a caricature to say deontologists conjure up Right and Wrong out of nowhere. The most famous deontologist was probably Kant, who argued elaborately for his claims.

The persistent problem in these discussions is the assumption that moral objectivism can only work like a quasi-empiricism, detecting some special domain of ethical facts. However, nobody seriously argues for it that way.

As noted by Alicorn, moral laws can apply counterfactually just as easily as natural laws.

comment by wedrifid · 2011-05-06T12:57:01.468Z · LW(p) · GW(p)

The kind of people who are using this word "deontologist" to refer to themselves actually are doing this.

That is certainly true, but for my part I attribute that to them being humans engaging in moralizing, not to their deontology per se. The 'objective rightness of their morals' thing can just as well be applied to consequentialist values.

Replies from: shokwave
comment by shokwave · 2011-05-07T04:21:23.538Z · LW(p) · GW(p)

Right; I trusted them when they said it was deontology that gave them absolute values - but of course, a moralizing human would say that.

comment by thomblake · 2011-05-02T22:37:57.378Z · LW(p) · GW(p)

If these systems don't allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense.

'Rights' are most usefully thought of in political contexts; ethically, the question is not so much "Do I have a right to self-defense?" as "Should I defend myself?".

For Kant (the principal deontologist), lying is inherently self-defeating. The point of lying is to make someone believe what you say; but, if everyone would lie in that circumstance, then no one would believe what you say. And so lying cannot be universalized for any circumstance, and so is disallowed by the criterion of universalizability.

Replies from: None
comment by [deleted] · 2011-05-02T22:43:34.004Z · LW(p) · GW(p)

if everyone would lie in that circumstance, then no one would believe what you say.

This is only true if the other party is aware of the circumstance. If they are not - if they are already deceived about the circumstance - then if everyone lied in the circumstance, the other party would still be deceived. Therefore lying is not self-defeating.

Replies from: thomblake
comment by thomblake · 2011-05-02T22:55:45.807Z · LW(p) · GW(p)

I was just pointing out how Kant might justify self-defense but not lying in self-defense, in summary. If you'd like to disagree with Kant, I suggest doing so against more than an off-the-cuff summary.

Though I don't recommend bothering with it, as his ethics is based on his metaphysics and his metaphysics is false.

Replies from: None
comment by [deleted] · 2011-05-02T23:09:42.243Z · LW(p) · GW(p)

Understood.

comment by Nornagest · 2011-05-02T22:22:49.117Z · LW(p) · GW(p)

I don't disagree with your point. I just don't see it as relevant to mine.

There are any number of ways we can slice up a moral question: initiation of harm's one, protected categories like the "not any of your business" you mentioned are another, and my omission/commission distinction is a third. Bringing up one doesn't invalidate another.

Replies from: None
comment by [deleted] · 2011-05-02T22:35:28.368Z · LW(p) · GW(p)

But I think lying by omission can indeed be very bad, if you are using the lie of omission to defraud the other party, and that seems to be what is occurring in the scenario in question.

Generally speaking, we are not obligated to inform random people walking down the street of the facts. That would be active assistance, which we do not owe to random strangers. In contrast, telling random strangers active lies puts them at risk, because if they act on those lies they may be harmed. So there you have a moral distinction between failing to inform people of the truth, and informing them of lies. But if you are already interacting with someone, for example if you are buying life insurance from them with the intention of killing yourself, then they are no longer random strangers, and your obligations to them increase.

Replies from: Nornagest
comment by Nornagest · 2011-05-02T22:39:07.025Z · LW(p) · GW(p)

I am not arguing that lying by omission cannot be bad. Neither am I arguing for a specific policy toward lies of omission. I am arguing that folk ethics sees them as consistently less bad than lies of commission with the same consequences, and that a general discussion of the ethics of honesty ought to reflect this either by including reasons to do the same or by accounting for non-ethical reasons for the folk distinction. Otherwise you've got a theory that doesn't match the empirical data.

comment by RHollerith (rhollerith_dot_com) · 2011-05-02T21:49:37.941Z · LW(p) · GW(p)

That is how I feel.

comment by wedrifid · 2011-05-04T04:51:03.728Z · LW(p) · GW(p)

Optimizing your decisions so that other people will form less accurate beliefs is dishonesty. Making literally false statements you expect other people to believe is just a special case of this.

Only if dogs have five legs if you call a tail a leg.

Optimising your decisions so that other people will form less accurate beliefs can only be legitimately construed as dishonest if you say or otherwise communicate that it is your intention to produce accurate beliefs.

comment by MichaelHoward · 2009-03-18T00:40:40.855Z · LW(p) · GW(p)

Now I've thought more about it, if there's nothing in the agreement about suicide being intended at the time of application, then I think you're right.

I think of insurance policies as having clauses in about revealing any information that might affect the likelihood of a claim, but I can understand why that might not apply to life insurance policies.

comment by MichaelHoward · 2009-03-17T19:24:45.936Z · LW(p) · GW(p)

Because he didn't disclose to the insurance company that he was planning to commit suicide at the time he took out the policy(!)

comment by AllanCrossman · 2009-03-15T13:56:26.738Z · LW(p) · GW(p)

The Straight Dope has looked at this: http://preview.tinyurl.com/apvljw

comment by Marshall · 2009-03-15T11:20:05.694Z · LW(p) · GW(p)

This is a sad tale. Why invent such a sad tale? Such tales pollute and can infect.

The man in the story is obviously ill. He had two years to get better, but didn't make it. The story gives no reason to make a moral judgement. It is just the spinning of rationalist wheels - signifying nothing. The storytelling, on the other hand, is a shameful and irresponsible act.

Replies from: bentarm, CronoDAS
comment by bentarm · 2009-03-15T12:43:40.660Z · LW(p) · GW(p)

"The man in the story is obvioulsy ill."

Are you living in "The Least Convenient of Possible Worlds"? It is surely conceivable that the man rationally considered his alternatives, and decided that the best thing he could do for the world was to kill himself and give the money from the life insurance policy to charity. Sure, it's also possible that he was ill, and then the story changes, but that's not what the story says. Or do you think thought experiments are inherently irresponsible?

Replies from: Marshall
comment by Marshall · 2009-03-15T15:13:08.245Z · LW(p) · GW(p)

Yes - I think just-so thought experiments about life with their built-in answers and embedded exclusions should be rejected outright. They have no friction, no gravity and say nothing of how you should spend the next hour of your life. They are like Hollywood action films - poison.

comment by CronoDAS · 2009-03-15T14:38:53.679Z · LW(p) · GW(p)

Assume that he is, indeed, suffering from depression, and attempts to treat it have not been particularly successful. Does that make a difference?

Replies from: Marshall
comment by Marshall · 2009-03-15T15:06:59.464Z · LW(p) · GW(p)

I do not think he is suffering from depression. I think he is "suffering" from some type of short-circuiting. Perhaps a genetic deficiency which leads to "meditations in morbidity", combined with reading too much of the wrong stuff and too little feedback from people in the real world (think LW readers with no job and little network). This "data-poverty" leads to delusions of grandeur.

He needs a girlfriend, a job and a real problem to work with.

Replies from: gjm, Roko
comment by gjm · 2009-03-15T19:47:40.073Z · LW(p) · GW(p)

The questions (1) "Is the fact that someone does X evidence of mental problems?" and (2) "Is doing X a good thing or a bad thing, on balance?" are different. As I read it, this article is addressing #2 and not #1. (I see no reason to think that there couldn't be rather a lot of things that ought to be done but that are psychologically near-impossible for most people with healthy minds.)

comment by Roko · 2009-03-15T15:28:11.576Z · LW(p) · GW(p)

"He needs a girlfriend, a job and a real problem to work with."

  • Seconded