Rational Terrorism, or Why shouldn't we burn down tobacco fields?

post by whpearson · 2010-10-02T14:51:13.384Z · LW · GW · Legacy · 56 comments

Related: Taking ideas seriously

Let us say, hypothetically, that you care about stopping people from smoking.

You were going to donate $1000 to GiveWell to save a life, but then you learn about an anti-tobacco campaign that you believe is a better use of the money. So you choose to donate the $1000 to a campaign to stop people smoking instead of to a GiveWell charity that would save an African's life. You justify this by expecting more people to live as a result of having stopped smoking (this probably isn't true, but assume it for the sake of argument).

The consequence of donating to the anti-smoking campaign is that 1 person dies in Africa and 20 people around the world live who would otherwise have died.

Now you also have the option of setting fire to a number of tobacco plantations. You estimate that the resulting increase in the cost of cigarettes would save 20 lives, but the fires would likely kill 1 guard. You are very intelligent, so you think you can get away with it; there are no further consequences to this action, and you don't care much about the scorched earth or the lost profits.

If there are causes with payoff matrices like this, then it seems like a real-world instance of the trolley problem. We are willing to allow loss of life through inaction in order to achieve our goals, but not to cause loss of life through action.
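To make the comparison concrete, here is a minimal sketch of the naive lives-counting the post assumes; all numbers are the post's hypotheticals, not real estimates.

```python
# Naive lives-counting for the post's three options.
# All numbers are the post's hypotheticals, not real estimates.
options = {
    "donate $1000 to a GiveWell charity": {"saved": 1, "killed": 0},
    "donate $1000 to the anti-smoking campaign": {"saved": 20, "killed": 1},  # the person GiveWell would have saved
    "burn the tobacco plantations": {"saved": 20, "killed": 1},  # the guard
}

for name, outcome in options.items():
    net = outcome["saved"] - outcome["killed"]
    print(f"{name}: net lives = {net:+d}")
```

On this naive count the last two options come out identical; the only remaining difference is whether the one death comes about through action or through inaction, which is exactly the trolley-problem tension.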

What should you do?

Killing someone is generally wrong, but you are causing someone's death in both cases. You either need to justify that leaving someone to die is ethically not the same as killing them, or inure yourself to the idea that when you choose to spend $1000 in a way that doesn't save a life, you are killing. Or ignore the whole thing.

This just puts me off being a utilitarian, to be honest.

Edit: To clarify, I am an easygoing person and I don't like making life-and-death decisions. I would rather live and laugh without worrying about things too much.

This confluence of ideas made me realise that we are making life-and-death decisions every time we spend $1000. I'm not sure where I will go from here.

56 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2010-10-02T16:15:06.195Z · LW(p) · GW(p)

All this AI stuff is an unnecessary distraction. Why not bomb cigarette factories? If you're willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?

This decision algorithm ("kill anyone whom I think needs killing") leads to general anarchy. There are a lot of people around who believe for one reason or another that killing various people would make things better, and most of them are wrong - for example, religious fundamentalists who think killing gay people will improve society.

There are three possible equilibria: the one in which everyone kills everyone else, the one in which no one kills anyone else, and the one where everyone comes together to agree on a decision procedure for deciding whom to kill - i.e. establishes an institution with a monopoly on using force. This third one is generally better than the other two, which is why we have government and why most of us are usually willing to follow its laws.

I can conceive of extreme cases where it might be worth defecting from the equilibrium because the alternative is even worse - but bombing Intel? Come on. "A guy bombed a chip factory, guess we'll never pursue advanced computer technology again until we have the wisdom to use it."

Replies from: whpearson, XiXiDu
comment by whpearson · 2010-10-02T16:46:54.350Z · LW(p) · GW(p)

All this AI stuff is an unnecessary distraction.

In a way, yes. It was just the context in which I thought of the problem.

Why not bomb cigarette factories? If you're willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?

Not quite. If you are willing to donate $1000 to an anti-smoking ad campaign because you think the ad campaign will save more than 1 life, then yes, it might be equivalent, provided killing that executive would have a comparable life-saving effect to the ad campaign.

Edit: To make things clearer, I mean that by not donating $1000 to a GiveWell charity you are already causing someone to die.

This decision algorithm ("kill anyone whom I think needs killing") leads to general anarchy.

But we are willing to let people die whom we could have saved but don't consider important. This is equivalent to killing them, no? Or do you approach the trolley problem in some way that references the wider society?

Like I said, this line of thought made me want to reject utilitarianism.

"A guy bombed a chip factory, guess we'll never pursue advanced computer technology again until we have the wisdom to use it."

That wasn't the reasoning at all! It was, "Guess the price of computer chips has gone up due to the uncertainty of building chip factories, so we can only afford 6 spiffy new brain simulators this year rather than 10." Each one has an X percent chance of becoming an AGI, fooming, and destroying us all. It is purely a stalling-for-time tactic. Feel free to ignore the AI argument if you want.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-10-02T22:39:01.131Z · LW(p) · GW(p)

I suppose the difference is whether you're doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we're talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.

I guess it is kind of suspicious that I know without doing the calculations that we're not at the point where violence is justified yet.

But we are willing to let people die whom we could have saved but don't consider important. This is equivalent to killing them, no? Or do you approach the trolley problem in some way that references the wider society?

Even though on this individual problem leaving things alone would be worse than committing an act of violence, in the general case having everyone commit acts of violence is worse than having everyone leave things alone.

This example cherry-picks a case where violence is the correct answer. But when we generalize it, most of the cases it affects won't be cherry-picked, and in them violence will do more harm than good. We have to pretend we're setting a moral system both for ourselves and for the fundamentalist who wants to kill gay people.

So in this case, you're letting die (killing) the people your (smart) unpopular violent action would have saved, in order to save the lives of all the people whom other people's (stupid) unpopular violent actions would have killed.

It could be justified - if you're going to save the world from Skynet, that's worth instituting a moral system that gives religious fundamentalists a little more latitude for violent bigotry - but I imagine most cases wouldn't be.

comment by XiXiDu · 2010-10-02T16:26:59.747Z · LW(p) · GW(p)

I'm sure that people as smart as those who visit LW and work for the SIAI would be able to commit targeted killings of rogue researchers. Given that uFAI is portrayed as the biggest risk humanity is facing, I don't think it is a distraction to ask directly why we are not talking about preemptive actions against uFAI rather than trying to come up with a conceptual framework for FAI. Even if the SIAI succeeds in creating a mathematically strict model to guarantee FAI, who says that the people working on AGI would implement it? Given the stakes, the option of taking active measures against anyone carelessly working on AGI should be seriously considered.

  • 1.) We have no guarantee that FAI is possible.
  • 2.) We don't know if the SIAI will succeed before the first AGI is built.
  • 3.) We don't know whether, if the SIAI succeeds, others will listen.

We should consider preemptive measures in case someone succeeds in creating AGI under the above circumstances.

Replies from: Eliezer_Yudkowsky, Yvain
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-10-02T20:05:44.825Z · LW(p) · GW(p)

Someone asked me to delete this comment. I do not think deletion is necessary in cases like this. It's enough for the readers to downvote it into oblivion. Note that after it hits -3 most readers will stop seeing it, so don't panic if it only gets downvoted to -3 instead of -40.

Replies from: XiXiDu
comment by XiXiDu · 2010-10-03T09:23:45.896Z · LW(p) · GW(p)

You simply don't live up to your own rationality here. Although I can understand why you have to deny it in public; it is, of course, illegal.

I think it is just ridiculous that people think about taking out terrorists and nuclear facilities, but not about AI researchers who, on your view that AI can go FOOM, could destroy the universe.

Why, though, don't we talk about contacting those people and telling them how dangerous it is, or maybe even trying to ensure that they don't get any funding?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2010-10-03T10:23:32.900Z · LW(p) · GW(p)

If someone does think about it, do you think they would do it in public and we would ever hear about it? If someone's doing it, I hope they have the good sense to do it covertly instead of discussing all the violent and illegal things they're planning on an online forum.

Replies from: XiXiDu
comment by XiXiDu · 2010-10-03T10:34:21.253Z · LW(p) · GW(p)

I deleted all my other comments regarding this topic. I just wanted to figure out if you're preaching the imminent rise of sea levels while at the same time purchasing ocean-front property. Your comment convinced me.

I guess it was obvious, but too interesting to ignore. Others will come up with this idea sooner or later, and as the idea of AI going FOOM becomes mainstream, people are going to act upon it.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-04T04:10:40.953Z · LW(p) · GW(p)

Thank you for deleting the comments; I realize that it's an interesting idea to play with, but it's just not something you can talk about in a public forum. Nothing good will come of it.

Replies from: XiXiDu
comment by XiXiDu · 2010-10-04T08:32:48.210Z · LW(p) · GW(p)

As usual, my lack of self-control and failure to think things through made me act like an idiot. I guess someone like me is an even bigger risk :-(

I've even got a written list of rules I should follow but sometimes fail to heed: Think before talking to people or writing stuff in public; Be careful of what you say and write; Rather write and say less, or nothing at all, if you're not sure it isn't stupid to do so; Be humble; You think you don't know much, but you actually don't know nearly as much as you think; Other people won't perceive what you say the way you intended it; Other people may take things really seriously; You often fail to perceive that matters actually are serious, so be careful...

A little bit of knowledge is a dangerous thing. It can convince you that an argument this idiotic and this sloppy is actually profound. It can convince you to publicly make a raging jackass out of yourself, by rambling on and on, based on a stupid misunderstanding of a simplified, informal, intuitive description of something complex. — The Danger When You Don’t Know What You Don’t Know

comment by Scott Alexander (Yvain) · 2010-10-02T19:50:52.159Z · LW(p) · GW(p)

Okay, I'm convinced. Let's add paramilitary.lesswrong.com to the subreddit proposal.

comment by Will_Newsome · 2010-10-02T21:31:23.273Z · LW(p) · GW(p)

This just puts me off being a utilitarian, to be honest.

Understandably so, because the outside view says that most such sacrifices for the greater good end up having been the result of bad epistemology and unrealistic assessments of the costs and benefits.

Strong rationality means that you'd be able to get away with such an act. But strong rationality also means that you generally have better methods of achieving your goals than dubious plans involving sacrifice. When you end up thinking you have to do something intuitively morally objectionable 'for the greater good', you should have tons of alarm bells going off in your head screaming 'have you really paid attention to the outside view here?!'

In philosophical problems, you might still have a dilemma. But in real life, such tradeoffs just don't come up on an individual level where you have to actually do the deed. Some stock traders might be actively profiting by screwing everyone over, but they don't have to do anything that would feel wrong in the EEA. The kinds of objections you hear against consequentialism are always about actions that feel wrong. Why not a more realistic example that doesn't directly feed off likely misplaced intuitions?

Imagine you're a big-time banker whose firm is making tons of money off of questionably legal mortgage loans that you know will blow up in the economy's face, but you're donating all your money to a prestigious cancer research institute. You've done a very thorough analysis of the relevant literature and talked to many high-status doctors, and they say that with a couple billion dollars a cure to cancer is in sight. You know that when the economy blows up it will lead to lots of jobless folk without the ability to remortgage their homes. Which is sad, and you can picture all those homeless middle class people and their kids, depressed and alone, all because of you. But cancer is a huge bad ugly cause of death, and you can also picture all of those people that wouldn't have to go through dialysis and painful treatments only to die painfully anyway. Do you do the typically immoral and questionably illegal thing for the greater good?

Why isn't the above dilemma nearly as forceful an argument against consequentialism? Is it because it doesn't appeal in the same way to your evolutionarily adapted sense of justice? Then that might be evidence that your evolutionarily adapted sense of justice wasn't meant for rational moral judgement.

Replies from: Servant, whpearson
comment by Servant · 2010-10-03T03:15:25.906Z · LW(p) · GW(p)

You would likely have to, for the simple reason that if cancer gets cured, more resources can be dedicated to dealing with other diseases, meaning even more lives will be saved in the process (on top of those lives saved due to the curing of cancer itself).

The economy can be in shambles for a while, but it can recover in the future, unlike cancer patients... and you could always justify it by saying that if a banker like you could blow up the economy, it was already too weak in the first place: better to blow it up now, when the damage can be limited, rather than later.

Though the reason it doesn't appeal is that you don't quote hard numbers, making the consequentialist rely on "value" judgements when doing his deeds... and different consequentialists have different "values". Your consequentialist would be trying to cure cancer by crashing the economy to raise money for a cancer charity, while a different consequentialist could be embezzling money from that same cancer charity in an attempt to save the economy from crashing.

comment by whpearson · 2010-10-03T14:42:42.378Z · LW(p) · GW(p)

Which is sad, and you can picture all those homeless middle class people and their kids, depressed and alone, all because of you.

And some would go on to commit suicide after losing status, or be unable to afford health insurance, or die from lack of heating... Sure, not all of them, but some of them would. Also, cancer patients who relied on savings to pay for care might be affected in the time lag between the crash and the cure being created.

I'd also have to weigh how quickly the billions could be raised without tanking the economy, and how many people would be saved by the difference in when the cure was developed.

So I am still stuck doing moral calculus, with death on my hands whatever I choose.

comment by XiXiDu · 2010-10-02T16:10:54.874Z · LW(p) · GW(p)

Here is an interesting comment related to this idea:

What I find a continuing source of amazement is that there is a subculture of people half of whom believe that AI will lead to the solving of all mankind's problems (which we might call Kurzweilian S^) and the other half of which is more or less certain (75% certain) that it will lead to annihilation. Let's call the latter the SIAI S^.

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.

And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.

You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.

But as someone deeply concerned about these issues I find the irrationality of the S^ approach to a-life and AI threats deeply troubling. -- James J. Hughes (existential.ieet.org mailing list, 2010-07-11)

Also reminds me of this:

It is impossible for a rational person to both believe in imminent rise of sea levels and purchase ocean-front property.

It is reported that former Vice President Al Gore just purchased a villa in Montecito, California for $8.875 million. The exact address is not revealed, but Montecito is a relatively narrow strip bordering the Pacific Ocean. So its minimum elevation above sea level is 0 feet, while its overall elevation is variously reported at 50ft and 180ft. At the same time, Mr. Gore prominently sponsors a campaign and award-winning movie that warns that, due to Global Warming, we can expect to see nearby ocean-front locations, such as San Francisco, largely under water. The elevation of San Francisco is variously reported at 52ft up to high of 925ft.

I've highlighted the same idea before, by the way:

Ask yourself, wouldn't you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes the latter even more dangerous. This crowd is actually highly intelligent, and their incentive is based on more than fairy tales told by goatherders. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? All the more so because in this case the very same people who believe it are the ones who think they must act themselves, because their God doesn't even exist yet.

Replies from: Will_Newsome, cabalamat
comment by Will_Newsome · 2010-10-02T19:36:57.185Z · LW(p) · GW(p)

And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.

This is one of those good critiques of SIAI strategy that no one ever seems to make. I don't know why. More good critiques would be awesome. Voted up.

Replies from: bbleeker, XiXiDu
comment by Sabiola (bbleeker) · 2010-10-03T02:05:43.559Z · LW(p) · GW(p)

I don't really know the SIAI people, but I have the impression that they're not against AI at all. Sure, an unfriendly AI would be awful - but a friendly one would be awesome. And they probably think AI is inevitable, anyway.

Replies from: komponisto, Will_Newsome
comment by komponisto · 2010-10-03T02:27:31.538Z · LW(p) · GW(p)

This is true as far as it goes; however, if you actually visit SIAI, you may find significantly more worry about UFAI in the short term than you would have expected just from reading Eliezer Yudkowsky's writings.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T06:45:24.822Z · LW(p) · GW(p)

I think that you interacted most with a pretty uncharacteristically biased sample of characters: most of the long-term SIAI folk have longer timelines than good ol' me and Justin by about 15-20 years. That said, it's true that everyone is still pretty worried about AI-soon, no matter the probability.

Replies from: komponisto
comment by komponisto · 2010-10-03T06:54:59.410Z · LW(p) · GW(p)

Well, 15-20 years doesn't strike me as that much of a time difference, actually. But in any case I was really talking about my surprise at the amount of emphasis on "preventing UFAI" as opposed to "creating FAI". Do you suppose that's also reflective of a biased sample?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T07:01:35.589Z · LW(p) · GW(p)

Well, 15-20 years doesn't strike me as that much of a time difference, actually.

Really? I mean, relative to your estimate it might not be big, but absolutely speaking, doom 15 years versus doom 35 years seems to make a huge difference in expected utility.

Do you suppose that's also reflective of a biased sample?

Probably insofar as Eliezer and Marcello weren't around: FAI and the Visiting Fellows intersect at decision theory only. But the more direct (and potentially dangerous) AGI stuff isn't openly discussed for obvious reasons.

Replies from: komponisto
comment by komponisto · 2010-10-03T07:23:12.387Z · LW(p) · GW(p)

relative to your estimate it might not be big, but absolutely speaking, doom 15 years versus doom 35 years seems to make a huge difference in expected utility.

A good point. By the way, I should mention that I updated my estimate after it was pointed out to me that other folks' estimates were taking Outside View considerations into account, and after I learned I had been overestimating the information-theoretic complexity of existing minds. FOOM before 2100 looks significantly more likely to me now than it did before.

Probably insofar as Eliezer and Marcello weren't around: FAI and the Visiting Fellows intersect at decision theory only.

Well I didn't expect that AGI technicalities would be discussed openly, of course. What I'm thinking of is Eliezer's attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem, versus the apparent fear among some other people that AGI might be cobbled together more or less haphazardly, even in the near term.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T07:27:17.759Z · LW(p) · GW(p)

Eliezer's attitude that (for now) AGI is unlikely to be developed by anyone not competent enough to realize Friendliness is a problem

Huh. I didn't get that from the sequences; perhaps I should check again. It always seemed to me as if he saw AGI as really frickin' hard but not excessively so, whereas Friendliness is the Impossible Problem made up of smaller but also impossible problems.

comment by Will_Newsome · 2010-10-03T02:22:25.098Z · LW(p) · GW(p)

I don't really know the SIAI people, but I have the impression that they're not against AI at all. Sure, an unfriendly AI would be awful - but a friendly one would be awesome.

True. I know the SIAI people pretty well (I'm kind of one of them) and can confirm they agree. But they're pretty heavily against uFAI development, which is what I thought XiXiDu's quote was talking about.

And they probably think AI is inevitable, anyway.

Well... hopefully not, in a sense. SIAI's working to improve widespread knowledge of the need for Friendliness among AGI researchers. It's inevitable (barring a global catastrophe), but they're hoping to make FAI more inevitable than uFAI.

As someone who volunteered for SIAI at the Singularity Summit, a critique of SIAI could be to ask why we're letting people who aren't concerned about uFAI speak at our conferences and affiliate with our memes. I think there are good answers to that critique, but the critique itself is a pretty reasonable one. Most complaints about SIAI are comparatively maddeningly irrational (in my own estimation).

Replies from: Larks
comment by Larks · 2010-10-03T19:41:13.676Z · LW(p) · GW(p)

A stronger criticism, I think, is to ask why the only mention of Friendliness at the Summit was some very veiled hints in Eliezer's speech. Again, I think there are good reasons, but not reasons that a lot of people know, so I don't understand why people bring up other criticisms before this one.

comment by XiXiDu · 2010-10-03T09:28:57.984Z · LW(p) · GW(p)

This was meant as a critique too. But people here seem not to believe what they preach, or they would follow their position to its logical extreme.

comment by cabalamat · 2010-10-03T00:59:32.341Z · LW(p) · GW(p)

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.

This seems to me a good strategy for SIAI people to persuade K-type people to join them.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-10-02T20:09:01.015Z · LW(p) · GW(p)

Ah yes, the standard argument against consequentialism: X has expected positive consequences, so consequentialism says we should do it. But clearly, if people do X, the world will be a worse place. That's why practicing consequentialism makes the world a worse place, and people shouldn't be consequentialists.

Personally I'll stick with consequentialism and say a few extra words in favor of maintaining the consistency of your abstract arguments and your real expectations.

Replies from: whpearson
comment by whpearson · 2010-10-02T20:17:38.431Z · LW(p) · GW(p)

It might be better for the world if people were consequentialists. I might be better off if I did more structured exercise. That doesn't mean I am going to like either of them.

comment by Will_Newsome · 2010-10-02T19:24:06.327Z · LW(p) · GW(p)

Uhm, it's seriously egregious and needlessly harmful to suggest that SIAI supporters should maybe be engaging in terrorism. Seriously. I agree with Yvain. The example is poor and meant to be inflammatory, not to facilitate reasonable debate about what you think utilitarianism means.

Would you please rewrite it with a different example so this doesn't just dissolve into a meaningless debate about x-risk and x-rationality where half of your audience is already offended at what they believe to be a bad example and a flawed understanding of utilitarianism?

Replies from: Relsqui, whpearson
comment by Relsqui · 2010-10-03T08:33:05.147Z · LW(p) · GW(p)

A lot of the comments on this post were really confusing until I got to this one.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-03T08:36:41.542Z · LW(p) · GW(p)

I should make it explicit that the original post didn't advocate terrorism in any way but was a hypothetical reductio ad absurdum against utilitarianism that was obviously meant for philosophical consideration only.

Replies from: whpearson, XiXiDu
comment by whpearson · 2010-10-03T14:24:35.063Z · LW(p) · GW(p)

It was nothing as simple as a philosophical argument against anything.

It is a line of reasoning working from premises that seem to be widely held, which I am unsure how to integrate into my worldview in a way that I (or most people?) would be comfortable with.

comment by XiXiDu · 2010-10-03T10:07:33.023Z · LW(p) · GW(p)

I don't believe that you are honest in what you write here. If you would really vote against bombing Skynet before it tiles the universe with paperclips, then I don't think you actually believe most of what is written on LW.

Terrorism is just a word used to discredit acts deemed bad by those who oppose them.

If I was really sure that Al Qaeda was going to release some superbug bioweapon stored in a school, and there was no way to stop them from doing so and killing millions, then I would advocate using incendiary bombs on the school to destroy the weapons. I accept the position that even killing one person can't be a means to the end of saving the whole world, but I don't see how that fits with what is believed in this community. See Torture vs. Dust Specks (the obvious answer is TORTURE, Robin Hanson).

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity. -- Eliezer Yudkowsky

Replies from: Kevin
comment by Kevin · 2010-10-03T10:15:59.992Z · LW(p) · GW(p)

You missed the point. He said it was bad to talk about, not that he agreed or disagreed with any particular statement.

Replies from: XiXiDu
comment by XiXiDu · 2010-10-03T10:24:12.932Z · LW(p) · GW(p)

Hush, hush! Of course I know it is bad to talk about it in this way. Same with what Roko wrote. The number of things we shouldn't talk about, even though they are completely rational, seems to be rising. I just don't have the list of forbidden topics at hand right now.

I don't think this is a solution. You'd better come up with some story for why you people don't think killing to prevent Skynet is justified, because the idea of AI going FOOM is quickly going mainstream, and people will draw this conclusion and act upon it. Or you stand by what you believe and try to explain why it wouldn't be terrorism but a far-sighted act to slow down AI research, or at least to watch over it and take out any dangerous research until FAI is guaranteed.

comment by whpearson · 2010-10-02T20:01:06.130Z · LW(p) · GW(p)

Done. The numbers don't really make sense in this version though....

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-02T21:01:30.990Z · LW(p) · GW(p)

Thanks. The slightly less sensible numbers might deaden the point of your argument a little bit, but I think the quality of discussion will be higher.

Replies from: whpearson
comment by whpearson · 2010-10-02T21:34:35.005Z · LW(p) · GW(p)

Somehow I doubt there will be much discussion, high quality or low :) It seems like it has gone below the threshold to be seen in the discussion section. It is at -3, in case you are wondering.

comment by PeerInfinity · 2010-10-14T20:12:22.202Z · LW(p) · GW(p)

This confluence of ideas made me realise that we are making life-and-death decisions every time we spend $1000. I'm not sure where I will go from here.

Here's a blog post I found recently that discusses that idea further.

comment by Relsqui · 2010-10-03T08:39:40.613Z · LW(p) · GW(p)

I'm surprised no one has linked to this yet. It's not a perfect match, but I think that "if killing innocent people seems like the right thing to do, you've probably made a mistake" is close enough to be relevant.

Maybe less so before the post was edited, I guess.

Replies from: NancyLebovitz, whpearson, XiXiDu
comment by NancyLebovitz · 2010-10-04T14:39:41.817Z · LW(p) · GW(p)

It would seem so, but is taking war into enemy territory that reliably a mistake?

comment by whpearson · 2010-10-03T12:28:53.547Z · LW(p) · GW(p)

I meant to link to that or something similar. In both situations I am killing someone. By not donating to a GiveWell charity, some innocent in Africa dies (while more innocents live elsewhere). So I am already in mistake territory, even before I start thinking about terrorism.

I don't like being in mistake territory, so my brain is liable to want to shut off from thinking about it, or inure my heart to the decision.

Replies from: JGWeissman
comment by JGWeissman · 2010-10-03T18:57:53.097Z · LW(p) · GW(p)

The distinction between taking an action resulting in someone dying when counterfactually they would not have died if you took some other action, and when counterfactually they would not have died if you didn't exist, while not important to pure consequentialist reasoning, has bearing on when a human attempting consequentialist reasoning should be wary of the fact that they are running on hostile hardware.

Replies from: whpearson
comment by whpearson · 2010-10-04T10:23:21.647Z · LW(p) · GW(p)

You can slightly change the scenarios so that people counterfactually wouldn't have died if you didn't exist, which doesn't seem much different morally. For example, X is going to donate to GiveWell and save Z's life. Should you (Y) convince X to donate to an anti-tobacco campaign which will save more lives? Is this morally the same as (risk-free, escalation-less) terrorism, or the same as being X?

Anyway, I have the feeling people are getting bored of me on this subject, myself included. Simply chalk this up to someone not compartmentalizing correctly. Although I think that if I need to keep consequentialist reasoning compartmentalised, I am likely to find all consequentialist reasoning more suspect.

comment by XiXiDu · 2010-10-03T09:42:08.935Z · LW(p) · GW(p)

I think that "if killing innocent people seems like the right thing to do, you've probably made a mistake".

I don't think so. And I don't get why you wouldn't bomb Skynet if you could save the human race by doing so. Sure, you can call it a personal choice that has nothing to do with rationality. But in the face of posts like this, I don't see why nobody here is suggesting taking active measures against uFAI. I can only conclude that you either don't follow your beliefs through or don't discuss it because it could be perceived as terrorism.

comment by Will_Newsome · 2010-10-02T19:30:55.779Z · LW(p) · GW(p)

(Insert large amount of regret about not writing "Taking Ideas Seriously" better.)

Anyway, it's worth quoting Richard Chappell's comment on my post about virtue ethics-style consequentialism:

It's worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It's certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.

Replies from: whpearson, sfb
comment by whpearson · 2010-10-02T20:03:53.937Z · LW(p) · GW(p)

But SIAI have stressed making utility calculations in everyday life... especially about charity.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-02T21:14:00.463Z · LW(p) · GW(p)

Hm, I wouldn't consider that 'in everyday life'. It seems like an expected utility calculation you do once every few months or years, when you're deciding where you should be giving charity. You would spend that time doing proto-consequentialist calculations anyway, even if you weren't explicitly calculating expected utility. Wanting to get the most warm fuzzies or status per dollar is typical altruistic behavior.

The difference in Eliezer's exhortations is that he's asking you to introspect more and think about whether or not you really want warm fuzzies or actual utilons, after you find out that significant utilons really are at stake. Whether or not you believe those utilons really are at stake at a certain probability becomes a question of fact, not a strain on your moral intuitions.

Replies from: whpearson
comment by whpearson · 2010-10-02T21:57:42.677Z · LW(p) · GW(p)

I had a broader meaning of "everyday life" in mind: things anyone might do.

Even taking a literal view of the sentence, burning down fields isn't an everyday kind of thing.

With that comment, I was actually thinking of Anna Salamon and her back-of-the-envelope calculations about how worthwhile it is to donate to SIAI. I believe she mentions donating to GiveWell as a baseline to compare it with. Saving a human life is a fairly significant number of utilons in itself. So it was asking me to weigh saving a human life against donating to SIAI. So the symmetric question came to mind. Hence this post.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-10-02T22:00:47.838Z · LW(p) · GW(p)

So it was asking me to weigh saving a human life against donating to SIAI.

You phrase this as a weird dichotomy. It's more like asking you to weigh saving a life versus saving a lot of lives. Whether or not a lot of lives are actually at stake is an epistemic question, not a moral one.

comment by sfb · 2010-10-04T20:23:38.936Z · LW(p) · GW(p)

(Insert large amount of regret about not writing "Taking Ideas Seriously" better.)

Insert a reminder pointing to your meditation post and your realisation that post hoc beating yourself up about things doesn't benefit you enough to make it worth doing.

comment by JoshuaZ · 2010-10-03T02:34:58.191Z · LW(p) · GW(p)

In general, the primary problem with such behavior is that if lots of people do this sort of thing, society falls apart. Thus, there's a bit of a prisoner's dilemma here. So any logic favoring "cooperate" more or less applies here.

Note also that many people would probably see this as wrong simply because humans have a tendency to see a major distinction between action and inaction. Action that results in bad things is seen as much worse than inaction that results in bad things. Thus, the death of the guard seems "bad" to most people. This is essentially the same issue that shows up in how people answer the trolley problem.

So, let's change the question: If there's no substantial chance of killing the guard, should one do it?

Replies from: XiXiDu
comment by XiXiDu · 2010-10-03T09:45:06.983Z · LW(p) · GW(p)

If there's no substantial chance of killing the guard, should one do it?

Is the guard working, maybe unknowingly, in the building where Skynet is just awakening? Torture vs. Dust Specks?

comment by cabalamat · 2010-10-03T01:08:03.972Z · LW(p) · GW(p)

A topical real-life example of this is the DDoS attacks that Anonymous are making against various companies that pursue/sue people for alleged illegal file sharing.

I make no comment on the morality of this, but it seems to be effective in practice, at least some of the time; for example, it may lead to the demise of the law firm ACS:law.