The Least Convenient Possible World

post by Scott Alexander (Yvain) · 2009-03-14T02:11:15.177Z · LW · GW · Legacy · 203 comments

Related to: Is That Your True Rejection?

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

   -- Black Belt Bayesian, via Rationality Quotes 13

Yesterday John Maxwell's post wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of the Trolley Problems:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

It wouldn't be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and lost a valuable opportunity to examine the nature of morality.

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:

 

1:  Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things most atheists think of is this:

Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain. Or perhaps, as the Discordians claim, "Hell is reserved for people who believe in it, and the hottest levels of Hell are reserved for people who believe in it on the principle that they'll go there if they don't."

This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him from spiritual fulfillment.

Some commenters on Raising the Sanity Waterline don't deny the existence of such a hole, if it is interpreted as a desire for purpose or connection to something greater than one's self. But, some commenters say, science and rationality can fill this hole even better than God can.

What luck! Evolution has by a wild coincidence created us with a big rationality-shaped hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the best possible way! I don't know - despite my sarcasm this may even be true. But in the least convenient possible world, Omega comes along and tells you that sorry, the hole is exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy life. Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?

3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating almost everything to charity.

Some commenters object that many forms of charity, especially the classic "give to starving African orphans," are counterproductive, either because they enable dictators or thwart the free market. This is quite true.

But in the least convenient possible world, here comes Omega again and tells you that Charity X has been proven to do exactly what it claims: help the poor without any counterproductive effects. So is your real objection the corruption, or do you just not believe that you're morally obligated to give everything you own to starving Africans?

 

You may argue that this citing of convenient facts is at worst a venial sin. If you still get to the correct answer, and you do it by a correct method, what does it matter if this method isn't really the one that's convinced you personally?

One easy answer is that it saves you from embarrassment later. If some scientist does a study and finds that people really do have a god-shaped hole that can't be filled by anything else, no one can come up to you and say "Hey, didn't you say the reason you didn't convert to religion was because rationality filled the god-shaped hole better than God did? Well, I have some bad news for you..."

Another easy answer is that your real answer teaches you something about yourself. My friend may have successfully avoided making a distasteful moral judgment, but he didn't learn anything about morality. My refusal to take the easy way out on the transplant question helped me develop the form of precedent-utilitarianism I use today.

But more than either of these, it matters because it seriously influences where you go next.

Say "I accept the argument that I need to donate almost all my money to poor African countries, but my only objection is that corrupt warlords might get it instead", and the obvious next step is to see if there's a poor African country without corrupt warlords (see: Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable answer would be to donate to another warlord-free charitable cause like the Singularity Institute.

If you just say "Nope, corrupt dictators might get it," you may go off and spend the money on a new TV. Which is fine, if a new TV is what you really want. But if you're the sort of person who would have been convinced by John Maxwell's argument, but you dismissed it by saying "Nope, corrupt dictators," then you've lost an opportunity to change your mind.

So I recommend: limit yourself to responses of the form "I completely reject the entire basis of your argument" or "I accept the basis of your argument, but it doesn't apply to the real world because of contingent fact X." If you just say "Yeah, well, contingent fact X!" and walk away, you've left yourself too much wiggle room.

In other words: always have a plan for what you would do in the least convenient possible world.

Comments sorted by top scores.

comment by davidamann · 2009-03-14T03:53:55.323Z · LW(p) · GW(p)

I think a better way to frame this issue would be the following method.

  1. Present your philosophical thought-experiment.
  2. Ask your subject for their response and their justification.
  3. Ask your subject, what would need to change for them to change their belief?

For example, if I respond to your question of the solitary traveler with "You shouldn't do it because of biological concerns," accept the answer and then ask: what would need to change in this situation for you to accept the killing of the traveler as moral?

I remember this method giving me deeper insight into the Happiness Box experiment.

Here is how the process works:

  1. There is a happiness box. Once you enter it, you will be completely happy through living in a virtual world. You will never leave the box. Would you enter it?
  2. Initial response. Yes, I would enter the box. Since my world is only made up of my perceptions of reality, there is no difference between the happiness box and the real world. Since I will be happier in the happiness box, I would enter.
  3. Reframing question. What would need to change so that you would not enter the box?
  4. My response: Well, if I had children or people depending on me, I could not enter.

Surprising conclusion! Aha! Then you do believe that there is a difference between a happiness box and the real world, namely your acceptance of the existence of other minds and the obligations those minds place on you.

That distinction was important to me, not only intellectually but in how I approached my life.

Hope this contributes to the conversation.

David

Replies from: pwno, Vladimir_Nesov, abramdemski, Rings_of_Saturn, thrawnca
comment by pwno · 2009-03-14T21:07:25.237Z · LW(p) · GW(p)

I find a similar strategy useful when I am trying to argue my point to a stubborn friend. I ask them, "What would I have to prove in order for you to change your mind?" If they answer "nothing" you know they are probably not truth-seekers.

comment by Vladimir_Nesov · 2009-03-14T05:56:07.734Z · LW(p) · GW(p)

Namely, the point of reversal of your moral decision is that it helps to identify what this particular moral position is really about. There are many factors to every decision, so it might help to try varying each of them, and finding other conditions that compensate for the variation.

For example, you wouldn't enter the happiness box if you suspected that the information about it giving true happiness is flawed, that it's some kind of lie or misunderstanding (on anyone's part), of which the situation of leaving your family on the outside is a special case. And here is a new piece of information: would you like your copy to enter the happiness box if you left behind your original self? Would you like a new child to be born within the happiness box? And so on.

comment by abramdemski · 2012-09-02T22:46:32.871Z · LW(p) · GW(p)

This seems to nicely fix something which I felt was wrong in the "least convenient possible world" heuristic. The LCPW only serves to make us consider a possibility seriously. It may be too easy to come up with a LCPW. Asking what would change your mind helps us examine the decision boundary.

comment by Rings_of_Saturn · 2009-03-14T19:16:28.672Z · LW(p) · GW(p)

Great, David! I love it.

comment by thrawnca · 2016-03-22T02:29:30.888Z · LW(p) · GW(p)

The happiness box is an interesting speculation, but it involves an assumption that, in my view, undermines it: "you will be completely happy."

This is assuming that happiness has a maximum, and the best you can do is top up to that maximum. If that were true, then the happiness box might indeed be the peak of existence. But is it true?

Replies from: CynicalOptimist
comment by CynicalOptimist · 2016-04-24T12:45:50.429Z · LW(p) · GW(p)

Okay, well let's apply exactly the technique discussed above:

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and you will certainly be maximally happy inside the box: do you step into the box then?

Note: I'm asking that in order to give another example of the technique in action. But still feel free to give a real answer if you'd choose to.

Since you didn't answer the question one way or another, I can't apply the second technique here. I can't ask what would have to change in order for you to change your answer.

Replies from: Jiro, thrawnca
comment by Jiro · 2016-04-26T14:42:16.535Z · LW(p) · GW(p)

What if we ignore the VR question? Omega tells you that killing and eating your children will make you maximally happy. Should you do it?

Omega can't tell you that doing X makes you maximally happy unless doing X actually makes you maximally happy. And a scenario where doing X actually makes you maximally happy may be a scenario where you are no longer human and don't have human preferences.

Omega could, of course, also say "you are mistaken when you conclude that being maximally happy in this scenario is not a human preference". However,

  1. This conclusion that that is not a human preference is being made by you, the reader, not just by the person in the scenario. It is not possible to stipulate that you, the reader, are wrong about your analysis of some scenario.
  2. Even within the scenario, if someone is mistaken about something like this, it's a scenario where he can't trust his own reasoning abilities, so there's really nothing he can conclude about anything at all. (What if Omega tells you that you don't understand logic and that every use of logic you think you have done was either wrong or true only by coincidence?)
comment by thrawnca · 2016-07-27T02:08:55.247Z · LW(p) · GW(p)

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and you will certainly be maximally happy inside the box: do you step into the box then?

This would depend on my level of trust in Omega (why would I believe it? Because Omega said so. Why believe Omega? That depends on how much Omega has demonstrated near-omniscience and honesty). And in the absence of Omega telling me so, I'm rather skeptical of the idea.

Replies from: TheOtherDave
comment by TheOtherDave · 2016-07-27T16:58:51.165Z · LW(p) · GW(p)

For my part, it's difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.

comment by MBlume · 2009-03-14T02:21:30.300Z · LW(p) · GW(p)

I'm not sure if I'm evading the spirit of the post, but it seems to me that the answer to the opening problem is this:

If you were willing to kill this man to save these ten others, then you should long ago have simply had all ten patients agree to a 1/10 game of Russian Roulette, with the proviso that the nine winners get the organs of the one loser.

Replies from: Yvain, SaidAchmiz, bruno-mailly, Marshall
comment by Scott Alexander (Yvain) · 2009-03-14T02:28:53.789Z · LW(p) · GW(p)

While emphasizing that I don't want this post to turn into a discussion of trolley problems, I endorse that solution.

Replies from: abramdemski
comment by abramdemski · 2012-09-02T22:35:23.578Z · LW(p) · GW(p)

In the least convenient possible world, only the random traveler has a blood type compatible with all ten patients.

Replies from: CynicalOptimist, DanielLC, Rixie
comment by CynicalOptimist · 2016-04-24T12:56:48.212Z · LW(p) · GW(p)

This is fair, because you're using the technique to redirect us back to the original morality issue.

But I also don't think that MBlume was completely evading the question. The question was about ethical principles, and his response does represent an exploration of ethical principles. MBlume suggests that it's more ethical to sacrifice one of the lives that was already in danger than to sacrifice an uninvolved stranger. (Remember, from a strict utilitarian view, both solutions leave one person dead, so this is definitely a different moral principle.)

This technique is good for stopping people from evading the question. But some evasions are more appropriate than others.

Replies from: abramdemski
comment by abramdemski · 2016-04-25T00:21:49.157Z · LW(p) · GW(p)

Agreed.

comment by DanielLC · 2014-11-13T05:42:13.745Z · LW(p) · GW(p)

I'd go with that he's the only one who has organs healthy enough to ensure the recipients survive.

comment by Rixie · 2012-11-14T02:21:15.621Z · LW(p) · GW(p)

MBlume knows this, he's just telling us what he was thinking.

comment by Said Achmiz (SaidAchmiz) · 2013-04-16T02:19:44.814Z · LW(p) · GW(p)

What if one or more of the patients don't agree to do this?

Replies from: DanielLC
comment by DanielLC · 2014-11-13T05:43:13.925Z · LW(p) · GW(p)

Then you let him die, and repeat the question with a 1/9 chance of death.

comment by Bruno Mailly (bruno-mailly) · 2018-07-23T19:59:46.571Z · LW(p) · GW(p)

To me the logical answer is that it depends on how much value is attributed to "a" life vs. respect for individual freedom/integrity.

It is fairly reasonable: do no evil, do not instrumentalize people, especially if they are not involved; this is a very slippery slope.

But it is unworkable to enter such a game of value accounting: whose value system should be used? Apple-and-orange values?

My practical answer meets yours: if one is ready to kill the stranger, one should have anticipated this and done something along those lines long ago, like killing a criminal or a comatose patient.

comment by Marshall · 2009-03-14T07:15:41.862Z · LW(p) · GW(p)

The technical creativity of this solution reveals the limits of rationality. This is a solution only in a world of dice. But in a world of minds and psyches there are problems. The nine survivors have killed a man so that they themselves can live. The "dice-argument" that it was voluntary and everyone had an equal chance of dying or surviving is irrelevant. The survivors pulled the trigger on the victim in order that they could survive. That is their legacy, that is their guilt, and only a "self-deceiving rationalist" would be able to suppress this guilt by rejoicing in the numbers.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-03-14T07:32:52.925Z · LW(p) · GW(p)

Throwing a die is a way of avoiding bias in choosing a person to kill. If you choose a person to kill personally, you run the risk of doing it in an unfair fashion, and thus being guilty of making an unfair choice. People value fairness. Using dice frees you of this responsibility, unless there is a predictably better option. You are alleviating additional technical moral issues involved in killing a person. This issue is separate from deciding whether to kill a person at all, although the reduction in moral cost of killing a person achieved by using the fair roulette technology may figure in the original decision.

Replies from: Tasky
comment by Tasky · 2011-09-23T22:56:47.857Z · LW(p) · GW(p)

But as a doctor, you will probably have to choose non-randomly if you want to stand by your utilitarian viewpoint, since killing different people might have different probabilities of success. Assuming the least convenient possible world, you can't make your own life easier by assuming each one's sacrifice is as likely to go well. So in the end you will have to conclude that one patient's sacrifice will be the "best", and will have to decide whether to kill them, thus reverting to the original problem.

comment by bentarm · 2009-03-14T21:24:19.075Z · LW(p) · GW(p)

There are real life examples where reality has turned out to be the "least convenient of possible worlds". I have spent many hours arguing with people who insist that there are no significant gender differences (beyond the obvious), and are convinced that to assert otherwise is morally reprehensible.

They have spent so long arguing that such differences do not exist, and that this is the reason sexism is wrong, that their morality just can't cope with a world in which this turns out not to be true. There are many similar politically charged issues - Pinker discusses quite a few in The Blank Slate - where people aren't willing to listen to arguments about factual issues because they believe they have moral consequences.

The problem, of course - and I realise this is the main point of this post - is that if your morality is contingent on empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if these differences do turn out to exist then you'll say sexism is ok.

This is probably a test you should apply to all of your moral beliefs: if it just so happens that I'm wrong about the factual issue on which I'm basing my belief, will I really be willing to change my mind?

Replies from: Pr0methean, Rixie
comment by Pr0methean · 2013-04-19T08:20:04.879Z · LW(p) · GW(p)

That raises an interesting question: is it possible to base a moral code only on what's true in all possible worlds that contain me?

Replies from: Richard_Kennaway, Jackercrack
comment by Richard_Kennaway · 2013-04-19T12:13:18.618Z · LW(p) · GW(p)

To do that would require that "all possible worlds that contain me" be a coherent concept. What does it mean, to identify as "me" some agent in a world very different from our own?

comment by Jackercrack · 2014-10-17T00:05:40.117Z · LW(p) · GW(p)

I think that it is not. All possible worlds include worlds where every Tuesday the first person you meet in a crowded place just happens to attack you. That would lead to a personal moral code of stabbing the first person you meet on Tuesday.

I think we can only have a moral code that works on most worlds at best.

Replies from: DanielLC, None
comment by DanielLC · 2014-11-13T05:46:45.966Z · LW(p) · GW(p)

You could have a personal moral code of stabbing anyone who you're 90% certain would otherwise attack you. In a universe where the first person you meet on Tuesday always tries to kill you, you would quickly start stabbing them first. In other worlds, you would not.

comment by [deleted] · 2014-10-17T04:45:11.428Z · LW(p) · GW(p)

I think we can only have a moral code that works on most worlds at best

That doesn't follow from your logic. There could be multiple functions of maximal expected utility. Or more fundamentally, how you sum over possible worlds reflects your prior anthropic biases (which worlds you think are most likely), which is sadly a completely arbitrary choice.

Replies from: Jackercrack
comment by Jackercrack · 2014-10-17T09:55:07.797Z · LW(p) · GW(p)

I took "all possible worlds that contain me" to mean all worlds where history went the same until my birth. Any world where significant things went differently would have led a different sperm to create a different person than him. That is, they should be reasonably similar, but can still include diverse outcomes: for example, a nuclear war that leaves Pr0methean living in post-apocalyptic fallout, or a USSR-US alliance leading to a fascist authoritarian government in your country.

I did in fact assume that worlds more similar to our current one would make up the majority [or at least the plurality] in that case. Was I wrong to assume that?

Edit: thinking about it now, the plurality was post-hoc rationalisation, so ignore it. On a side note, how do I do strikethrough text?

Replies from: None
comment by [deleted] · 2014-10-17T20:09:39.967Z · LW(p) · GW(p)

Retract -- circle with a line through it.

Replies from: Jackercrack
comment by Jackercrack · 2014-10-17T20:42:21.487Z · LW(p) · GW(p)

What do you mean by circle with a line through it? Is that some sort of code for what buttons to press?

Replies from: Nornagest
comment by Nornagest · 2014-10-17T20:45:37.057Z · LW(p) · GW(p)

There should be a button with that appearance in the lower right-hand corner of your comments, which brings up a tooltip labeled "retract" when you mouse over it. Using it will strikethrough the entire text of your post, which 'round these parts is shorthand for "I, the author, no longer endorse this comment". Using it for a second time will delete your post, unless there are responses to it.

There isn't any way to strikethrough portions of a post with LW's markup. Or at least I wasn't able to find one the last time I looked into this. The usual Markdown syntax is disabled here, probably to reserve the look for the retract option.

Replies from: wedrifid, Jackercrack
comment by wedrifid · 2014-11-13T08:01:53.413Z · LW(p) · GW(p)

The usual Markdown syntax is disabled here, probably to reserve the look for the retract option.

The causality is unlikely. There was never strikethrough syntax here and the retract option was not conceived until years after the creation of the forum (and syntax choices).

comment by Jackercrack · 2014-10-17T20:54:28.336Z · LW(p) · GW(p)

Ah, thank you. I hadn't noticed that.

comment by Rixie · 2012-11-14T02:31:19.152Z · LW(p) · GW(p)

I don't think that what I'm about to say actually applies, but you know what I find most annoying in the world?

Well, I'll give an example: We were in gym class and the teacher was explaining discus and he told us that the boys' discuses weighed 1 kilogram and the girls' discuses weighed 750 grams. And then this one girl in my class goes, "Why is the girls' discus lighter?" And she knows what the teacher is going to say so she goes, "Don't say it, it's sexist."

IT'S NOT SEXIST!!!

In reality, boys ARE stronger than girls! And admitting to that is not being sexist, it's being truthful! Being sexist would be saying that ALL girls are NOT ALLOWED to throw 1 kilogram discuses because they'll damage their delicate bodies.

NEVER be afraid to say that something's true JUST because it's SOMETHING-IST! It's NOT unless it's DISCRIMINATORY!!!

Sorry for all the yelling, I'm very passionate about this, and thanks for listening to my rant . . .

Replies from: Strange7, CCC
comment by Strange7 · 2012-11-14T03:32:42.084Z · LW(p) · GW(p)

If you want to emphasize something without resorting to capslock, put asterisks on either side. The "show help" button (on the right when you're about to post) explains all the options.

Replies from: Rixie
comment by Rixie · 2013-01-25T04:16:08.573Z · LW(p) · GW(p)

Why thank you!

Replies from: Rixie
comment by Rixie · 2013-01-30T02:38:22.837Z · LW(p) · GW(p)

SIGH!!!

Nooooooo, we do not vote up comments like this. We vote up comments that make good points on the subject we are all trying to learn more about.

Replies from: Dorikka
comment by Dorikka · 2013-01-30T03:16:22.297Z · LW(p) · GW(p)

Um, a couple notes here:

  1. People sometimes find comments using ALL CAPS for emphasis unpleasant to read. I think that you may be annoying people with this behavior -- I wanted to let you know in case this wasn't your intention.

  2. I think that upvoting simple "Thanks" comments is sometimes beneficial -- courtesy can be helpful in maintaining useful dialogue between people who have very different beliefs prior to the discussion.

ETA: Yes, I will downvote such comments if they get too much karma -- my note above mostly extends to a point or two.

comment by CCC · 2013-01-30T06:52:03.886Z · LW(p) · GW(p)

I think it's worthwhile, at this point, to define exactly what one means by 'sexist'. According to Wiktionary, it means (as a noun):

A person who discriminates on grounds of sex; someone who practises sexism.

And as an adjective:

Unfairly discriminatory against one sex in favour of the other.

Now, looking at 'discriminate', we have two definitions:

  1. (intransitive) To make distinctions.

And:

  1. (intransitive, construed with against) To make decisions based on prejudice.

This definition leaves it a little unclear as to whether describing any differences between genders (including the obvious ones) is sexist, or whether only describing discriminatory differences between genders is sexist. Personally, I had always assumed that any difference counted as sexist, but only discriminatory differences counted as bad things to say; that is, that saying that boys are on average stronger than girls is both true and sexist.

From your post, I think that you are using a different definition of 'sexist', where only the discriminatory uses are considered valid examples.

comment by bill · 2009-03-15T18:46:10.546Z · LW(p) · GW(p)

One way to train this: in my number theory class, there was a type of problem called a PODASIP. This stood for Prove Or Disprove And Salvage If Possible. The instructor would give us a theorem to prove, without telling us if it was true or false. If it was true, we were to prove it. If it was false, then we had to disprove it and then come up with the "most general" theorem similar to it (e.g. prove it for Zp after coming up with a counterexample in Zm).

This trained us to be on the lookout for problems with the theorem, but then seeing the "least convenient possible world" in which it was true.
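The Zp/Zm salvage can be made concrete with a small sketch (a hypothetical example, not from the original course): take the candidate theorem "every nonzero element of Z_n has a multiplicative inverse," find a counterexample for composite n, then salvage it by restricting to prime modulus.

```python
from math import gcd

def is_invertible(x, n):
    # x has a multiplicative inverse mod n exactly when gcd(x, n) == 1
    return gcd(x, n) == 1

# Candidate theorem: "every nonzero element of Z_n is invertible."
# Disprove: in Z_6, the element 2 has no inverse, since gcd(2, 6) = 2.
print(is_invertible(2, 6))   # False

# Salvage: restrict to a prime modulus. In Z_7 every nonzero element
# is coprime to 7, so the theorem holds for Z_p.
print(all(is_invertible(x, 7) for x in range(1, 7)))   # True
```

The counterexample forces you to locate exactly which hypothesis failed (compositeness of the modulus), which is the same move as asking for the least convenient possible world in which the original claim survives.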

comment by Nebu · 2009-03-16T21:37:15.262Z · LW(p) · GW(p)

I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm totally missing your point with my responses to your 3 scenarios, I assure you that I am well aware that my responses are of the "dodging the question" type which you are advocating against. I simply cannot resist exploring these 3 scenarios on their own.

Pascal's Wager

In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.

For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

So then I'd be stuck trying to decide whether God doesn't exist, or logic is incorrect (i.e. reality can be logically self inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb: I want the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would happen here.

But let's say Omega additionally tells me that Catholicism is actually self-consistent, and I just misunderstood something about it, before flying away. In that case, I guess I'd start to study Catholicism. If my revised view of Catholicism has me believe that it involves some rather cruel stuff (stoning people for minor offenses, etc.), then I'd have to weigh that against my desire to not suffer eternal torture.

I mean, eternal torture is pretty frickin' bad. I think in the end, I'd convert. And I'd also try to convert as many other people as possible, because I suspect I'd need to be cruel to fewer people if fewer people went against Christianity.

The God-Shaped Hole

To clarify your scenario, I'm guessing Omega explicitly tells me that I will be happier if I believe something untrue (i.e. God). I would probably reject God in this case, as Omega is implicitly confirming that God does not exist, and I do care about truth more than happiness. I've already experienced this in other ways, so this is a much easier scenario for me to imagine.

Extreme Altruism

I don't think I can overcome this challenge. No matter how much I think about it, I find myself putting up semantic stop signs. In my "least convenient world", Omega tells me that Africa is so poverty stricken, and that my contribution would be so helpful, that I would be improving the lives of billions of people, in exchange for giving up all my wealth. While I might not donate all my money to save 10, I think I value billions of lives more than my own life. Do I value it more than my own happiness? This is an extremely painful question for me to think about, so I stop thinking about it.

"Okay", I say to Omega, "what if I only donate X percent of my money, and keep the rest for myself?" In one possible "least convenient world", Omega tells me that the charity is run by some nutcase who, for whatever reason, will only accept an all-or-nothing deal. Well, when I phrase it like that, I feel like not donating anything, and blaming it on the nutcase. So suppose instead Omega tells me "There's some economy-of-scale principle which is too complicated for me to explain to you, which basically means that your contribution will be wasted unless you contribute at least Y dollars, which coincidentally just happens to be your total net worth." Again, I'm torn and find it difficult to come to a conclusion.

Alternatively, I say to Omega "I'll just donate X percent of my money." Omega tells me "that's good, but it's not optimal." And I reply "Okay, but I don't have to do the optimal thing," but then Omega somehow convinces me that actually, yes, I really should be doing the optimal thing. Perhaps something along the lines of how my current "ignore Africa altogether" behaviour is better than the behaviour of going to Africa and killing, torturing, and raping everyone there. That doesn't mean that the "ignore Africa" strategy is moral.

Replies from: matteyas, jknapka
comment by matteyas · 2017-07-18T11:02:05.207Z · LW(p) · GW(p)

For Pascal Wager's specific scenario, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.

The point is that in the least convenient world for you, Omega would say whatever it is that you would need to hear to not slip away. I don't know what that is. Nobody but you does. If it is about eternal damnation for you, then you've hopefully found your holy grail, and as some other poster pointed out, why this is the holy grail for you can be quite interesting to dig into as well.

The point raised, as I see it, is just to make your stance on Pascal's wager contend against the strongest possible ideas.

Replies from: Jiro
comment by Jiro · 2017-07-24T18:25:47.632Z · LW(p) · GW(p)

The point is that in the least convenient world for you, Omega would say whatever it is that you would need to hear to not slip away.

The least convenient world is one where Omega answers his objections. The least convenient possible world is one where Omega answers his objections in a way that's actually possible. And it may not be possible for Omega to answer some objections.

comment by jknapka · 2011-12-02T17:36:58.206Z · LW(p) · GW(p)

I mean, eternal torture is pretty frickin' bad. I think in the end, I'd convert. And I'd also try to convert as many other people as possible, because I suspect I'd need to be cruel to fewer people if fewer people went against Christianity.

This is a very good point, and I believe I'll point it out to my rather fundamentalist sibling when next we talk about this: if I really, truly believed that every non-Christian was doomed to eternal damnation, you can bet I'd be an evangelist!

Extreme Altruism

While I might not donate all my money to save 10, I think I value billions of lives more than my own life. Do I value it more than my own happiness? This is an extremely painful question for me to think about, so I stop thinking about it.

I definitely don't value those billions of lives more than my own happiness, or more than the happiness of those I know and love. However, I would seriously consider giving all of my wealth if Omega assured me that me and mine would be able to continue to be reasonably happy after doing so, even if it meant severe lifestyle changes.

Replies from: DanielLC
comment by DanielLC · 2014-11-13T06:15:20.178Z · LW(p) · GW(p)

If I really, truly believed that every non-Christian was doomed to eternal damnation, I'd donate to a charity that distributes condoms to people in Africa. The key here is to minimize the number of non-Christians, not to make more people Christian.

comment by Vladimir_Nesov · 2009-03-14T07:23:16.300Z · LW(p) · GW(p)

Let's try something different.

  • Puts on the reviewer's hat.

Yvain's post presented a new method for dealing with the stopsign problem in reasoning about questions of morality. The stopsign problem consists in accepting an invalid excuse to avoid thinking about the issue at hand, instead of doing something constructive about resolving the issue.

The method presented by Yvain consists in putting in place the universal countermeasure against the stopsign excuses: whenever a stopsign comes up, you move the discussed moral issue to a different, hypothetical setting, where the stopsign no longer applies. The only valid excuse in this setting is that you shouldn't do something, which also resolves the moral question.

However, moral questions should be concerned with reality, not with fantasy. Whenever a hypothetical setting is brought into the discussion of morality, it should be understood as a theoretical device for reasoning about the underlying moral judgment applicable to the real world. There is a danger of fallaciously generalizing the moral conclusion from fictional evidence, both because there might be factors in the fictional setting that change your decision and which you haven't properly accounted for in the conclusion, and because a decision extracted from the fictional setting is drawn in far mode, running the risk of being too removed from the real world to properly reflect people's preferences.

Replies from: Marshall
comment by Marshall · 2009-03-14T09:28:52.070Z · LW(p) · GW(p)

I do agree. I think in many ways reality already is "the least convenient possible world", and the clear-sightedness of thought experiments doesn't match the muddiness of the world.

comment by freyley · 2009-04-08T17:40:58.355Z · LW(p) · GW(p)

One difficulty with the least convenient possible world is where that least convenience is a significant change in the makeup of the human brain. For example, I don't trust myself to make a decision about killing a traveler with sufficient moral abstraction from the day-to-day concerns of being a human. I don't trust what I would become if I did kill a human. Or, if that's insufficient, fill in a lack of trust in my decision-making in general for the moment. (Another example would be the ability to trust Omega in his responses.)

Once that's a significant issue in the subject, the least convenient possible world you're asking me to imagine doesn't include me -- it includes some variant of me whose reactions I can predict, but not really access. Porting them back to me is also nontrivial.

It is an interesting thought experiment, though.

comment by CronoDAS · 2009-03-14T02:45:20.246Z · LW(p) · GW(p)

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

Obviously, you wait for one of the sick patients to die, and use that person's organs to save the others, letting the healthy traveler go on his way. ;)

But that isn't the least convenient possible world - the least convenient one is actually the one in which the traveler is compatible with all the sick people, but the sick people are not compatible with each other.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-03-14T06:50:02.236Z · LW(p) · GW(p)

Actually, you don't even need to add that additional complexity to make the world sufficiently inconvenient.

If the rest of the patients are sufficiently sick, their organs may not really be suitable for use as transplants, right?

comment by alex_zag_al · 2011-11-17T07:09:12.488Z · LW(p) · GW(p)

There's another benefit: you remove a motivation to lie to yourself. If you think that a contingent fact will get you out of a hard choice, you might believe it. But you probably won't if it doesn't get you out of the hard choice anyway.

Replies from: Muhd
comment by Muhd · 2016-09-07T20:50:40.679Z · LW(p) · GW(p)

On the other hand, if you think that a contingent fact will get you out of a hard choice, perhaps you will be more likely to find legitimate contingent facts.

comment by Dreaded_Anomaly · 2011-01-04T10:19:45.603Z · LW(p) · GW(p)

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to reason and then rewarded us for not using it.

Replies from: Nick_Tarleton, robert-miles
comment by Nick_Tarleton · 2011-01-05T01:58:04.409Z · LW(p) · GW(p)

Would you want to, if you could? If so, given the stakes, you should try damn hard to make yourself able to.

comment by Robert Miles (robert-miles) · 2011-09-18T01:11:54.892Z · LW(p) · GW(p)

I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to reason and then rewarded us for not using it.

I don't follow your reasoning. Because God made us able to do a particular thing, we shouldn't be rewarded for choosing not to do that thing? A quick word substitution illustrates my issue:

"I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to murder and then rewarded us for not using it."

Replies from: DanielLC, Dreaded_Anomaly
comment by DanielLC · 2014-11-13T06:13:30.531Z · LW(p) · GW(p)

"I don't think I would be able to bring myself to worship honestly a God who bestowed upon us the ability to murder and then rewarded us for not using it."

I certainly wouldn't like such a God. He'd be better than a God who bestowed upon us the ability to murder and then rewarded us for using it, but what kind of God would bestow upon us the ability to murder?

comment by Dreaded_Anomaly · 2011-09-18T02:34:46.007Z · LW(p) · GW(p)

My statement does not generalize in that way, and was not intended to do so.

Replies from: Antonio
comment by Antonio · 2011-10-22T00:50:31.896Z · LW(p) · GW(p)

It does. It just doesn't if you accept the premise that intelligence is, in and of itself, good (and murder is not). I accept that premise, of course, and your assertion that it was not intended to be generalized as such. But still, within the framework of this hypothetical world, that simply cannot be true. In fact, it cannot be relevant. It is not a moral question at all; more of a utility vs. principles thing.

In the original Pascal's Wager, as I recall, false (outward) adoration does score you points. I seem to recall him saying that "at least you wouldn't be corrupting the youths" and "you may become convinced by habit", at least. So yeah, I would try my best (and probably fail) to acquire that reward, if it was shown to be worth it. On the other hand, in such a world, it probably would not be. Heaven for the weather, Hell for the company, and all that.

comment by [deleted] · 2009-03-14T18:32:31.267Z · LW(p) · GW(p)

The problem with the 'god-shaped hole' situation (and questions of happiness in general) is that if something doesn't make you happy NOW, it becomes very difficult to believe that it will make you happy LATER.

For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of your life. Would you take it? Our immediate reaction is to say 'no', probably because we don't like the idea of 'fake', chemically-induced happiness. In other words, because the idea doesn't make us happy now, we don't really believe it will make us happy later.

Valuing truth seems like just another way of saying truth makes you happy. Because filling the god-shaped hole means not valuing truth, the idea doesn't make you happy right now, so you don't really believe it will make you happy later.

Replies from: Swimmer963, Hul-Gil, None
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-08-04T21:51:48.090Z · LW(p) · GW(p)

For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of your life. Would you take it?

I try my best to value other peoples' happiness equal to my own. If taking a happiness-inducing pill was likely to make me a kinder, more generous, more productive person, I would choose to take it (with some misgivings related to it seeming like 'cheating' and 'not good for character-building') but if it were to make me less kind/generous/productive, I would have much stronger misgivings.

comment by Hul-Gil · 2011-05-21T08:32:15.667Z · LW(p) · GW(p)

I would definitely take the Soma, and don't see why anyone wouldn't. Odd, the differences between what people find acceptable.

Is anyone else with me in desiring chemically-induced happiness as much as any other? (Well, all happiness is chemically-induced, when you get right down to it, so I assume there are no qualitative differences.)

Replies from: Kingreaper, peter_hurford, jhuffman
comment by Kingreaper · 2011-10-04T13:25:07.522Z · LW(p) · GW(p)

I wouldn't take it. I desire to help others, and it gives me pleasure to do so, it makes me suffer to harm others, and I desire not to do so.

Being perpetually in a state of extreme pleasure would make this pleasure/suffering irrelevant, and might lead me to behave less in line with my desires.

So, being perpetually in a state of extreme pleasure seems like a bad idea to me.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-10-04T13:30:50.209Z · LW(p) · GW(p)

I agree with you completely. I can understand why others might not agree with me, but for me, pleasure isn't so much a goal as a result of accomplishing my goals.

comment by Peter Wildeford (peter_hurford) · 2011-08-04T21:06:54.206Z · LW(p) · GW(p)

I'm reminded of Yudkowsky's Not For the Sake of Happiness Alone.

Replies from: None, Hul-Gil
comment by [deleted] · 2011-08-04T22:22:00.845Z · LW(p) · GW(p)

I think one of the points underrepresented in these "Not For the Sake of XXX Alone" posts is how people would respond to a least convenient possible world in which they would be forced to make sharp trade-offs between competing values.

For instance, I value diversity, a kind of narrative depth to raw experiences. But if I had to choose either sustainable, chemically induced unsophisticated pleasure or else diverse pain and misery with narrative depth, I'd almost certainly choose the pleasure.

This is relevant to FAI and CEV, I think. If the success probability of simple, pleasure-generating FAI is higher than more sophisticated (and difficult) "Not For the Sake of XXX Alone"-respecting FAI, it might be better opting for the pleasure-generating version.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-08-19T03:06:22.398Z · LW(p) · GW(p)

I value diversity, a kind of narrative depth to raw experiences. But if I had to choose either sustainable, chemically induced unsophisticated pleasure or else diverse pain and misery with narrative depth, I'd almost certainly choose the pleasure.

Agreed. I also think people tend to underestimate the goodness of pure bliss: I have experienced such a state, and I'm here to tell you, the concerns about XXX become very much more minor than you would expect. They don't disappear - if you like painting, you'll still want to paint - but you suddenly understand how minor the pleasure painting gives you really is, in comparison.

Or at least that's how I felt, anyway.

comment by Hul-Gil · 2011-08-19T03:03:04.297Z · LW(p) · GW(p)

He makes good points, but note that there's nothing saying you couldn't take Soma and participate in the joy of scientific discovery (or whatever).

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2011-08-20T04:53:19.618Z · LW(p) · GW(p)

The argument wasn't that you need the joy of scientific discovery; it was that scientific discovery is important to us for reasons entirely apart from joy. You would never want a Soma substitute for scientific discovery, because that wouldn't involve... you know... actual scientific discovery.

Additionally, another different take on this is Yvain's Are Wireheads Happy?.

comment by jhuffman · 2011-08-04T21:01:44.496Z · LW(p) · GW(p)

This is just wire-heading, isn't it? At least, that is what you should search for if you want to hear what people on this site tend to think about this sort of idea. I am not certain of my own view of it. I tend to think I'd wire-head at first, but then some implications I find on more reflection make me unsure.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-08-19T03:07:22.554Z · LW(p) · GW(p)

I tend to think I'd wire-head at first, but then some implications I find on more reflection make me unsure.

Same here. That is, I know I'd wirehead - I don't see any bothersome implications with that idea alone. However, if you add in something like "once you wirehead you are immobile and cannot do anything else", then I become more unsure.

Replies from: jhuffman
comment by jhuffman · 2011-08-19T14:44:23.278Z · LW(p) · GW(p)

It does not matter if you are immobilized. Once you are wire-heading there is no reason you would ever stop since you've already got peak pleasure/joy. I think this effectively immobilizes you. There is no problem that could come to you that wouldn't be best solved by more wire-heading, except for a threat to the wire-heading itself.

comment by [deleted] · 2011-01-22T12:56:50.113Z · LW(p) · GW(p)

I think you're simply assuming that we're motivated primarily by happiness in that case.

Valuing the truth doesn't suddenly make me happy when someone announces to me, and I verify, that my entire family has been eaten by wombats. If I didn't value the truth at all, I might be able to ignore reality and persist in my erroneous belief that my family is alive and wombats are as cute and cuddly as I have always believed. But I don't try to do that, and I don't regret my decision or any inability to maintain erroneous beliefs.

A soma drug offends my sensibilities on some level. It violates my moral value of "don't mess around with my brain except through standard sensory experiences or with my explicit and informed consent" (and no brainwashing: no concerted efforts to modify my opinions or attitudes or behaviors except through normal human interactions, like arguing and talking and long walks on the beach).

I value at least some of these moral sensibilities higher than my current or future happiness. That's why I would choose against the soma. Not because I doubt its efficacy.

comment by ChrisHibbert · 2009-03-14T18:43:24.236Z · LW(p) · GW(p)

I like the phrase "precedent utilitarianism". It sounds to utilitarians like you're joining their camp, while actually pointing out that you're taking a long-term view of utility, which they usually refuse to do. The important ingredient is paying attention to incentives, which is really the rational response to most questions about morality. Many choices which seem "fairer", "more just", or whose alternatives provoke a disgust response don't take the long-term view into account. If we go around sacrificing every lonely stranger to the highest benefit of others nearby, no one is safe. It's a tragedy that all those people are sick and will die if they don't get help, but we don't make the world less tragic by sacrificing one to save ten every chance we get.

Replies from: alex_zag_al, Desrtopa, Cameron_Taylor, Douglas_Reay
comment by alex_zag_al · 2011-11-17T07:02:20.374Z · LW(p) · GW(p)

Actually, we would all be more safe, because we'd be in less danger from organ failure. We are each more likely to be one of the "others nearby" than the "lonely stranger".

Replies from: DanielLC
comment by DanielLC · 2014-11-13T05:50:03.670Z · LW(p) · GW(p)

That would be true if they were hunting people down. As stated, people would become more resistant to going to hospitals, which would cause problems that way.

Replies from: theseus
comment by theseus · 2015-01-04T10:03:33.366Z · LW(p) · GW(p)

This is an exact instance of the point of the post. It is important to assume they are hunting people down, because that's the LCPW; the fact that this trolley problem happens to involve someone who shows up at the hospital is an unnecessary contingent detail.

comment by Desrtopa · 2013-02-28T02:08:59.370Z · LW(p) · GW(p)

I like the phrase "precedent utilitarianism". It sounds to utilitarians like you're joining their camp, while actually pointing out that you're taking a long-term view of utility, which they usually refuse to do.

On what basis would you say it's the case that utilitarians usually refuse to take a long-term view of utility?

Replies from: ChrisHibbert
comment by ChrisHibbert · 2013-03-03T06:32:36.787Z · LW(p) · GW(p)

When I've argued with people who called themselves utilitarian, they seemed to want to make trade-offs among immediately visible options. I'm not going to try to argue that I have population statistics, or know what the "proper" definition of a utilitarian is. Do you believe that some other terminology or behavior better characterizes those called "utilitarians"?

Replies from: Desrtopa
comment by Desrtopa · 2013-03-03T14:07:51.364Z · LW(p) · GW(p)

Well, in my experience people who self identify as utilitarians don't appear to be any more shortsighted in terms of real life moral quandaries than people who don't so self identify.

I don't think it's the case that utilitarians tend to be shortsighted, just that people in general tend to be; if non-utilitarians tend to choose a less shortsighted action in a constructed moral dilemma, it's not usually due to consciously taking a long view.

When I was in college, a professional philosopher once visited and gave a seminar, where she raised the traveler-at-a-hospital scenario as an argument against utilitarianism (simply on the basis that killing the traveler defies our moral intuitions). I responded that realistically, given human nature, if doctors tended to do this, then because people aren't effective risk assessors, people would tend to avoid hospitals for fear of being harvested, to the point that the practice would probably be doing more harm than good. She had never heard or thought of this argument before, and found it a compelling reason not to harvest the traveler from a utilitarian point of view. So as a non-utilitarian, it doesn't seem that she was any more likely to look at questions of utility from a long view; she was just more willing to let moral intuitions control her decision, which sometimes has the same effect.

Replies from: christopherj
comment by christopherj · 2013-10-17T20:33:25.832Z · LW(p) · GW(p)

And that is an advantage of traditional moral systems -- because they have been around for so long, they have had opportunities to be tried and tested in various ways. It won't give adherents a long-term view, but it can have a similar effect. Think of it as, "I don't have to think out the consequences of this because other people have thought through similar problems over a thousand years, and came up with a rule that says I should do X." One would be foolish to totally disregard traditional morality simply because of its occasional clash with the modern world. It would be like disregarding a "traditional" gene made by "stupid blind arbitrary evolution" because we think we have a better one made by a smarter system -- it might be a good idea to compare anyway.

Replies from: Ford
comment by Ford · 2016-05-19T16:46:44.944Z · LW(p) · GW(p)

I tend to agree, but it depends on how something was tested. In "Darwinian Agriculture", I argue that testing by ability to persist is weaker than testing by competition against alternatives. Trees compete against each other, but forests don't. Societies often compete and their moral systems probably affect competitive success, but things are complicated by migration between societies, population growth (moral systems that work for bands of relatives may not work as well for modern nations), technological change (cooking pork), etc.

comment by Cameron_Taylor · 2009-03-19T12:28:06.507Z · LW(p) · GW(p)

If we go around sacrificing every lonely stranger to the highest benefit of others nearby, no one is safe.

That would make a great movie!

Lonely Stranger

Jason Statham wakes up and realises all his family and friends have been killed by a tornado while he survives through luck and general masculine superiority. Beset on all sides by scalpel- and tranquiliser-wielding doctors, he must constantly slaughter all the nearby sick people just to keep himself alive. Meanwhile, a sexy young biologist has been captured by a militant sect of religious Fundamentalists. Will Statham be able to break the imprisoned costar out in time to reveal her secret human organ cloning technology, or will civilisation as we know it be destroyed by utilitarianism gone wrong?

comment by Douglas_Reay · 2023-09-23T12:58:10.045Z · LW(p) · GW(p)

What do you think of the definition of "Precedent Utilitarianism" used in the philosophy course module archived at https://links.zephon.org/precedentutilitarianism ?

comment by [deleted] · 2011-06-13T06:53:14.454Z · LW(p) · GW(p)

I would act differently in the least convenient world than I do in the world that I do live in.

comment by Psy-Kosh · 2009-03-14T06:44:19.757Z · LW(p) · GW(p)

Very good point, and it crystallizes some of my thinking on some of the discussion on the tyrant/charity thing.

As far as the specific problems you posed...

For your souped up Pascal's Wager, I admit that one gives me pause. Taking into account the fact that Omega singled out one out of the space of all possible religions, etc etc... Well, the answer isn't obvious to me right now. This flavor would seem to not admit to any of the usual basic refutations of the wager. I think under these circumstances, assuming Omega wasn't open to answering any further questions and wasn't giving any other info, I'd probably at least spend rather more time investigating Catholicism, studying the religion a bit more and really thinking things through.

For question 2 (the really "god shaped" hole) though, personally, while I value happiness, it's not the only thing I value. I'll take truth, thank you very much. (In the spirit of this, I'm assuming there's no psychological trick that would let me fake-believe enough to fill the hole or other ways of getting around the problem.) But yeah, I think I'd choose truth there.

Question 3? Assuming the most inconvenient world (i.e., there's no way that I could potentially do more good by keeping the money, etc., no way out of the "give it away to do maximal good"), well, I'm not sure what I'd do, but I'm pretty sure I wouldn't be able to in any way justify not giving it away to Charity X. Though, if I actually had a known Omega give me that information, then I think that might just be enough to give me the mental/emotional/willpower strength to do it. I.e., assuming that I KNEW that that way was really the path if I wanted to optimize the good I do in the world, not just in an abstract theoretical way, but was actually told that by a known Omega, well, that might be enough to get me to actually do it.

Replies from: astray
comment by astray · 2009-03-19T19:31:56.935Z · LW(p) · GW(p)

The souped up Pascal's Wager seems like the thousand door version of Monty Hall.
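The analogy is easy to check numerically. Here is a minimal sketch (the function name and parameters are my own, purely illustrative), using the standard shortcut that once the host has opened every goat door but one, a switcher wins exactly when the first pick was wrong:

```python
import random

def monty_hall_win_rate(n_doors, switch, trials=20000, seed=0):
    """Estimate the win probability in an n-door Monty Hall game.

    The host opens all doors except the player's pick and one other.
    The remaining closed door hides the prize unless the first pick
    was already correct, so switching wins with probability (n-1)/n.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(n_doors)  # door hiding the prize
        pick = rng.randrange(n_doors)   # player's initial choice
        won = (pick != prize) if switch else (pick == prize)
        wins += won
    return wins / trials
```

With a thousand doors, switching wins roughly 99.9% of the time, which seems to be the force of the analogy: Omega singling out one religion from the space of all possible religions is like the host conspicuously leaving one door closed.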

comment by ouroborous · 2022-11-12T13:09:02.873Z · LW(p) · GW(p)

I am trying to imagine the least convenient possible world (LCPW) for the LCPW method.

Perhaps it is the world in which there is precisely one possible world. All 'possible' worlds turn out to be impossible on closer scrutiny. Omega reveals that talking about a counterfactual possible world is as incoherent as talking about a square triangle. There is exactly one way to have a world with anyone in it whatsoever, and we're in it.

comment by JJ10DMAN · 2010-08-10T11:13:06.742Z · LW(p) · GW(p)

Yes! I can't believe I don't see this repeated in one form or another more often. Fallacies are a bit like prions in that they tend to force a cascade of fallacies to derive from them, and one of my favorite debate tactics is the thought experiment, "Let's assume your entire premise is true. How might this contradict your position?"

Usually the list is longer than my own arguments.

comment by Epictetus · 2015-02-10T22:17:58.404Z · LW(p) · GW(p)

The least convenient world is one where there's no traveler and the doctor debates whether to harvest organs from another villager. I figure that if it's okay to kill the traveler for organs, then it should be okay to kill a villager. Similarly, if it's against general principle to kill a villager for organs, then it shouldn't be okay to kill the traveler. Perhaps someone can come up with a clever argument why the life of a villager is worth intrinsically more than the life of the traveler, but let's keep things simple for now.

So, let us suppose that N sick people is the threshold at which it is okay to kill a traveler, and hence a villager. If it's good to do once, it's good to do anytime this situation comes up. So we have ourselves a society where, whenever the doctor is in dire need of organs for N patients, a villager is sacrificed. If we scale it up to the national level, we should have ourselves a proper system wherein each month a certain number of people are chosen (perhaps by lottery) for sacrifice and their organs are harvested. I should imagine an epidemic of obesity and alcoholism as people seek to make their organs undesirable and so avoid being sacrificed.

I find that a fair number of morality puzzles of this sort exhibit interesting behavior under scaling.

Replies from: Meni_Rosenfeld, Lumifer, Jiro
comment by Meni_Rosenfeld · 2015-03-03T16:25:04.303Z · LW(p) · GW(p)

The perverse incentive to become alcoholic or obese can be easily countered with a simple rule - a person chosen in the lottery is sacrificed no matter what, even if he doesn't actually have viable organs.

To be truly effective, the system needs to consider the fact that some people are exceptional and can contribute to saving lives much more effectively than by being scrapped and harvested for spare parts. Hence, there should actually be an offer to anyone who loses the lottery: either pay $X or be harvested.

A further optimization is a monetary compensation to (the inheritors of) people who are selected, proportional to the value of the harvested organs. This reduces the overall individual risk, and gives people a reason to stay healthy even more than normally.

All of this is in the LCPW, of course. In the real world, I'm not sure there is enough demand for organs that the system would be effective in scale. Also, note that a key piece of the original dilemma is that the traveler has no family - in this case, the cost of sacrifice is trivial compared to someone who has people that care about him.

comment by Lumifer · 2015-02-11T16:47:58.692Z · LW(p) · GW(p)

If we scale it up to the national level we should have ourselves a proper system wherein each month a certain number of people are chosen (perhaps by lottery) for sacrifice and their organs are harvested.

I think China used to have a similar system, except that instead of lottery they just picked prisoners from the death row.

Replies from: Marion Z.
comment by Marion Z. · 2022-11-01T04:31:04.800Z · LW(p) · GW(p)

That seems entirely reasonable, insofar as the death penalty is at all. I don't think we should be going around executing people, but if we're going to, then we might as well save a few lives by doing it.

comment by Jiro · 2015-02-10T22:37:54.288Z · LW(p) · GW(p)

We have two such systems today, except

  1. We call it "taxes".
  2. People die on an overall statistical basis (because people who are poorer die sooner, and paying taxes makes them poorer) rather than by loss of organs, so it is hard to point to an individual death caused by taking things from one person to give them to someone more needy.

For the second system,

  1. We call it "a justice system".
  2. The harm to innocent people is again statistical--because all justice systems are imperfect, they will convict X innocent people, and we've decided that harming X innocent people is an acceptable price to pay to convict more guilty people and protect the populace from criminals.
comment by Irgy · 2011-11-02T06:31:08.400Z · LW(p) · GW(p)

This might be better placed somewhere else, but I just thought I'd comment on Pascal's Wager here. To me both the convenient and inconvenient resolutions of Pascal's Wager given above are quite unsatisfactory.

To me, the resolution of this wager comes from the concept of sets of measure zero. The set of possible realities in which belief in any given God is infinitely beneficial is an infinite set, but it is nonetheless like Cantor dust in the space of possible explanations of reality. The existence of sets of measure zero explains why it is reasonable to assign probability zero to something which is not literally impossible.

To me, the only true resolutions to this paradox are to convert, to dispute the use of infinity in the utility (which I would also do, although I'm not yet completely convinced either way), or to accept that zero is not just an acceptable probability for something that's not literally impossible, but also the correct value to assign to the probability of an infinitely vindictive God. Everything else is just convenience, egotism, or missing the point. Cancelling infinities against other (possibly negative) infinities is simply bad maths; refusing to change your beliefs on the basis of utility rather than evidence is simply not acting in your own best interest; and dismissing the wager based on contradictions in any particular religion, or on the existence of other religions, is, as this article says, simply assumed convenience.
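The measure-zero idea invoked here can be made precise with the standard covering argument from measure theory (my sketch, not part of the comment): even an infinite set of points can have total length zero.

```latex
% Any countable set S = \{x_1, x_2, \dots\} \subset \mathbb{R} has
% Lebesgue measure zero: cover each point by a shrinking interval.
\text{For any } \varepsilon > 0,\qquad
S \subseteq \bigcup_{n=1}^{\infty}
  \left( x_n - \tfrac{\varepsilon}{2^{n+1}},\;
         x_n + \tfrac{\varepsilon}{2^{n+1}} \right),
\qquad
\mu(S) \le \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}} = \varepsilon .
```

Since ε is arbitrary, μ(S) = 0 even though S is infinite; this is the sense in which a probability of exactly zero can coexist with "not literally impossible".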

In this case, your least convenient world kind of misses the point then. As soon as Omega tells me there's a non-zero probability of Catholicism being correct (whether Omega has actually told me that or not is not entirely clear mind you) then sure, I'm converting. But this is such a substantial change to reality that I would say it's essentially removed the paradox. The principle is fine though, I guess my point is just that the least convenient world is not a fixed thing but relative to a particular argument.

It's interesting to me that the mathematics to resolve the paradox didn't even exist at the time it was raised. Most people knew there was a problem with it, but at the time (and even today, for most people's level of maths understanding) they had simply no way of expressing it correctly. To me this is actually some justification for a little bit of inertia of beliefs: just because you can't refute an argument doesn't mean it's correct, and your intuition can often tell you something's wrong before you know what it is. It just needs to be balanced against the many situations where intuition is demonstrably misleading. Inertia and an immovable object are not the same thing.

comment by corruptmemory · 2009-03-15T00:26:51.957Z · LW(p) · GW(p)

Although I understand and appreciate your approach, these particular examples are not especially good ones:

1: Pascal's Wager:

For an atheist, the least convenient possible world is one where testable, reproducible scientific evidence strongly suggests the existence of some "super-natural" (clearly no-longer-super-natural) being that we might ascribe the moniker of God to. In such a world, any "principled atheist" would believe what the verifiable scientific evidence supports as probably true. "Atheists" who did not do that would be engaging in the exact same delusional thinking modern-day theists engage in, only in reverse: denying a "being" despite the evidence supporting its existence, like flat-earthers.

2: The God-Shaped Hole:

The use of "Omega" here is a fair bit over the absurd line. It very much sounds like you wish to create the following situation for atheists: suppose there exists an oracle that can tell you that there is a "hole" in you and it's "God shaped", but cannot confirm the existence nor non-existence of the "God" that the hole is "shaped" like. Well, then my hole (being an atheist) is penguin shaped ;-).

It is clear that you want to create a world where some definitive information about some other "thing" is true, while keeping the actual existence or non-existence of that thing undecided. Alas, you're not allowed that degree of freedom. If definitive statements are made and accepted as true, then the thing the statements reference must also exist in some meaningful way.

3: Extreme Altruism

Lots of leeway is left in your example to re-cast the moral dilemma, for example:

a. Charity X is, in fact, using the money you give it to feed people in Africa, but the population being helped lives in a fundamentally unsustainable environment. Suppose changes in weather patterns mean that getting a meaningful, sustained water supply requires considerable cost. In this case the charity itself is doing the morally wrong thing by not supporting efforts to relocate the people somewhere that can sustain them better. Your analysis (not literally you; the "you" responding to pleas for money) leads you to extreme altruism. Others follow suit, creating an unsustainably dependent society. In the case of extreme charity you accidentally do harm: they're alive, but utterly dependent on the charity of others.

b. Turn the entire situation amoral: why should their lives there be of such importance as to affect me, in any way, here? I.e., why is this a moral consideration at all? In this context, person A may choose to contribute to charity X without knowing whether, "in the large", a "good overall outcome" will result from the donation, regardless of the amount contributed. Another way of looking at it: if I consider increased happiness an important element of "good morality" (dubious?), then is my personal depletion of resources, set against the "net" increase in happiness in the receiving population, a net increase of happiness overall? And is that the "right" thing? By whose measure?

The above examples are not meant as a broad-stroke justification for a "let-'em-starve" attitude. The issue simply concerns constraining the examples sufficiently to get the outcome you are looking for. To simplify matters, this particular example is closely analogous to the trolley situation above: suppose the doctor offered the patient with the good organs the option of donating all of them to the patients in need, but as a result the patient would need to survive on uncomfortable synthetic replacements of his organs.

comment by Annoyance · 2009-03-14T16:17:48.526Z · LW(p) · GW(p)

"I believe that God’s existence or non-existence can not be rigorously proven."

Cannot be proven by us, with our limits on detection, or cannot be proven in principle?

Because if it's the latter, you're saying that the concept of 'God' has no meaning.

Replies from: corruptmemory, Nebu, cleonid
comment by corruptmemory · 2009-03-14T22:53:09.346Z · LW(p) · GW(p)

Formalize this a bit:

"I believe that X’s existence or non-existence can not be rigorously proven."

Where X is of the set of beings imagined by or could be imagined by humans, e.g.: God, Gnomes, Zeus, Wotan, Vishnu, unicorns, leprechauns, Flying Spaghetti Monster, etc. Why is any one of the statements that result from such substitutions more meaningful than any other?

comment by Nebu · 2009-03-16T21:06:54.318Z · LW(p) · GW(p)

I think that just because something cannot be proven (even in principle) does not necessarily imply that it is not true, let alone that it has no meaning.

See Gödel's incompleteness theorems, for example.

comment by cleonid · 2009-03-14T16:35:23.250Z · LW(p) · GW(p)

It is the latter (I’m an agnostic). However, I don’t see why the concept has no meaning. Would you say that axioms in math are meaningless?

Replies from: Baughn, Annoyance
comment by Baughn · 2009-03-14T16:42:24.382Z · LW(p) · GW(p)

It's possible to decide which axioms are in effect from the inside of a sufficiently complex mathematical system (such as this universe), however.

For that matter, it would be possible to deduce the existence of a god, too; you just have to die. Granted, there are some issues with this, but nobody said deducing the axiom had to be convenient.

Replies from: cleonid
comment by cleonid · 2009-03-14T16:54:41.880Z · LW(p) · GW(p)

"It's possible to decide which axioms are in effect from the inside of a sufficiently complex mathematical system (such as this universe), however."

I don't think I understand what you mean.

"For that matter, it would be possible to deduce the existence of a god, too; you just have to die."

When you meet a god, how can you be sure it's not a hallucination?

Replies from: Sebastian_Hagen
comment by Sebastian_Hagen · 2009-03-15T17:01:40.988Z · LW(p) · GW(p)

When you meet a god, how can you be sure it's not a hallucination?

Assuming the entity in question is cooperative, try this:

Ask it if P=NP is true, and for a proof of its answer in a form that you can easily understand. There are three possible outcomes:

  • It doesn't comply. Time to get suspicious about its claims to godhood.
  • It hands you a correct proof, beautifully elegant and easy to grasp.
  • It hands you a lump of nonsense, which your mind is too damaged to distinguish from a proof.

If you get something that appears like an elegant proof, memorize it and recheck it every now and then. If your mind is sufficiently malfunctioning that it can't distinguish an elegant proof for P=NP from something that isn't, you may not be able to notice that from inside. There's still a chance whatever is afflicting you will get better over time; hence, do periodic rechecks, and pay particular attention to any nagging doubts about the proof you get while performing those.

In the meantime, interpret the fact that you've gotten an apparent proof as significant evidence for the entity in question being real and very powerful.

Replies from: John_Baez, Eliezer_Yudkowsky
comment by John_Baez · 2010-05-03T00:03:38.484Z · LW(p) · GW(p)

Or: it says "This is undecidable in Zermelo-Fraenkel set theory plus the axiom of choice". In the case of P=NP, I might believe it.

Ask again, with another famously unsolved math problem. Repeat until it stops saying that or you run out of problems you know.

I would not believe a purported god if it said all 6 remaining Clay math prize problems are undecidable.

Replies from: Tasky
comment by Tasky · 2011-09-23T22:52:49.184Z · LW(p) · GW(p)

If it really is undecidable, God must be able to prove that.

However, I think an easier way to establish whether something is just your hallucination or a real (divine) being is asking them about something you couldn't possibly know about and then check if it's true.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-15T17:08:52.937Z · LW(p) · GW(p)

It says "There is no elegant proof". Next?

Replies from: Sebastian_Hagen, Vladimir_Nesov
comment by Sebastian_Hagen · 2009-03-17T07:43:43.998Z · LW(p) · GW(p)

Ask again, with another famously unsolved math problem. Repeat until it stops saying that or you run out of problems you know.

If you ran out, ask the entity to choose a famous math problem not yet solved by human mathematicians, explain the problem to you, and then give you the solution including an elegant proof. Next time you have internet access, check whether the problem in question is indeed famous and doesn't have a published solution.

If the entity says "there are no famous unsolved math problems with elegant proofs", I would consider that significant empirical evidence that it isn't what it claims to be.

Replies from: Dacyn
comment by Dacyn · 2016-04-23T19:46:01.394Z · LW(p) · GW(p)

Depending on your definition of "elegant", there are probably no famous unsolved math problems with elegant proofs. For example, I would be surprised if any (current) famous unsolved math problems have proofs that could easily be understood by a lay audience.

comment by Vladimir_Nesov · 2009-03-15T18:00:03.864Z · LW(p) · GW(p)

It could give a formally checkable proof, that is far from being elegant, but your own simple proof checkers that you understand well can plough through a billion steps and verify the result.
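As a toy illustration of this point (my sketch, not from the comment): the trusted checker itself can be a few lines you fully understand, yet it can verify a derivation of any length. Here "proofs" are lists of formulas, each of which must either be a given axiom or follow by modus ponens from two earlier lines; implications are represented as `('->', antecedent, consequent)` tuples.

```python
def check_proof(axioms, proof, goal):
    """Verify a Hilbert-style derivation: every step must be an axiom,
    or follow by modus ponens from two earlier steps."""
    derived = []
    for step in proof:
        ok = step in axioms or any(
            prev == ('->', earlier, step)   # prev reads "earlier -> step"
            for prev in derived
            for earlier in derived
        )
        if not ok:
            return False                    # an unjustified step: reject
        derived.append(step)
    return goal in derived

# A two-step derivation of q from the axioms p and p -> q:
axioms = {'p', ('->', 'p', 'q')}
proof = ['p', ('->', 'p', 'q'), 'q']
print(check_proof(axioms, proof, 'q'))  # True
```

This naive version does quadratic work per step; a checker meant to plough through a billion steps would index the derived implications, but the point stands: the trusted core stays tiny while the proof it verifies can be arbitrarily long and inelegant.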

comment by Annoyance · 2009-03-14T17:00:12.714Z · LW(p) · GW(p)

"Would you say that axioms in math are meaningless?"

They distinguish one hypothetical world from another. Furthermore, some of them can be empirically tested. At present, Euclidean geometry seems to be false and Riemannian to be true, and the only difference is a single axiom.

Replies from: Nebu, anonym, Johnicholas, Dacyn, cleonid
comment by Nebu · 2009-03-16T21:06:02.548Z · LW(p) · GW(p)

At present, Euclidean geometry seems to be false and Riemannian to be true

I think the words "true" and "false" have some connotations that you might not want to imply? Perhaps it would be clearer to phrase this as "At present, it seems like the geometry of our universe is not Euclidean but Riemannian."

comment by anonym · 2009-03-15T00:16:00.084Z · LW(p) · GW(p)

They distinguish one hypothetical world from another.

It's a subtle distinction, but I think it's more accurate and useful to say that the axioms define a mathematical universe, and that a mathematical universe cannot be true or false but only a better or poorer model of the physical universe.

comment by Johnicholas · 2009-03-14T17:24:53.350Z · LW(p) · GW(p)

Euclidean geometry isn't a theory about the world, and therefore cannot be falsified by evidence from the world. The primitives (e.g. "line" and "point") do not have unambiguous referents in the world.

You can associate real-world things (e.g. patterns of graphite, or wooden rods) to those primitives, and to the extent that they satisfy the axioms, they will also satisfy the conclusions.

Math is not physics.

Replies from: Annoyance
comment by Annoyance · 2009-03-14T17:30:25.409Z · LW(p) · GW(p)

"Math is not physics."

It's made out of physics. I think perhaps you mean that math isn't about physics.

To the degree that axioms aren't being used to talk about potential worlds, I would say that they're meaningless.

comment by Dacyn · 2016-04-23T19:52:58.480Z · LW(p) · GW(p)

Riemannian geometry is not an axiomatic geometry in the same way that Euclidean geometry is, so it is not true that "the only difference is a single axiom." I think you are thinking of hyperbolic geometry. In any case, the geometry of spacetime according to the theory of general relativity is not any of these geometries, but it is instead a Lorentzian geometry. (I say "a" because the words "Riemannian" and "Lorentzian" both refer to classes of geometries rather than a single geometry -- for example, Euclidean geometry and hyperbolic geometry are both examples of Riemannian geometries.)

Replies from: SanguineEmpiricist
comment by SanguineEmpiricist · 2016-04-23T21:16:51.498Z · LW(p) · GW(p)

First I've heard of this; super interesting. Hmm. So what is the correct way to highlight the differences while still maintaining the historical angle? Continue with Riemannian geometry? Or just say what you have said: Lorentzian.

Replies from: Dacyn
comment by Dacyn · 2016-05-04T23:13:45.028Z · LW(p) · GW(p)

Special relativity is good enough for most purposes, which means that (a time slice of) the real universe is very nearly Euclidean. So if you are going to explain the geometry of the universe to someone, you might as well just say "very nearly Euclidean, except near objects with very high gravity such as stars and black holes".

I don't think it's helpful to compare with Euclid's postulates; they reflect a very different way of thinking about geometry than modern differential geometry.

comment by cleonid · 2009-03-14T17:32:21.824Z · LW(p) · GW(p)

"They distinguish one hypothetical world from another."

Just like different religions.

"Furthermore, some of them can be empirically tested. "

Empirical tests do not prove a proposition, but increase the odds of its being correct (just like "miracles" would raise the odds in favor of religion).

comment by DPiepgrass · 2021-09-11T18:05:18.614Z · LW(p) · GW(p)

0: Should we kill the miraculously-compatible traveler and distribute his organs?

My answer is based on a principle that I'm surprised no one else seems to use (then again, I rarely listen to answers to the Fat Man/Train problem): ask the f**king traveler!

Explain to the traveler that he has the opportunity to save ten lives at the cost of his own. First they'll take a kidney and a lung, then he'll get some time to say goodbye to his loved ones while he gets to see the two people with the donated organs recover... and then when he's ready they'll take the rest (though he ought not wait long, as patients are dying while he waits).

Sure, most travelers will say no. But utilitarians should want to spread their moral thinking far and wide, thus creating more utilitarians who would be willing to say yes.

I'll call this principle shared moral responsibility. Instead of one person claiming a moral authority to kill other people, which is problematic for various reasons, we give each person the moral responsibility (should they choose to accept it) of killing themselves or (as in this case) giving someone else permission to kill them. Of course, when the time comes, some "utilitarians" will agree that they should die under their own moral system but refuse to die (and allow multiple other people to die instead). But this tradeoff is still the most appealing one I know of.

In a world like this, where organ compatibility is apparently easy to come by, I think I would donate all my organs, but wait until I was old enough for my remaining life to drop substantially in value. If Effective Altruism becomes sufficiently common, I suppose there will be enough people like me to provide almost all the organs that the world needs, even in our actual world where compatibility is rare. For myself, knowing my own "expiry date" would provide some relief to my financial worries, since I wouldn't have to plan for a very long retirement, nor would I have to plan for unforeseeable medical expenses of unbounded size. (I can't rule out that I might chicken out when the time comes, though).

1. Pascal's Wager?

This is a weird one because the "wager" doesn't seem to have the ordinary meaning of "placing money on a bet that God exists with a non-transferable payout to be received after death" (on which I would surely place a wager, if only for shits and giggles). Rather it means "discard my epistemic standards and choose to believe God exists regardless of what the evidence says". Presumably you can't fake it - profess belief but secretly disbelieve - for God would detect your deception. So, the problem with taking this second wager is that humans are not physically capable of choosing to discard epistemic standards in just one domain while keeping strict standards in all other domains. Also, probably if you appreciate the value of good epistemics, you'll be incapable of letting go of them entirely. Thus, in practice, only those with poor epistemic standards even have the ability to take the wager in the first place, and someone taking Pascal's Wager is engaging in a status-quo bias in which they promise not to improve their standards.

2. The God-Shaped Hole

I'll go off-topic. My God-shaped hole is the sense that I have a soul (which I now call a consciousness, because souls are considered to be eternal, but I have no evidence that my soul is eternal). This issue is also known as the hard problem of consciousness, and it seems to me that what makes it hard is that there's no reason for evolution to give humans a strong-but-wrong sense of having a soul/consciousness, nor is there a reason for evolution to involve genuine souls/consciousness. The hard problem is not hard for Christians, who can just suppose that humans have whatever-God-has-only-smaller. Nor is the problem hard for people that lack a sense of having consciousness/qualia (if such people exist?).

However, several years ago I noticed that the existence of God does not imply eternal life for human souls, nor does the existence of eternal life for human souls imply there is a God. This observation, along with the observation that God really, honestly seems like kind of an evil bastard (especially in the Old Testament), helped me let go of my theism.

But if the God-shaped hole you asked about was really, truly God-shaped, wouldn't that provide evidence that God actually exists and thus justify theism to at least some extent? If it's justified to some extent, well, we should at least attend mass on Christmas and Easter, no?

3. Shouldn't we give all our possessions to the charity?

I recall Scott Alexander argued about this in later posts. Basically the answer is no, on consequentialist grounds, because

  1. the number of people willing to do this is extremely low
  2. people who do this probably harm their own future earning potential (and their own happiness)
  3. by giving a lesser amount, say 10%, and then strongly encouraging/campaigning others to do the same, the total amount of donations can be vastly increased because almost everyone is capable of doing that and might actually do it with enough social pressure, whereas campaigning on "donate literally everything to charity" is never, ever, going to have many takers.

(I don't feel like putting forth the effort to figure out what kind of hypothetical worldstate is required for giving away everything to actually be the utilitarian best answer)

Replies from: Marion Z.
comment by Marion Z. · 2022-11-01T04:35:41.266Z · LW(p) · GW(p)

Only replying to a tiny slice of your post here, but the original (weak) Pascal's wager argument actually does say you should pretend to believe even if you secretly don't believe, for various fuzzy reasons such as societal influence, and that maybe God will see that you were trying, and that sheer repetition might make you believe a little bit eventually

comment by Thomas Eisen (thomas-eisen) · 2020-05-14T19:42:18.649Z · LW(p) · GW(p)

My answers:

1. No, because their belief doesn't make any sense. It even has logical contradictions, which makes it "super impossible", meaning there's no possible world where it could be true. (The omnipotence paradox proves that omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly omnibenevolent wouldn't allow suffering, which undoubtedly exists; "God wants to allow free will" isn't a valid defence, since a lot of suffering isn't caused by other humans, like illness and natural catastrophes.) (Note: I'm adding "nearly" to avoid paradoxes like the omnipotence paradox.)

2. Belief isn't a choice. For example, you can't "choose" to believe that the continent of Australia doesn't actually exist. Therefore, I wouldn't be able to hold religious beliefs even if I acknowledged that doing so would bring greater happiness without negative side effects.

However, if we make the hypothetical world even less convenient by adding that I actually would be able to effectively self-deceive, and that there would be absolutely no negative side effects, then yes, I would choose to believe.

3. I'm already highly sympathetic towards the "Effective altruism" movement and donate a lot of money to their causes. The reason I'm not donating literally everything I don't need for survival is that I'm not morally perfect; I admit that.

(EDIT just to correct spelling)

comment by passive_fist · 2015-03-25T07:49:59.695Z · LW(p) · GW(p)

either God does not exist or the Catholics are right about absolutely everything.

Then I would definitely and swiftly become an atheist, and I maintain that this is by far the most rational choice for everybody else as well. My prior belief in God not existing is relatively high (let's say 50/50), but my prior belief in all of Catholicism being the absolute truth is pretty much nil. And if you're using anything vaguely resembling consistent priors, it has to be near-nil for you too, because the beliefs of Catholicism are just so incredibly specific. They narrow down the space of possible God-like beings to a very narrow slice of the type-of-God space.
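The "incredibly specific" point is just the conjunction rule at work; here is an illustrative sketch (the probability per claim and the claim count are made-up numbers, not anything from the comment):

```python
# Even granting each specific doctrinal claim generous 50/50 odds,
# the prior for the whole conjunction shrinks geometrically.
p_single = 0.5   # assumed odds that any one specific claim is true
n_claims = 30    # a creed easily commits to dozens of specific claims

prior_all_true = p_single ** n_claims
print(prior_all_true)  # about 9.3e-10
```

This is why "God exists" and "the Catholics are right about absolutely everything" cannot sensibly receive similar priors: the latter is a conjunction of many independent-ish specific claims, each of which taxes the probability.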

Or do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?

More like: believing something may make you happier, but you can't easily force yourself to believe it's true. Placebos have the same problem. You give someone a placebo and tell them it's a happy pill, and it will make them happy. But you can't do that trick on yourself. The placebo won't work.

Still, I've seen people force themselves (over time) to believe in religion so I'm not saying it's impossible.

I think your third example (charity) best illustrates your point, but the proposed world is still not optimally inconvenient, because someone could counter by saying that if they invest their money they can make even more in the future and donate much more money. So, increasing the level of inconvenience, the question you have to answer is: "Assuming you have the most money you will ever have, and assuming your charity money will be used honestly, would you donate?" I don't have the answer to that question.

comment by Nanashi · 2015-02-10T13:48:17.075Z · LW(p) · GW(p)

I find this method to be intellectually dangerous.

We do not live in the LCPW, and constantly considering ethical problems as if we do is a mind-killer. It trains the brain to stop looking for creative solutions to intractable real world problems and instead focus on rigid abstract solutions to conceptual problems.

I agree that there is a small modicum of value in considering the LCPW, just like there's a small modicum of value in eating a pound of butter for dinner. It's just that there are a lot better ways to spend one's time. The proper response to "Well, what about the LCPW?" is "How do you know we are in the LCPW?" I think there is far more value in having a conversation that explores our assumptions about a difficult problem rather than indulges them.

Q: Consider the Sick Villager problem. How do we know that the patients won't die due to transplant rejection?
A: Oh, well, Omega says so.
Q: Okay. So, how do we know Omega is right?
A: Because Omega is omniscient.
Q: If Omega is omniscient, why can't it tell us how to grow working organs without need for human sacrifice?
A: Because there are limits to how much it knows.
Q: Okay, so if I knew in advance that Omega is omniscient but has these limitations, why on earth am I working in a village helping 10 villagers instead of working on advancing Omega to the point where it doesn't have those limitations? (And if I don't know this in advance, why would I suddenly start believing some random computer that claims it is Omega?)
A: I don't know, because it's the LCPW.

That conversation yields a lot more intellectual value; it trains you to think creatively and explore all possible solutions, rather than devise a single heuristic that is only applicable in a 5d-corner case. As I indicated above, it can actually be dangerous because novice rationalists may feel compelled to apply that narrow heuristic to situations where a more optimal, creative solution is present.

Replies from: Nornagest, TheOtherDave
comment by Nornagest · 2015-02-10T21:31:40.880Z · LW(p) · GW(p)

That conversation yields a lot more intellectual value; it trains you to think creatively and explore all possible solutions, rather than devise a single heuristic that is only applicable in a 5d-corner case.

I don't think I could disagree more.

The point of ethical thought experiments like the sick villager problem is not to present a practical scenario to be solved; it's to illuminate seeming contradictions in our values. Yes, a lot of them have some holes -- where did the utility monster come from? Are there more of them with different preferences? Is it possible to make it happy by feeding it Tofumans? -- but spending a lot of time plugging them only distracts from the exercise. The reason they appear to be extreme corner cases is that only there does the contradiction show up without complications; but that doesn't mean they're not worth addressing on their own terms, the better to come up with a set of principles -- not a narrow heuristic -- that could be applied in real life. The LCPW, therefore, is a gentle reminder to think in those terms.

Training yourself to think creatively has value, of course; if you're actually faced with sick villagers that you could save by chopping up healthy patients for their organs, you should by all means consider alternative solutions and have trained yourself for doing so. But this thought experiment isn't aimed at a rural surgeon with an itchy bonesaw; it's aimed at philosophers of ethics, or at armchair hobbyists of same. If you are such a person and you're faced with something like that as a hypothetical, and you can't prove that neither that hypothetical nor any analogous situation will ever come up, engage with the apparent ethical contradiction or show that there isn't one; don't poke holes in the practical aspects of the hypothetical and congratulate yourself for it. That's like a physicist saying "well, I can't actually ride a beam of light, so obviously it's not worth thinking about".

Though you could decide the whole genre isn't worth your time. That's fine too. Philosophy of ethics is a little abstract for most people's taste, including mine.

Replies from: Nanashi
comment by Nanashi · 2015-02-10T23:41:36.260Z · LW(p) · GW(p)

The point of ethical thought experiments like the sick villager problem is.... to illuminate seeming contradictions in our values.

That's fair. I understand the value: it exposes the weakness of using overly rigid heuristics by presenting a situation where those heuristics feel wrong. And I agree that it's an evasion to nitpick the thought experiment in an attempt to avoid having to face the contradiction of your poorly-formed heuristics.

My standard response to thought-experiment questions is: "I would do everything possible to have my cake and eat it too." In many cases, that response satisfies whoever asked the question. Immediately defaulting to the LCPW is putting words into the other person's mouth by assuming they wouldn't be satisfied with that ethical approach.

Making the LCPW something "normal" seriously underestimates how world-bendingly different it would be from reality. If we truly lived in the LCPW, in most cases it would be such a different reality from the one we exist in that it would require a completely different set of ethics, and I just haven't really thought hard enough about it to generate a new system of ethics for each tailor-made LCPW.

Incidentally I don't have a problem with the LCPW when it's actually realistic, as is the case with the "charity" example.

comment by TheOtherDave · 2015-02-10T20:38:16.339Z · LW(p) · GW(p)

I agree that insisting on assuming the LCPW is a lousy strategic approach to most real-world situations, which (as you say) don't normally occur in the LCPW. And I agree that assuming it as an analytical stance towards hypothetical situations increases the chance that I'll adopt it as a strategic approach to a real-world situation, and therefore has some non-zero cost.

That said, I would also say that a one-sided motivated "exploration" of a situation which happens to align with our pre-existing biases about that situation has some non-zero cost, for basically the same reasons.

The OP starts out by objecting to that sort of motivated cognition, and proposes the LCPW strategy as a way of countering it. You object to the LCPW strategy, but seem to disregard the initial problem of motivated cognition completely.

I suspect that in most cases, the impulse to challenge the assumptions of hypothetical situations does more harm than good, since motivated cognition is pretty pervasive among humans.

But, sure, in the rare case of a thinker who really does "explore assumptions about a difficult problem" rather than simply evade them, I agree that it's an exercise that does more good than harm.

If you are such a thinker and primarily engage with such thinkers, that's awesome; whatever you're doing, keep it up!

comment by MichaelHoward · 2009-03-14T15:27:03.770Z · LW(p) · GW(p)

Yvain,

Do you have a blog or home page with more material you've written? Failing that, is there another site (apart from OB) with contributions from you that might be interesting to LW readers?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-03-15T16:09:35.378Z · LW(p) · GW(p)

Thanks for your interest. My blog is of no interest to anyone but my immediate personal friends, but I am working on a website. I'll let you know when it's up.

Replies from: michaelkeenan
comment by michaelkeenan · 2009-04-29T04:25:30.844Z · LW(p) · GW(p)

Hey Yvain. I found your blog a little while ago (I think it was from an interesting comment on Patri's LiveJournal, or maybe he linked to you). I disagree that your blog isn't interesting to people that aren't immediate friends (for example, I found your arguments about boycotts and children's rights to be interesting and persuasive). I respect that you seem to not want to link to it here, so I won't. But I urge you to change your mind!

Replies from: badger, Yvain
comment by badger · 2009-04-29T05:02:01.731Z · LW(p) · GW(p)

Ha, this was just enough information for my google-fu to finally succeed.

Yvain, I have a feeling that between your articles here, your travels through Outer Mongolia, and your apparent all-around awesomosity, EY has some stiff competition for cult leader.

comment by Scott Alexander (Yvain) · 2009-05-02T23:49:31.272Z · LW(p) · GW(p)

Thank you, Michael, for not linking to it here, and thank you, Badger, for the kind words. Although I'm not going to accept any comparisons to EY until I've come up with and implemented at least one feasible plan to save the world.

comment by nazgulnarsil · 2009-03-14T10:05:50.431Z · LW(p) · GW(p)

With regard to the third question: what if I believe that any resources given simply allow the population to expand and hence cause more suffering than letting people die?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-03-14T11:14:31.693Z · LW(p) · GW(p)

If you don't really believe that, and it's just your excuse for not giving away lots of money, you should say loud and clear "I don't believe I'm morally obligated to reduce suffering if it inconveniences me too much." And then you've learned something useful about yourself.

But if you do really believe that, and you otherwise accept John's argument, you should say explicitly, "I accept I'm morally obligated to reduce suffering as much as possible, even at the cost of great inconvenience to myself. However, I am worried because of the contingent fact that giving people more resources will lead to more population, causing more suffering."

And if you really do believe that and think it through, you'll end up spending almost all your income on condoms for third world countries.

comment by Mr Valmonty · 2023-04-23T20:32:41.828Z · LW(p) · GW(p)

Is this not just an alternative way of describing a red herring argument? If not, I would be interested to see what nuance I'm missing.

I find this classically in the abortion discussion. Pro-abortionists will bring up valid-at-face-value concerns regarding rape and incest. But if you grant that victims of rape/incest can retain full access to abortions, the pro-abortionist will not suddenly agree with criminalisation of abortion in the non-rape/incest group. Why? Because the rape/incest point was a red herring argument.

comment by NoriMori1992 · 2021-11-10T21:05:56.427Z · LW(p) · GW(p)

This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

This is a bad example. Pascal's wager wasn't a thought experiment. Pascal genuinely believed that the two options his wager proposed were the only options. Not hypothetically, but in real life. He wasn't asking, "If this were the case, would you wager that God exists?" He was saying, "This is the case, and so you'd be stupid not to wager that God exists, QED." It's not fighting the hypothetical to say "Those aren't the only two options", because the problem as Pascal viewed it wasn't hypothetical in the slightest. He was making an argument, and the rebuttal to his argument is to point out that it relies on flawed assumptions.

comment by Aurini · 2009-03-19T05:06:10.105Z · LW(p) · GW(p)

I apologize for banging on about the railroad question, but I think the way you phrased it does an excellent job of illustrating (and has helped me isolate) why I've always been vaguely uncomfortable with Utilitarianism. There is a sharp moral contrast, which the question doesn't innately recognize, between the patients entering into a voluntary lottery and the forced sacrifice of the wandering traveller.

Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism. I think it was you who commented on OvercomingBias that one of the risks associated with cryonics is waking up in a society where you are not permitted to auto-euthanize. Utilitarianism might argue that the utility of your own diminished suffering would be less than the utility of other people valuing your continued life.

While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it as a moral principle. I lean towards a somewhat Objectivist viewpoint: namely, that the first principle we ought to start with is that each person has the right to their own person and property, and that it is immoral to try and take it from them for any cause.

Following from this, let me address your third question: I'd argue that this type of wealth transfer not only undermines long-term economic development of the African country (empirical, I could be proved wrong), not only prevents me from spending money on quality products & investing in practical businesses (once again, empirical), but that on a deeper level it undermines the individuality which I value in the human condition. Asking which produces greater happiness & material wealth, Communism or Capitalism, is an empirical question: Omega could come down and tell me that Communism will produce 10x the happiness, or 100x, or whatever. But the idea of slamming everybody into the same, mass-produced box to maximize happiness utility sounds suspiciously like Orgasmium.

I don't see how you can compromise on these principles. Either each person has full ownership of themselves (so long as they don't infringe on others), or they have zero ownership. Morality (as I would define it) demands that we fight to protect others' freedom, but it says nothing about ensuring their welfare. Giving something for 'free' is just another form of enslavement - even if it's only survival and dependence in exchange for a smug sense of superiority.

On a side note, you did a brilliant job of deconstructing 'morality based on empiricism.'

Replies from: John_Baez, Hroppa, handoflixue, TheAncientGeek, AspiringRationalist, Swimmer963
comment by John_Baez · 2010-05-02T23:59:27.110Z · LW(p) · GW(p)

Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism.

So maybe some form of forced socialism is right. But you don't seem interested in considering that possibility. Why not?

While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it as a moral principle.

Why not?

It seems like you have some pre-established moral principles which you are using in your arguments against utilitarianism. Right?

I don't see how you can compromise on these principles. Either each person has full ownership of themselves (so long as they don't infringe on others), or they have zero ownership.

To me it seems that most people making difficult moral decisions make complicated compromises between competing principles.

Replies from: JohnH
comment by JohnH · 2011-04-28T05:58:04.643Z · LW(p) · GW(p)

Utilitarianism itself requires the use of some pre-established moral principles.

comment by Hroppa · 2011-10-26T13:52:40.496Z · LW(p) · GW(p)

the first principle we ought to start with is that each person has the right to their own person and property, and that it is immoral to try and take it from them for any cause.

Thought experiment: A dictator happens to own all the property on the planet. Until now, he has been giving everybody exactly enough food to survive. In a fit of rage/madness, he stops. You would support the death of all humans other than the dictator, rather than taking his property?

Replies from: Aurini
comment by Aurini · 2011-10-27T07:02:28.660Z · LW(p) · GW(p)

Good god, Aurini (2009) sounds quite pompous. I can't even deal with reading his entire comment.

I've since drifted from Libertarian to full-fledged Reactionary. I will attempt to answer the question as such.

Either the Dictator is God, and we're all damned anyway, so any question of 'Rights' is irrelevant - or the Dictator is Moral, in which case we will kill him by sodomizing him with a red-hot poker. He remains King, so long as he is a competent King (through our eyes, his police's eyes, et cetera).

Supposition: the Dictator is God, but only a God - his peers see his mistreatment of us. Recognizing the fact that he is wasting Good Wheat, they murder him, and install a new Potus. Life returns to happiness - because happy citizens pay the most taxes (just ask Russia).

EDIT: Neither of which is an "End of History" solution, mind you, but I'm just beginning to realize how intractable the problem is. Obviously the new Dictator God will be just as idiotic and fallible as the last - which is why, as nice as Monarchy might seem, it ultimately self-destructs into Democracy, and then Dictatorship (just ask Marc Antony).

Replies from: CharlieSheen, pedanterrific
comment by CharlieSheen · 2012-03-16T07:23:34.858Z · LW(p) · GW(p)

I've since drifted from Libertarian to full-fledged Reactionary

Most LessWrong posters are still firmly in the Cathedral and may fail to appreciate the significance of this, for they can not imagine a world outside it. A sizeable minority though has been influenced by the teachings of Darth Moldbug and the other lords of the alternative right, showing them a surprising taste of the true power of intellectual reaction.

Some such as I have embraced these teachings for now, since it seems the very complexity of the world around us demands it. For there are very difficult and old problems which despite protestation to the contrary remain unsolved. Yet sanity on them must be approached if humanity is to have any hope at all.

comment by pedanterrific · 2011-10-27T07:09:01.927Z · LW(p) · GW(p)

What an interesting way of dodging the question.

Either the Dictator is God, ... or the Dictator is Moral

What's this supposed to mean? First of all, the context of the hypothetical clearly indicates that the dictator is human. Second, what do you mean by 'Moral"?

Also, why is it suddenly more acceptable to murder than to just take their property? There's these things called mental hospitals and prisons, you should look into them.

Edit: Look, it might help to define the LCPW in this context: the question is whether it is ever moral to take someone's property from them, yeah? So focus on that by making the actual act of doing so really easy - he's an absent dictator, owning all property on Rinax from his penthouse on Earth. One day he gets buried in paperwork and forgets to sign the form releasing the next year's food allowance to his subjects. How long should they wait before they break open the shipping containers and steal his food?

Replies from: Aurini
comment by Aurini · 2011-10-27T08:30:18.683Z · LW(p) · GW(p)

Heh, I'm nothing if not Interesting.

The quote is a typo, incidentally - I meant to write "morTal".

As seductive as the concept is, I see no firmament underlying the basis of 'human rights' - without a godhead who firmly endorses them, I'm not sure what they mean, beyond self-evident utilitarianism...

Oh Gods, have I become a Utilitarian? Possibly.

It's hard to say; given that the narrative of who is 'right' and who is 'wrong' is inevitably written by those who are on the firing squad, I tend not to like this question. I'm honestly not sure how to respond to your comment; what sort of reply would make sense? Let me ask you - was Darth Vader obviously the Good Guy, or was he a Villain whom you could Sympathize With?

comment by handoflixue · 2010-12-17T23:39:16.785Z · LW(p) · GW(p)

"Giving something for 'free' is just another form of enslavement "

Hmmmm, this actually really puzzles me - how do you handle inheritance? Presumably, it being my property, I ought to be free to delegate it as I wish in my will. But, equally, to the person receiving it, it's a 'freebie', a form of enslavement.

What about the other gifts that come from a privileged upbringing (access to a good education, or even just good nutrition, say)? Surely, as a 2-year-old, you didn't do anything particularly special to be deserving of these things compared to our example African kids - indeed, I doubt there's anything a 2-year-old could do to shift themselves from one circumstance to the other.

comment by TheAncientGeek · 2016-04-23T13:49:56.649Z · LW(p) · GW(p)

Either each person has full ownership of themselves (so long as they don't infringe on others), or they have zero ownership.

Or "mu". Ownership, self or otherwise, is the wrong frame entirely, for instace.

comment by NoSignalNoNoise (AspiringRationalist) · 2012-03-19T03:54:41.328Z · LW(p) · GW(p)

There are important differences between moral principles and government policies. Even if you accept the premise that the morally optimal course of action is X, it does not logically follow that the government should mandate X. For one thing, it may or may not be feasible to enforce such a law, or the costs of implementing it may outweigh the benefits. Furthermore, some moral philosophies (though not utilitarianism) place firm boundaries on what is and is not the proper role of government.

I would be curious to know your true rejection of utilitarianism.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-03-19T05:06:11.990Z · LW(p) · GW(p)

Even if you accept the premise that the morally optimal course of action is X, it does not logically follow that the government should mandate X.

More generally, reaching the moral conclusion that agent A should do X (or even is obligated to do X), doesn't obviously entail that agents B, C, D should compel A to do X, nor punish A for failing to do X — nor even that B, C, D are permitted to do so.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-05T13:11:02.245Z · LW(p) · GW(p)

Either each person has full ownership of themselves (so long as they don't infringe on others), or they have zero ownership.

The Canadian government has socialist elements, and I wouldn't mind and would even choose to live in a society that had more. As far as I know, the only freedoms it takes away are those that infringe on other people...considering that human beings are social animals, many or most of our decisions do affect other people. (I have not researched this. Feel free to prove me wrong or enlighten me on other aspects.)

Replies from: Aurini
comment by Aurini · 2011-06-12T19:15:47.998Z · LW(p) · GW(p)

The problem for me - speaking as a Canadian - is that there's no choice about it. To be honest Canada's a pretty good place to live. Despite the personality-disordered weirdo we have running the place, it's relatively free; decent amounts of freedom of speech, stable currency, only moderate corruption in our police forces, and greater economic liberty than the US (that's right - Soviet Canuckistan is less government run than the US) - my biggest upsets are Gun Control, the state of Domestic Violence Law, and the 'Human Rights' Tribunals which censor speech critical of protected groups. The worst thing our Monster in Parliament is trying to do is enact the equivalent of the Patriot Act, ten years too late.

The fundamental problem, though, is the lack of choice to begin with - immigration has huge barriers, and it's not like there's room for any more countries. We're all forced into the country we live in, and I suspect that the real civilizing force is the decency of regular people, who manage despite the government.

It's like the post office, fifty years ago - they delivered the mail, they were adequate, but they weren't performing anywhere near the level that was possible. Nobody complained (much) because they were accustomed to it. As soon as private delivery companies entered the scene, the post office had to shape up fast.

If I had the choice, I might choose to enter a socialist collective of sorts - at the very least, I'd want to live in an incorporated city which took care of the roads and sewers. The same thing should go for countries; nobody forces me to live in Calgary and accept the local tax burden, and if they did, it wouldn't be right. Similarly it isn't right to force people to pay taxes in a country when they're deeply opposed to certain elements of government.

Keep in mind, I'm not just complaining without a solution in mind; there are workable solutions that would pay for things such as national defence, while subjecting government to the integrity of the private market. Poly-centric law is one example, though I think having a Corporate Monarchy would be more workable here in Canada.

Replies from: MixedNuts
comment by MixedNuts · 2011-06-12T19:43:39.267Z · LW(p) · GW(p)

Anecdotal evidence: In France, the post office is much worse since they have competition.

comment by gmweinberg · 2009-03-14T21:45:27.343Z · LW(p) · GW(p)

I don't see any problem with acknowledging that in a world very different from this one my beliefs and actions would also be different. For example, I think the fact that there are and have been so many different religions with significantly different beliefs as to what God wants is evidence that none of them are correct. It follows that if there was just one religion with any significant number of adherents then that would be evidence (not proof) that that religion was in fact correct.

Maybe if Omega tells me it's Catholicism or nothing I'll become a Catholic. Maybe if he says it's the Aztec religion or nothing I'll cut out your beating heart and toss you down a pyramid. But no worries, neither one is going to happen in the real world.

comment by MichaelHoward · 2009-03-14T13:40:50.024Z · LW(p) · GW(p)

Yvain,

Do you have a blog or home page with more material, or is there another site (apart from OB) with contributions from you that might be interesting to LW readers?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-14T02:24:10.821Z · LW(p) · GW(p)

Yvain, you frequently seem to have extra line breaks in your post, which I've been editing to fix. I'm leaving this post as is because I'm wondering if you can't even see them, in which case are you using an unusual browser or OS?

comment by Gunslinger (LessWrong1) · 2016-04-26T16:53:46.989Z · LW(p) · GW(p)

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

That's a pretty damn convenient world. It's basically like saying "In a world where serious issue X isn't applicable, what would you do?" which might as well be the better question instead of beating around the bush.

Sorry if this was posted before.

comment by aausch · 2015-09-28T16:29:11.030Z · LW(p) · GW(p)

The acceleratingfuture.com domain referenced in the opening quote has an expired registration (http://acceleratingfuture.com/?reqp=1&reqr=).

comment by matteyas · 2014-09-28T14:56:26.171Z · LW(p) · GW(p)

I have a question related to the initial question about the lone traveler. When is it okay to initiate force against any individual who has not initiated force against anyone?

Bonus: Here's a (very anal) cop out you could use against the least convenient possible world suggestion: Such a world—as seen from the perspective of someone seeking a rational answer—has no rational answer for the question posed.

Or a slightly different flavor for those who are more concerned with being rational than with rationality: In such a world, I—who value rational answers above all other answers—will inevitably answer the question irrationally. :þ

Replies from: DanielLC
comment by DanielLC · 2014-11-13T05:57:11.191Z · LW(p) · GW(p)

Bonus: Here's a (very anal) cop out you could use against the least convenient possible world suggestion: Such a world—as seen from the perspective of someone seeking a rational answer—has no rational answer for the question posed.

I'm not sure what this means. There is a finite number of choices. Each of them has a specific utility. The one with the highest utility is the most rational. Are you saying that one or more choices has undefined utility?

comment by A1987dM (army1987) · 2013-11-30T23:19:10.841Z · LW(p) · GW(p)

In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

That sounds like it would decrease my probability that God exists by several dozen orders of magnitude.

Replies from: DanielLC
comment by DanielLC · 2014-11-13T05:51:35.456Z · LW(p) · GW(p)

Yes, but the important part is that it would mean that you know God won't punish you for becoming a Catholic.

Replies from: hairyfigment
comment by hairyfigment · 2014-11-13T06:24:18.163Z · LW(p) · GW(p)

I should point out that - if for some reason we're taking absurdly low-probability hypotheses into account - the idea that religion will prevent us from using the Force to live forever seems more likely to me than any deity who could offer us eternity.

Replies from: DanielLC
comment by DanielLC · 2014-11-13T18:35:11.901Z · LW(p) · GW(p)

if for some reason we're taking absurdly low-probability hypotheses into account

Generally you use the probability times the utility. It would seem reasonable to take absurdly low-probability hypotheses into account if the difference in utility is absurdly high. That being said, refusing to take into account probabilities below a given value regardless of utility is a perfectly acceptable answer. I can't assert that you take them into account any more than I can assert you're a utilitarian in the doctor example.
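The trade-off described above can be sketched in a few lines of Python. This is an illustrative aside, not anything from the thread: the function name, the cutoff parameter, and all the numbers are invented for the example.

```python
# Expected utility as probability times utility, with an optional cutoff
# below which probabilities are ignored regardless of utility.
# All quantities are made up for illustration.

def expected_utility(outcomes, min_probability=0.0):
    """Sum p * u over (p, u) pairs, ignoring any p below min_probability."""
    return sum(p * u for p, u in outcomes if p >= min_probability)

# A tiny probability attached to an absurdly large utility dominates...
wager = [(1e-12, 1e15), (1.0 - 1e-12, -1.0)]
print(expected_utility(wager))  # positive: the wager dominates

# ...unless you refuse to consider probabilities below some threshold,
# the "perfectly acceptable answer" the comment mentions.
print(expected_utility(wager, min_probability=1e-9))  # roughly -1.0: decline
```

The second call shows how a probability floor changes the verdict without touching the utilities themselves.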

the idea that religion will prevent us from using the Force to live forever seems more likely to me than any deity who could offer us eternity.

I don't know if the Force counts as a religion, but even if it doesn't there are a few things that are not religions that would work. You are still missing the point, though. Let's say that Omega also gives an upper bound for the absolute value of utility you will have if Catholicism isn't true.

Replies from: hairyfigment
comment by hairyfigment · 2014-11-13T20:12:02.612Z · LW(p) · GW(p)

I know you've seen the Pascal's Mugging problem - that's what I meant to refer to. An upper bound to utility elsewhere doesn't matter if P(Catholicism) gets a sufficient leverage penalty (and the same again for all stronger claims). Are you saying that according to Omega, Hansonian leverage penalties are unsalvageable and this upper bound is the solution? (On its face, the claim "Catholicism is true" does not logically rule out the Mugger's claim, but of course we could go further.) I'd be more skeptical about this than I would be if Omega told me P=NP and also self-modifying AI is impossible by Gödel's Incompleteness. But of course if I accepted it, this would change the equation.

comment by Varan · 2013-05-10T13:41:48.568Z · LW(p) · GW(p)

I think the traveler's problem may pose two questions instead of one. First of all - is it the right thing to do just once, and second, is it good enough to be a universal rule? We can conclude that it's the same question, because using it once means we should use it every time the situation is the same. But using it as a universal rule has an additional side effect - a world where you know you can be killed (deprived of all possessions, etc.) any moment to help some number of strangers is not such a nice place to live in, though sometimes it's possible that the sacrifice is still worth it.

Someone can say that in the least convenient world the general rule is "you only kill strangers when it's absolutely not possible anyone (even the patients) would know that, and they have no one, and so on". In that world it's similar to the "living happily in a lie" problem. If the world where people don't know about murdered travellers (the lie) is worse than the world where they know about the murders, then this world is even worse than the previous one.

comment by brainoil · 2013-04-17T11:02:44.509Z · LW(p) · GW(p)

Would this be moral or not?

Of course it is, if you live in this hypothetical world. The fact that in real life things are rarely this clear, or the fact that in real life you will be jailed for doing this, or the fact that you'd feel guilty if you do this, or the fact that in real life you won't have the courage to do this, doesn't mean that it's wrong.

But in real life I'd hardly ever violate the libertarian rights because of all the reasons mentioned above.

comment by rasthedestroyer · 2012-02-09T23:57:57.904Z · LW(p) · GW(p)

The biological commentary is indeed accurate, but I question its relevance in the context of the question, which seems to be one in favor of a utilitarian ethical discourse without the biological considerations. It might be better to assume the biological factors involved are compatible, or assume all other factors are equal, and disregard the biology.

The first answer that comes to mind for most I'm sure is that 10 is greater than 1, and that such a sacrifice would return a net gain in lives saved. However, this question is complicated by what it is about saving lives at all that is good. If you can save the life of a dying patient without risk then you should. We assume this is true - even from a utilitarian position - that life has intrinsic value and therefore saving lives when possible is the correct decision. Thus, as a strictly quantitative normative comparison, the decision to kill one healthy person to save 10 dying patients is right. But qualitative interests should be accounted for too: treating the act of killing a healthy individual as a separate act, it stands in direct contradiction to the supposed ethical duty we are to uphold, namely, saving life, by taking the life of one who is not sick. Consider it in these terms: is it ethically correct, then, to take the lives of the healthy to save those of the sick? This amounts to a zero-sum game and a clear logical aporia.

Replies from: None
comment by [deleted] · 2015-09-16T02:43:28.642Z · LW(p) · GW(p)

Why, if the sick people are so close biologically, can't we sentence one of them instead to help the rest?

comment by A1987dM (army1987) · 2012-02-05T17:48:22.189Z · LW(p) · GW(p)

But in the least convenient possible world, here comes Omega again and tells you that Charity X has been proven to do exactly what it claims: help the poor without any counterproductive effects.

You don't need the least convenient possible world and Omega for that; for non-excessively-large values of proven, this world and givewell.org suffice. I'm surprised that in three years nobody pointed that out before.

comment by lucidfox · 2010-12-02T08:22:44.752Z · LW(p) · GW(p)

The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

The problem with this specific formulation is that fundamentalist Christian beliefs are inconsistent, and thus it trivially follows from Omega's wording that God does not exist.

A better wording would be to postulate that Omega asserts the possibility that a God exists who judges you posthumously based on belief, and it is the same one for all humans who have ever lived. In that case, if I completely trust Omega, the argument collapses into "shut up and calculate": in other words, there is some threshold of probability P(faith-judging God exists) at which point worshiping it would be the rational choice.
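The "shut up and calculate" threshold can be made concrete with a small sketch. Everything here is a hypothetical stand-in of my own: the comment specifies neither the cost of worship nor the posthumous payoff, so the numbers are purely illustrative.

```python
# Decision rule: worship pays off when p * payoff > cost, i.e. when
# p exceeds cost / payoff. Both quantities are hypothetical stand-ins
# for the lifetime cost of worship and the posthumous payoff.

def worship_threshold(cost, payoff):
    """Smallest P(faith-judging God exists) at which worship is rational."""
    return cost / payoff

# With a worship cost of 1 utilon and a posthumous payoff of 1000 utilons,
# any credence above 0.001 would make worship the rational bet.
p_star = worship_threshold(cost=1.0, payoff=1000.0)
print(p_star)  # 0.001
```

The point of the sketch is only that once you trust Omega's framing, the question reduces to comparing your credence against a single ratio.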

comment by Bugle · 2010-02-04T21:27:45.874Z · LW(p) · GW(p)

"first, do no harm"

It's remarkable that medical traditions predating transplants* already contain an injunction against butchering passers-by for spare parts.

*I thought this was part of the Hippocratic oath but apparently it's not

Replies from: thomblake
comment by thomblake · 2010-02-04T21:57:22.164Z · LW(p) · GW(p)

An injunction to do no harm is part of the Hippocratic oath, and the actual text has multiple translations, so I don't think it's too far-fetched to attribute "first, do no harm" to the oath.

Replies from: MrHen
comment by MrHen · 2010-02-04T23:12:21.128Z · LW(p) · GW(p)

Obligatory wikipedia link.

The phrase "first, do no harm" is often, incorrectly, attributed to the oath.

On the other hand:

The origin of the phrase is uncertain. The Hippocratic Oath includes the promise "to abstain from doing harm" but not the precise phrase. Perhaps the closest approximation in the Hippocratic Corpus is in Epidemics: "The physician must...have two special objects in view with regard to disease, namely, to do good or to do no harm".

This was from the article on first, do no harm.

comment by Cameron_Taylor · 2009-03-19T12:22:02.607Z · LW(p) · GW(p)

3: Extreme Altruism.

I don't want to save starving Africans. In most circumstances I would not actively mass murder to cull overpopulation but I wouldn't judge myself immoral for doing so.

comment by Cameron_Taylor · 2009-03-19T12:16:56.885Z · LW(p) · GW(p)

2: The God-Shaped Hole....Do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?

I would, and do, admit that I don't want to believe it unless it's true. I watch myself make that decision more than enough to be honest about it.

(I'll note that believing things that aren't true makes me miserable and stressed. My verbal beliefs go about interfering with my behavior and my aversion to hypocrisy frustrates me. I'm usually better off believing the truth and just going along with the lie. However, I've assumed that in the least convenient possible world my God-shaped hole was repaired to normal function.)

comment by Cameron_Taylor · 2009-03-19T12:10:36.972Z · LW(p) · GW(p)

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives.

I wouldn't kill him. It isn't worth the risk for me. I also wouldn't consider it my job. I'm a doctor, not a superhero. The service doctors provide isn't maximising survival.

Would this be moral or not?

No. And even if it were, I'd violate the moral rule out of self-interest.

comment by Cameron_Taylor · 2009-03-19T12:03:23.419Z · LW(p) · GW(p)

1: Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

Would you become a Catholic in this world?

Yes, plus pay bribes/alms at whatever the going rate is for having doubts that have been updated to greater than 50% based on observations. Since faith is somewhat distinct from raw prediction, I suspect God would be cool with that.

comment by vroman · 2009-03-19T01:32:25.741Z · LW(p) · GW(p)

*kill traveler to save patients problem

assuming that

-the above solutions (patient roulette) were not viable

-upon receiving their new organs, the patients would be restored to full functionality, as equal or better utility generators than the traveler

then I would kill the traveler. However, if the traveler successfully defended himself and turned the tables on me, I would use my dying breath to happily congratulate his self-preservation instinct and wish him no further problems on the remainder of his journey. And of course I'd have left instructions with my nurse to put my body on ice and call the doctor from the next town over to come and do the transplants from my own organs.

  1. Pascal's wager

If Catholicism is true, then I'm already in hell. What else can you call an arbitrary, irrational universe?

  2. God-shaped hole

If there is an evolutionary trap in the human mind that requires irrational belief to achieve optimal happiness, then I just add that to the list of all the other 'design flaws' and ignore it.

  3. extreme altruism

I cannot imagine a least convenient world in which something resembling what we understand of the laws of economics operates, where both I and the Africans would not be better off with me using my money to invest in local industry, or to finance an anti-warlord coup d'état. If you want to fiat that these people can't work, or that the dictator is unstoppable and will nationalize and embezzle my investments, then I don't see how charity is going to do any better. If there's no way that my capital can improve their economy, then they are just flat doomed, and I'd rather keep my money.

comment by Jonnan · 2009-03-18T23:00:00.213Z · LW(p) · GW(p)

The problem is that the "least convenient world" seems to involve a premise that would, in and of itself, be unverifiable.

The best example is the Pascal's wager issue - Omega tells me with absolute certainty that it's either a specific version of God (not, for instance, Odin, but Catholicism), or no God.

But if I'm not willing to believe in an omniscient deity called God, then taking it back a step and saying "But we know it's either-or, because the omniscient de . . . errr . . . Omega tells you so" is just redefining an omniscient deity.

Well, if I don't believe in assuming God exists without proof, I can happily not assume Omega exists without proof. Proof is verifiably impossible, because all I can prove is that Omega is smarter than me.

Since I won't assume anything based only on the fact that someone is smarter than me - which is all I know about Omega - then no, the fact that Omega says any of this stuff and states it by fiat isn't going to convince me.

If Omega is that damn smart, it can go to the effort of proving its statements.

Jonnan

Post-script: Which suddenly explains to me why I would pick the million-dollar box and leave the $1000 alone. Because that's win-win - either I get the million or I prove Omega is in fact not omniscient. He might be smarter than me (almost certainly is - the memory on this bio-computer I'm running needs upgraded something fierce, and the underlying operating system was last patched 30,000 years ago or so), but I can't prove it, I can only debunk it, and the only way to do that is to take the million.

Replies from: Nick_Tarleton, DanielLC
comment by Nick_Tarleton · 2009-03-18T23:06:01.928Z · LW(p) · GW(p)

Yes, to make it work, you may have to imagine yourself in an unreachable epistemic state. I don't see why this is a problem, though.

Replies from: Jonnan
comment by Jonnan · 2009-03-24T20:29:53.963Z · LW(p) · GW(p)

No, to make it work you have to assume that you believe in omniscience in order to clarify whether you believe in omniscience, a classic 'begging the question' scenario.

Replies from: Cyan
comment by Cyan · 2009-03-24T21:17:21.041Z · LW(p) · GW(p)

You're right that the existence of Omega is information relevant to the existence of other omniscient beings, but in the least convenient world Omega tells you that it is not the Catholic version of God, and you still need to decide whether that being exists. (And you really do have to decide that specific question, because eternal damnation is in the payoff matrix.)

Omniscience is almost a side issue.

Replies from: Jonnan
comment by Jonnan · 2009-04-07T04:02:54.703Z · LW(p) · GW(p)

Not if omniscience is A) a necessary prerequisite to the existence of a deity, and B) by definition unverifiable to an entity that is not itself omniscient.

Without being omniscient myself, I can only adjudge the accuracy of Omega's predictions based on the accuracy of its known predictions versus the accuracy of my own.

Unfortunately, the mere fact that I am not omniscient means I cannot, with 100% accuracy, know the accuracy of Omega's decisions, because I am aware of the concepts of selection bias, and furthermore may not be capable of actually evaluating the accuracy of all Omega's predictions.

I can take this further, but fundamentally, to be able to verify Omega's omniscience, I actually have to be omniscient. Otherwise I can only adjudge that Omega's ability to predict the future is greater, statistically, than my own, to some degree 'x', with a probable error on my part 'y' - an error which may or may not place Omega's accuracy equal to or greater than 100%.

Omega may in fact be omniscient, but that fact is itself unverifiable, and any philosophical problem that assumes A) I am rational, but not omniscient B) Omega is omniscient, and C) I accept B as true has a fundamental contradiction. By definition, I cannot be both rational and accept that Omega is Omniscient. At best I can only accept that Omega has, so far as I know, a flawless track record, because that is all I can observe.

Unfortunately, I think this seemingly small difference between "Omniscient" and "Has been correct to the limit of my ability to observe" makes a fairly massive difference in what the logical outcome of "Omega" style problems is.

Jonnan
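
Jonnan's distinction between "omniscient" and "has been correct to the limit of my ability to observe" can be made quantitative with a toy Bayesian calculation. This is a sketch only: the 50% prior and the rival error rate `eps` are illustrative assumptions, not anything stated in the thread. The point it shows is that however long Omega's flawless streak, the posterior probability of literal omniscience stays strictly below 1:

```python
from fractions import Fraction

def posterior_perfect(n_correct, prior_perfect=Fraction(1, 2), eps=Fraction(1, 1000)):
    """Posterior probability that Omega is a literally perfect predictor,
    given n_correct observed successes and no observed failures, against
    the rival hypothesis that Omega succeeds with rate 1 - eps."""
    like_perfect = Fraction(1)           # a perfect predictor never misses
    like_rival = (1 - eps) ** n_correct  # a merely near-perfect one can
    numerator = prior_perfect * like_perfect
    denominator = numerator + (1 - prior_perfect) * like_rival
    return numerator / denominator

# However long the flawless streak, the posterior never reaches 1,
# because the near-perfect rival hypothesis always retains some weight:
for n in (10, 100, 10_000):
    assert posterior_perfect(n) < 1
```

Exact rational arithmetic (`Fraction`) is used so the gap below 1 is a mathematical fact of the calculation rather than a floating-point artifact.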

Replies from: Cyan
comment by Cyan · 2009-04-23T20:28:36.546Z · LW(p) · GW(p)

The whole idea of an unreachable epistemic state seems to be tripping you up. In the least convenient world, you know that Omega is omniscient, and the fact that you cannot verify that knowledge doesn't trouble you.

Replies from: Dan_Moore
comment by Dan_Moore · 2011-12-29T16:58:46.786Z · LW(p) · GW(p)

Argument #1 works in the least convenient imaginable world, in my opinion. However, the OP concerns the least convenient possible world. The existence of an omniscient Omega seems to be possible in only the same sense as the existence of a deity; i.e., no one has proven it to be impossible. The ability to hypothesize the existence of Omega doesn't imply that its existence is actually possible.

Replies from: Cyan
comment by Cyan · 2011-12-30T05:59:36.464Z · LW(p) · GW(p)

It's been more than two and a half years, dude!

OK, here goes. I made a misstep by involving Omega in my least convenient world scenario at all. But I was right to try to redirect attention away from omniscience -- it just doesn't matter how you get to the epistemic state of discounting all possibilities other than Catholicism or atheism. All you need to grant is that it's possible for your brain to be in that state. Did knowledge from Omega put you there? Did you suffer an organic brain injury? Did your social context influence the possibilities you were willing to consider? Were you kidnapped and brainwashed? Who cares? It's irrelevant -- the presence of eternal damnation in the payoff matrix makes it so. However you got there, you must now face Pascal's Wager head on. How will you answer?
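
Cyan's point that "eternal damnation is in the payoff matrix" is structural: once only the two hypotheses remain, the wager reduces to an expected-utility table. A toy sketch of that structure (all payoff numbers and the probability below are illustrative assumptions, not anyone's actual utilities):

```python
def expected_utility(action, p_catholic,
                     salvation=10**6, damnation=-10**6, cost_of_practice=-1):
    """Toy expected utility once only two hypotheses remain:
    Catholicism (probability p_catholic) or atheism (1 - p_catholic)."""
    if action == "practice":
        # Salvation if Catholicism is true; a small cost of observance if not.
        return p_catholic * salvation + (1 - p_catholic) * cost_of_practice
    if action == "abstain":
        # Damnation if Catholicism is true; nothing happens if not.
        return p_catholic * damnation + (1 - p_catholic) * 0
    raise ValueError(action)

# With payoffs this large, even a small residual probability assigned
# to Catholicism dominates the decision:
assert expected_utility("practice", 0.01) > expected_utility("abstain", 0.01)
```

This is why the route into the epistemic state (Omega, brain injury, brainwashing) is irrelevant to the decision itself; it also shows where variants like "God rewards honest atheists" change the answer, by editing the table's entries rather than the probabilities.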

Replies from: Dan_Moore
comment by Dan_Moore · 2011-12-30T16:45:40.508Z · LW(p) · GW(p)

It's been more than two and a half years, dude!

sorry - I was led there by a recent thread.

But I was right to try to redirect attention away from omniscience -- it just doesn't matter how you get to the epistemic state of discounting all possibilities other than Catholicism or atheism. All you need to grant is that it's possible for your brain to be in that state.

Given the epistemic state of recognizing only those two possibilities, I suppose I would cop out as follows. I would examine the minimum requirements of being a Catholic, and determine whether this would require me to do anything I find morally repugnant. If not, I would comply with the Catholic minimum requirements, while not rejecting either possibility. In other words, I would be an agnostic. (I don't think Catholicism requires a complete absence of doubt.)

comment by DanielLC · 2014-11-13T05:59:57.586Z · LW(p) · GW(p)

He might be smarter than me (almost certainly is - the memory on this bio-computer I'm running needs upgraded something fierce, and the underlying operating system was last patched 30,000 years ago or so), but I can't prove it, I can only debunk it, and the only way to do that is to take the million.

You could two-box. If you get the million and the thousand you prove that he's not omniscient. All that's required is that you make the choice he did not predict.

comment by cleonid · 2009-03-14T15:37:29.942Z · LW(p) · GW(p)

“Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?”

I believe that God’s existence or non-existence cannot be rigorously proven. Likewise, there is no rigorous protocol for estimating the chances. Therefore we are forced to rely on our internal heuristics, which are extremely sensitive to our personal preferences for the desired answer. Consequently, people who would be happier believing in God most likely already do so. The same principle applies to people with “rationality-shaped holes”. It’s possible that one group is on average happier than the other. However, becoming happier simply by switching sides may not be possible without a profound personality change. In other words, you need to become somebody other than who you are now. Since this seems little different from being erased and replaced by another person, it’s hardly an appealing choice for most people.

On the other hand, we seem to be little concerned about the gradual change of our personalities (compare yourself now and twenty years ago). Hence, it’s quite possible for the same physical person at different points of life to be comfortable in totally different camps.

comment by JohnBuridan · 2015-03-25T05:28:49.498Z · LW(p) · GW(p)

I think Pascal's Wager and the God-Shaped Hole should get more play.

To your Pascal's Wager statement

Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain.

I don't think what you say is incompatible with the Catholic position that what is most important to God is that we pursue the best thing we know, i.e. intellectual integrity along with charity. But perhaps I am wrong. You might know more about this than I do.

If God is Truth, then wouldn't it follow that rationality fills (or at least could fill) that God-shaped hole? This brings me to the second point you made.

I have never heard a Christian say there is a God-shaped hole inside me. But if there is one, it would be universe-sized! I suppose I can be more generous with my interpretation, though. Christianity has a technical vocabulary too, but this isn't it. The theological way to say it would be something like, "A good life for a human being includes worship of God, who is personal and just." That's simply what I imagine a serious Christian would say, right?

You said:

Omega comes along and tells you that sorry, the hole is exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy life.

What do you mean by happy? What would Omega mean by a "less-than-happy life"? The truth, or doing something that you must do by virtue of knowing the truth, will not always make you happy. Perhaps you don't feel like defending the truth today. A blissful life could be spent shopping in malls or donating to African countries or picking up litter. How are the types of happiness achieved in each act different? Or are they?

Replies from: None
comment by [deleted] · 2015-03-25T08:48:49.890Z · LW(p) · GW(p)

I think the GSH is largely that our whole way of thinking - our terminology, our philosophy, our science - evolved in theistic societies. Taking god out of it leaves a lot of former linkages dangling in the air; we will probably learn to link them up sooner or later, but it requires revising a surprisingly large amount.

For example, a godless universe has no laws of nature, just models of nature that happen to be predictive.

For example, there isn't really any such thing as progress, because there cannot be a goal in history in a godless universe. There is social change, and it is up to you to judge whether it is good.

For example, there are no implicit promises in the godless universe; we could do everything "right" and still go extinct. This is counterintuitive on a deep, in-the-bones level: our whole cultural history teaches that if we make a good enough effort, some mysterious force will pat our backs, give us a B+ for effort, and pick up the rest of the slack - because this is what our parents and teachers did for us. Just look at common action movies: they are about heroes trying hard and almost failing, then getting almost miraculously lucky. Deus ex machina.

The GSH becomes very intense when you start raising children. For example, it would mean not giving praise for effort - in fact, sometimes punishing good solutions to demonstrate how in the real world you can do things right and still fail. This would be really cruel, and we probably don't want to do it. Most education tends to imply that what it teaches is certain truth, laws of nature, etc., so things get hard from here.

Replies from: gjm
comment by gjm · 2015-03-25T13:06:49.977Z · LW(p) · GW(p)

Taking god out of it leaves a lot of former linkages dangling in the air

There's a nice exposition of roughly this idea over at Yvain's / Scott Alexander's blog.

Replies from: JohnBuridan
comment by JohnBuridan · 2015-03-25T14:00:36.808Z · LW(p) · GW(p)

To Hollander:

When we create models, they are models of something other than your own mind's processes. Or are you a coherence theorist / epistemological anarchist? I think that some models (of progress, of biology, of morality) are more true - a.k.a. less wrong. Their predictive power comes from the near-miraculous fact that the symbols we use for math and science can be manipulated and, after the manipulation, still work in the world! I am always in awe at this natural wonder. Logic, Nature, Beautiful.

gjm:

Thanks for that link! It's really good, as is the previous post on his blog. I underestimate how metaphysically-light most atheisms are. Since I still believe in a knockout-fundamental-goodness in the universe that we model with morality, I might be more in Scott's camp.

comment by Dmytry · 2011-12-29T09:53:52.453Z · LW(p) · GW(p)

In the good old days there was a concept of whose problem something is. It's those people's problem that their organs have failed, and it is the traveller's problem that he needs to be quite careful because of the demand for his organs (why is he not a resident, by the way? The idea is that he will have zero utility to the village when he leaves?). Society would normally side with the traveller, for the simple reason that if people start solving their problems at other people's expense like this, those with the most guns and the most money will end up taking organs from other people to stay alive longer - or, indeed, for totally superfluous reasons. It is a policy issue, and the correct policy is obvious.

You can just take organs away from the traveller today, but travellers will start paying for a reliable service that finds the people their organs end up in and assassinates them, and the patients will start paying for an anonymous find-the-traveller-and-kill-him-and-take-his-organs service, and those with the most money will end up having the organs as a matter of luxury. Good thing that we live in the convenient world where this is not very practical, although it happens to some extent. Otherwise you could have seen what you can't think of in your neat little example.

With regards to the doctor, that issue is simply not his problem in the first place unless he's being paid to solve it. He can make it his problem if he wants to, or he can make it his problem to kill everyone who has a particular eye colour; we would deem one choice more moral than the other due to better utility to society, but we would still not grant him enough autonomy to pursue this kind of stuff unhindered, because (a) he will be using it to solve his own problems (saving relatives, for example) and (b) just as easily as deciding to do good, he can decide to cut up random people for no reason whatsoever.

The other issue is, of course, that you are making up this kind of stuff in a totally imaginary world, where those whose organs have been replaced have a reasonable life expectancy, whereas this (people being cut up for organs) is a real-world problem that exists right now, in a real world where a bunch of other conditions apply - and I think it is you, not your friend, who completely missed the point in the first place. To complicate the issue: what if those people's organs are failing through their own fault? Their own stupid action? Suddenly you realize that different people have different worth.

With regards to the moral judgement: yes - with absolutely equal worth assigned to the continuation of each life, the patients' and the traveller's, the organs have to be transplanted. This, however, raises the question: what is the reason behind this exercise? You may be pursuing this topic idly. Other people are almost always more rational than this, more purpose-driven, and they pursue this topic only if they want to make some inference to use in the real world. In particular, they ask a question like that if they want a confirmation which they will misuse. In fact you're an extreme oddity, pursuing this unrealistic example for (giving you the benefit of the doubt) some purpose other than committing a logical fallacy in the real world after caching the conclusion (perhaps - or perhaps you just want to make a very long chain of tiny fallacies and do actually want to conclude something about the real world based on your imaginary world). That is very odd, and most people don't quite know how to react to such behaviour.

comment by Lisawood · 2011-06-02T10:17:32.674Z · LW(p) · GW(p)

The essence of the technique is to assume that all the specific details will align with the idea against which you are arguing.


comment by corruptmemory · 2009-03-15T01:00:32.780Z · LW(p) · GW(p)

For fun, here's my personal contribution to "yet another proof of the non-existence of God (of the Bible)", not that any such "proof" really ought to be necessary.

The Bible (and other Abrahamic texts) is quite clear that God is both omniscient and omnipotent, but at the same time He endowed man with free will (Genesis 2:7-17 - Adam had CHOICE). The problem is that these are irreconcilably contradictory.

Omniscience implies the complete LACK of free will because God already knows everything that will ever happen. In fact, from God's point of view everything that ever could happen has already happened.

Omnipotence implies that even if God somehow intrinsically lacked omniscience, he could grant omniscience to himself; after all, he is omnipotent, right? If he cannot grant himself omniscience, then he is not omnipotent, because there are tasks that are beyond even his power! And at the same time he would also not be omniscient.

So, either we have free will and God is not omniscient (and therefore flawed, because he can be wrong about anything - i.e. not know the outcome of any decision he makes), or we do not have free will and God has always known everything, including every soul that would occupy heaven and hell (including knowing for all eternity that Lucifer would fall) - i.e. your illusion of making choices is irrelevant: your final disposition has always been known, and he made it that way. All that we know is some expression of some ultimately constant "reality". God can play such a reality out along any parameter he wants, and the path through this "reality" simulation would always be the same.

Furthermore, God, being omnipotent, is beyond time: our souls have already been basking in heaven or burning in hell for eternity (our notions of time simply do not apply to such a being), and the mere illusion of your life is ultimately irrelevant. If God is not beyond time, then he is bound by our limits of seeing into the future, again implying that he is both NOT omniscient and NOT omnipotent.

Free will implies the God of the Bible does not exist. Q.E.D. -OR- All of it doesn't matter in the slightest.

Replies from: JohnH, gnobbles
comment by JohnH · 2011-04-28T14:56:16.345Z · LW(p) · GW(p)

Omniscience implies the complete LACK of free will because God already knows everything that will ever happen. In fact, from God's point of view everything that ever could happen has already happened

I am sorry, I do not understand how omniscience implies a lack of free will; can you explain?

Replies from: brainoil
comment by brainoil · 2013-04-18T08:40:31.223Z · LW(p) · GW(p)

Yes, but omniscience combined with omnipotence implies predestination.

comment by gnobbles · 2009-06-12T04:14:41.417Z · LW(p) · GW(p)

"your illusion of making choices is irrelevant - your final disposition has always been known - and he made it that way."

I think this is where the trouble is. Just because God knows the outcome doesn't mean he made it that way. For example, if you've known a friend for an extremely long time, and he has always chosen A over B, you can be reasonably sure he'll pick A. That doesn't mean he has no choice. God is just someone who has infinite knowledge of you. He knows what you will end up doing, but you still have to carry it out and make the choice yourself. It really just depends on how you define "free will" and "choice".

comment by Saladin · 2012-02-07T17:23:11.369Z · LW(p) · GW(p)

If I go back to the original situation - one healthy stranger-donor to save a dozen transplant patients, and what to do: it always surprises me that no one suggests simply talking with the stranger, explaining the situation to him, and asking him whether he would willingly sacrifice his life to save the other dozen. Is it so hard to imagine that such altruism (or an exchange for a small token of appreciation: a commemorative plaque, a named street/building, etc.) could realistically be expected?

Replies from: pedanterrific, army1987
comment by pedanterrific · 2012-02-07T19:29:23.259Z · LW(p) · GW(p)

In the Least Convenient Possible World the stranger says "Hell no!"

Now what?

Replies from: Saladin
comment by Saladin · 2012-02-07T19:59:16.304Z · LW(p) · GW(p)

If you force the outcome to rest solely on your decision alone, and if your decision is clear, free, and consistent with a specific philosophy, then you must be judged according to that philosophy.

Which philosophy is valid in a Least Convenient Possible World?

If everything I do to "humanely" help the patients without committing murder on the stranger is futile, AND if none of the patients would be willing to sacrifice themselves to save the others, AND if the sole and only decision in this situation lay with me, then my (clearly idealized) self would teach the donor all the necessary skills to kill and harvest me to save the others.

If not even that is allowed, then yes - a utilitarian murder of the stranger would be legitimate, because you have truly checked all options to save the patients freely and through self-sacrifice - without success.

Only when you eliminate all humane options can you turn to the "inhumane" (I use that term loosely - in this case, at the end, it was a humane solution) - if that brings more utility / less global suffering / more global pleasure and freedom.

But again - this is not a realistic option. Realistically it is almost certain that a humane approach would become viable before that.

comment by A1987dM (army1987) · 2012-02-07T19:49:07.554Z · LW(p) · GW(p)

Well, I wouldn't accept. (If only because I'm quite young and likely have more QALYs ahead of me than those patients combined; but then again, in the Least Convenient Possible World all of those patients are in their teens and, except for the organs they need, completely healthy...)