[LINK] The most important unsolved problems in ethics

post by jefftk (jkaufman) · 2012-10-17T20:03:46.874Z · LW · GW · Legacy · 46 comments

Contents

  The Practical List
  The Theoretical List
  46 comments

Will Crouch has written up a list of the most important unsolved problems in ethics:

 

The Practical List


  1. What’s the optimal career choice? Professional philanthropy, influencing, research, or something more common-sensically virtuous?

  2. What’s the optimal donation area? Development charities? Animal welfare charities? Extinction risk mitigation charities? Meta-charities? Or investing the money and donating later?

  3. What are the highest leverage political policies? Libertarian paternalism? Prediction markets? Cruelty taxes, such as taxes on caged hens; luxury taxes?

  4. What are the highest value areas of research? Tropical medicine? Artificial intelligence? Economic cost-effectiveness analysis? Moral philosophy?

  5. Given our best ethical theories (or best credence distribution in ethical theories), what’s the biggest problem we currently face?


The Theoretical List


  1. What’s the correct population ethics? How should we value future people compared with present people? Do people have diminishing marginal value?

  2. Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead?

  3. How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? If not, why not?

  4. How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?

  5. How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence?

  6. How should intuitions weigh against theoretical virtues in normative ethics? Is common-sense ethics roughly correct? Or should we prefer simpler moral theories?

  7. Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much?

  8. What sorts of entities have moral value? Humans, presumably. But what about non-human animals? Insects? The natural environment? Artificial intelligence?

  9. What additional items should be on these lists?

46 comments

Comments sorted by top scores.

comment by jimrandomh · 2012-10-18T02:07:15.165Z · LW(p) · GW(p)

Hey, many of those look like multiple-choice questions! Let's have a poll. I filled out the lists of possibilities as thoroughly as I could, including both answers that I think are right, and answers that I think are wrong but which other people might vote for, but I can't have gotten everything; so vote "Other" and reply with your answer, if you have another possibility.

The Practical List

  1. What career choice do you most strongly endorse, when you can't be person- or skill-specific? (Wording changed: Was "What’s the optimal career choice?") [pollid:161]

  2. What’s the optimal donation area? [pollid:181]

  3. What are the highest leverage political policies? [pollid:163]

  4. What are the highest value areas of research? [pollid:164]

The Theoretical List

  1. What’s the correct population ethics? Compared to present people, we should value future people [pollid:165] Do people have diminishing marginal value? [pollid:166]

  2. Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead? [pollid:167]

  3. How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? (If not, why not?) [pollid:168]

  4. How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism? [pollid:169]

  5. How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence? [pollid:170]

  6. How should intuitions weigh against theoretical virtues in normative ethics? [pollid:171] Is common-sense ethics roughly correct? [pollid:172] Or should we prefer simpler moral theories? A good moral theory is [pollid:173]

  7. Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much? [pollid:174]

  8. What sorts of entities have moral value? Humans, presumably. But what about non-human animals? [pollid:175] Which ones? [pollid:176] Insects? [pollid:177] The natural environment? [pollid:178] Artificial intelligences? [pollid:179] Which kinds? [pollid:180]

  9. What additional items should be on these lists?

Replies from: CarlShulman, army1987, fubarobfusco
comment by CarlShulman · 2012-10-18T03:46:01.916Z · LW(p) · GW(p)

One of these poll items is not like the others. The answer to the career question varies depending on the individual whose career is under consideration. ETA: even given some kind of non-relativistic moral realism.

Replies from: buybuydandavis, jimrandomh
comment by buybuydandavis · 2012-10-18T09:10:52.491Z · LW(p) · GW(p)

By my values, that criticism applies to most everything in the list.

Optimal, by whose values? Why should anyone assume that their values dictate what "we should do"? These questions are unsolved because they haven't been formulated in a way that makes sense. Provide the context of an actual Valuer to these questions of value, and you might make some progress toward answers.

To answer one question according to my values, the biggest problem we face is death.

comment by jimrandomh · 2012-10-18T04:01:40.589Z · LW(p) · GW(p)

You're right. I paid lots of attention to filling in options and not enough to the wording of the questions I was copying. There are multiple interpretations of this question, so I changed it (with 9 votes entered so far, 2-0-1-0-3-3) to "What career choice do you most strongly endorse, when you can't be person- or skill-specific?" Also interesting would have been "What is the optimal career choice for you", but this seemed like changing the spirit of the question too much.

Replies from: wedrifid
comment by wedrifid · 2012-10-18T04:08:44.144Z · LW(p) · GW(p)

You can discount my "Other" answer then. My "other" answer was "Huh? Optimal for what? Getting laid? Saving the planet? Maximising life satisfaction?" (This supplements the "For whom?" that Carl mentions.)

comment by A1987dM (army1987) · 2012-10-18T15:45:43.460Z · LW(p) · GW(p)

What career choice do you most strongly endorse, when you can't be person- or skill-specific?

None. Any endorsement of career choice makes sense for certain people/skills but not for others.

What’s the optimal donation area?

Why the hell is “prevention and treatment of diseases” (e.g. the AMF, the SCI, etc.) not on the list?

Replies from: jimrandomh
comment by jimrandomh · 2012-10-18T22:14:46.676Z · LW(p) · GW(p)

Why the hell is “prevention and treatment of diseases” (e.g. the AMF, the SCI, etc.) not on the list?

It is; it's labelled "tropical medicine". Apparently that description was quite unclear, though, so the lack of votes for it isn't necessarily meaningful.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-19T00:49:47.393Z · LW(p) · GW(p)

I meant in Question 2.

Replies from: jimrandomh
comment by jimrandomh · 2012-10-19T01:58:45.960Z · LW(p) · GW(p)

Ack, you're right, that should be in there. The closest match is "Development charities", which isn't really the same thing.

comment by fubarobfusco · 2012-10-18T16:05:34.122Z · LW(p) · GW(p)

Is common-sense ethics roughly correct?

It would be interesting to extend the range of answers as follows:

  • Common-sense ethics is reliably, or mostly, good
  • Common-sense ethics is a mix of good and bad
  • Common-sense ethics is reliably, or mostly, bad
  • Common-sense ethics is useless or fails to achieve either good or bad

IOW, it is possible for ethical rules or systems not only to be incorrect, but to be anti-correct.

comment by Daniel_Burfoot · 2012-10-18T16:46:03.294Z · LW(p) · GW(p)

I found this post annoying for several reasons that I don't have time to fully explain, but the simplest way of articulating my annoyance is just to say that it should have been entitled something like "The most important unsolved problems in Singerian ethics".

Replies from: CronoDAS
comment by CronoDAS · 2012-10-18T22:19:22.265Z · LW(p) · GW(p)

Do you know of some other problems in ethics, of whatever kind, that should be on a list?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2012-10-19T15:47:05.120Z · LW(p) · GW(p)
  • Is it ethical to pay taxes?
  • Is it ethical to send your children to school?
  • Is it ethical to associate with government employees?

Obviously these questions all presuppose a strongly libertarian ethical framework, which is probably annoying to non-libertarians. One can easily imagine a set of questions that presuppose a Christian ethical framework ("is it ethical to associate with adulterers?"), which non-Christians would find annoying. The questions in the top post presuppose a sort of liberal/Singerian ethical framework, which people who don't subscribe to that framework are justified (I think) in being annoyed by.

Replies from: mwengler
comment by mwengler · 2012-10-23T14:26:09.408Z · LW(p) · GW(p)

Is it unethical to pose questions that some people find intriguing and interesting, but that other people find annoying?

comment by summerstay · 2012-10-22T14:47:40.607Z · LW(p) · GW(p)

Wait, there are solved problems in ethics?

comment by Peter Wildeford (peter_hurford) · 2012-10-17T21:18:02.317Z · LW(p) · GW(p)

1.) The link is broken. It should go here, and yes, that's normative.

2.) Also, as I commented there:

I’m confused about what answers to the theoretical questions would look like.

Are things like 1-4 supposed to be within utilitarianism? Additionally, do things like 5-8 seem to presuppose some sort of moral realism?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-10-17T21:28:58.622Z · LW(p) · GW(p)

The link is broken

Fixed; sorry!

comment by wdmacaskill · 2012-10-22T20:32:43.787Z · LW(p) · GW(p)

Hi all,

It's Will here. Thanks for the comments. I've responded to a couple of themes in the discussion below over at the 80,000 hours blog, which you can check out if you'd like. I'm interested to see the results of this poll!

comment by NancyLebovitz · 2012-10-18T05:45:25.259Z · LW(p) · GW(p)

How should effort be balanced between getting people to do more good and getting them to do less harm?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2012-10-18T23:59:34.307Z · LW(p) · GW(p)

Related question: are people here overly optimistic about plans to do good that impose "diffuse" harms (small harms imposed on millions of people)?

Example: a proposal was floated here to try to persuade Craigslist to run ads and to donate the ad income to high-impact charity. No one suggested that the diffuse harm of subjecting the hundreds of millions of readers of Craigslist to ads that they would not otherwise be subjected to might cancel out the expected good of the proposal.

comment by Richard_Kennaway · 2012-10-19T13:50:40.064Z · LW(p) · GW(p)

What additional items should be on these lists?

Should we maximise expected value?

comment by Decius · 2012-10-17T22:49:48.116Z · LW(p) · GW(p)

If the universe has infinite value, and no action can increase or decrease the value of the universe, think locally: what can I do to reduce the value of the universe in ways that do not affect agents (since that is the only way to increase the value in ways that do affect agents)? Conversely, act to maximize value within your own cosmological event horizon, knowing that this results in harm only to entities that cannot ever be affected by you. (Zero sum, right?)

Oh, and the entire thing begs the question of ethical consequentialism. If I reject general consequentialism because I refuse any ethical system that requires me to kill people, then most of the questions become null.

Replies from: DaFranker
comment by DaFranker · 2012-10-18T14:56:54.170Z · LW(p) · GW(p)

I don't think that's quite how infinities work. If we posit that the raw universe as a whole, including beyond any event horizons, is infinite in spacetime size, then you can't change the total value of the universe ("infinite", counting at least one smallest unit of value per largest unit of spacetime), yet you can still massively increase your local value without stealing it from anywhere.

The original wording and question are ambiguous anyway. I wish that "infinite value" were taboo'd.

Replies from: Decius
comment by Decius · 2012-10-18T23:47:22.302Z · LW(p) · GW(p)

Why is an infinite spatial universe necessarily infinite in value?

Replies from: DaFranker
comment by DaFranker · 2012-10-19T14:16:27.730Z · LW(p) · GW(p)

Assuming a >0 value density throughout the universe, an infinite space will contain infinite value: no matter how small you make the value density, the universe is infinitely large, so no matter how much value you want, you can just look further out and consider a larger space that will contain as much value as needed, ad infinitum.

The >0 value density assumption is the important part, and basically means that "for each X-large expansion of the space under consideration, there exists at least one means of generating even infinitesimally small value from using this space as opposed to not using it, and hence you can arbitrarily expand how much of the universe you want to take into account in order to obtain infinite value (assuming you actually do use these spaces)".

However, to fully answer the question: It is not necessarily so, but if it does have both infinite value and infinite space, then as I mention in the grandparent it is fully plausible to increase local value without decreasing value anywhere at all and without changing the total value (by property of infinity).

Side-note:

I guess the grandparent was poorly phrased in that regard. I can indeed conceive of various kinds of infinite-space universes without infinite value (for various definitions of "value" or "infinite value").

For instance, if you add the somewhat-contrived possible condition that it is impossible to configure a mind that will not evaluate with diminishing returns for astronomically high amounts of value, until the returns reach zero at some factor correlating with the amount of nodes/matter/whatever of which the mind is made, then it becomes obvious that you would need an infinitely big mind in order for it to gain infinite value from an infinite universe - something that will never, ever be built within finite time without breaking a bunch of other implicit rules.

Of course, there could in principle already exist such a mind, if the mind only occupies space in one direction towards infinity, but humans are unlikely to ever come into contact with such a mind, let alone integrate themselves into it, within finite time, so humans would not benefit from this infinite-value being.

These things are fun to think about, but seem to have little value beyond that.

Replies from: Decius
comment by Decius · 2012-10-19T14:55:40.284Z · LW(p) · GW(p)

Assuming a >0 value density throughout the universe, an infinite space will contain infinite value: no matter how small you make the value density, the universe is infinitely large, so no matter how much value you want, you can just look further out and consider a larger space that will contain as much value as needed, ad infinitum.

You are simply wrong about the math there. I can construct an infinite sequence of terms >0 which sum to a finite number.

If you meant that there is some sigma, such that every X-large portion of the universe had value of at least sigma, you would be technically correct. You are already setting a lower bound on value, which precludes the possibility of there being an x-large area of net negative value.
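Concretely, the two cases look like this as sums over regions (a worked sketch; the per-region values are purely illustrative):

```latex
% Strictly positive per-region values can still sum to a finite total:
\sum_{n=1}^{\infty} \frac{1}{2^n} = 1 < \infty
% whereas a uniform lower bound \sigma > 0 on every region forces divergence:
\sum_{n=1}^{\infty} \sigma = \lim_{N \to \infty} N\sigma = \infty
```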

EDIT: Corrected from 1/n to 1/2^n

Replies from: thomblake
comment by thomblake · 2012-10-22T20:48:22.564Z · LW(p) · GW(p)

Your math looks wrong. The sum from 1 to infinity of 1/n does not converge, and as a simple visualization 1 + 1/2 + 1/3 + 1/4 is already greater than 2.

Replies from: Decius
comment by Decius · 2012-10-22T23:57:18.735Z · LW(p) · GW(p)

Brainfart: My math was wrong. Corrected to 1/2^n

That's 1/2+1/4+1/8... or Zeno's sum.

comment by moridinamael · 2012-10-19T00:18:43.161Z · LW(p) · GW(p)

At least from a traditional decision theory point of view, the point of assigning value to things is to rank them. In other words, the actual value of a thing is not relevant or even defined except in reference to other things. To say some outcome has "infinite value" merely suggests that this outcome outranks every other conceivable outcome. I'm not sure that designating something as having "infinite value" is a coherent application of the concept of value.

So, if you imagine (or discover) some outcome which you prefer more than any other conceivable outcome, you should always consider that you might've merely failed to conceive of a better alternative, meaning that your best alternative is merely your best known alternative, meaning you can't guarantee that it is indeed the "best" alternative, meaning you shouldn't jump the gun and assign it infinite value.

Replies from: evand
comment by evand · 2012-10-19T04:00:11.659Z · LW(p) · GW(p)

Saying something has infinite value is not the same as simply saying it outranks all other outcomes in your preference ordering. It is saying that a course of action that has a finite probability of producing it is superior to any course of action that does not have a finite probability of producing a comparable outcome, no matter how low that probability. (Provided, of course, that it does not come with a finite probability of an infinitely-bad outcome.)

In other words: utilities really do need to be treated as scalars, not just orderings, when reasoning under uncertainty.

If two courses of action differ in their probability of producing infinite utility, you might be able to make a coherent argument for maximizing the probability of infinite utility. But when you have multiple distinct infinite-utility outcomes, that gets harder. Really, anything nontrivially complex involving infinite utilities is hard.
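A minimal numerical sketch of that dominance claim, with made-up probabilities and payoffs and IEEE infinity standing in for "infinite utility":

```python
# Under naive expected-utility maximisation, any nonzero probability of an
# infinite payoff swamps every purely finite alternative.
INF = float("inf")

def expected_utility(lottery):
    """lottery: a list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in lottery)

sure_thing = [(1.0, 10**6)]                  # certain, large, finite payoff
long_shot = [(1e-12, INF), (1 - 1e-12, 0)]   # vanishing chance of infinite payoff

print(expected_utility(sure_thing))  # 1000000.0
print(expected_utility(long_shot))   # inf -- outranks any finite expectation

# Comparing two lotteries that both come out as inf is where this scheme
# stops being informative, as noted above.
```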

comment by drethelin · 2012-10-17T21:20:42.599Z · LW(p) · GW(p)

The most important unsolvable question is how do we get all these religious deontologists to become atheist (optional) consequentialists?

Replies from: aelephant, bogus
comment by aelephant · 2012-10-18T13:46:12.846Z · LW(p) · GW(p)

I'm not a religious deontologist, but I can't say I've been fully convinced that consequentialism is where it is all at. I remember reading about a couple of criticisms by Rothbard. I'm going to paraphrase, maybe incorrectly, but here goes: consequentialism would dictate that it is morally right to execute an innocent person as a deterrent so long as it was kept a secret that the innocent person was actually innocent. Based on this, it seems to me that what is missing from consequentialism is justice. It is unjust to execute an innocent person, regardless of the positive effects it might have.

Apologies if this is an inappropriate place to pose such a question. Otherwise, I'd love to hear some counterarguments.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-18T15:16:26.156Z · LW(p) · GW(p)

Consequentialism doesn't necessarily dictate that it is morally right to execute an innocent person as a deterrent.
But, yes, if I value the results of that deterrent more than I value that life, then it does.
I expect this is what you meant in this case.

Just to pick a concrete example: if the deterrent execution saves a hundred innocent lives over the next year, and I value a hundred innocent lives over one innocent life, then consequentialist ethics dictate that I endorse the deterrent execution, whereas non-consequentialist ethics might allow or require me to oppose it.

Would you say that allowing those hundred innocent people to die is justice?

If not, then it sounds like justice is equally missing from non-consequentialist ethics, and therefore justice is not grounds for choice here.

If so... huh. I think I just disagree with you about what justice looks like, then.

Replies from: aelephant
comment by aelephant · 2012-10-18T23:48:31.125Z · LW(p) · GW(p)

This makes it seem to me that Consequentialism is totally subjective: whatever produces the result I personally value the most is what is morally right.

So if I don't value innocent human lives, taking them reaps me great value, & I'm not likely to get caught or punished, then Consequentialism dictates that I take as many innocent human lives as I can?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-19T02:21:31.324Z · LW(p) · GW(p)

It's not necessarily subjective, although it can be.

But yes, values matter to a consequentialist, whether subjective or not. For example, if live humans are more valuable than dead humans, a consequentialist says I should not kill live humans (all else being equal). If, OTOH, dead humans are more valuable than live humans, then a consequentialist says I should kill live humans.

But it's not like there's some ethical system out there you can compare it to that won't ever give the answer "kill humans", regardless of what properties we assign to dead and live humans.

For example, if there exists a moral duty to kill live humans, then a deontologist says I should kill live humans, and if good people kill humans, then a virtue ethicist says that I should be the sort of person who kills live humans.

Incidentally, that's the last of your questions I'll answer until you answer my previous one.

Replies from: aelephant
comment by aelephant · 2012-10-19T10:16:00.717Z · LW(p) · GW(p)

Sorry Dave (if I can call you Dave), I saw your question but by the time I finished your comment I forgot to answer it.

Just to pick a concrete example: if the deterrent execution saves a hundred innocent lives over the next year, and I value a hundred innocent lives over one innocent life, then consequentialist ethics dictate that I endorse the deterrent execution, whereas non-consequentialist ethics might allow or require me to oppose it.

Would you say that allowing those hundred innocent people to die is justice?

If I didn't exist, those people would die. If I do nothing, those people will die. I don't think inaction is moral or immoral, it is just neutral.

It seems to me that justice only applies to actions. It would be unjust for me to kill 1 or 100 innocent people, but if 100 people die because I didn't kill 1, I did the just thing in not killing people personally.

This hypothetical, like most hypotheticals, has lots of unanswered questions. I think in order to make a solid decision about what is the best action (or inaction) we need more information. Does such a situation really exist in which killing 1 person is guaranteed to save the lives of 100? The thing about deterrence is that we are talking about counterfactuals (I think that is the right word, but it is underlined in red as I type it, so I'm not too sure). Might there not be another way to save those 100 lives without taking the 1? It seems to me the only instance in which taking the 1 life would be the right choice would be when there was absolutely no other way, but in life there are no absolutes, only probabilities.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-19T13:20:32.569Z · LW(p) · GW(p)

I agree that in the real world the kind of situation I describe doesn't really arise. But asking about that hypothetical situation nevertheless reveals that our understandings of justice are very very different, and clarity about that question is what I was looking for. So, thanks.

And, yes, as you say, consequentialist ethics don't take what you're calling "justice" into account. If a consequentialist values saving innocent lives, they would typically consider it unethical to allow a hundred innocent people to die so that one may live.

I consider this an advantage to consequentialism. Come to that, I also consider it unjust, typically, to allow a hundred innocent people to die so that one may live.

Replies from: aelephant
comment by aelephant · 2012-10-19T15:50:25.440Z · LW(p) · GW(p)

If this is true, then consequentialists must oppose having children, since all children will die someday?

The corollary, I suppose, is that you are acting intensely "immoral" or "unjust" right now because you are "allowing" hundreds of innocent people to die when your efforts could probably be saving them. You could have, for example, been trained as a doctor & traveled to Africa to treat dying children.

Even then, you might have grown tired. If you nap, innocent children may die during your slumber. Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?

I can see no way that a consequentialist can, in the real world, determine what is the "most moral" or "most just" course of action given that there are at any point in time an almost countless number of ways in which one could act. To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity.

Replies from: pengvado, TheOtherDave
comment by pengvado · 2012-10-19T16:13:56.196Z · LW(p) · GW(p)

To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity.

Don't say that then. Expected utility isn't about partitioning possible actions into discrete sets of "allowed" vs "forbidden", it's about quantifying how much better one possible action is than another. The fact that there might be some even better action that was excluded from your choices (whether you didn't think of it, or akrasia, or for any other reason), doesn't change the preference ordering among the actions you did choose from.
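A small sketch of that point (the option names and utilities here are invented purely for illustration):

```python
# Expected utility ranks whatever options are actually on the table; discovering
# that some unavailable option would have been even better doesn't reorder them.
def rank(options):
    """Return option names sorted from highest to lowest expected utility."""
    return sorted(options, key=options.get, reverse=True)

available = {"donate": 50.0, "volunteer": 30.0, "do nothing": 0.0}
with_unreachable_ideal = dict(available, **{"solve global poverty outright": 10**6})

print(rank(available))                # ['donate', 'volunteer', 'do nothing']
print(rank(with_unreachable_ideal))   # same relative order among the original three
```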

Replies from: aelephant
comment by aelephant · 2012-10-19T22:43:49.147Z · LW(p) · GW(p)

So consequentialism doesn't say whether it is moral or immoral to kill the 1 to save the 100 or to allow the 1 to live & the 100 to die? It seems like we haven't gotten very far with consequentialism. I already knew that either 1 would die or 100 would die. What has consequentialism added to the discussion?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-10-24T10:30:50.808Z · LW(p) · GW(p)

consequentialism doesn't say whether it is moral or immoral to kill the 1 to save the 100 or to allow the 1 to live & the 100 to die?

Consequentialism says that the way to evaluate whether to kill the one is to make your best estimate at whether the world would be better with them killed or still alive. If you think that the deterrent effect is significant enough and there won't be any fallout from your secretly killing an innocent (though secrets have a way of getting out) then you may think the world is better with the one killed.

This is not the same as "do whatever you want". For starters it is in opposition to your "I don't think inaction is moral or immoral, it is just neutral". To a Consequentialist the action/inaction distinction isn't useful.

Note that this doesn't tell you how to decide which world is better. There are Consequentialist moral theories, mostly the many varieties of Utilitarianism, that do this if that's what you're looking for.
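One way to picture that evaluation as a calculation (the scoring rule and every number here are purely illustrative, not anyone's actual theory):

```python
# A consequentialist only asks which resulting world scores higher; whether a
# world comes about through action or through inaction doesn't enter into it.
def world_value(lives_saved, innocents_killed, prob_secret_leaks):
    # Toy scoring: lives count equally, with a penalty for the chance the
    # secret execution is discovered and erodes trust in institutions.
    return lives_saved - innocents_killed - 500 * prob_secret_leaks

execute = world_value(lives_saved=100, innocents_killed=1, prob_secret_leaks=0.2)
refrain = world_value(lives_saved=0, innocents_killed=0, prob_secret_leaks=0.0)

print("execute" if execute > refrain else "refrain")
# With these made-up numbers the penalty dominates and "refrain" wins; change
# the estimates and the verdict changes -- that is the whole point.
```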

Replies from: Bakkot
comment by Bakkot · 2012-10-24T16:29:00.649Z · LW(p) · GW(p)
comment by TheOtherDave · 2012-10-19T19:47:25.930Z · LW(p) · GW(p)

If this is true, then consequentialists must oppose having children, since all children will die someday?

Again, not necessarily. A consequentialist who values the absence of a dead child more than the presence of a living one would conclude that one ought not have children, since having them likely (eventually) results in a loss of value. A consequentialist who values the presence of a living child more than the absence of a dead one would conclude otherwise.

You seem to keep missing this point: consequentialism doesn't tell you what to value. It just says, if X is valuable, then choices that increase the X in the world are good choices to make, and choices that reduce the X in the world are bad choices, all else being equal. If babies are valuable, a consequentialist says having babies is good, and eliminating babies is bad. If the babies are anti-valuable, a consequentialist says eliminating babies is good, and having babies is bad.

Consequentialism has nothing whatsoever to say about whether babies are valuable or anti-valuable or neither, though.

The corollary, I suppose, is that you are acting intensely "immoral" or "unjust" right now because you are "allowing" hundreds of innocent people to die when your efforts could probably be saving them.

I'm not sure what "intensely" means here, but yes, a consequentialist would say (assuming innocent people dying is bad) that allowing those people to die given a choice is immoral.

More generally, if you and I have the same opportunity to improve the world and you take it and I don't, a consequentialist says you're behaving better than I am. This is consistent with my intuitions about morality as well.
Would you say that, in that case, I'm behaving better than you are?
Would you say we are behaving equally well?

To generalize this a bit: yes, on a consequentialist account, we pretty much never do the most moral thing available to us.

Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?

Not necessarily, but if the long-term effects of the doctor staying awake are more valuable than those of the doctor sleeping (which is basically never true of humans, of course, since our performance degrades with fatigue), then yes, a consequentialist says the doctor staying awake is a more moral choice than the doctor sleeping.

I can see no way that a consequentialist can, in the real world, determine what is the "most moral" or "most just" course of action

Yes, that's true. A consequentialist in the real world has to content themselves with evaluating likely consequences and general rules of thumb to decide how to act, since they can't calculate every consequence.

But when choosing general rules of thumb, a consequentialist chooses the rules that they expect to have the best consequences in the long run, as opposed to choosing rules based on some other guideline.

For example, as I understand deontology, a deontologist can say about a situation "I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose A despite the likely increased suffering." A consequentialist in the same situation would say "I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose B despite the duty I'm failing to discharge."

It's certainly true that they might both be wrong about the situation... maybe A actually causes less suffering than B, but they don't know it; maybe their actual duty is to do A, but they don't know it. But there's also a difference between how they are making decisions that has nothing to do with whether they are right or wrong.

To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity

Well, yes. Similarly, to say that anything less than the optimal financial investment is poor financial planning is absurd. But that doesn't mean we can't say that good financial planning consists in making good financial investments, and it doesn't mean that we can't say that given a choice of investments we should pick the one with a better expected rate of return.

More generally, treating "A is better than B" as though it's equivalent to "Nothing other than A is any good" will generally lead to poor consequences.

comment by bogus · 2012-10-17T21:30:48.173Z · LW(p) · GW(p)

unsolvable question is how do we get all these religious deontologists to become atheist (optional) consequentialists?

Perhaps. But that question is more closely related to morality than ethics per se. Many ethicists (perhaps most) would simply take these folks' deontological moral core as a given; they would not endorse any intervention.

We do know for sure that moral conflicts in the real world are quite complicated, and they often spill into political conflict.

comment by torekp · 2012-10-18T00:20:03.139Z · LW(p) · GW(p)

Nice lists.

How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism?

Neither. It is possible that action A results in the subsequent history of the universe being at all times better than it would be at the corresponding times following action B. (Times measured from the agent's reference frame.) In that case, a consequentialist worthy of the name would conclude that action A was to be preferred to B. For that matter, a non-consequentialist who considered consequences one of several morally important dimensions along which to assess actions, could say that A was better than B in that dimension.

comment by [deleted] · 2012-10-17T21:26:07.132Z · LW(p) · GW(p)

What's in it for me? This could appear on the practical side, and is similar to questions of maximum effective charity (the charity being myself). This could also appear on the theoretical side (why should I care about the non-me?).

How high a price is a claimed truth worth paying? I'm plenty ignorant and won't say yea or nay to James D. Watson being correct in his controversial claims. But he made those claims, and sometimes had to pay a price for them. How much heat I'm willing to take for controversial claims is an important unsolved ethical question. One specific resolved area: I'll keep saying publicly that there's no God / Allah, but I'll say it in the United States and not in nations that have hate speech laws or laws against apostasy / blasphemy. In those countries I'll smile and nod, or maybe not go there at all. That's how much heat I'll take (and produce) in that one area.

The tricky issue of when to not tolerate intolerance, that's a real corker.