Morality: Theory and Practice

post by JMiller · 2013-01-15T20:03:47.503Z · LW · GW · Legacy · 22 comments

One of the criteria moral philosophers use to assess the credibility and power of a moral theory is "applicability". That is, how easy is it for humans to implement a moral rule? For example, if a rule said "donate 23 hours a day to charity", it would be impossible for humans to fulfill.

This led me to start thinking about whether we want to pursue "the moral theoretical truth", should such a truth exist, or whether we want to find the most applicable and practical set of rules, such that reasonably instrumentally rational (human) agents could figure out what is best in any given situation.

This feels loosely like a map-territory distinction. For example, the best thing to do in situation X might be A. A may be so difficult or require so much sacrifice that B might be preferable, even if the overall outcome is not as good. This reminds me of how Eliezer says that the map is not the territory, but you can't fold the territory and put it in your pocket.

I'd love to be able to understand this issue a little better. If anyone has any thoughts, ideas or evidence, I'd appreciate hearing them.

Thanks,

Jeremy

22 comments

Comments sorted by top scores.

comment by Luke_A_Somers · 2013-01-15T21:46:31.307Z · LW(p) · GW(p)

Any moral system that tells you to devote 23 hours a day to any one activity isn't so much inapplicable as wrong. Consequentialist morality, at least, must incorporate strategy.

Replies from: buybuydandavis, JMiller
comment by buybuydandavis · 2013-01-16T00:55:58.570Z · LW(p) · GW(p)

As Hitchens would say, born sick, but commanded to be well.

If your morality is telling you to do something it's physically impossible for you to do, tell it that it has been overruled by reality and should try again.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-01-16T10:14:46.532Z · LW(p) · GW(p)

Ought implies can.

-Kant's Law

comment by JMiller · 2013-01-15T21:51:25.231Z · LW(p) · GW(p)

I guess what I meant is: what happens if what is right is not doable? This has been addressed below, though. Thank you!

Replies from: EricHerboso
comment by EricHerboso · 2013-01-16T00:18:14.567Z · LW(p) · GW(p)

Whether something is doable is irrelevant when it comes to determining whether it is right.

A separate question is what we should do, which is different from what is right. We should definitely do the most right thing we possibly can, but just because we can't do something does not mean it is any less right.

A real example: There's nothing we can realistically do to stop much of the suffering undergone by wild animals through the predatory instinct. Yet the suffering of prey is very real and has ethical implications. Here we see something which has moral standing even though there appears to be nothing we can do to help the situation (beyond some trivial amount).

comment by Shmi (shminux) · 2013-01-15T21:46:13.303Z · LW(p) · GW(p)

Maybe consider the relationship between consequentialism (theory) and deontology (practice): the rules of the latter can be considered pre-calculated shortcuts to the former. For example, "do not kill" and other commandments are widely applicable shortcuts for most real-world consequentialist calculations, though they obviously fail in some cases. An example from religious ethics: you ought to donate some of your income to charity (through the church), but how much? A tithe (1/10) of your material and/or financial revenue is a rule that makes it workable in practice in many cases, without undue burden.

Of course, with time the rules of deontological ethics tend to become "imperatives" due to lost purposes, and "practice" becomes "theory".

comment by BerryPick6 · 2013-01-15T20:38:58.702Z · LW(p) · GW(p)

For example, the best thing to do in situation X might be A. A may be so difficult or require so much sacrifice that B might be preferable, even if the overall outcome is not as good.

Maybe I'm reading this wrong, but it seems like A is the "commonsense" interpretation of what 'morality' means. I honestly don't know what you mean by B, though. If the overall outcome of B is not as good as A, in what way does it make sense to say we should prefer B?

Further, plenty of contemporary Moral Philosophers deny that "applicability" (I believe the phil-jargon word is "demandingness") has any relevance to morality. See Singer, or better yet, Shelly Kagan's book The Limits of Morality for a more in-depth discussion of this.

Replies from: JMiller
comment by JMiller · 2013-01-15T20:48:44.715Z · LW(p) · GW(p)

I'll make it more explicit with an example. Here is a possible moral declaration: "give all your free time to charity". Here is another: "you ought to provide your friend's child with a university education if your friend cannot afford it, but you can (barely)".

These seem very harsh. Let's consider two scenarios: 1) you can do it, but it would leave you very unhappy and financially or mentally impoverished.

2) you cannot do it, because such demands, taken to their logical conclusion, result in awful outcomes for you.

If 1, then I suppose that should be considered in the calculation, and so my question is irrelevant to consequentialism.

If 2, then it seems like the best action is impossible. By "B" I meant the second best action, say giving some time to charity, or donating some books to your friend's child.

Do we want to promote a theory that says "the very best thing is right, everything else is wrong", or "the best thing that 'makes sense' is still considered good, even if, were it possible, another action would be better"?

I realize that 'makes sense' carries a ton of baggage and is very vague. I'm having some difficulty articulating myself.

As for applicability, thanks, I will look at those.

Replies from: BerryPick6, None, Larks
comment by BerryPick6 · 2013-01-15T21:03:34.838Z · LW(p) · GW(p)

Ah, I see. I'm pretty sure you've run up against the "ought implies can" issue, not the issue of demandingness. IIRC, this is a contested principle, but I don't really know much about it other than Kant originally endorsing it. I think the first part of Larks' answer gives you a good idea of what consequentialists would say in response to this issue.

comment by [deleted] · 2013-01-16T15:44:17.719Z · LW(p) · GW(p)

Do we want to promote a theory that says "the very best thing is right, everything else is wrong",

No. That just means the better your imagination gets, the less you do.

Consequentialism solves all of this:

  1. Give each possible world a "goodness" or "awesomeness" or "rightness" number (utility)
  2. Figure out the probability distribution over possible outcomes of each action you could take.
  3. Choose the action that has highest mean awesomeness.

If something is impossible, it won't be reachable from the action set and therefore won't come into it. If something is bad, but nothing you can do will change it, it will cancel out. If some outcome is not actually preferable to some other outcome, you will have marked it as such in your utility assignment. If something good also comes with something worse, the utility of that possibility should reflect that. Etcetera.
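
A minimal sketch of that procedure over a toy, discrete set of actions and outcomes (the action names, outcome names, utilities, and probabilities below are made up purely for illustration; real consequentialist calculations are nowhere near this tractable):

    # Toy expected-utility maximization: each action has a probability
    # distribution over outcomes, and each outcome has a utility number.
    # All names and numbers are hypothetical, chosen only to illustrate
    # the procedure described above.

    utilities = {
        "great_outcome": 10.0,
        "ok_outcome": 2.0,
        "bad_outcome": -5.0,
    }

    # P(outcome | action); impossible actions simply never appear here.
    actions = {
        "donate_some_time": {"great_outcome": 0.2, "ok_outcome": 0.7, "bad_outcome": 0.1},
        "do_nothing":       {"great_outcome": 0.0, "ok_outcome": 0.3, "bad_outcome": 0.7},
    }

    def expected_utility(dist):
        """Mean utility of an action's outcome distribution."""
        return sum(p * utilities[outcome] for outcome, p in dist.items())

    # Step 3: pick the action with the highest mean awesomeness.
    best_action = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best_action, expected_utility(actions[best_action]))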

In practice, you don't actually compute this, because it is uncomputable. Instead you follow simple rules that get you good results, like "don't throw away money", "don't kill people", and "feed yourself" (notice how the rules are justified by appealing to their expected consequences, though).

Replies from: JMiller
comment by JMiller · 2013-01-16T19:40:05.053Z · LW(p) · GW(p)

Thank you. As I understand it, "consequentialism" means the idea that you should optimize outcomes; it is a theory of right action. It requires a theory of "goodness" to go along with it. So you're saying that "awesomeness" or "utility" is what is to be measured or approximated. Is that utilitarianism?

Replies from: None
comment by [deleted] · 2013-01-17T01:05:35.548Z · LW(p) · GW(p)

So, you're saying that "awesomeness" or "utility" is what is to be measured or approximated. Is that utilitarianism?

No.

There are two different concepts that "utility" refers to. VNM utility is "that for which the calculus of expectation is legitimate", i.e. it encodes your preferences, with no implication about what those preferences may be, except that they behave sensibly under uncertainty.

Utilitarian utility is an older (I think) concept referring to a particular assignment of utilities involving a sum of people's individual utilities, possibly computed from happiness or something. I think utilitarianism is wrong, but that's just me.

I was referring to VNM utility, so you are correct that we also need a theory of goodness to assign utilities. See my "morality is awesome" post for a half-baked but practically useful solution to that problem.

Replies from: JMiller
comment by JMiller · 2013-01-17T02:03:52.541Z · LW(p) · GW(p)

Got it. Much appreciated.

Replies from: None
comment by [deleted] · 2013-01-17T02:05:38.029Z · LW(p) · GW(p)

No problem. Glad to have someone curious asking questions and trying to learn!

comment by Larks · 2013-01-15T20:55:28.392Z · LW(p) · GW(p)

Consequentialism is a method for choosing an action from the set of possible actions. If "the best action is impossible" it shouldn't have been in the option set in the first place.

However, I think you might like to look into scalar consequentialism.

Plain Scalar Consequentialism: Of any two things a person might do at any given moment, one is better than another to the extent that its overall consequences are better than the other's overall consequences.

Replies from: JMiller
comment by JMiller · 2013-01-15T20:57:13.980Z · LW(p) · GW(p)

Thank you!

comment by Qiaochu_Yuan · 2013-01-15T20:53:52.098Z · LW(p) · GW(p)

This lead me to start thinking about whether we want to be able to to pursue "the moral theoretical truth" should such a truth exist, or if we want to find the most applicable and practical set of rules, such that reasonably intramentaly rational (human) agents could figure out what is best in any given situation.

Both? The latter needs to be judged by how closely it approximates the former. There are lots of moral rules that are easy to implement but not useful, e.g. "don't do anything ever." There's a tradeoff that needs to be navigated between ease of implementation and accuracy of approximation to the Real Thing.

Replies from: JMiller
comment by JMiller · 2013-01-15T20:55:22.835Z · LW(p) · GW(p)

So, figure out the theoretical correct action, and then approximate it to the best of your ability?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-15T21:12:46.133Z · LW(p) · GW(p)

If you figured out the theoretically correct action, you wouldn't need to approximate it. I mean figure out the theoretically correct moral theory, then approximate it to the best of your ability. You're not approximating the output of an algorithm, you're approximating an algorithm (e.g. because the correct algorithm requires too much data, or time, or rationality...).

Replies from: JMiller
comment by JMiller · 2013-01-15T21:21:20.193Z · LW(p) · GW(p)

That's a great way of saying it. Thanks a lot!

comment by OrphanWilde · 2013-01-15T20:15:55.564Z · LW(p) · GW(p)

Are you -intending- to deconstruct rule utilitarianism back into act utilitarianism here, or is that just me misunderstanding what you're getting at?

ETA: I think it's just me. Retracting.

Replies from: JMiller
comment by JMiller · 2013-01-15T20:37:19.493Z · LW(p) · GW(p)

I certainly do not think that is what I was doing. Really, I guess I want to understand the kind of normative theories people on here think are correct (and why), under a specific criterion of assessment. I think many people on this site will take a consequentialist perspective (tentatively, I do too, but I am not confident in my convictions yet).

On a more meta-ethical level, I'm wondering how important the criterion of applicability is to a moral theory (for real humans, now). I'm more interested in understanding the question "how should we act, and how do we know?" than "what is the best theoretical action?" (Of course, I may be begging the question by assuming there is a difference between the two.)