Rational Ethics

post by OrphanWilde · 2012-07-11T22:26:40.730Z · LW · GW · Legacy · 32 comments

[Looking for feedback, particularly on links to related posts; I'd like to finish this out as a post on the main, provided there aren't too many wrinkles for it to be salvaged.]

Morality as Fixed Computation and Abstracted Idealized Dynamics, as part of the Metaethics Sequence, discuss ethics as computation.  This post is primarily a response to those two posts, which discuss ethics as computation, and the impossibility of computing the full ethical ramifications of an action.  Note that I treat morality as objective, which means, loosely speaking, that two people who share the same ethical values should arrive, provided neither makes logical errors, at approximately the same ethical system.

On to the subject matter of this post - are Bayesian utilitarian ethics utilitarian?  For you?  For most people?

And, more specifically, is a rational ethics system more rational than one based on heuristics and culture?

I would argue that the answer is, for most people, "No."

The summary explanation of why: Because cultural ethics are functioning ethics.  They have been tested, and work.  They may not be ideal, but most of the "ideal" ethics systems that have been proposed in the past haven't worked.  In terms of Eliezer's post, cultural ethics are the answers that other people have already agreed upon; they are ethical computations which have already been computed, and while there may be errors, most of the potential errors an ethicist might arrive at have already been weeded out.

The longer explanation of why:

First and foremost, rationality, which I will use from here instead of the word "computation," is -expensive-.  "A witch did it", or the equivalent "Magic!", while not conceptually simple, is logically simple; the complexity is encoded in the concept, not the logic.  The rational explanation for, say, static electricity requires far more information about the universe; for an individual who aspires to be a farmer because he likes growing things, that information may never be useful, and internalizing it may never pay for itself.  It can be fully consistent with a rational attitude to accept irrational explanations when you have no reasonable expectation that the rational explanation will provide any kind of benefit, or more exactly when the cost of the rational explanation exceeds its expected benefit.

Or, to phrase it another way, it's not always rational to be rational.
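To make the cost/benefit comparison concrete, here is a minimal sketch of the rule just described. The function name and the numbers are hypothetical placeholders of my own, chosen purely for illustration; nothing here is computed in the post itself.

```python
# A minimal sketch of the decision rule above: pursue the rational explanation
# only when its expected benefit exceeds the cost of computing/internalizing it.
# All values are hypothetical placeholders, not figures from the post.

def worth_computing(expected_benefit: float, computation_cost: float) -> bool:
    """Return True if working out the rational answer pays for itself."""
    return expected_benefit > computation_cost

# The aspiring farmer and the physics of static electricity:
benefit_of_true_explanation = 0.5   # hypothetical: rarely useful in his daily life
cost_of_learning_it = 20.0          # hypothetical: hours of study, opportunity cost

print(worth_computing(benefit_of_true_explanation, cost_of_learning_it))  # False
```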

Terminal Values versus Instrumental Values discusses some of the computational expenses involved in ethics.  It's a nontrivial problem.

Rationality is a -means-, not an -end-.  A "rational ethics system" is merely an ethical system based on logic, on reason.  But if you don't have a rational reason to adopt a rational ethics system, you're failing before you begin; logic is a formalized process, but it's still just a process.  The reason for adopting a rational ethics system is the starting point, the beginning, of that process.  If you don't have a beginning, what do you have?  An end?  That's not rationality, that's rationalization.

So the very first step in adopting a rational ethics system is determining -why- you want to adopt a rational ethics system.  "I want to be more rational" is irrational.

"I want to know the truth" is a better reason for wanting to be rational.

But the question in turn must, of course, be "Why?"

"Truth has inherent value" isn't an answer, because value isn't inherent, and certainly not to truth.  There is a blue pillow in a cardboard box to my left.  This is a true statement.  You have truth.  Are you more valuable now?  Has this truth enriched your life?  There are some circumstances in which this information might be useful to you, but you aren't in those circumstances, nor in any feasible universe will you be.  It doesn't matter if I lied about the blue pillow.  If truth has inherent value, then every true statement must, in turn, inherit that inherent value.  Not all truth matters.

A rational ethics system must have its axioms.  "Rationality," I hope I have established, is not a useful axiom, nor is "Truth."  It is the values that your ethics system seeks to maximize which are its most important axioms.

The truths that matter are the truths which directly relate to your moral values, to your ethical axioms.  A rational ethics system is a means of maximizing those values - nothing more.

If you have a relatively simple set of axioms, a rational ethics system is relatively simple, if still potentially expensive to compute.  Strict Randian Objectivism, for example, attempts to use human life as its sole primary axiom, which makes it a relatively simple ethical system.  (I'm a less strict Objectivist, and use a different axiom, personal happiness, but this rarely leads to conflict with Randian Objectivism, which uses it as a secondary axiom.)

If, on the other hand, you, like most people, have a wide variety of personal values which you are attempting to maximize, attempting to assess each action on its ethical merits becomes computationally prohibitive.

Which is where heuristics, and inherited ethics, start to become pretty attractive, particularly when you share your culture's ethical values (and most people do, to a greater extent than they don't).

If you share at least some of your culture's ethical values, normative ethics can provide immense value to you, by eliminating most of the work necessary in evaluating ethical scenarios.  You don't need to start from the bottom up, and prove to yourself that murder is wrong.  You don't need to weigh the pros and cons of alcoholism.  You don't need to prove that charity is a worthwhile thing to engage in.

"We all engage in ethics, though; it's not like a farmer with static electricity, don't we have a responsibility to understand ethics?"

My flippant response to this question is, should every driver know how to rebuild their car's transmission?

You don't need to be a rationalist in order to reevaluate your ethics.  An expert can rebuild your transmission - an expert can also pose arguments to change your mind.  This has, indeed, happened before on mass scales; racism is no longer broadly acceptable in our society.  It took too long, yes, -but-, a long-established ethics system, being well-tested, should require extraordinary efforts to change.  If it were easily mutable, it would lose much of its value, for it would largely be composed of poorly-tested ideas.

All of which is not to say that rational ethics are inherently irrational - only that one should have a rational reason for engaging in them to begin with.  If you find that societal norms frequently conflict with your own ethical values, that is a good reason to engage in rational ethics.  But if you don't, perhaps you shouldn't.  And if you do, you should be cautious of pushing a rational ethics system on somebody for whom existing ethical systems do well, if your goal is to improve their well-being.

32 comments

Comments sorted by top scores.

comment by maia · 2012-07-12T18:05:16.670Z · LW(p) · GW(p)

Writing things like "it's not always rational to be rational" is a good sign that you should taboo at least one of the ways you're using the word.

If you had replaced "rational ethics" with "utilitarian ethics calculations from scratch," I think this post would have been better received. Your argument is reasonable in substance, but the way you use the word "rational" seems different from how most people here use it.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T20:00:58.628Z · LW(p) · GW(p)

"It's not always logical to be logical" suffers the same apparent construction problem, as would "It's not always utilitarian to be utilitarian."

I'm using both words in the same sense; if you prefer, "It's not always rational to choose a rational decision-making process for making a decision." I presume that the choice in how to make a decision is an independent decision from the decision itself; that is, there are in fact two decisions to be made. It's not necessary for the methodology used to make the first decision - what methodology to use for the second decision - to choose itself.

Replies from: maia, mwengler
comment by maia · 2012-07-12T20:44:50.917Z · LW(p) · GW(p)

They do. The advantage of such confusing patterns is that they're memorable and rhetorically interesting, but they receive no points for clarity.

So you actually did mean that if you undergo a meta-level value calculation, you will decide that the value of information from doing an object-level moral calculation is sometimes negative?

Replies from: buybuydandavis, OrphanWilde
comment by buybuydandavis · 2012-07-13T19:40:57.790Z · LW(p) · GW(p)

The advantage of such confusing patterns is that they're memorable and rhetorically interesting, but they receive no points for clarity.

If the writer is doing his job, the different senses of the term should be clear in context, and the construction serves to reinforce that a distinction is being made between two senses of a term. The cognitive dissonance inherent in the seeming contradiction helps make it memorable, so that it can act as a touchstone for the in-context meaning.

That's if the writer is doing his job. Often, the writer is merely mesmerized by his own language, and is wallowing in the "mystery of the paradox".

Replies from: maia
comment by maia · 2012-07-13T20:11:37.806Z · LW(p) · GW(p)

Of course. Complex arguments tend to call for as much clarity as possible, though, so I'd advocate generally avoiding these constructions in venues like LessWrong.

comment by OrphanWilde · 2012-07-12T20:56:19.550Z · LW(p) · GW(p)

I started as a poet, so I hope I'll be forgiven my occasional forays into rhetorically interesting constructions, as I am prone to them.

I'd say that the construction is somewhat weaker; if you undergo a meta-level value calculation, you -may- decide that the value of information from doing an object-level moral calculation is sometimes negative, including the cost of the calculation in the value of the information. (There's a joke in there somewhere about the infinite cost I calculated in my meta-meta-level value calculation for collecting the information to prove the meta-level calculation for all cases...)

Replies from: maia
comment by maia · 2012-07-12T21:05:28.169Z · LW(p) · GW(p)

They have their uses, but the word "rational" can be a bit sensitive around here. If you've done a value of information calculation and decided the moral calculation isn't worth your time, then obviously doing that moral calculation can't be considered "rational." Though it could be a way to attempt to make a "rational" choice on a moral problem. This meta-level stuff can be tricky!

That's what I meant to say, actually; I think we agree on what the construction means now.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-13T21:13:21.733Z · LW(p) · GW(p)

I'll add one thing, on consideration: Doing that calculation may be irrational, but that's not to say the calculation itself is irrational.

comment by mwengler · 2012-07-12T20:47:57.294Z · LW(p) · GW(p)

"It's not always rational to choose a rational decision-making process for making a decision."

You are NOT using rational in the same sense in the two places it is used in that sentence.

The first rational means something like "optimal," or like "winning" when Eliezer says "rationalists win."

The second means something like "doing your own analysis and calculations to create or derive a system which, in some theoretical but not real world where it is implemented by everybody INSTEAD of the existing system, would be (according to your own calculations) better than the existing system."

comment by RobertLumley · 2012-07-11T23:18:38.152Z · LW(p) · GW(p)

Please don't use that word.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T13:14:17.200Z · LW(p) · GW(p)

"So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence 'It's epistemically rational to increase belief in hypotheses that make successful experimental predictions.'"

I'm discussing the usefulness of rationality as a cognitive algorithm in a situation in which other, cheaper, algorithms are available.

Eliminating the word would make this post less clear, not more.

Replies from: RobertLumley
comment by RobertLumley · 2012-07-12T13:49:27.028Z · LW(p) · GW(p)

I should have been more specific, but I was really trying to make my entire comment hyperlinkable to all of the threads we have about this.

So to clarify. Yes, your use of the word is pretty appropriate in the body. But it is very, very common for people to sign up on LW, make a post about "Rational Nose Picking" or something like that, and get downvoted into oblivion. So (if I may generalize from one example) most people on LW have a pretty solidly formed aversion to threads written by new members of the form "Rational ___". This is probably reinforced by the fact that Eliezer wrote an entire metaethics sequence. While you linked to it and are clearly aware of it, people probably are less interested in rehashing issues they feel are already settled. So in summary, I'd recommend you try and come up with a better title and edit it. (Related: We've noticed over time the meaning of posts tends to converge to the literal meaning of their title.)

For what it's worth, given all of the things mentioned in the first paragraph that led me to expect this would be terrible, I found myself agreeing with most of it, although to be honest, I only skimmed. And for what it's worth, I've upvoted it, since I didn't think it should be below -3.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T14:10:38.907Z · LW(p) · GW(p)

*Grins* I'm personally a little bit pleased with the negative karma value, to be perfectly honest. I was being deliberately contrarian.

The title could probably use some work, though, yes. Originally I titled it simply "Metaethics," but found that this was a bit heavy-handed (and I yanked out the bits actually concerned with metaethics anyways). "The Ethics of Ethics" was another I considered. Any recommendations?

Replies from: RobertLumley
comment by RobertLumley · 2012-07-12T14:37:30.980Z · LW(p) · GW(p)

I'm personally a little bit pleased with the negative karma value, to be perfectly honest.

So you're just trolling?

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T14:58:11.549Z · LW(p) · GW(p)

No. I'm challenging people to question beliefs which are, in this context, sacred. If the response was generally positive, it would mean that I either miscalculated my audience or failed. I believe what I wrote here, which is why I wrote it. I just wouldn't have written it for an audience which already believes it.

(Negative doesn't necessarily mean I've succeeded, however - as you point out, the response could be to other things, such as the title.)

Replies from: RobertLumley
comment by RobertLumley · 2012-07-12T18:21:30.160Z · LW(p) · GW(p)

I think you misunderstand the karma system. Things that challenge our beliefs are upvoted all the time.

Replies from: Kawoomba
comment by Kawoomba · 2012-07-12T22:19:56.371Z · LW(p) · GW(p)

I'd challenge that belief. Let the karma count on this comment be my evidence in my favor, or in yours!

Replies from: prase
comment by prase · 2012-07-13T19:56:51.557Z · LW(p) · GW(p)

Merely saying "challenge" doesn't constitute a challenge.

comment by jimrandomh · 2012-07-11T23:16:17.559Z · LW(p) · GW(p)

Note that I treat morality as objective, which means, loosely speaking, that two people who share the same ethical values should arrive, provided neither makes logical errors, at approximately the same ethical system.

This is a tautology. When people say that morality is not objective, they mean that people tend to have different underlying values, and that there is no good way to reconcile them.

Replies from: MileyCyrus, OrphanWilde
comment by MileyCyrus · 2012-07-11T23:33:54.800Z · LW(p) · GW(p)

When people say that morality is not objective, they mean that people tend to have different underlying values, and that there is no good way to reconcile them.

Objectivity is not about everyone sharing values. If everyone hates the Yankees, "Boo Yankees!" is still not an objective statement.

comment by OrphanWilde · 2012-07-12T13:04:24.999Z · LW(p) · GW(p)

Objectivity is not universality.

comment by KPier · 2012-07-12T15:51:52.517Z · LW(p) · GW(p)

I think what you're trying to say is:

"Morally as computation" is expensive, and you get pretty much the same results from "morality as doing what everyone else is doing." So it's not really rational to try to arrive at a moral system through precise logical reasoning, for the same reasons it's not a good idea to spend an hour evaluating which brand of chips to buy. Yeah, you might get a slightly better result - but the costs are too high.

If that's right, here are my thoughts:

Obviously you don't need to do all moral reasoning from scratch. There aren't many people (on LessWrong or off) who think that you should. The whole point of Created Already in Motion is that you can't do all moral reasoning from scratch. Or, as Yvain put it in his Consequentialism FAQ, you don't need a complete theory of ballistics to avoid shooting yourself in the foot.

That said, "rely on society" is a flawed enough heuristic that almost everyone ought to do some moral reasoning for themselves. The majority of people tend to reject consequentialism in surveys, but there are compelling logical reasons to accept it. Death is widely consideed to be good, and seeking immortality to be immoral, but doing a bit of ethical reasoning tends to turn up different answers.

Moral questions have far greater consequences than day-to-day decisions; they're probably worth a little more of our attention.

(My main goal here is identifying points of disagreement, if any. Let me know if I've interpreted your post correctly.)

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T16:12:10.859Z · LW(p) · GW(p)

You have a good encapsulation of what I'm trying to say, yes.

I'm not arguing against "all moral reasoning from scratch," however, which I would regard as a strawman representation of rational ethics. (It was difficult to wholly avoid an apparent argument against morality from scratch, however, in establishing that rationality is not always rational, and trying to establish this in ethics as well, so I suspect I failed to some extent there, in particular the bit about the reasons for adopting rational ethics.)

My focus, although it might not have been plain, was primarily on day-to-day decisions; most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

For most people, a rational ethics system costs far more than it provides in benefits. For a few people, it doesn't; either because they (like me) enjoy the act of calculation itself, or because they (say, a priest, or a counselor) are in a position such that they regularly encounter such Moral Questions, and must be capable of answering them sufficiently. We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

Replies from: KPier
comment by KPier · 2012-07-12T19:49:37.753Z · LW(p) · GW(p)

most people might encounter one or two serious Moral Questions in their entire -lives-; whether or not to leave grandma on life support, for example. Societal ethics are more than sufficient for day-to-day decisions; don't shoplift that candy bar, don't drink yourself into a stupor, don't cheat on your math test.

Agree.

For most people, a rational ethics system costs far more than it provides in benefits.

I don't think this follows. Calculating every decision costs far more than it provides in benefits, sure. But having a moral system for when serious questions do arise is definitely worth it, and I think they arise more often than you realize (donating to effective/efficient charity, choosing a career, supporting/opposing gay marriage or abortion or universal health care).

We are, in fact, a -part- of society; relying on society therefore doesn't mean leaving Moral Questions unaddressed, but rather means leaving the expensive calculation to others, and evaluating the results (listening to the arguments), a considerably cheaper operation.

So are you saying that you agree people ought to spend time considering arguments for various moral systems, but that they shouldn't all bother with metaethics? Agreed. Or are you saying they shouldn't bother with thinking about "morality" at all, and should just consider the arguments for and against (for example) abortion independent of a bigger system?

And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or "optimized" or "thought-through".

I think you could improve the post, and make your point clearer, by replacing "rational" with one of these words.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T20:20:27.665Z · LW(p) · GW(p)

"And one note: I think you're misusing "rational". Spending an hour puzzling over the optimal purchase of chips is not rational; spending an hour puzzling over whether to shoplift the chips is also not rational. You're only getting the counterintuitive result "rationality is not always rational" because you're treating "rational" as synonymous with "logical" or ""optimized" or "thought-through"."

  • I think this encapsulates our disagreement.

First, I challenge you to define rationality while excluding those mechanisms. No, I don't really, just consider how you would do it.

Can we define rationality as "a good decision-making process"? (Borrowing from http://lesswrong.com/lw/20p/what_is_rationality/ )

I think the disconnect is in considering the problem as one decision, or two discrete decisions. "A witch did it" is not a rational explanation for something, I hope we can agree, and I hope I established that one can rationally choose to believe this, even though it is an irrational belief.

The first decision is about what decision-making process to use to make a decision. "Blame the witch" is not a good process - it's not a process at all. But when the decision is unimportant, it may be better to use a bad decision making process than a good one.

Given two decisions, the first about what decision-making process to use, and the second to be the actual decision, you can in fact use a good-decision making process (rationally conclude) that a bad-decision making process (an irrational one) is sufficient for a particular task.
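As an illustration only, the two-decision structure might be sketched as below; the process names, stakes, and cost figures are my own hypothetical stand-ins, not anything proposed in this thread.

```python
# Hypothetical sketch of the two-decision structure: a meta-level choice of
# which decision process to use, followed by the object-level decision itself.

def careful_calculation(question: str) -> str:
    return "weigh every consequence of: " + question      # expensive, "good" process

def cultural_heuristic(question: str) -> str:
    return "follow the existing norm about: " + question  # cheap, "bad" process

def choose_process(stakes: float, calculation_cost: float):
    """Meta-decision: use the expensive process only when the stakes justify it."""
    return careful_calculation if stakes > calculation_cost else cultural_heuristic

# A low-stakes, everyday case: the meta-decision rationally picks the cheap process.
process = choose_process(stakes=1.0, calculation_cost=10.0)
print(process("should I shoplift this candy bar?"))
```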

For your examples, picking one to address specifically, I'd suggest that it is ultimately unimportant on an individual basis to most people whether or not to support universal health care; their individual support or lack thereof has almost no effect on whether or not it is implemented. Similarly with abortion and gay marriage.

For effective charities, this decision-making process can be outsourced pretty effectively to somebody who shares your values; most people are religious, and their preacher may make recommendations, for example.

I'm not certain I would consider career choice an ethical decision, per se; I regard that as a case where rationality has a high payoff in almost any circumstances, however, and so agree with there, even if I disagree with its usefulness as an opposing example for the purposes of this debate.

Replies from: KPier
comment by KPier · 2012-07-13T02:43:58.544Z · LW(p) · GW(p)

Instrumental rationality is doing whatever has the best expected outcome. So spending a ton of time thinking about metaethics may or may not be instrumentally rational, but saying "thinking rationally about metaethics is not rational" is using the word two different ways, and is the reason your post is so confusing to me.

On your example of a witch, I don't actually see why believing that would be rational. But if you take a more straightforward example, say, "Not knowing that your boss is engaging in insider trading, and not looking, could be rational," then I agree. You might rationally choose not to check if a belief is false.

Why is it necessary to muddy the waters by saying "You might rationally have an irrational belief?"

you can in fact use a good-decision making process (rationally conclude) that a bad-decision making process (an irrational one) is sufficient for a particular task.

Of course. You can decide that learning something has negative expected consequences, and choose not to learn it. Or decide that learning it would have positive expected consequences, but that the value of information is low. Why use the "rational" and "irrational" labels?

Something like half of women will consider an abortion; their support or lack thereof has an enormous impact on whether that particular abortion is implemented. And if you're proposing this as a general policy, the relevant question is whether overall people adopting your heuristic is good, meaning that the question of whether any given one of them can impact politics is less relevant. If lots of people adopt your heuristic, it matters.

For effective charities, everyone who gives to the religious organization selected by their church is orders of magnitude less effective than they could be. Thinking for themselves would allow them to save hundreds of lives over their lifetime.

comment by shminux · 2012-07-12T00:08:37.259Z · LW(p) · GW(p)

two people who share the same ethical values should arrive, provided neither makes logical errors, at approximately the same ethical system

I presume that you mean that terminal values determine instrumental values. This is not an obvious statement by any means, and is generally false for any realistic case.

cultural ethics are the answers that other people have already agreed upon; they are ethical computations which have already been computed, and while there may be errors, most of the potential errors an ethicist might arrive upon have already been weeded out.

This is idealized to uselessness. Most such computations are disputed (gun control? abortion? debt? lying? cheating?)

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T13:09:58.002Z · LW(p) · GW(p)

"I presume that you mean that terminal values determine instrumental values. This is not an obvious statement by any means, and is generally false for any realistic case."

  • Yet most people, when their sister is sick, would assume that administering penicillin is a good idea. That's actually a fantastic link in support of my argument that ad-hoc ethics are too computationally expensive to calculate for most people.

"This is idealized to uselessness. Most such computations are disputed (gun control? abortion? debt? lying? cheating?)"

  • On the contrary, very few such computations are disputed (by the culture at large). Hence the sister and penicillin example.
Replies from: shminux
comment by shminux · 2012-07-12T16:32:52.565Z · LW(p) · GW(p)

Hence the sister and penicillin example.

Generalizing From One Example.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T16:45:41.015Z · LW(p) · GW(p)

It was the example the very post you linked to was built upon.

[Edit] Which is to say, I wasn't generalizing from one example, I was demonstrating that a particular argument doesn't apply by demonstrating that its central example supports me.

comment by billswift · 2012-07-12T07:05:10.688Z · LW(p) · GW(p)

There is one significant question about ethics that has been skirted around, but, as far as I remember, never specifically addressed here. "Why should any particular person follow any ethical or moral rule?" Kai Nielsen has an entire book, Why Be Moral?, devoted to the issue, but doesn't come to a good reason.

Humans' inherited patterns of behavior are a beginning (Nielsen only addresses purely philosophical issues in the book), but they are still not adequate for what then becomes the question: "Why not defect?"

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-12T15:02:19.867Z · LW(p) · GW(p)

I believe the answer to this question is "Because the rule maximizes one's ethical values."

(Without getting into the act versus rule argument, which figures into my post, where I am, to some extent, arguing against act utilitarianism on the grounds that it is too computationally expensive.)

Of course, that leads directly into the question, "Why should any particular person hold any particular ethical value?" I don't believe this question has an answer that doesn't lead directly into another ethical value, which is why I hold ethical values as axioms.