Confusion about Normative Morality

post by JMiller · 2013-02-07T20:34:31.303Z · LW · GW · Legacy · 103 comments

Hi everyone,

If this has been covered before, I apologize for the clutter and ask to be redirected to the appropriate article or post.

I am increasingly confused about normative theories. I've read both Eliezer's and Luke's metaethics sequences as well as some of nyan's posts, but I felt even more confused afterwards. Further, I happen to be a philosophy student right now, and I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt": that is, the focus seems to be on defining terms over and over again, as opposed to taking account of real effects of moral ideas in the actual world.

I am looking for two things: first, a guide as to which reductionist moral theories approximate what LW rationalists tend to think are correct. Second, how can I go about my ethics courses without going insane?

Sorry if this seems overly aggressive; I am perhaps wrongfully frustrated right now.

Jeremy

103 comments

Comments sorted by top scores.

comment by shminux · 2013-02-07T23:41:29.820Z · LW(p) · GW(p)

As Jack mentioned and as Eliezer repeatedly said, even if a certain question does not make sense, the meta-question "why do people think that it makes sense?" nearly always makes sense. So, to avoid going insane, you can approach your ethics courses as "what thought process makes people make certain statements about ethics and morality?". Admittedly, this altered question belongs in cognitive science, rather than in ethics or philosophy, but your professors likely won't notice the difference.

Replies from: JMiller
comment by JMiller · 2013-02-08T15:54:52.441Z · LW(p) · GW(p)

Thanks! That's a very good exercise to try.

comment by [deleted] · 2013-02-07T20:51:44.323Z · LW(p) · GW(p)

Further, I happen to be a philosophy student right now, and I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt": that is, the focus seems to be on defining terms over and over again, as opposed to taking account of real effects of moral ideas in the actual world.

Second, how can I go about my ethics courses without going insane?

Good luck. Nearly everything I've seen written on morality is horribly wrong. I took a few ethics classes, and they are mostly junk. Maybe things are better in proper philosophy, but I doubt it.

as well as some of nyan's posts, but I felt even more confused afterwards.

That's worrying. Any particulars?

a guide as to which reductionist moral theories approximate what LW rationalists tend to think are correct.

If you mean things like "utilitarianism" and such, don't bother; no one has come up with one that works. I think the best approach is to realize that moral philosophy is a huge problem that hasn't been solved and that no one knows how to solve (I'm "working" on it, as are many others), and all "solutions" right now are jumping the gun and involve fundamental confusions. The best we have is a few heuristics that improve our moral intuitions, plus the intuitions themselves. Eliezer's ethical injunctions are useful; thinking about the consequences of actions is useful; remembering not to reduce all of morality to a few simple rules is useful; finding out what true morality feels like, and how to invoke your moral machinery, is useful. Solving problems relevant to morality, like decision theory, is useful.

Good luck, though.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-07T20:56:01.858Z · LW(p) · GW(p)

Good luck. Nearly everything currently written on morality is horribly wrong. I took a few ethics classes, and they are mostly junk. Maybe things are better in proper philosophy, but I doubt it.

I have a hunch things are a good bit better in proper philosophy than you think. Admittedly, most intro- and intermediate-level ethics courses are pretty terrible (this is obviously only from personal experience). If I had to make a guess as to why that is, it'd probably be for the same reason I think most of the rest of philosophy courses could be better: too much focus on history.

Replies from: JMiller
comment by JMiller · 2013-02-07T21:21:13.929Z · LW(p) · GW(p)

In my intermediate-level course, we barely talk about history at all. It is supposed to focus on "developments" in the last thirty years or so. The problem I have is that most profs think that philosophy is able to go about figuring out the truth without things like empiricism, scientific study, neuroscience, probability, and decision theory. Everything is very "intuitive", and I find that difficult to grasp.

For example, when discussing deontology, I asked why there should be absolute "requirements" as an argument against consequentialism, seeing that if it's true that the best consequences would come from taking these requirements into account in consequentialist evaluations of outcomes, then that is what a consequentialist would (and should) say as well! The professor's answer, and that of many students, was: "That's just the way it is. Some things ought not to be done, simply because they ought not to be done." That is a hard pill for me to swallow. In this case I am much more comfortable with Eliezer's Ethical Injunctions.

(The prof was not necessarily promoting deontology but was arguing on its behalf.)

Replies from: Kaj_Sotala, BerryPick6, fubarobfusco, bryjnar, LauralH
comment by Kaj_Sotala · 2013-02-08T13:42:26.146Z · LW(p) · GW(p)

Note that you could reverse this conversation: a deontologist could ask you why we should privilege the consequences so much, instead of just doing the right things regardless of the consequences. I would expect that your response would be pretty close to "that's just the way it is, it's the consequences that are the most important" - at least, I know that mine would be. And the deontologist would find this a very hard pill to swallow.

As Alicorn has pointed out, you can't understand deontology by requiring an explanation on consequentialism's terms, and you likewise can't understand consequentialism by requiring an explanation on deontology's terms. At some point, your moral reasoning has to bottom out to some set of moral intuitions which are just taken as axiomatic and cannot be justified.

Replies from: BerryPick6, JMiller
comment by BerryPick6 · 2013-02-08T15:39:14.178Z · LW(p) · GW(p)

Note that you could reverse this conversation: a deontologist could ask you why we should privilege the consequences so much, instead of just doing the right things regardless of the consequences. I would expect that your response would be pretty close to "that's just the way it is, it's the consequences that are the most important" - at least, I know that mine would be. And the deontologist would find this a very hard pill to swallow.

Except that deontology cares about consequences as well, so there's no need to convince them that the consequences of our actions have moral weight. If act A's consequences violate the Categorical Imperative, and act B's consequences don't, then the Kantian (for example) will pick act B.

The friction between deontology and consequentialism is that they disagree about what should be maximized, a distinction which is often simplified to consequentialists wanting to maximize the 'Good' and deontologists wanting to maximize the 'Right'.

I'll agree that past this point, many of the objections to the other side's positions hit 'moral bedrock', and intuitions are often seen as the solution to this gap.

Replies from: None
comment by [deleted] · 2013-02-10T17:38:17.513Z · LW(p) · GW(p)

If act A's consequences violate the Categorical Imperative, and act B's consequences don't, then the Kantian (for example) will pick act B.

For Kant (and for all the Kantians I know of), consequences aren't evaluable in terms of the categorical imperative. This is something like a category mistake. Kant is pretty explicit that the consequences of an action well and truly do not matter to its moral value. He would say, I think, that there is no way to draw boundary lines around 'consequences' that doesn't place all moral weight on something like the intention of the action.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-10T17:42:11.592Z · LW(p) · GW(p)

He would say, I think, that there is no way to draw boundary lines around 'consequences' that doesn't place all moral weight on something like the intention of the action.

Well then the Kantian would pick B because he intends to not violate the CI with his actions? I'm not actually sure how this is different than valuing the consequences of your actions at all?

Replies from: None
comment by [deleted] · 2013-02-10T17:58:59.774Z · LW(p) · GW(p)

Well then the Kantian would pick B because he intends to not violate the CI with his actions?

In your initial set up, you said that A and B differ in that A's consequences violate the CI, while B's consequences do not. I'm claiming that, for Kant, consequences aren't evaluable in terms of the CI, and so we don't yet have on the table a way for a Kantian to distinguish A and B. Consequences aren't morally evaluable, Kant would say, in the very intuitive sense in which astronomical phenomena aren't morally evaluable (granting that we sometimes assess astronomical phenomena as good or bad in a non-moral sense).

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-10T18:08:32.346Z · LW(p) · GW(p)

I once again find Kantianism immensely counter-intuitive and confusing, so at this point I must thank you for correcting my misconceptions and undoing my rationalizations. :)

Replies from: None
comment by [deleted] · 2013-02-10T19:36:44.186Z · LW(p) · GW(p)

I'll try to present an argument toward Kant's views in a clear way. The argument will consist of a couple of hopefully non-puzzling scenarios for moral evaluation, an evaluation I expect at least to be intuitive to you (though perhaps not endorsed wholeheartedly), leading to the conclusion that you do not in fact concern yourself with consequences when making a moral evaluation. At some point, I expect, I'll make a claim that you disagree with, and at that point we can discuss, if you like, where the disagreement lies exactly. So:

It's common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist's part.

If we grant this, then we have already admitted that the important factor in moral evaluations is not any actual event in the world, but rather something like expected consequences. In other words, moral evaluation deals with a mental event related to an action (i.e. the expectation of a certain consequence), not, or at least not directly, the consequence of that event.

And Kant would go further to point out that it's not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.

So it is not expected (rather than actual) consequences that are the important factor in moral evaluations, because we can detect differences in our evaluations even when these are equal. Rather, Kant would go on to say, we evaluate actions on the basis of the reasons people have for bringing about the consequences they expect. (There are other options here, of course, and so the argument could go on).

If you've accepted every premise thus far, I think you're pretty close to being in range of Kant's argument for the CI. Has that helped?

Replies from: BerryPick6, TheOtherDave
comment by BerryPick6 · 2013-02-10T20:02:38.150Z · LW(p) · GW(p)

It's common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist's part.

I don't accept this premise. A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention. This just seems like one of the fundamental aspects of consequentialism, to me.

And Kant would go further to point out that it's not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.

Further, I would evaluate these two philanthropists exactly the same way, as long as the externalities of spiting neighbors don't escalate to a level where they have substantial moral weight. Someone who saves a child because he is interested in seducing their mother and someone who saves a child out of pure altruism may not be equally moral, but if you only have this single instance with which to judge them, then they must be considered so.

Replies from: None
comment by [deleted] · 2013-02-10T20:24:34.518Z · LW(p) · GW(p)

A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention.

So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben's money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?

Assuming Ben's plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben's action was something beyond his control or expectation?

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-10T20:42:39.263Z · LW(p) · GW(p)

So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben's money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?

Yes, their particular acts of charity were morally equal, so long as their donations were equal.

Assuming Ben's plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben's action was something beyond his control or expectation?

The deciding factor in determining the moral worth of Ben's actions was "out of his hands," to a certain extent. He isn't awarded points for trying.

Replies from: None
comment by [deleted] · 2013-02-10T21:01:43.002Z · LW(p) · GW(p)

Yes, their particular acts of charity were morally equal, so long as their donations were equal....The deciding factor in determining the moral worth of Ben's actions was "out of his hands," to a certain extent.

Hm! Those are surprising answers. I drew my initial argument from Kant's Groundwork, and so far as I can tell, Kant doesn't expect his reader to give the answer you did. So I'm at a loss as to what he would say to you now. I'm no Kantian, but I have to say I find myself unable to judge Abe and Ben's actions as you have.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-10T21:09:17.692Z · LW(p) · GW(p)

Hm! Those are surprising answers.

From the way you had written the previous few comments, I had a feeling you weren't expecting me to react as I did (and I have to note, you have been by far the more logically polite partner in this discussion so far.)

I drew my initial argument from Kant's Groundwork, and so far as I can tell, Kant doesn't expect his reader to give the answer you did.

This seems a common occurrence in the philosophy of that era. Hume is constantly asking rhetorical questions of his readers and assuming that they answer the same way he does...

I'm no Kantian, but I have to say I find myself unable to judge Abe and Ben's actions as you have.

If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.

Replies from: None
comment by [deleted] · 2013-02-10T21:29:26.640Z · LW(p) · GW(p)

If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.

Could you elaborate on this?

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-10T21:36:14.888Z · LW(p) · GW(p)

We aren't in disagreement about any facts, but are simply using the term 'moral judgement' in different ways. I take moral judgement to be an after-the-fact calculation, and you take it to be a statement about intentionality and agency. You would, presumably, agree with me that Abe and Ben's actions netted the same results, and I will agree with you that Abe's motivations were "in better faith" than Ben's, so we've essentially reached a resolution.

Replies from: None
comment by [deleted] · 2013-02-10T23:27:08.158Z · LW(p) · GW(p)

Well, I would say that Abe and Ben's respective actions have different moral value, and you've said that they have the same moral value. I think we at least disagree about this, or do you think we're using some relevant terms differently?

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-11T12:56:20.698Z · LW(p) · GW(p)

I think we at least disagree about this, or do you think we're using some relevant terms differently?

I think we disagree on the meaning of terms related to the word 'moral' and nothing further. We aren't generating different expectations, and there's no empirical test we could run to find out which one of us is correct.

Replies from: None
comment by [deleted] · 2013-02-11T14:45:38.507Z · LW(p) · GW(p)

Hm, I think you may be right. I cannot for the life of me think of an empirical test that would decide the issue.

comment by TheOtherDave · 2013-02-10T19:52:28.082Z · LW(p) · GW(p)

Presumably, a consequentialist would assert that insofar as I evaluate a philanthropist who acts out of spite differently than a philanthropist who acts out of altruism even if (implausibly) I expect both philanthropists to cause the same consequences in the long run, I am not making a moral judgment in so doing, but some other kind of judgment, perhaps an aesthetic one.

Replies from: Qiaochu_Yuan, None
comment by Qiaochu_Yuan · 2013-02-10T20:04:08.257Z · LW(p) · GW(p)

The reason I would evaluate a philanthropist who acts out of spite differently from a philanthropist who acts out of altruism is precisely because I don't expect both philanthropists to cause the same consequences in the long run.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2013-02-10T20:55:02.489Z · LW(p) · GW(p)

Yes, I agree. That's why I said "implausibly". But the hypothetical Hen proposed presumed this, and I chose not to fight it.

comment by [deleted] · 2013-02-10T20:35:25.488Z · LW(p) · GW(p)

This seems like a judgement about the philanthropists, rather than the act of donating. My example was intended to discuss the act, not the agent.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-10T23:44:19.191Z · LW(p) · GW(p)

Your wording suggests otherwise: "We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor..."

Replies from: None
comment by [deleted] · 2013-02-10T23:47:31.347Z · LW(p) · GW(p)

You're right, that was careless of me. I intended the hypothetical only to be about the evaluations of their respective actions, not them as people. This is at least partly because Kantian deontology (as I understand it) doesn't allow for any direct evaluations of people, only actions.

comment by [deleted] · 2013-02-10T20:20:39.273Z · LW(p) · GW(p)

This wouldn't be a convincing reply, I think, unless the consequentialist could come up with some reason for thinking such an evaluation is aesthetic other than 'if it were a moral evaluation, it would conflict with consequentialism'. That is, assuming the consequentialist wants to appeal to common, actual moral evaluation in defending the plausibility of her view. She may not.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-10T21:03:37.419Z · LW(p) · GW(p)

This wouldn't be a convincing reply

Convincing to whom?
I mean, I agree completely that a virtue ethicist, for example, would not find it convincing.
But neither is the assertion that it is a moral judgment convincing to a consequentialist.

If I've understood you, you expect even a consequentialist to say "Oh, you're right, the judgment that a spiteful act of philanthropy is worse than an altruistic act of philanthropy whose expected consequences are the same is a moral judgment, and therefore moral judgments aren't really about expected consequences."

It's not at all clear to me that a consequentialist who isn't confused would actually say that.

Replies from: None
comment by [deleted] · 2013-02-10T21:26:15.979Z · LW(p) · GW(p)

[Not] Convincing to whom?

Me? Hopefully, the consequentialist as well.

Imagine this conversation:

X: Behold A and B in their hypothetical shenanigans. That you will tend to judge the action of A morally better than that of B is evidence that you make moral evaluations in accordance with moral theory M (on which they are morally dissimilar) rather than moral theory N (according to which they are equivalent). This is evidence for the truth of M.

Y: I grant you that I judge A to be better than B, but this isn't a moral judgement (and so not evidence for M). This is, rather, an aesthetic judgement.

X: What is your reason for thinking this judgement is aesthetic rather than moral?

Y: I am an Nist. If it were a moral judgement, it would be evidence for M.

X should not find this convincing. Neither should Y, or anyone else. Y's argument is terrible.

We could fix Y's argument by having him go back and deny that he judges A's act to be morally different from B's. This is what Berry did. Or Y could defend his claim, on independent grounds, that his judgement is aesthetic and not moral. Or Y could go back and deny that his actual moral evaluations being in accordance with M are evidence for M.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-10T22:28:11.746Z · LW(p) · GW(p)

(shrug) At the risk of repeating myself: what Y would actually say supposing Y were not a conveniently poor debater is not "I am an Nist" but rather "Because what makes a judgment of an act a moral judgment is N, and the judgment of A to be better than B has nothing to do with N."

X might disagree with Y about what makes a judgment a moral judgment -- in fact, if X is not an Nist, it seems likely that X does disagree -- but X simply insisting that "A is better than B" is a moral judgment because X says so is unconvincing.

We could fix Y's argument by having him go back and deny that he judges A's act to be morally different from B's.

There's no going back involved. In this example Y has said all along that Y doesn't judge A's act to be morally different from B's.

Replies from: None
comment by [deleted] · 2013-02-10T23:40:21.934Z · LW(p) · GW(p)

It seems to me that what you're suggesting constitutes logical rudeness on the consequentialist's part. The argument ran like this:

Take a hypothetical case involving A and B. You are asked to make a moral judgement. If you judge A and B's actions differently, you are judging as if M is true. If you judge them to be the same, you are judging as if N is true.

The reply you provided wouldn't be relevant if you said right away that A and B's actions are morally the same. It's only relevant if you've judged them to be different (in some way) in response to the hypothetical. Your reply is then that this judgement turns out not to be a moral judgement at all, but an irrelevant aesthetic judgement. This is logically rude because I asked you to make a moral judgement in the first place. You should have just said right off that you don't judge the two cases differently.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-11T00:55:46.035Z · LW(p) · GW(p)

If someone asks me to make a moral judgment about whether A and B's actions are morally the same, and I judge that they are morally different, and then later I say that they are morally equivalent, I'm clearly being inconsistent. Perhaps I'm being logically rude, perhaps I'm confused, perhaps I've changed my mind.

If someone asks me to compare A and B, and I judge that A is better than B, and then later I say that they are morally equivalent, another possibility is that I was not making what I consider a moral judgment in the first place.

Replies from: None
comment by [deleted] · 2013-02-11T02:12:02.159Z · LW(p) · GW(p)

I'm confused as to why, upon being asked for a moral evaluation in the course of a discussion on consequentialism and deontology, someone would offer me an aesthetic evaluation they themselves consider irrelevant to the moral question. I don't think my request for an evaluation was very ambiguous: Berry understood and answered accordingly, and it would surely be strange to think I had asked for an aesthetic evaluation in the middle of a defense of deontology. So I don't understand how your suggestion would add anything to the discussion.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-11T04:03:06.076Z · LW(p) · GW(p)

In the hypothetical discussion you asked me to consider, X makes an assertion about Y's moral judgments, and Y replies that what X is referring to isn't a moral judgment. Hence, I said "In this example Y has said all along that Y doesn't judge A's act to be morally different from B's," and you replied "It seems to me that what you're suggesting constitutes logical rudeness on the consequentialist's part."

I, apparently incorrectly, assumed we were still talking about your hypothetical example.

Now, it seems you're talking instead about your earlier conversation with Berry, which I haven't read. I'll take your word for it that my suggestion would not add anything to that discussion.

Replies from: None
comment by [deleted] · 2013-02-11T04:37:48.049Z · LW(p) · GW(p)

Now, it seems you're talking instead about your earlier conversation with Berry, which I haven't read.

Dave, I think you're pulling my leg. Your initial comment to me was from one of my posts to Berry, so of course you read it! I'm going to tap out.

comment by JMiller · 2013-02-08T16:02:15.521Z · LW(p) · GW(p)

I didn't think about it like that; that's interesting. As I said, though, I don't think consequentialists and deontologists are so far apart. If I had to argue as a consequentialist, I guess I would say that consequences matter because they are real effects, whereas moral intuitions like rightness don't change anything apart from the mind of the agent. Example: if incest is wrong only because it is wrong (assume there are no ill effects, including the lack of genetic diversity), it seems to me that the deontologist must explain what exactly makes it wrong. In the analogous situation where the consequentialist is defending him- or herself, s/he can say that the consequences matter because they are dependent variables that change because of the "independent" actions of agents. (I mean independent mathematically, not in some libertarian free-will sense.)

Thanks for your help.

Replies from: None
comment by [deleted] · 2013-02-10T18:08:21.061Z · LW(p) · GW(p)

If I had to argue as a consequentialist, I guess I would say that consequences matter because they are real effects, whereas moral intuitions like rightness don't change anything apart from the mind of the agent.

This strikes me as begging the question. You say here that consequences matter because they are real effects [and real effects matter]. But the (hardcore) deontologist won't grant you the premise that real effects matter, since that is exactly what his denial of consequentialism amounts to: the effects of an action don't matter to its moral value.

If you grant my criticism, this might be a good way to connect your views to the mainstream: write up a criticism of a specific living author's defense of deontology, arguing validly from mutually accepted premises. Keep it brief, run it by your teacher, and then send it to that author. You're very likely to get a response, I think, and this will serve to focus your attention on real points of disagreement.

Replies from: JMiller
comment by JMiller · 2013-02-10T22:34:04.518Z · LW(p) · GW(p)

Hey Hen,

Thanks for your suggestion, I like it.

I see how it appears that I was begging the question. I was unclear about what I meant. When I say that "consequences matter because they are real effects", I only mean that consequences imply observable differences in outcomes. Rightness for its own sake seems to me to have no observational qualities, and so I think it is a bad explanation, because it can explain (or in this case, justify) any action. I think you are correct that I need to defend why real effects matter, though.

Jeremy

comment by BerryPick6 · 2013-02-07T21:34:44.384Z · LW(p) · GW(p)

The ways in which this reminds me of my classroom experience are too many to count, but if the professor said something as idiotic as that to you, I'm really at a loss. Has he never heard of meta-ethics? Never read Mackie or studied Moral Realism?

Replies from: JMiller
comment by JMiller · 2013-02-07T21:40:42.464Z · LW(p) · GW(p)

Right? I would venture to guess that over 50% of students in my department are of the continental tradition and tend to think in anti-realist terms. I would then say 40% or more are of the analytic tradition, and love debating what things should be called, instead of facts. The remaining 10% are, I would say, very diverse, but I have encountered very few naturalists.

These numbers might be very inflated because of the negative associations I am experiencing currently. Nevertheless, I am confident that I am correct within ten percentage points in either direction.

I think the professor really has some sophisticated views, but for the sake of the class level he is "dumbing it down" to intuitive "analysis". He doesn't often share his opinion, in order to foster more debate and less "guessing the teacher's password", which I think is a good thing for most philosophy students.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-07T21:44:07.161Z · LW(p) · GW(p)

Right? I would venture to guess that over 50% of students in my department are of the continental tradition and tend to think in anti-realist terms. I would then say 40% or more are of the analytic tradition, and love debating what things should be called, instead of facts. The remaining 10% are, I would say, very diverse, but I have encountered very few naturalists.

Out of curiosity, where do you go to school?

Replies from: JMiller
comment by JMiller · 2013-02-07T21:45:48.978Z · LW(p) · GW(p)

McGill University in Montreal. You?

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-07T21:49:25.020Z · LW(p) · GW(p)

The Open University in Israel (I'm not, strictly speaking, out of high school yet, so this is all I got).

comment by fubarobfusco · 2013-02-08T07:30:18.787Z · LW(p) · GW(p)

Well, that's pretty much the deontological claim: that there is something to an act being wrong other than its consequences.

For instance, some would assert that an act of incestuous sex is wrong even if all the standard negative consequences are denied: no deformed babies, no unhappy feelings, no scandal, and so on. Why? Because they say there exists a moral fact that incest is wrong, which is not merely a description or prediction of incestuous acts' effects.

Replies from: roystgnr
comment by roystgnr · 2013-02-08T15:49:02.773Z · LW(p) · GW(p)

"An incestuous act of sex at time t" is a descriptive statement of the world which could be able to change the output of a utility function, just as "a scandal at time t + 1 week" or "a deformed baby born at time t + 9 months" could, right? Now, my personal utility function doesn't seem to put any (terminal) value on the first statement either, but if someone else's utility function does, what makes mine "consequentialist" and theirs not?

comment by bryjnar · 2013-02-08T04:18:09.106Z · LW(p) · GW(p)

That's pretty weird, considering that so-called "sophisticated" consequentialist theories (where you can say something like: although in this instance it would be better for me to do X than Y, overall it would be better to have a disposition to do Y than X, so I shall have such a disposition) have been a huge area of discussion recently. And yes, it's bloody obvious and it's a scandal it took so long for these kinds of ideas to get into contemporary philosophy.

Perhaps the prof meant that such a consequentialist account appears to tell you to follow certain "deontological" requirements, but for the wrong reason in some way. In much the same way that the existence of a vengeful God might make acting morally also selfishly rational, but if you acted morally out of self-interest then you would be doing it for the wrong reasons, and wouldn't have actually got to the heart of things.

Alternatively, they're just useless. Philosophy has a pretty high rate of that, but don't throw out the baby with the bathwater! ;)

Replies from: JMiller
comment by JMiller · 2013-02-08T16:06:13.073Z · LW(p) · GW(p)

Yeah, we read Railton's sophisticated consequentialism, which sounded pretty good. Norcross on why consequentialism is about offering suggestions and not requirements was also not too bad. I feel like the texts I am reading are more valuable than the classes, to be frank. Thanks for the input!

Replies from: BerryPick6, None
comment by BerryPick6 · 2013-02-08T16:17:12.275Z · LW(p) · GW(p)

To answer a question you gave in the OP, Jackson's views are very close to what Eliezer's metaethics seem to be, and Railton has some similarities with Luke's views.

Replies from: JMiller
comment by JMiller · 2013-02-08T16:31:41.755Z · LW(p) · GW(p)

Hmmm, that's right! I can't believe I didn't see that, thanks. I think Railton is more similar to Luke than Jackson is to Eliezer, though, if I understand Eliezer well enough. Is there a comparison anywhere outlining the differences between what Eliezer and Luke think across different fields?

comment by [deleted] · 2013-02-09T19:32:19.276Z · LW(p) · GW(p)

You should try some Brad Hooker. One of the most defensible versions of consequentialism out there.

Replies from: JMiller
comment by JMiller · 2013-02-09T19:42:00.891Z · LW(p) · GW(p)

Cool, I will check him out. Thanks.

comment by LauralH · 2013-02-08T02:23:42.199Z · LW(p) · GW(p)

So the professor was playing Devil's Advocate, in other words? I'm not familiar with the "requirements" argument he's trying, but that's because, like a lot of people here, I think philosophy classes tend to be a waste of time, primarily for the reasons you list in your first paragraph. I'm a consequentialist, myself.

Do you actually think you're having problems with understanding the Sequences, or just in comparing them with your Ethics classes?

Replies from: JMiller
comment by JMiller · 2013-02-08T16:11:06.214Z · LW(p) · GW(p)

It isn't that I don't understand the sequences on their own. It's more that I don't see a) how they relate to the "mainstream" (though I read Luke's post on the various connections, morality seems to be sparse on the list, or I missed it), and b) what Eliezer in particular is trying to get across. The topics in the sequence are very widespread and don't seem to be narrowing in on a particular idea. I found A Human's Guide to Words many times more useful. Luke's sequence was easier, but then there is a lot less material.

I think he was playing devil's advocate. Thanks for the comment.

Replies from: LauralH
comment by LauralH · 2013-02-11T23:37:12.477Z · LW(p) · GW(p)

I think EY's central point is something like: just because there's no built-in morality for the universe doesn't mean there isn't built-in morality for humans. At the same time, that "moral sense" does need care and feeding; otherwise you get slavery, and thinking that spanking your kids is right.

(But it's been a while since I've read the entire ME series, so I could have confused it with something else I've read.)

comment by wedrifid · 2013-02-08T04:43:40.313Z · LW(p) · GW(p)

Sorry if this seems overly aggressive; I am perhaps wrongfully frustrated right now.

It doesn't come across that way (to me at least). While you are being direct and assertive, your expressions are all about you, your confusion, your goals and your experience. If you changed the emphasis away from yourself and onto the confusion or wrongness of others and used the same degree of assertiveness, it would be a different matter entirely.

Replies from: JMiller
comment by JMiller · 2013-02-08T16:11:39.517Z · LW(p) · GW(p)

Thanks!

comment by Jack · 2013-02-07T23:06:05.509Z · LW(p) · GW(p)

I'm a moral non-realist and for that reason I find (and, when in college, found) normative moral theories to be really silly. Just as a class on theology seems pretty silly to someone who doesn't believe in God, so does normative moral theory to someone who doesn't think there is anything real to describe in normative theory. But I think such courses can still be productive if you translate all the material into natural/sociological terms. I.e. it can still be interesting to learn how people think about "God"-- not the least of which is that God bears some resemblance to actually possible entities. Similarly, you can think about the normative theory stuff you encounter in philosophy departments as attempts by relatively intelligent people to theorize about their own moral intuitions-- like extremely crude attempts at coherent extrapolated volition.

It might be helpful to think about what an AGI would do if programmed with the different normative theories you encounter and that might give you some insight into the complexity of the problem. But in general, I recommend metaethics if you're going to take ethics classes.

"LW rationalists" tend to be consequentialists and often utilitarians (though that is not universally so) but no one has a robust and definitive theory of normative ethics-- and one should be extremely skeptical of anyone who claims to have.

Replies from: Nisan, IlyaShpitser, JMiller, buybuydandavis
comment by Nisan · 2013-02-08T02:44:56.770Z · LW(p) · GW(p)

no one has a robust and definitive theory of normative ethics-- and one should be extremely skeptical of anyone who claims to have.

.

one should

Tee hee.

comment by IlyaShpitser · 2013-02-09T16:27:56.930Z · LW(p) · GW(p)

Similarly, you can think about the normative theory stuff you encounter in philosophy departments as attempts by relatively intelligent people to theorize about their own moral intuitions-- like extremely crude attempts at coherent extrapolated volition.

That's sort of like saying the internal combustion engine is an extremely crude attempt at a perpetual motion machine. Is there anything concrete published on CEV? Is there any evidence this is a well-defined/possible problem? Is there anything except a name?

Without evidence, the default on "grand theories" should be "you are a crank and don't know what you are talking about."

Replies from: Jack, Eliezer_Yudkowsky
comment by Jack · 2013-02-09T18:02:30.416Z · LW(p) · GW(p)

My comment said absolutely zero about the extent to which I think CEV is possible.

That's sort of like saying the internal combustion engine is an extremely crude attempt at a perpetual motion machine.

I don't think this analogy gets the levels of analysis of these two things right or accurately conveys my position on them. When I said normative theory was an extremely crude attempt at coherent extrapolated volition, I was definitely not saying that all moral philosophy until now was a footnote to Eliezer Yudkowsky, or anything like that. I was not comparing normative theory with the theory of CEV. I was comparing moral theorizing to the actual act of determining the coherent extrapolated volition of a group of people. CEV isn't a normative theory; it's much more like a theory for how to find the correct normative theory (in the tradition of reflective equilibrium or, say, Habermasian discourse ethics). When people do normative theory on their own they are making extremely crude attempts at the ideals of the above.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-09T17:01:59.621Z · LW(p) · GW(p)

I can see why you disagree w/ grandparent, but please note that CEV isn't supposed to be a grand new ethical theory. Somewhere in the background of 'why do CEV rather than something else' is a metaethical theory most closely akin to analytic descriptivism / moral functionalism - arguably somewhat new in the details, arguably not all that new, but at any rate the moral cognitivism part is not what CEV itself is really about. The main content of CEV looks like reflective equilibrium or a half-dozen other prior ethical theories and is meant to be right, not new.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2013-02-09T22:48:54.830Z · LW(p) · GW(p)

Sorry, my point was not that CEV was a normative theory, but that one cannot be compared unfavorably with "vaporware." The worry is that CEV might be like Leibniz's "Characteristica Universalis".

comment by JMiller · 2013-02-08T16:13:35.674Z · LW(p) · GW(p)

Thanks, Jack. Is it ever frustrating (or is it more often fun) to be on a forum with such a large percentage of realists?

I like the theological analogy. I need to think about it some more because I think there are some important differences.

Replies from: buybuydandavis, Jack
comment by buybuydandavis · 2013-02-09T01:26:37.443Z · LW(p) · GW(p)

I think the percentage of moral "realists" is lower here than in the general population.

comment by Jack · 2013-02-08T18:49:31.944Z · LW(p) · GW(p)

Well, I stopped taking ethics classes and focused on metaphysics and philosophy of science. But that was more just a matter of interests. I got more frustrated with professors tolerating the epistemic relativism popular among students.

Replies from: JMiller
comment by JMiller · 2013-02-08T18:51:45.807Z · LW(p) · GW(p)

I completely understand.

comment by buybuydandavis · 2013-02-09T01:21:09.860Z · LW(p) · GW(p)

I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt"

No need to worry about it - have no doubt that they're conceptually corrupt.

From your comments, I think you would appreciate Stirner:

Man, your head is haunted; you have wheels in your head! You imagine great things, and depict to yourself a whole world of gods that has an existence for you, a spirit-realm to which you suppose yourself to be called, an ideal that beckons to you. You have a fixed idea!

Do not think that I am jesting or speaking figuratively when I regard those persons who cling to the Higher, and (because the vast majority belongs under this head) almost the whole world of men, as veritable fools, fools in a madhouse. What is it, then, that is called a "fixed idea"? An idea that has subjected the man to itself. When you recognize, with regard to such a fixed idea, that it is a folly, you shut its slave up in an asylum. And is the truth of the faith, say, which we are not to doubt; the majesty of (e. g.) the people, which we are not to strike at (he who does is guilty of – lese-majesty); virtue, against which the censor is not to let a word pass, that morality may be kept pure; – are these not "fixed ideas"? Is not all the stupid chatter of (e. g.) most of our newspapers the babble of fools who suffer from the fixed idea of morality, legality, Christianity, etc., and only seem to go about free because the madhouse in which they walk takes in so broad a space?

You write:

I'm a moral non-realist and for that reason I find (and, when in college, found) normative moral theories to be really silly. Just as a class on theology seems pretty silly to someone who doesn't believe in God, so does normative moral theory to someone who doesn't think there is anything real to describe in normative theory. But I think such courses can still be productive if you translate all the material into natural/sociological terms.

Isn't it strange that the people who believe in fantasy commands from existence are the ones called moral "realists"?

But I think there is something real to describe in normative theory - one can describe how the moral algorithms in our heads work, and the distribution of different algorithms through the population. Haidt has made progress in turning morality from an exercise in battling insanities to a study of how our moral sense works, in the same way one might study how our sense of taste or smell works. Then given any particular set of moral algorithms, one can derive normative claims from that set.

Replies from: Jack
comment by Jack · 2013-02-09T01:28:23.319Z · LW(p) · GW(p)

But I think there is something real to describe in normative theory - how the moral algorithms in your head work

There is lots of exciting descriptive work to do. But that isn't normative theory, it's descriptive moral psychology.

Replies from: buybuydandavis
comment by buybuydandavis · 2013-02-09T01:30:51.964Z · LW(p) · GW(p)

You can still have normative theory in the axiomatic sense - if (these moral algorithms), then these results.
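To illustrate that conditional structure, a minimal sketch in Python (hypothetical names, toy data): the "axiom" is a purely descriptive model of someone's moral algorithm, and the derived "ought" holds only relative to it.

```python
# Minimal sketch (hypothetical names, toy data) of "if (these moral
# algorithms), then these results": the axiom is a descriptive model of an
# agent's moral algorithm, here a simple scoring function over options.

def derive_ought(moral_algorithm, options):
    """Conditional normativity: given this algorithm, take the top-scoring option."""
    return max(options, key=moral_algorithm)

def alices_moral_algorithm(option):
    # toy descriptive model, e.g. extracted from observed judgments
    scores = {"donate": 3, "do_nothing": 0, "steal": -5}
    return scores[option]

print(derive_ought(alices_moral_algorithm, ["donate", "do_nothing", "steal"]))
# -> "donate": an 'ought' that holds only conditional on Alice's algorithm
```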

comment by Viliam_Bur · 2013-02-11T14:30:27.494Z · LW(p) · GW(p)

A few thoughts, hopefully useful for you:

Deontological morality is simply an axiom. "You should do X!" End of discussion.

If you want to continue the discussion, for example by asking "why?" (why this specific axiom, and not any other), you are outside of its realm. The question does not make sense for a deontologist. At best they will provide you a circular answer: "You should do X, because you should do X!" An eloquent deontologist can make the circle larger than this, if you insist.

On the other hand, any other morality could be seen as an instance of deontological morality for a specific value of "X". For example "You should maximize the utility of the consequences of your choices" = consequentialism. (If you say that we should maximize the utility of consequences because of some Y, for example because it makes people happy, again the question is: why Y?)

So every normative morality has its axioms, and any evaluation of which axioms are better must already use some axioms. Even if we say that e.g. self-consistent axioms seem better than self-contradictory axioms, even that requires some axiom, and we could again ask: "why"?

There is no such thing as a mind starting from a blank slate and ever achieving anything other than a blank state, because... seriously, what mechanism would it use to make its first step? Same thing with morality: if you say that X is a reason to care about Y, you must already care about X, otherwise the reasoning will leave you unimpressed. (Related: Created Already In Motion.)

So it could be said that all moralities are axiomatic, and in this technical sense, all of them are equal. However, some of those axioms are more compatible with a human mind, so we judge them as "better" or "making more sense". It is a paradox that if we want to find a good normative morality, we must look at how human brains really work. And then if we find that human brains somehow prefer X, we can declare "You should do X" a good normative morality.

Please note that this is not circular. It does not mean "we should always do what we prefer", but rather "we prefer X; so now we forever fix this X as a constant; and we should do X even if our preferences later change (unless X explicitly says how our actions should change according to changes in our future preferences)". As an example -- let's suppose that my highest value is pleasure, and I currently like chocolate, but I am aware that my taste may change later. Then my current preference X is that I should eat what I like, whether that is a chocolate or something else. Even if today I can't imagine liking something else, I still wish to keep this option open. On the other hand, let's suppose I love other people, but I am aware that in the future I could accidentally become a psychopath who loves torturing people. Then my current preference X is that I should never torture people. I am aware of the possible change, but I disagree with it now. There is a difference between a possible development that I find morally acceptable, and a possible development that I find morally unacceptable, and that difference is encoded in my morality axiom X.
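To make the "fix X as a constant" idea concrete, a minimal sketch in Python (toy preferences, hypothetical names): the moral axiom is the preference function extracted now and then held fixed, not whatever the live preference function happens to be at decision time.

```python
# Minimal sketch (toy preferences, hypothetical names): the axiom X is the
# preference function extracted at t = 0 and held fixed, even if the agent's
# live preferences drift later.

def preferences_at(t):
    # toy model: after t = 10 the agent has drifted into valuing cruelty
    if t < 10:
        return lambda action: {"help": 1, "torture": -100}[action]
    return lambda action: {"help": -1, "torture": 5}[action]

X = preferences_at(0)  # extracted once, then treated as the fixed moral axiom

def should_do(actions, t):
    # "do whatever you currently prefer" would use preferences_at(t) here;
    # the fixed-axiom view uses X regardless of t
    return max(actions, key=X)

print(should_do(["help", "torture"], t=20))  # still "help"
```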

The preferences should be examined carefully; I don't know how to say it exactly, but even if I think I want something now, I may be mistaken. For example, I can be mistaken about some facts, which can lead me to a wrong conclusion about my preferences. So I would prefer a preferences-extraction process which would correct my mistakes and would instead select things I would prefer if I knew all the facts correctly and had enough intelligence to understand it all. (Related: Ideal Advisor Theories and Personal CEV.)

Summary: To have a normative morality, we need to choose an axiom. But an arbitrary axiom could result in a morality we would consider evil or nonsensical. To consider it good, we must choose an axiom reflecting what humans already want. (Or, for an individual morality, what the individual wants.) This reflection should assume more intelligence and better information than we already have.

Replies from: Larks, JMiller
comment by Larks · 2013-02-11T15:55:07.303Z · LW(p) · GW(p)

Deontological morality is simply an axiom. "You should do X!" End of discussion.

This is not true. Deontological systems have modes of inference. e.g.

P1) You should not kill people.
P2) Sally is a person.
C) You should not kill Sally.

This would be a totally legitimate inference for a deontologist.
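For what it's worth, a minimal sketch in Lean 4 (hypothetical predicate names) of how such an inference goes through as ordinary universal instantiation once the premises are formalized:

```lean
-- Minimal sketch (hypothetical names): the inference above as universal
-- instantiation, with "Sally is a person" encoded as sally having type Person.
example (Person : Type) (OughtNotKill : Person → Prop)
    (P1 : ∀ x : Person, OughtNotKill x)    -- P1: you should not kill people
    (sally : Person) :                     -- P2: Sally is a person
    OughtNotKill sally :=                  -- C: you should not kill Sally
  P1 sally
```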

comment by JMiller · 2013-02-11T15:12:52.621Z · LW(p) · GW(p)

Viliam! Thank you!

That was very clear, except for one thing. It seems like you are conflating human desires with morality. The obvious (to me) question is: what happens if, instead of currently loving other people and being aware that I may become a psychopath later, I am a psychopath now and realize I may be disposed to become a lover of people later?

I do see how any moral theory becomes deontological at some level. But because the world is complex and the human brain is crazy, I feel like that level ought to be as high as possible in order to obtain the largest amount of sophistication and mental awareness of our actions and their consequences. (More on this in a second). Perhaps I am looking at it backwards, and the simplest, most direct moral rules would be better. While that might be true, I feel like if all moral agents were to introspect and reason properly, such a paradigm would not satisfy us. Though I claim no awesome reasoning or introspection powers, it is unsatisfying to me, at least.

Above I mention consequences again. I don't think this is question-begging, because I think I can turn your argument around. Any consequentialism says "don't do X because X would cause Y and Y is bad". Any morality, including deontological theories, can be interpreted as saying the same thing, one level down. So: "don't do X, not because X would cause Y, but because it might and we aren't sure, so let's just say X is bad. Therefore, don't do X." I don't think this is faulty reasoning at all. In fact, I think it is a safe bet most of the time (very much in the spirit of Eliezer's Ethical Injunctions). What concerns me about deontology is that it seems absolute to me. This is why I prefer the injunctions over old-school deontology: they take into account our error-prone and biased brains.

Thanks for the discussion!

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-02-11T17:35:28.861Z · LW(p) · GW(p)

It seems like you are conflating human desires with morality.

I am probably using the words incorrectly, because I don't know how philosophers define them, or even whether they can agree on a definition. I essentially used "morality" to mean "any system which says what you should do", and added an observation that if you take literally any such system, most of them will not fit your intuition of morality. Why? Because they recommend things you find repulsive or just stupid. But this is a fact about you, or about humans in general, so in order to find "a system which says what you should do, and it makes sense and is not repulsive", you must study humans. Specifically, human desires.

In other words, I define "morality" as "a system of 'shoulds' that humans can agree with".

Paperclip maximizers, capable of reflexivity and knowing game theory, could derive their own "system of 'shoulds'" they could agree with. It could include rules like "don't destroy your neighbor's two paperclips just to build one yourself", which would be similar to our morality, but that's because the game theory is the same.

But it would be game theory plus paperclip-maximizer desires, so even if it contained some concepts of friendship and non-violence (cooperating with each other in iterated Prisoner's Dilemmas), which would make all human hippies happy, when given the choice of "sending all sentient beings into an eternal hell of maximum suffering in exchange for a machine that tiles the universe with paperclips", that would seem to them like a great idea. Don't ever forget it when dealing with paperclip maximizers.

what happens if [...] I am a psychopath now and realize I may be disposed to become a lover of people later?

If I am a psychopath now, I don't give a **** about morality, do I? So I decide according to whatever psychopaths consider important. (I guess it would be according to my whim at the moment.)

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-11T18:42:45.509Z · LW(p) · GW(p)

In other words, I define "morality" as "a system of 'shoulds' that humans can agree with".

If you want a name for your position on this (which, as far as I can tell, is very well put), a suitable philosophical equivalent is Moral Contractualism, a la Thomas Scanlon in "What We Owe To Each Other." He defines certain kinds of acts as morally wrong thus:

An act is wrong if and only if any principle that permitted it would be one that could reasonably be rejected by people moved to find principles for the general regulation of behaviour that others, similarly motivated, could not reasonably reject.

comment by [deleted] · 2013-02-09T05:11:11.951Z · LW(p) · GW(p)

I've taken an introductory philosophy class and my experience was somewhat similar. I remember my head getting messed with somewhat in the short term, as not-so-worthwhile lines of thought chewed up more of my brainpower than they probably deserved, but I think in the long term this hasn't stuck with me. I ended up coping with that class the same way I used to cope with Sunday school: by using it as an opportunity to note real examples of the different failure modes I've read about, in real time. I don't think you have to worry too much about being corrupted.

Replies from: JMiller
comment by JMiller · 2013-02-09T05:35:23.342Z · LW(p) · GW(p)

Thanks for the encouragement!

Replies from: None
comment by [deleted] · 2013-02-09T05:45:35.278Z · LW(p) · GW(p)

I might as well also mention that Luke recommended a chapter from Miller's Contemporary Metaethics

pdfs linked here-

Specifically, to read the section on ethical reductionism, in this comment: http://lesswrong.com/lw/g9l/course_recommendations_for_friendliness/89b5

comment by Larks · 2013-02-07T22:21:02.028Z · LW(p) · GW(p)

which reductionist moral theories approximate what LW rationalists tend to think are correct

We tend to like Harry Frankfurt.

Replies from: BerryPick6, None, Manfred
comment by BerryPick6 · 2013-02-07T22:45:37.534Z · LW(p) · GW(p)

I only know him from the Frankfurt Cases regarding the Free Will debate. What reductionist moral theories is he a proponent of?

comment by Manfred · 2013-02-07T23:23:52.192Z · LW(p) · GW(p)

Haven't heard of him, but it sure looks like you have a good point.

comment by RomeoStevens · 2013-02-07T21:19:01.781Z · LW(p) · GW(p)

My cursory take is that maybe you're mixing descriptive and prescriptive ethics?

Replies from: JMiller
comment by JMiller · 2013-02-07T21:24:19.660Z · LW(p) · GW(p)

Thanks for the idea. In a way, I think they are similar. Normative ethics is traditionally defined as "the way things ought to be" and descriptive ethics as "the way people think things are". But the way things ought to be is only the way things are on another level.

If you mean that I am confusing what people think with what is the case, I am having difficulty understanding what from my comments led you to think that.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-02-08T01:14:14.080Z · LW(p) · GW(p)

I don't think of them as being in the same bucket. Descriptive ethics to me is something like "we noticed that people claimed to value one thing and did something else, so we did an experiment to test it." And prescriptive ethics is decision theories.

What gave me the idea is this sentence: "as opposed to taking account of real effects of moral ideas in the actual world. "

which sounds like thinking about descriptive ethics while the post title refers to prescriptive ethics.

Replies from: bogus, JMiller
comment by bogus · 2013-02-08T02:45:53.269Z · LW(p) · GW(p)

FYI, normative ethics tends to include a lot more than decision theory. It also includes Kantian reasoning based on so-called "principles of rational agency". And, in practice, it includes moral reasoning based on the morals and values that human people and societies broadly agree on. The informal evaluation of "right versus right" that we do in order to solve disputes in everyday life (assuming that these do not turn into full-blown legal or political disputes) would also fall under normative ethics, since we do broadly agree on how such "balancing" should work in general, even though we'll disagree about specific outcomes.

FWIW, I think the term "descriptive ethics" should be tabooed and deprecated, because it is mildly "Othering" and patronizing. Just call it morality. Nobody thinks they are doing "descriptive ethics" when they do everyday moral reasoning based on their peculiar values. But that's what it gets called by moral philosophers/ethicists, since "describing" morals from an outside, supposedly objective POV is what their work involves.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-02-08T02:55:04.748Z · LW(p) · GW(p)

those aren't decision theories?

Replies from: bogus
comment by bogus · 2013-02-08T03:08:53.891Z · LW(p) · GW(p)

Um, no? To me, 'decision theory' means a formal object such as CDT or UDT/TDT. These have little to do with ethics per se, even though TDT apparently does capture some features of ethical reasoning, such as the "reflective" character of the Kantian categorical imperative.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-02-08T03:14:43.050Z · LW(p) · GW(p)

Fuzzy, heuristics-based ethical reasoning seems to me to involve some screening off of the space of possible decision theories the agent regards as valid.

After all, our work on decision theories is to get everything to add up to normality (in the "I don't know what friendliness is, but I know it when I see it" sense).

Replies from: bogus
comment by bogus · 2013-02-08T03:36:19.693Z · LW(p) · GW(p)

Perhaps we have different ideas of what "ethics" involves. To me, ethical reasoning is at its core a way of informally solving disputes by compromising among value systems. This is what Kant seems to be getting at with his talk of different "principles of rational agency". We also include common human values as a part of "normative ethics", but strictly speaking that should perhaps be categorized as morality.

comment by JMiller · 2013-02-08T01:56:28.014Z · LW(p) · GW(p)

Ah, I see. What I meant by that is the tendency to argue about terminology and not content, that's all. Sorry for the confusion.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-02-08T03:11:08.911Z · LW(p) · GW(p)

I think arguing over semantics is a natural reaction to edge cases. The case is extreme, so suddenly the boundaries of what we mean by the word "child" become magnified in importance.

comment by JMiller · 2013-02-07T20:44:37.178Z · LW(p) · GW(p)

I realize that my ideas and questions can themselves already be "diseased". I'd like to try to be open to re-learn, though I understand the process may be painful. If you decide to help me, I only ask that you can handle the frustration of trying to teach someone who knows bad tricks.

(I have had the excruciating experience of trying to teach students who have previously learned the wrong skills. Granted, this was from a physical, martial-arts perspective, but it strikes me that mental "muscle memory" is just as harmful and stubborn as actual muscle memory, if not more so.)

comment by BerryPick6 · 2013-02-07T20:39:44.882Z · LW(p) · GW(p)

I found myself extremely confused by nyan's recent posts on the matter, but I think I understood the other sequences you mentioned quite well (particularly Luke's.) What, specifically, do you find yourself confused about?

As an aside, I'm also currently studying philosophy, and although I started out with a heavy focus on moral-phil, I've steadily found myself drawn to the more 'techy' fields and away from things like ethics...

Replies from: None, JMiller
comment by [deleted] · 2013-02-07T20:53:24.616Z · LW(p) · GW(p)

I found myself extremely confused by nyan's recent posts on the matter,

This is bad. What confused you? Anything you want explained?

Replies from: JMiller, BerryPick6
comment by JMiller · 2013-02-07T21:14:10.489Z · LW(p) · GW(p)

Actually, I think the most recent one is very useful. I especially like the idea of measuring utilities as probabilities (re: whales vs. orgasms).

I found the original whale one you linked above to be the most confusing, because it highlights issues without explaining how to go about measuring "awesomeness". I suppose you are right though in that nobody really knows what is correct.

Note, when I say I am confused, I don't mean that your writing in particular caused confusion. Rather, it is that with so many different opinions and sources of ideas, I am having difficulty deciding what I ought to think to be true.
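
For what it's worth, the way I understood the "utilities as probabilities" idea is the indifference-probability trick from von Neumann-Morgenstern utility theory, if I have that right. A minimal sketch with made-up numbers (the outcomes and values below are purely hypothetical):

```python
# Sketch of the von Neumann-Morgenstern "standard gamble" idea, with made-up
# numbers: an outcome's utility is the probability p at which you would be
# indifferent between that outcome for sure and a lottery giving the best
# outcome with probability p (and the worst outcome otherwise).

utilities = {            # hypothetical indifference probabilities, not real data
    "seeing a whale": 0.7,
    "a good meal":    0.3,
    "nothing":        0.1,
}

def expected_utility(lottery):
    """lottery: list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * utilities[outcome] for p, outcome in lottery)

sure_meal    = [(1.0, "a good meal")]
whale_gamble = [(0.5, "seeing a whale"), (0.5, "nothing")]

print(expected_utility(sure_meal))     # 0.3
print(expected_utility(whale_gamble))  # 0.4 -> the gamble looks better
```

The point of eliciting utilities this way is that once the indifference probabilities are pinned down, any two gambles can be compared by expected utility on the same scale.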

Replies from: None
comment by [deleted] · 2013-02-07T22:21:34.020Z · LW(p) · GW(p)

it highlights issues without explaining how to go about measuring "awesomeness".

Oops. If "measure" comes anywhere near that post, I failed to communicate the point.

We don't have something we can measure yet. There is no procedure that can be understood on the intellectual/verbal level to calculate what is right. Thinking about it on the verbal level is all sorts of confusing (see for example, everything written about the topic). However, we do have a hardware implementation of approximately what we want: our moral intuitions. The trick is to invoke these moral intuitions without invoking all the confusion left over from trying to think about it verbally. Invoking the hardware intuitions through the "awesome" verbal concept bypasses the cruft attached to "morality", "right", etc.

This is of course not a complete solution, as it is not explicit, and our moral intuitions are full of bugs, and "awesome" isn't quite right. Also, as we use this, "awesome" may become similarly corrupted.

Replies from: JMiller
comment by JMiller · 2013-02-07T22:33:00.929Z · LW(p) · GW(p)

Actually, given your above comment, it is my desire to "measure" that is the problem. Your post DID do a good job of staying away from the concept, which is what you intended. I didn't realize that was part of the point, but now I see what you mean. Thanks for clearing that up for me.

"See for example, everything written about the topic"

Nice.

comment by BerryPick6 · 2013-02-07T20:57:55.350Z · LW(p) · GW(p)

This is bad. What confused you? Anything you want explained?

I'm still working through your most recent one, and I feel like coming up with questions would be unfair before I (a) reread it and (b) process and isolate my confusion.

If I still find myself with questions after that, I will take you up on your offer. :)

comment by JMiller · 2013-02-07T20:47:24.230Z · LW(p) · GW(p)

So have I. I am a lot more comfortable with logic and computability. Right now, the debate between consequentialism and deontology is hard for me to grasp, because I feel like by tabooing words I can rewrite any dilemma to sympathize with either theory.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-07T20:52:40.169Z · LW(p) · GW(p)

Right now, the debate between consequentialism and deontology is hard for me to grasp, because I feel like by tabooing words I can rewrite any dilemma to sympathize with either theory.

If I can recommend a textbook (I don't know how acceptable a solution that is, but whatever), Normative Ethics helped me overcome a lot of confusion about terms and theories and realize that it's not all just pure wordplay. It definitely helped me get a better handle on the specifics of the debate between the various normative moral positions.

Replies from: JMiller
comment by JMiller · 2013-02-07T21:10:39.670Z · LW(p) · GW(p)

Thanks for the recommendation. Parts of it are required reading for two of my courses.