Deontological Decision Theory and The Solution to Morality
post by Tesseract · 2011-01-10T16:15:22.184Z · LW · GW · Legacy · 92 comments
Asking the Question
Until very recently, I was a hedonic utilitarian. That is, I held ‘happiness is good’ as an axiom – blurring the definition a little by pretending that good emotions other than strict happiness still counted because it made people “happy” to have them – and built up my moral philosophy from there. There were a few problems I couldn’t quite figure out, but by and large, it worked: it produced answers that felt right, and it was the most logically consistent moral system I could find.
But then I read Three Worlds Collide.
The ending didn’t fit within my moral model: it was a scenario in which making people happy seemed wrong. Which raised the question: What’s so great about happiness? If people don’t want happiness, how can you call it good to force it on them? After all, happiness is just a pattern of neural excitation in the brain; it can’t possibly be an intrinsic good, any more than the pattern that produces the thought “2+2=4”.
Well, people like being happy. Happiness is something they want. But it’s by no means all they want: people also want mystery, wonder, excitement, and many other things – and so those things are also good, quite independent of their relation to the specific emotion ‘happiness’. If they also desire occasional sadness and pain, who am I to say they’re wrong? It’s not moral to make people happy against their desires – it’s moral to give people what they want. (Voila, preference utilitarianism.)
But – that’s not a real answer, is it?
If the axiom ‘happiness is good’ didn’t match my idea of morality, that meant I wasn’t really constructing my morality around it. Replacing that axiom with ‘preference fulfillment is good’ would make my logic match my feelings better, but it wouldn’t give me a reason to have those feelings in the first place. So I had to ask the next question: Why is preference fulfillment good? What makes it “good” to give other people what they want?
Why should we care about other people at all?
In other words, why be moral?
~
Human feelings are a product of our evolutionary pressures. Emotions, the things that make us human, are there because they caused the genes that promoted them to become more prevalent in the ancestral environment. That includes the emotions surrounding moral issues: the things that seem so obviously right or wrong seem that way because that feeling was adaptive, not because of any intrinsic quality.
This makes it impossible to trust any moral system based on gut reaction, as most people’s seem to be. Our feelings of right and wrong were engineered to maximize genetic replication, so why should we expect them to tap into objective realms of ‘right’ and ‘wrong’? And in fact, people’s moral judgments tend to be suspiciously biased towards their own interests, though proclaimed with the strength of true belief.
More damningly, such moralities are incapable of coming up with a correct answer. One person can proclaim, say, homosexuality to be objectively right or wrong everywhere for everyone, with no justification except how they feel about it, and in the same breath say that it would still be wrong if they felt the other way. Another person, who does feel the other way, can deny it with equal force. And there’s no conceivable way to decide who’s right.
I became a utilitarian because it seemed to resolve many of the problems associated with purely intuitive morality – it was internally consistent, it relied on a simple premise, and it could provide its practitioners a standard of judgment for moral quandaries.
But even utilitarianism is based on feeling. This is especially true for hedonic utilitarianism, but only a little less so for preference utilitarianism – we call people getting what they want ‘good’ because it feels good. It lights up our mirror neurons, triggers the altruistic instincts encoded into us by evolution. But evolution’s purposes are not our own (we have no particular interest in our genes’ replication) and so it makes no sense to adopt evolution’s tools as our ultimate goals.
If you can’t derive a moral code from evolution, then you can’t derive it from emotion, the tool of evolution; if you can’t derive morality from emotion, then you can’t say that giving people what they want is objectively good because it feels good; if you can’t do that, you can’t be a utilitarian.
Emotions, of course, are not bad. Even knowing that love was designed to transmit genes, we still want love; we still find it worthwhile to pursue, even knowing that we were built to pursue it. But we can’t hold up love as something objectively good, something that everyone should pursue – we don’t condemn the asexual. In the same way, it’s perfectly reasonable to help other people because it makes you feel good (to pursue warm fuzzies for their own sake), but that emotional justification can’t be used as the basis for a claim that everyone should help other people.
~
So if we can’t rely on feeling to justify morality, why have it at all?
Well, the obvious alternative is that it’s practical. Societies populated by moral individuals – individuals who value the happiness of others – work better than those filled with selfish ones, because the individually selfless acts add up to greater utility for everyone. One only has to imagine a society populated by purely selfish individuals to see why pure selfishness wouldn’t work.
This is a facile answer. First, if this is the case, why would morality extend outside of our societies? Why should we want to save the Babyeater children?
But more importantly, how is it practical for you? There is no situation in which the best strategy is not being purely selfish. If reciprocal altruism makes you better off, then it’s selfishly beneficial to be reciprocally altruistic; if you value warm fuzzies, then it’s selfishly beneficial to get warm fuzzies; but by definition, true selflessness of the kind demanded by morality (like buying utilons with money that could be spent on fuzzies) decreases your utility – it loses. Even if you get a deep emotional reward from helping others, you’re strictly better off being selfish.
So if feelings of ‘right’ and ‘wrong’ don’t correspond to anything except what used to maximize inclusive genetic fitness, and having a moral code makes you indisputably worse off, why have one at all?
Once again: Why be moral?
~
The Inconsistency of Consequentialism
Forget all that for a second. Stop questioning whether morality is justified and start using your moral judgment again.
Consider a consequentialist student being tempted to cheat on a test. Getting a good grade is important to him, and he can only do that if he cheats; cheating will make him significantly happier. His school trusts its students, so he’s pretty sure he won’t get caught, and the test isn’t curved, so no one else will be hurt by him getting a good score. He decides to cheat, reasoning that it’s at least morally neutral, if not a moral imperative – after all, his cheating will increase the world’s utility.
Does this tell us cheating isn’t a problem? No. If cheating became widespread, there would be consequences – tighter test security measures, suspicion of test grades, distrust of students, et cetera. Cheating just this once won’t hurt anybody, but if cheating becomes expected, everyone is worse off.
But wait. If all the students are consequentialists, then they’ll all decide to cheat, following the same logic as the first. And the teachers, anticipating this (it’s an ethics class), will respond with draconian anti-cheating measures – leaving overall utility lower than if no one had been inclined to cheat at all.
Consequentialism called for each student to cheat because cheating would increase utility, but the fact that consequentialism called for each student to cheat decreased utility.
Imagine the opposite case: a class full of deontologists. Every student would be horrified at the idea of violating their duty for the sake of mere utility, and accordingly not a one of them would cheat. Counter-cheating methods would be completely unnecessary. Everyone would be better off.
In this situation, a deontologist class outcompetes a consequentialist one in consequentialist terms. The best way to maximize utility is to use a system of justification not based on maximizing utility. In such a situation, consequentialism calls for itself not to be believed. Consequentialism is inconsistent.
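(To put illustrative numbers on the asymmetry, here is a minimal Python sketch; the private gain from cheating and the per-student cost of a crackdown are assumed values, not anything derived above.)

    def my_utility(i_cheat, everyone_else_cheats):
        # Illustrative payoffs: a private gain from cheating, and a per-student
        # cost once widespread cheating triggers anti-cheating measures.
        gain = 1.0 if i_cheat else 0.0
        crackdown = 3.0 if everyone_else_cheats else 0.0
        return gain - crackdown

    # Treating my decision as independent of everyone else's:
    #   my_utility(True, False) = 1.0 > my_utility(False, False) = 0.0, so cheat.
    # But every student running the same computation reaches the same answer,
    # so the only outcomes actually on offer are the diagonal ones:
    #   my_utility(True, True) = -2.0 < my_utility(False, False) = 0.0.
    # The choice that looks best decision-by-decision loses when universally made.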
So what’s a rational agent to do?
The apparent contradiction in this case results from thinking about beliefs and actions as though they were separate. Arriving at a belief is an action in itself, one which can have effects on utility. One cannot, therefore, arrive at a belief about utility without considering the effects on utility that holding that belief would have. If arriving at the belief “actions are justified by their effect on utility” doesn’t maximize utility, then you shouldn’t arrive at that belief.
However, the ultimate goal of maximizing utility cannot be questioned. Utility, after all, is only a word for “what is wanted”, so no agent can want to do anything except maximize utility. Moral agents include others' utility as equal to their own, but their goal is still to maximize utility.
Therefore the rule which should be followed is not “take the actions which maximize utility”, but “arrive at the beliefs which maximize utility.”
But there is an additional complication: when we arrive at beliefs by logic alone, we are effectively deciding not only for ourselves, but for all other rational agents, since the answer which is logically correct for us must also be logically correct for each of them. In this case, the correct answer is the one which maximizes utility – so our logic must take into account the fact that every other computation will produce the same answer. Therefore we can expand the rule to “arrive at the beliefs which would maximize utility if all other rational agents were to arrive at them (upon performing the same computation).”
[To the best of my logical ability, this rule is recursive and therefore requires no further justification.]
This rule requires you to hold whatever beliefs will (conditional upon them being held) lead to the best results – even when the actions those beliefs produce don’t, in themselves, maximize utility. In the case of the cheating student, the optimal belief is “don’t cheat” because that belief being held by all the students (and the teacher simulating the students’ beliefs) produces the best results, even though cheating would still increase utility for each individual student. The applied morality becomes deontological, in the sense that actions are judged not by their effect on utility but by their adherence to the pre-set principle.
The upshot of this system is that you have to decide ahead of time whether an approach based on duty (that is, on every agent who considers the problem acting the way that would produce the best consequences if every agent who considers the problem were to act the same way) or on utility (individual computation of consequences) actually produces better consequences. And if you pick the deontological approach, you have to ‘forget’ your original goal – to commit to the rule even at the cost of actual consequences – because if it’s rational to pursue the original goal, then it won’t be achieved.
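(A minimal sketch of the rule itself, reusing the assumed payoffs from the cheating example: treat candidate beliefs as policies, score each policy by the utility it would yield if every agent performing this computation adopted it, and keep the best one.)

    def utility_if_universal(policy):
        # Illustrative numbers: universal cheating triggers countermeasures
        # (1.0 - 3.0 per student); universal abstention needs none.
        return {"cheat": 1.0 - 3.0, "abstain": 0.0}[policy]

    def best_policy(candidates):
        # "Arrive at the beliefs which would maximize utility if all other
        # rational agents were to arrive at them."
        return max(candidates, key=utility_if_universal)

    print(best_policy(["cheat", "abstain"]))  # -> abstain, even though a lone
                                              # deviation to cheating would still
                                              # pay off for the deviator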
~
The Solution to Morality
Let’s return to the original question.
The primary effect of morality is that it causes individuals to value others’ utility as an end in itself, and therefore to sacrifice their own utility for others. It’s obvious that this is very good on a group scale: a society filled with selfless people, people who help others even when they don’t expect to receive personal benefit, is far better off than one filled with people who do not – in a Prisoner’s Dilemma writ large. To encourage that sort of cooperation (partially by design and partially by instinct), societies reward altruism and punish selfishness.
But why should you, personally, cooperate?
There are many, many times when you can do clearly better by selfishness than by altruism – by theft or deceit or just by not giving to charity. And why should we want to do otherwise? Our altruistic feelings are a mere artifact of evolution, like appendices and death, so why would we want to obey them?
Is there any reason, then, to be moral?
Yes.
Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off. If selfish utility maximizing is the correct answer for how to maximize selfish utility, selfish utility is not maximized. Therefore selfishness is the wrong answer. Each individual’s utility is maximized only if they deliberately discard selfish utility as the thing to be maximized. And the way to do that is for each one to adopt a duty to maximize total utility, not only their own – to be moral.
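(The same structure in one-shot Prisoner’s Dilemma form, with standard illustrative payoffs; the particular numbers are assumptions, the shape of the argument is the point.)

    # Row player's payoff for (my_move, other_move); C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    # Holding the other player's move fixed, defection always looks better:
    for m in ("C", "D"):
        assert PAYOFF[("D", m)] > PAYOFF[("C", m)]

    # But if the correct answer is a single answer that every rational agent
    # reaches, only the diagonal outcomes are on the table, and there
    # cooperation wins:
    assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]  # 3 > 1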
And having chosen collective maximization over individual competition – duty over utility – you can no longer even consider your own benefit to be your goal. If you do so, holding morality as a means to selfishness’s end, then everyone does so, and cooperation comes crashing down. You have to ‘forget’ the reason for having morality, and hold it because it's the right thing to do. You have to be moral even to the point of death.
Morality, then, is calculated blindness – a deliberate ignorance of our real ends, meant to achieve them more effectively. Selflessness for its own sake, for selfishness's sake.
[This post lays down only the basic theoretic underpinnings of Deontological Decision Theory morality. My next post will focus on the practical applications of DDT in the human realm, and explain how it solves various moral/game-theoretic quandaries.]
92 comments
Comments sorted by top scores.
comment by Vladimir_Nesov · 2011-01-10T16:50:04.350Z · LW(p) · GW(p)
Most of your questions are already answered on the site, better than you attempt answering them. Read up on complexity of value, metaethics sequence, decision theory posts (my list) and discussion of Prisoner's Dilemma in particular.
What you describe as egoist consequentialist's reasoning is actually reasoning according to causal decision theory, and when you talk about influence of beliefs on consequences, this can be seen as considering a form of precommitment (which allows patching some of CDT's blind spots). If you use TDT/UDT/ADT instead, the problem goes away, and egoistic consequentialists start cooperating.
Replies from: SilasBarta, Tesseract↑ comment by SilasBarta · 2011-01-10T18:21:35.204Z · LW(p) · GW(p)
I agree that Tesseract's post needs more familiarity with the decision theory articles (mainly those taking apart Newcomb's problem, esp. this). However, the complexity of value and metaethics sequences don't help much. Per an EY post summarizing them I can't find atm, the only relevant insight here from those articles is that, "Your ethics are part of your values, so your actions should take them into account as well."
This leaves unanswered the questions of a) why we classify certain parts of our values as "ethics", and b) whether those ethics are properly a terminal or instrumental value. These are what I tried to address in this article, with my answers being that a) Those are the parts where we intuitively rely on acausal "consequences" (SAMELs in the article), and b) instrumental.
Replies from: Tesseract, Vladimir_Nesov, shokwave↑ comment by Tesseract · 2011-01-11T21:01:17.128Z · LW(p) · GW(p)
Your article is an excellent one, and makes many of the same points I tried to make here.
Specifically,
...in Dilemma B, an ideal agent will recognize that their decision to pick their favorite ice cream at the expense of another person suggests that others in the same position will do (and have done) likewise, for the same reason.
is the same idea I was trying to express with the 'cheating student' example, and then generalized in the final part of the post, and likewise the idea of Parfitian-filtered decision theory seems to be essentially the same as the concept in my post of ideally-rational agents adopting decision theories which make them consciously ignore their goals in order to achieve them better. (And in fact, I was planning to include in my next post how this sort of morality solves problems like Parfit's Hitchhiker when functionally applied.)
Upon looking back on the replies here (although I have yet to read through all the decision theory posts Vladimir recommended), I realize that I haven't been convinced that I was wrong -- that there's a flaw in my theory I haven't seen -- only that the community strongly disapproves. Given that your post and mine share many of the same ideas, and yours is at +21 while mine is at -7, I think that the differences are that a. mine was seen as presumptuous (in the vein of the 'one great idea'), and b. I didn't communicate clearly enough (partially because I haven't studied enough terminology) and include answers to enough anticipated objections to overcome the resistance engendered by a. I think I also failed to clearly make the distinction between this as a normative strategy (that is, one I think ideal game-theoretic agents would follow, and a good reason for consciously deciding to be moral) and as a positive description (the reason actual human beings are moral.)
However, I recognize that even though I haven't yet been convinced of it, there may well be a problem here that I haven't seen but would if I knew more about decision theory. If you could explain such a problem to me, I would be genuinely grateful -- I want to be correct more than I want my current theory to be right.
Replies from: SilasBarta, SilasBarta↑ comment by SilasBarta · 2011-01-12T22:05:29.621Z · LW(p) · GW(p)
Okay, on re-reading your post, I can be more specific. I think you make good points (obviously, because of the similarity with my article), and it would probably be well-received if submitted here in early '09. However, there are cases where you re-treaded ground that has been discussed before without reference to the existing discussions and concepts:
The apparent contradiction in this case results from thinking about beliefs and actions as though they were separate. Arriving at a belief is an action in itself, one which can have effects on utility. One cannot, therefore, arrive at a belief about utility without considering the effects on utility that holding that belief would have. If arriving at the belief “actions are justified by their effect on utility” doesn’t maximize utility, then you shouldn’t arrive at that belief.
Here you're describing what Wei Dai calls "computational/logical consequences" of a decision in his UDT article.
This rule requires you to hold whatever beliefs will (conditional upon them being held) lead to the best results – even when the actions those beliefs produce don’t, in themselves, maximize utility.
Here you're describing EY's TDT algorithm.
The applied morality becomes deontological, in the sense that actions are judged not by their effect on utility but by their adherence to the pre-set principle.
The label of deontological doesn't quite fit here, as you don't advocate adhering to a set of categorical "don't do this" rules (as would be justified in a "running on corrupted hardware" case), but rather, consider a certain type of impact your decision has on the world, which itself determines what rules to follow.
Finally, I think you should have clarified that the relationship between your decision to (not) cheat and others' decision is not a causal one (though still sufficient to motivate your decision).
I don't think you deserved -7 (though I didn't vote you up myself). In particular, I stand by my initial comment that, contra Vladimir, you show sufficient assimilation of the value complexity and meta-ethics sequences. I think a lot of the backlash is just from the presentation -- not the format, or writing, but needing to adapt it to the terminology and insights already presented here. And I agree that you're justified in not being convinced you're wrong.
Hope that helps.
EDIT: You also might like this recent discussion about real-world Newcomblike problems, which I intend to come back to more rigorously
Replies from: Tesseract↑ comment by Tesseract · 2011-01-13T03:37:51.077Z · LW(p) · GW(p)
Very much, thank you. Your feedback has been a great help.
Given that others arrived at some of these conclusions before me, I can see why there would be disapproval -- though I can hardly feel disappointed to have independently discovered the same answers. I think I'll research the various models more thoroughly, refine my wording (I agree with you that using the term 'deontology' was a mistake), and eventually make a more complete and more sophisticated second attempt at morality as a decision theory problem.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-01-13T04:34:04.437Z · LW(p) · GW(p)
Great, glad to hear it! Looking forward to your next submission on this issue.
↑ comment by SilasBarta · 2011-01-11T22:11:16.041Z · LW(p) · GW(p)
Thanks for the feedback. Unfortunately, the discussion on my article was dominated by a huge tangent on utility functions (which I talked about, but which was done in a way irrelevant to the points I was making). I think the difference was that I plugged my points into the scenarios and literature discussed here. What bothered me about your article was that it did not carefully define the relationship between your decision theory and the ethic you are arguing for, though I will read it again to give a more precise answer.
↑ comment by Vladimir_Nesov · 2011-01-10T18:50:24.859Z · LW(p) · GW(p)
The idea of complexity of values explains why "happiness" or "selfishness" can't be expected to capture the whole thing: when you talk about "good", you mean "good" and not some other concept. To unpack "good", you have no other option than to list all the things you value, and such list uttered by a human can't reflect the whole thing accurately anyway.
Metaethics sequence deals with errors of confusing moral reasons and historical explanations: evolution's goals are not your own and don't have normative power over your own goals, even if there is a surface similarity and hence some explanatory power.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-01-10T19:00:49.553Z · LW(p) · GW(p)
I agree that those are important things to learn, just not for the topic Tesseract is writing about.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-01-10T19:03:35.975Z · LW(p) · GW(p)
I agree that those are important things to learn, just not for the topic Tesseract is writing about.
What do you mean? Tesseract makes these exact errors in the post, and those posts explain how not to err there, which makes the posts directly relevant.
Replies from: SilasBarta↑ comment by SilasBarta · 2011-01-10T19:14:10.171Z · LW(p) · GW(p)
Tesseract's conclusion is hindered by not having read about the interplay between decision theory and values (i.e. how to define a "selfish action", which consequences to take into consideration, etc.), not the complexity of value as such. Tesseract would be making the same errors on decision theory even if human values were not so complex, and decision theory is the focus of the post.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-01-10T19:48:59.153Z · LW(p) · GW(p)
Might not be relevant to "Tesseract's conclusion", but is relevant to other little conclusions made in the post along the way, even if they are all independent and don't damage each other.
↑ comment by shokwave · 2011-01-10T18:51:18.348Z · LW(p) · GW(p)
However, the complexity of value and metaethics sequences don't help much.
They may not have much in the way of factual conclusions to operate by, but they are an excellent introduction to how to think about ethics, morality, and what humans want - which is effectively the first and last thirds of this post.
Replies from: Vaniver↑ comment by Vaniver · 2011-01-10T19:39:43.850Z · LW(p) · GW(p)
they are an excellent introduction to how to think about ethics, morality, and what humans want
Huh? It struck me as pretty poor, actually.
Replies from: orthonormal, shokwave↑ comment by orthonormal · 2011-01-10T21:05:56.622Z · LW(p) · GW(p)
It's not well-constructed overall, but I wish I had a nickel for every time someone's huge ethical system turned out to be an unconscious example of rebelling within nature, or something that gets stuck on the pebblesorter example.
Replies from: Vaniver↑ comment by Vaniver · 2011-01-10T21:30:00.549Z · LW(p) · GW(p)
Right, but reversed stupidity is not intelligence. I mean, he can only get away with the following because he's left his terms so fuzzy as to be meaningless:
And if, in the meanwhile, it seems to you like I've just proved that there is no morality... well, I haven't proved any such thing. But, meanwhile, just ask yourself if you might want to help people even if there were no morality. If you find that the answer is yes, then you will later discover that you discovered morality.
That is, one would be upset if I said "there is a God, it's Maxwell's Equations!" because the concept of God and the concept of universal physical laws are generally distinct. Likewise, saying "well, morality is an inborn or taught bland desire to help others" makes a mockery of the word 'morality.'
Replies from: orthonormal↑ comment by orthonormal · 2011-01-10T21:49:11.655Z · LW(p) · GW(p)
I think your interpretation oversimplifies things. He's not saying "morality is an inborn or taught bland desire to help others"; he's rather making the claim (which he defers until later) that what we mean by morality cannot be divorced from contingent human psychology, choices and preferences, and that it's nonsense to claim "if moral sentiments and principles are contingent on the human brain rather than written into the nature of the universe, then human brains should therefore start acting like their caricatures of 'immoral' agents".
↑ comment by shokwave · 2011-01-11T04:06:37.839Z · LW(p) · GW(p)
I am not sure what you mean. Do you mean that the way Eliezer espouses thinking about ethics and morality in those sequences is a poor way of thinking about morality? Do you mean that Eliezer's explanations of that way are poor explanations? Both? Something else?
Replies from: Vaniver↑ comment by Vaniver · 2011-01-11T17:15:54.537Z · LW(p) · GW(p)
The methodology is mediocre, and the conclusions are questionable. At the moment I can't do much besides express distaste; my attempts to articulate alternatives have not gone well so far. But I am thinking about it, and actually just stumbled across something that might be useful.
Replies from: shokwave↑ comment by shokwave · 2011-01-12T03:15:38.765Z · LW(p) · GW(p)
The methodology is mediocre
I'm going to have to disagree with this. The methodology with which Eliezer approaches ethical and moral issues is definitely on par with or exceeding the philosophy of ethics that I've studied. I am still unsure whether you mean the methodology he espouses using, or the methods he applied to make the posts.
↑ comment by Tesseract · 2011-01-11T05:11:24.339Z · LW(p) · GW(p)
Your objection and its evident support by the community are noted, and therefore I have deleted the post. I will read further on the decision theory and its implications, as that seems to be a likely cause of error.
However, I have read the meta-ethics sequence, and some of Eliezer's other posts on morality, and found them unsatisfactory -- they seemed to me to presume that morality is something you should have regardless of the reason for it rather than seriously questioning the reasons for possessing it.
On the point of complexity of value, I was attempting to use the term 'utility' to describe human preferences, which would necessarily take into account complex values. If you could describe why this doesn't work well, I would appreciate the correction.
That said, I'm not going to contend here without doing more research first (and thank you for the links), so this will be my last post on the subject.
Replies from: ata, orthonormal↑ comment by ata · 2011-01-11T06:05:55.554Z · LW(p) · GW(p)
they seemed to me to presume that morality is something you should have regardless of the reason for it rather than seriously questioning the reasons for possessing it.
One thing to consider: Why do you need a reason to be moral/altruistic but not a reason to be selfish? (Or, if you do need a reason to be selfish, where does the recursion end, when you need to justify every motive in terms of another?)
↑ comment by orthonormal · 2011-01-13T23:09:39.782Z · LW(p) · GW(p)
On the topic of these decision theories, you might get a lot from the second half of Gary Drescher's book Good and Real. His take isn't quite the same thing as TDT or UDT, but it's on the same spectrum, and the presentation is excellent.
comment by Alicorn · 2011-01-10T17:41:29.768Z · LW(p) · GW(p)
I'd just like to announce publicly that my commitment to deontology is not based on conflating consequentialism with failing at collective action problems.
Replies from: WrongBot, cousin_it↑ comment by WrongBot · 2011-01-10T19:25:35.803Z · LW(p) · GW(p)
Indeed, it would be quite odd to adopt deontology because one expected it to have positive consequences.
Replies from: Perplexed, shokwave↑ comment by shokwave · 2011-01-10T19:37:45.817Z · LW(p) · GW(p)
Not as odd as you might think. I've made the point to several people that I am a consequentialist who, by virtue of only dealing with minor moral problems, behaves like a deontologist most of the time. The immediate (and possibly long-term) negative consequences of breaking with deontological principles outweigh the improvement in consequences that the consequentialist action offers over the deontological action.
I imagine if Omega told you in no uncertain terms that one of the consequences of you being consequentialist is that one of your consequentialist decisions will have, unknown to you, horrific consequences that far outweigh all moral gains made - if Omega told you this, a consequentialist would desire to adopt deontology.
↑ comment by cousin_it · 2011-01-10T18:04:28.563Z · LW(p) · GW(p)
Can you give a simple example where your flavor of deontology conflicts with consequentialism?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T18:06:56.075Z · LW(p) · GW(p)
I don't push people in front of trolleys. (Cue screams of outrage!)
Replies from: JoshuaZ, Clippy, cousin_it↑ comment by Clippy · 2011-01-10T19:59:33.011Z · LW(p) · GW(p)
I'm even better: I don't think metal should be formed into trolleys or tracks in the first place.
Replies from: jimrandomh, Vladimir_Nesov↑ comment by jimrandomh · 2011-01-10T20:08:22.897Z · LW(p) · GW(p)
How would you transport ore from mines to refineries and metal from refineries to extruders, then? Some evils really are necessary. I prefer to focus on the rope, which ought not to be securing people to tracks.
↑ comment by Vladimir_Nesov · 2011-01-10T20:09:50.023Z · LW(p) · GW(p)
Please go away.
Replies from: Clippy↑ comment by cousin_it · 2011-01-10T18:15:15.063Z · LW(p) · GW(p)
How about the original form of the dilemma? Would you flip a switch to divert the trolley to a track with 1 person tied to it instead of 5?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T18:28:31.555Z · LW(p) · GW(p)
No.
(However, if there are 5 people total, and I can arrange for the train to run over only one of those same people instead of all five, then I'll flip the switch on the grounds that the one person is unsalvageable.)
Replies from: JGWeissman, jimrandomh, shokwave↑ comment by JGWeissman · 2011-01-10T18:42:10.576Z · LW(p) · GW(p)
I would predict that if the switch were initially set to send the trolley down the track with one person, you also would not flip it.
But suppose that you first see the two paths with people tied to the track, and you have not yet observed the position of the switch. As you look towards it, is there any particular position that you hope the switch is in?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T18:47:06.220Z · LW(p) · GW(p)
I might have such hopes, if I had a way to differentiate between the people.
(And above, when I make statements about what I would do in trolley problems, I'm just phrasing normative principles in the first person. Sufficiently powerful prudential considerations could impel me to act wrongly. For instance, I might switch a trolley away from my sister and towards a stranger just because I care about my sister more.)
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-01-10T19:02:20.434Z · LW(p) · GW(p)
Find a point of balance, where the decision swings. What about your sister vs. 2 people? Sister vs. a million people? Say the balance is found at N people, so you value N+1 strangers more than your sister, and N people less. Then N+1 people can be used in place of your sister in the variant with 1 person on the other track: just as you'd reroute the train away from your sister and toward a random stranger, you'd reroute it away from N+1 strangers (who are even more valuable) and toward one stranger.
Then, work back from that. If you reroute from N+1 people to 1 person, then there is a smallest number M of people such that you won't reroute from M people but would from all k>M. And there you have a weak trolley problem, closer to the original formulation.
(This is not the strongest problem with your argument, but an easy one, and a step towards seeing the central problem.)
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T19:17:19.818Z · LW(p) · GW(p)
Um, my prudential considerations do indeed work more or less consequentialistically. That's not news to me. They just aren't morality.
Replies from: jimrandomh, Vladimir_Nesov, Armok_GoB↑ comment by jimrandomh · 2011-01-10T19:25:26.769Z · LW(p) · GW(p)
Wait a second - is there a difference of definitions here? That sounds a lot like what you'd get if you started with a mixed consequentialist and deontological morality, drew a boundary around the consequentialist parts and relabeled them not-morality, but didn't actually stop following them.
Replies from: shokwave, Alicorn↑ comment by shokwave · 2011-01-10T19:29:20.164Z · LW(p) · GW(p)
I presume prudential concerns are non-moral concerns. In the way that maintaining an entertainment budget next to your charity budget while kids are starving in poorer countries is not often considered a gross moral failure, I would consider the desire for entertainment to be a prudential concern that overrides or outweighs morality.
↑ comment by Alicorn · 2011-01-10T19:28:23.580Z · LW(p) · GW(p)
I guess that would yield something similar. It usually looks to me like consequentialists just care about the thing I call "prudence" and not at all about the thing I call "morality".
Replies from: TheOtherDave, jimrandomh↑ comment by TheOtherDave · 2011-01-10T20:35:18.151Z · LW(p) · GW(p)
That seems like a reasonable summary to me. Does it seem to you that we ought to? (Care about morality, that is.)
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T21:10:39.836Z · LW(p) · GW(p)
I think you ought to do morally right things; caring per se doesn't seem necessary.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-10T21:15:58.596Z · LW(p) · GW(p)
Fair enough.
Does it usually look to you like consequentialists just do prudential things and not morally right things?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T21:23:18.018Z · LW(p) · GW(p)
Well, the vast majority of situations have no conflict. Getting a bowl of cereal in the morning is both prudent and right if you want cereal and don't have to do anything rights-violating or uncommonly destructive to get it. But in thought experiments it looks like consequentialists operate (or endorse operating) solely according to prudence.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-10T22:08:30.780Z · LW(p) · GW(p)
Agreed that it looks like consequentialists operate (1) solely according to prudence, if I understand properly what you mean by "prudence."
Agreed that in most cases there's no conflict.
I infer you believe that in cases where there is a conflict, deontologists do (or at least endorse) the morally right thing, and consequentialists do (or at least endorse) the prudent thing. Is that right?
I also infer from other discussions that you consider killing one innocent person to save five innocent people an example of a case with conflict, where the morally right thing to do is to not-kill an innocent person. Is that right?
===
(1) Or, as you say, at least endorse operating. I doubt that we actually do, in practice, operate solely according to prudence. Then again, I doubt that anyone operates solely according to the moral principles they endorse.
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T22:14:18.627Z · LW(p) · GW(p)
Right and right.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-10T22:33:56.591Z · LW(p) · GW(p)
OK, cool. Thanks.
If I informed you (1) that I would prefer that you choose to kill me rather than allow five other people to die so I could go on living, would that change the morally right thing to do? (Note I'm not asking you what you would do in that situation.)
==
(1) I mean convincingly informed you, not just posted a comment about it that you have no particular reason to take seriously. I'm not sure how I could do that, but just for concreteness, suppose I had Elspeth's power.
(EDIT: Actually, it occurs to me that I could more simply ask: "If I preferred...," given that I'm asking about your moral intuitions rather than your predicted behavior.)
Replies from: Alicorn↑ comment by jimrandomh · 2011-01-10T19:41:10.952Z · LW(p) · GW(p)
Does the importance of prudence ever scale without bound, such that it dominates all moral concerns if the stakes get high enough?
Replies from: Alicorn↑ comment by Vladimir_Nesov · 2011-01-10T19:44:35.059Z · LW(p) · GW(p)
Can't parse.
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T19:49:39.315Z · LW(p) · GW(p)
Easy reader version for consequentialists: I'm like a consequentialist with a cherry on top. I think this cherry on top is very, very important, and like to borrow moralistic terminology to talk about it. Its presence makes me a very bad consequentialist sometimes, but I think that's fine.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-01-10T20:06:13.146Z · LW(p) · GW(p)
Its presence makes me a very bad consequentialist sometimes, but I think that's fine.
If this cherry on top costs people lives, it's not "fine", it's evil incarnate. You should cut this part of yourself out without mercy.
(Compare to your Luminosity vampires, that are sometimes good, nice people, even if they eat people.)
Replies from: jimrandomh↑ comment by jimrandomh · 2011-01-10T20:36:14.507Z · LW(p) · GW(p)
I don't think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules need to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that's just a thought experiment; and I wouldn't consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-01-10T21:18:21.305Z · LW(p) · GW(p)
I don't think cutting out deontology entirely would be a good thing. I do think that the relative weights of deontological and consequentialist rules need to be considered, and that choosing inaction in a 5 lives:1 life trolley problem strongly suggests misweighting. But that's just a thought experiment; and I wouldn't consider it wrong to choose inaction in, say, a 1.2 lives:1 life trolley problem.
I agree (if not on 1.2 figure, then still on some 1+epsilon).
It's analogous to, say, prosecuting homosexuals. If some people feel bad emotions caused by others' homosexuality, this reason is weaker than disutility caused by the prosecution, and so sufficiently reflective bargaining between these reasons results in not prosecuting it (it's also much easier to adjust attitude towards homosexuality than one's sexual orientation, in the long run).
Here, we have moral intuitions that suggest adhering to moral principles and virtues, with disutility of overcoming them (in general, or just in high-stakes situations) bargaining against disutility of following them and thus making suboptimal decisions. Of these two, consequences ought to win out, as they can be much more severe (while the psychological disutility is bounded), and can't be systematically dissolved (while a culture of consequentialism could eventually make it psychologically easier to suppress non-consequentialist drives).
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T21:24:11.887Z · LW(p) · GW(p)
I think you mean "persecuting", although depending on what exactly you're talking about I suppose you could mean "prosecuting".
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2011-01-10T21:38:29.580Z · LW(p) · GW(p)
Unclear. I wanted to refer to legal acceptance as reflective distillation of social attitude as much as social attitude itself. Maybe still incorrect English usage?
↑ comment by Armok_GoB · 2011-01-13T22:49:50.314Z · LW(p) · GW(p)
I interpret this as meaning that he currently acts consequentialist, but feels guilty after breaking a deontological principle, would behave in a more deontological fashion if he had more willpower, and would self-modify to be purely deontological if he had the chance. Is this correct?
Replies from: Alicorn↑ comment by jimrandomh · 2011-01-10T19:00:11.955Z · LW(p) · GW(p)
What if it were 50 people? 500? 5*10^6? The remainder of all humanity?
My own position is that morality should incorporate both deontological and consequentialist terms, but they scale at different rates, so that deontology dominates when the stakes are very small and consequentialism dominates when the stakes are very large.
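(One illustrative way to write down a rule with that shape, as a bounded deontological term set against an unbounded consequentialist one; the functional form and constants below are assumptions chosen only to make the scaling visible, not anything specified in this comment.)

    import math

    def verdict(lives_saved, lives_taken):
        # Consequentialist term: grows without bound with the stakes (assumed linear).
        consequences = lives_saved - lives_taken
        # Deontological term: a penalty for actively killing, assumed large but bounded.
        deontic_penalty = 4.0 * math.tanh(lives_taken)
        return consequences - deontic_penalty  # act iff this is positive

    # With these assumed numbers, verdict(1.2, 1) is negative (inaction is
    # acceptable at small stakes) while verdict(5, 1) is positive (at 5:1 the
    # consequentialist term dominates).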
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T19:13:38.799Z · LW(p) · GW(p)
I am obliged to act based on my best information about the situation. If that best information tells me that:
I have no special positive obligations to anyone involved,
The one person is not willing to be run over to save the others (nor simply willing to be run over, e.g. because ey is suicidal), and
The one person is not morally responsible for the situation at hand or for any other wrong act such that they have waived their right to life,
Then I am obliged to let the trolley go. However, I have low priors on most humans being so very uninterested in helping others (or at least having an infrastructure to live in) that they wouldn't be willing to die to save the entire rest of the human species. So if that were really the stake at hand, the lone person tied to the track would have to be loudly announcing "I am a selfish bastard and I'd rather be the last human alive than die to save everyone else in the world!".
And, again, prudential concerns would probably kick in, most likely well before there were hundreds of people on the line.
Replies from: Yoreth↑ comment by Yoreth · 2011-01-10T21:42:16.253Z · LW(p) · GW(p)
Would it be correct to say that, insofar as you would hope that the one person would be willing to sacrifice his/her life for the cause of saving the 5*10^6 others, you yourself would pull the switch and then willingly sacrifice yourself to the death penalty (or whatever penalty there is for murder) for the same cause?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T21:46:36.176Z · LW(p) · GW(p)
I'd be willing to die (including as part of a legal sentence) to save that many people. (Not that I wouldn't avoid dying if I could, but if that were a necessary part of the saving-people process I'd still enact said process.) I wouldn't kill someone I believed unwilling, even for the same purpose, including via trolley.
↑ comment by shokwave · 2011-01-10T18:47:10.370Z · LW(p) · GW(p)
I feel like the difference between "No matter what, this person will die" and "No matter what, one person will die" is very subtle. It seems like you could arrange thought experiments that trample this distinction. Would that pose a problem?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T18:56:09.951Z · LW(p) · GW(p)
I don't remember the details, but while I was at the SIAI house I was presented with some very elaborate thought experiments that attempted something like this. I derived the answer my system gives and announced it and everyone made outraged noises, but they also made outraged noises when I answered standard trolley problems, so I'm not sure to what extent I should consider that a remarkable feature of those thought experiments. Do you have one in mind you'd like me to reply to?
Replies from: shokwave↑ comment by shokwave · 2011-01-10T19:16:50.302Z · LW(p) · GW(p)
Not really. I am mildly opposed to asking trolley problem questions. I mostly just observed that, in my brain, there wasn't much difference between:
Set of 5 people where either 1 dies or 5 die.
Set of 6 people where either 1 dies or 5 die.
I wasn't sure exactly what work the word 'unsalvageable' was doing: was it that this person cannot in principle be saved, so er life is 'not counted', and really you have
Set of 4 people where either none die or 4 die?
Replies from: Alicorn↑ comment by Alicorn · 2011-01-10T19:18:13.071Z · LW(p) · GW(p)
Yes, that's the idea.
Replies from: shokwave↑ comment by shokwave · 2011-01-10T19:21:44.780Z · LW(p) · GW(p)
I see. My brain automatically does the math for me and sees 1 or 5 as equivalent to none or four. I think it assumes that human lives are fungible or something.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-12T02:41:15.033Z · LW(p) · GW(p)
That's a good brain. Pat it or something.
comment by Psychohistorian · 2011-01-10T19:57:03.893Z · LW(p) · GW(p)
I believe the entire first half of this can be summarized with a single comic.
comment by Will_Sawin · 2011-01-10T22:24:13.456Z · LW(p) · GW(p)
Even causal decision theorists don't need Kant to act in a manner that benefits all.
If N changes, together, are harmful, then at least one of those changes must be harmful in itself - a consequentialist evil. Maybe the students all thought that their choice would be one of the helpful, not one of the harmful, ones, in which case they were mistaken, and performed poorly because of it - not something you can solve with decision theory.
The small increase of the chance of anti-cheating reactions, as well as the diminished opinion of the school's future students relative to their grades, etc., are hard to see, but it is obvious that they together must add up to the same size loss as the individual student's gain.
If they all decide at the same time, it depends on the details of the decision theory whether it is rational to get stuck in a bad equilibrium out of multiple possibilities. No matter which you use, the students having a simple conversation beforehand will solve it.
Replies from: jimrandomh↑ comment by jimrandomh · 2011-01-10T23:00:12.303Z · LW(p) · GW(p)
No, because they each have their own incompatible definitions of good. A conversation beforehand is only helpful if they have a means of enforcing agreements.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-11T02:08:18.164Z · LW(p) · GW(p)
If everyone's an altruistic consequentialist, they have the same definition of good. If not, they're evil.
Replies from: Perplexed, ArisKatsaris↑ comment by Perplexed · 2011-01-11T02:27:54.856Z · LW(p) · GW(p)
If everyone is an omniscient altruistic consequentialist, that is.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-11T12:01:15.407Z · LW(p) · GW(p)
If they have limited information on the good, wouldn't a conversation invoke a kind of ethical Aumann's Agreement Theorem?
In general, if everyone agrees about some morality and disagrees about what it entails, that's a disagreement over facts, and confusion over facts will cause problems in any decision theory.
Replies from: Perplexed↑ comment by Perplexed · 2011-01-11T15:04:26.697Z · LW(p) · GW(p)
wouldn't a conversation invoke a kind of ethical Aumann's Agreement Theorem?
Yes, if there is time for a polite conversation before making an ethical decision. Too bad that the manufacturers of trolley problems usually don't allow enough time for idle chit-chat.
Still, it is an interesting conjecture. The eAAT conjecture. Can we find a proof? A counter-example?
Here is an attempt at a counter-example. I strongly prefer to keep my sexual orientation secret from you. You only mildly prefer to know my sexual orientation. Thus, it might seem that my orientation should remain secret. But then we risk that I will receive inappropriate birthday gifts from you. Or, what if I prefer to keep secret the fact that I have been diagnosed with an incurable fatal disease? What if I wish to keep this a secret only to spare your feelings?
Of course, we can avoid this kind of problem by supplementing our utility maximization principle with a second moral axiom - No Secrets. Can we add this axiom and still call ourselves pure utilitarians? Can we be mathematically consistent utilitarians without this axiom? I'll leave this debate to others.
It is an interesting exercise, though, to revisit the von Neumann/Savage/Aumann-Anscombe algorithms for constructing utility functions when agents are allowed to keep some of their preferences secret. Agents still would know their own utilities exactly, but would only have a range (or a pdf?) for the utilities of other agents. It might be illuminating to reconstruct game theory and utilitarian ethics incorporating this twist.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-11T17:49:22.476Z · LW(p) · GW(p)
The TDT user sees the problem as being that if he fights for a cause, others may also fight for some less-important cause that they think is more important, leading to both causes being harmed. He responds by reducing his willingness to fight.
Someone who is morally uncertain (because he's not omniscient) realizes that the cause he is fighting for might not be the most important one, and that others' causes may actually be correct, which should reduce his willingness to fight by the same amount.
If we assume that all agents believe in the same complicated process for calculating the utilities, but are unsure how it works out in practice, then what they lack is totally physical knowledge that should follow all the agreement theorems. If agents' extrapolated volitions are not coherent, this is false.
↑ comment by ArisKatsaris · 2011-01-11T04:09:36.325Z · LW(p) · GW(p)
Really? Is there a single good value in the universe? Happiness, comfort, fun, freedom – can't you even conceive of someone who weighs the worth of these values slightly differently than someone else and yet both remain non-evil?
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-11T11:59:47.976Z · LW(p) · GW(p)
Fair point. If they're slightly different, it should be a slight problem, and TDT would help that. If they're significantly different, it would be a significant problem, and you might be able to make a case that one is evil.
Replies from: ArisKatsaris↑ comment by ArisKatsaris · 2011-01-11T16:38:56.118Z · LW(p) · GW(p)
If you can call someone "evil" even though they may altruistically work to increase the well-being of others as they perceive it, then what's the word you'd use to describe people who are sadists and actively seek to hurt others, or people who would sacrifice the well-being of millions of people for their own selfish benefit?
Your labelling scheme doesn't serve me in treating people appropriately, realizing which people I ought consider enemies and which I ought treat as potential allies -- nor which people strive to increase total (or average) utility and which people strive to decrease it.
So what's its point? Why consider these people "evil"? It almost seems to me as if you're working backwards from a conclusion, starting with the assumption that all good people must have the same goals, and therefore someone who differs must be evil.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-11T17:55:45.514Z · LW(p) · GW(p)
It depends on whether you interpret "good" and "evil" as words derived from "should," as I was doing. Good people are those who act as they should, and evil people are those who act as they shouldn't. There is only one right thing to do.
But if you want to define evil another way, honestly, you're probably right. I would note that I think "might be able to make the case that" is enough qualification.
So, more clearly:
If everyone's extrapolated values are in accordance with my extrapolated values, information is our only problem, which we don't need moral and decision theories to deal with.
If our extrapolated values differ, then they may differ a bit, in which case we have a small problem, or a medium amount, in which case there's a big problem, or a lot, in which case there's a huge problem. I can rate them on a continuous scale as to how well they accord with my extrapolated values. The ones at the top, I can work with, and those at the bottom, I can work against. However, TDT states that we should be nicer to those at the bottom so that they'll be nicer to us, whereas CDT does not, and therein lies the difference.
comment by Psychohistorian · 2011-01-10T19:54:09.172Z · LW(p) · GW(p)
Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off.
This is not at all true. The fact that if people acted as I do, there would be no stable equilibrium is largely immaterial, because my actions do not affect how others behave. Unless I value "acting in a way that can be universalized," the fact that I don't do so has no effect on my decision making.
If all I care about is the good grade, and I have no value for personal integrity and whatnot, then if I'm an egoist and only care about myself, I should cheat. Other people are not egoists, and I can take advantage of this. If I am not an egoist and value integrity, or am too risk-averse, I should not cheat. There aren't a lot of genuine egoists out there, so this isn't usually a problem. Most of the ones that do exist tend to be constrained by the risk of punishment in many cases.
comment by JoshuaZ · 2011-01-10T17:44:11.031Z · LW(p) · GW(p)
Consider a consequentialist student being tempted to cheat on a test. Getting a good grade is important to him, and he can only do that if he cheats; cheating will make him significantly happier. His school trusts its students, so he’s pretty sure he won’t get caught, and the test isn’t curved, so no one else will be hurt by him getting a good score. He decides to cheat, reasoning that it’s at least morally neutral, if not a moral imperative – after all, his cheating will increase the world’s utility.
Vlad has discussed below some of the problems with this claim. But there's a more serious issue: even under causal decision theory it is likely that such cheating will not increase the overall utility. Future employers will look at his transcript. If they hire someone who is less qualified because that person cheated, then that person may not do the job as well as another individual would.
ETA: Thinking about this slightly more, it sounds like you are trying to construct a least convenient world. In that case, your argument might go through.
comment by [deleted] · 2011-01-10T17:14:51.606Z · LW(p) · GW(p)
Because that reasoning – that your own utility is maximized by selfishness – literally cannot be right. If it were right, then it would be the answer all rational beings would arrive at, and if all rational beings arrived at that answer, then none of them would cooperate and everyone would be worse off. If selfish utility maximizing is the correct answer for how to maximize selfish utility, selfish utility is not maximized. Therefore selfishness is the wrong answer.
Considering that in the real world different people will have differing abilities to calculate the consequences of actions/beliefs at various levels of complexity, and differing abilities to deceive others about their own beliefs and actions, there could be a person who perceives himself to be above average in this ability for whom 'hypocrisy' could be the right answer.
comment by ata · 2011-01-10T21:15:01.888Z · LW(p) · GW(p)
You're begging the question by assuming that selfish motives are the only real, valid ultimate justifications for actions (including choices to self-modify not to be selfish), when in humans that is plainly false to begin with (see the Complexity of Value sequence). If you place a higher value on selfishness than most people do, then maybe all of your moral deliberation will begin and end with asking "But how does that help me?", but for most people it won't. Perhaps a lot of people will confuse themselves into thinking that everything must go back to a selfish motive if they recurse too far into metaethics they don't understand, but ultimately... well, I'll let the Metaethics Sequence say it better than I can right now.
(That is: you seem to be unaware of the huge amount of existing discussion of this subject on Less Wrong, and you should read some of it before continuing with this series of posts. Vladimir Nesov's comment should point you in the right direction.)
comment by cousin_it · 2011-01-10T17:59:51.786Z · LW(p) · GW(p)
Morality is part of your preferences, same as the emotion of happiness and other things. It's implemented within your brain, the part that judges situations as "fair" or "unfair", etc. I don't understand why you want to go looking for something more objective than that. What if you eventually find that grand light in the sky, the holy grail of "objective morality" expressed as a simple beautiful formula, and it tells you that torturing babies is intrinsically good? Will you listen to it, or to the computation within your own brain?
Seconding Nesov's suggestion to read more of the sequences. The topic that interests you has been pretty thoroughly covered.
comment by shokwave · 2011-01-10T17:36:17.557Z · LW(p) · GW(p)
If cheating became widespread, there would be consequences
...
But wait. If all the students are consequentialists, then they’ll all decide to cheat, following the same logic as the first.
Emphasis mine. A consequentialist student will see that the consequence of them cheating is "everyone cheats -> draconian measures -> worse off overall". So they won't cheat. Or they will cheat in a way that doesn't cause everyone to cheat - only under special circumstances that they know won't apply to everyone all the time.
edit: It may seem a cheap trick to simply wave it off as consequences that will be foreseen. If in every single situation where consequentialists would lose, I simply say that losing is a consequence that consequentialists would take into account and avoid, then I'm not really talking about consequentialists so much as talking about "eternally-correct-decision-ists".
This trick isn't one, though, in this situation (and many others). I am putting all the difficulty of moral decisions purely on determining the consequences (not a moral activity). This situation (and other stag hunt / prisoner's dilemma / tragedy of the commons style situations) is easy enough for a student to model and predict. If all the students were dumb enough to be incapable of figuring out the draconian consequences (possible, as they are resorting to cheating!) then these students fail worse than nonconsequentialist students.
I would like to say that is a fact about the student, not a fact about consequentialism. But it is a fact about consequentialism that it doesn't work for bad modelers; if you can't predict the consequences accurately, you can't make decisions accurately. I think that is the strongest point against consequentialism at the moment: the claim that good performance from consequentialism requires intractable computation and that feasible consequentialism provides worse performance than an alternative.
Replies from: billswift↑ comment by billswift · 2011-01-10T17:58:03.252Z · LW(p) · GW(p)
Or they are there to actually learn something. I don't know about you, but I have yet to see any way to learn by cheating. Cheating is often non-productive to the potential cheater's goals, in more areas than just learning.
Replies from: shokwave↑ comment by shokwave · 2011-01-10T18:32:00.375Z · LW(p) · GW(p)
Or they are there to actually learn something.
The view that secondary level education is about instilling desired behaviours or socialising children as much as it is about learning is very common and somewhat well-supported - and to the extent that schools are focused on learning, there is again a somewhat well-supported view that they don't even do a good job of this.
The view that tertiary level education is about obtaining a piece of paper that signals your hire-ability is widespread and common.
To the extent that potential cheaters have these goals in mind, cheating is more efficient than learning.