Seeking ethical rules-of-thumb for comparison
post by DataPacRat · 2012-06-03T04:36:25.012Z · LW · GW · Legacy · 44 comments
Rules-of-thumb are handy in that they let you use a solution you've figured out beforehand, without having to take the time and effort to re-derive it in the heat of the moment. They may not apply in all situations, and they may not provide the absolute best answer, but when you have limited time to decide, they can provide the best answer you're capable of coming up with in the time available.
I'm currently seeking fairly fundamental rules-of-thumb that can serve as overall ethical guidelines, or even as the axioms of a full ethical system, and preferably ones that pass at least the basic sniff-test of actually being usable in everyday life. The goal is to compare them with each other, and to try to figure out ahead of time whether any of them would work better than the others, either in specific sorts of situations or in general.
Here are a few examples of what I'm thinking of:
* Pacifism. Violence is bad, so never use violence. In game theory, this is the 'always cooperate' strategy of the Iterated Prisoner's Dilemma (IPD), and is the simplest strategy that satisfies the criterion of being 'nice'.
* Zero-Aggression Principle (ZAP). Do not /initiate/ violence, but if violence is used against you, act violently in self-defense. This is the foundation of many variations of libertarianism. In the IPD, it satisfies the criteria of being both 'nice' and 'retaliating'.
* Proportional Force. Aim for the least amount of violence to be done: "Avoid rather than check, check rather than harm...". In the IPD, this is 'nice', 'retaliating', and, in a certain sense, 'forgiving'. (A rough code sketch of all three follows below.)
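As a very rough sketch, here is one plausible mapping of these three rules-of-thumb onto IPD strategies in Python. The payoff numbers (the conventional Axelrod values) and the 'grudger' reading of the ZAP are illustrative assumptions, not anything established above:

```python
# A minimal sketch of the three rules-of-thumb as IPD strategies.
# The payoff values (T=5, R=3, P=1, S=0) are the conventional Axelrod
# numbers, assumed here for illustration only.

PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def pacifism(my_moves, their_moves):
    """Always cooperate: 'nice', but never retaliates."""
    return 'C'

def zero_aggression(my_moves, their_moves):
    """Never defect first; retaliate once the other side has defected.
    Read here as a grudger: 'nice' and 'retaliating', not 'forgiving'."""
    return 'D' if 'D' in their_moves else 'C'

def proportional_force(my_moves, their_moves):
    """Mirror only the opponent's last move (tit-for-tat-like):
    'nice', 'retaliating', and 'forgiving'."""
    return their_moves[-1] if their_moves else 'C'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return the two cumulative scores."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        moves_a.append(a)
        moves_b.append(b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play(zero_aggression, proportional_force))  # (30, 30): two 'nice' strategies cooperate throughout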
I'm hoping to learn of rules-of-thumb at least as useful as the ZAP. I know and respect certain people who base their own ethics on the ZAP but reject the idea of proportional force, and I'd like additional alternatives so I can have a better idea of the range of available options.
Any suggestions?
44 comments
Comments sorted by top scores.
comment by drethelin · 2012-06-03T06:53:59.617Z · LW(p) · GW(p)
don't be a dick, don't forget to be awesome, and don't trust the skull.
Replies from: pleeppleep
↑ comment by pleeppleep · 2012-06-04T19:24:03.423Z · LW(p) · GW(p)
That a Vlogbrothers reference?
Replies from: pragmatist
↑ comment by pragmatist · 2012-06-04T19:31:56.917Z · LW(p) · GW(p)
I was thinking Planescape: Torment, the greatest computer game ever created.
Replies from: drethelin
↑ comment by drethelin · 2012-06-04T20:09:45.323Z · LW(p) · GW(p)
things can be more than one thing
Replies from: pleeppleep
↑ comment by pleeppleep · 2012-06-05T14:01:56.654Z · LW(p) · GW(p)
so... both?
comment by Shmi (shminux) · 2012-06-03T18:26:25.195Z · LW(p) · GW(p)
Proportional Force. Aim for the least amount of violence to be done
Why? It seems more rational to exert enough force to eliminate the threat forever without creating new ones.
Replies from: DataPacRat, drethelin
↑ comment by DataPacRat · 2012-06-03T19:06:18.801Z · LW(p) · GW(p)
"For the good of the tribe, do not murder, not even for the good of the tribe." (From: http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/ )
Here are a couple of excerpts from an email conversation I've recently had on exactly this idea:
Given that:
- with proper epistemology, there is always going to be a certain amount of doubt about whether force has been initiated against you, or by whom;
- that sometimes people make mistakes about the level of force they end up using;
- that it's immoral to create collateral damage that harms innocent third parties;
- that it's becoming ever-easier for people to have ever-greater amounts of destructive force at their disposal;
- and that even someone who initiates force against you can potentially repent and engage in voluntary positive-sum trade that benefits you...
... then it is in every individual's own long-term self-interest to:
- try to prevent the amount of force used in conflicts from escalating;
- to attempt to use the minimal amount of force required to defend themselves, their loved ones, and their property;
- to consider the use of greatly excessive "defensive" force to be immoral.
...
Or, to use a more concrete example: if somebody steals your ice cream, shooting them in the head is an immoral reaction, because it is possible to deal with such situations using less force, because lethal force endangers nearby innocents without just cause, and so on. If someone else were to shoot an ice-cream thief dead, it would be within reason for me to consider them a danger to myself and others, to prepare to defend myself against them, and, depending on the situation and my abilities, to treat the shooter as if they had committed a crime and arrest them (or the equivalent, depending on the local judicial process).
↑ comment by drethelin · 2012-06-03T20:08:41.191Z · LW(p) · GW(p)
This just in: shminux jailed for killing someone who borrowed his pencil without asking, in order to eliminate the threat of pencil-thieving forever.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-06-03T21:42:37.395Z · LW(p) · GW(p)
getting jailed would count as a new threat, wouldn't it?
comment by Eugine_Nier · 2012-06-05T03:00:05.180Z · LW(p) · GW(p)
Keep in mind that when asked for good cached wisdom, people tend to prioritize showing off their cleverness over giving useful advice.
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-05T05:24:30.980Z · LW(p) · GW(p)
Can you recommend any ways to encourage the giving of useful advice, with a minimum of cleverness-showing-off noise mixed in?
comment by maia · 2012-06-03T05:53:20.115Z · LW(p) · GW(p)
"Check consequentialism" seems like a useful thing to do in most situations. What outcomes can you expect from aggression vs. nonviolence, and which is preferred in the particular situation?
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-03T19:09:21.740Z · LW(p) · GW(p)
When there's time to consider it, that's certainly a valid approach. But when the elephants are stampeding towards you and your greatest enemy is sweeping down to steal your girl and the MacGuffin is rolling towards the edge of the cliff... just to use an example... then you just may be a teensy bit too busy to calmly and rationally consider the various consequences - all you may /have/ to go on are the thoughts you've pre-cached.
Replies from: maia
↑ comment by maia · 2012-06-04T15:08:49.866Z · LW(p) · GW(p)
So you're specifically looking for extremely cheap heuristics to use.
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-04T15:36:45.047Z · LW(p) · GW(p)
That's pretty much what I've been trying to get across as my definition of 'rules-of-thumb' for this post, yes. (Of course, it's easy to overestimate how clear your explanations are, which is why I've been trying to re-explain that definition in different ways.)
Replies from: maia
↑ comment by maia · 2012-06-04T15:44:47.314Z · LW(p) · GW(p)
It was clear that you wanted heuristics. The line for how cheap they need to be isn't clear, and is hard to define. I can't think of a situation I've personally faced where "check consequentialism" would have been too expensive.
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-04T18:55:19.039Z · LW(p) · GW(p)
What I'm really hoping to see are heuristics that take less than a second to think through; but if you want a well-defined line, how about I draw on a completely inapplicable cliche and define it as a five-second rule: if it takes more than five seconds to think through, it's almost certainly too complicated for what I'm hunting for.
For example: you're at a bar, and a bar fight breaks out next to you. The Pacifism rule-of-thumb is simple - don't start fighting, or even fight back. ZAP doesn't take much more effort: don't start fighting unless someone attacks you. (ZAP with 'common defense' is almost as easy: don't start fighting unless someone attacks you or the people you've previously decided you're willing to fight to defend.) Proportional Force is slightly more complicated still, but still doesn't take much thought: if nobody attacks you (or the people you'll fight for), don't get involved; if it's a relatively harmless fistfight, don't pull a gun and start shooting; and so on. At any given point, the limits these ethical rules-of-thumb place on your actions are clear enough that you hardly have to do any thinking at all to figure them out.
'Check consequentialism' seems to be a guideline of a somewhat different nature. In the sort of nearby bar-brawl described above, it's hard to tell ahead of time what this heuristic would lead you to conclude, or how long it would take you to figure out what to do - or to make any predictions at all, really. It doesn't seem to place any specific limits on your actions, limits which may reduce your short-term benefits but also provide long-term gains (e.g., "For the good of the tribe, don't murder").
It's entirely possible that I'm mixing up aspects my baseline rules-of-thumb merely have in common with what's actually most useful about them; but since I seem to have gotten as far as I can in my reasoning on my own, it seems worthwhile to solicit some assistance from anyone I can here.
comment by pragmatist · 2012-06-04T02:06:51.120Z · LW(p) · GW(p)
Is there a reason all of your examples correspond to strategies in the IPD? Because that seems like a pretty bad framework for thinking about ethics. As an illustration of the inadequacy of the framework, consider what a terrible ethical rule ZAP is. In authorizing the use of violence only in self-defense, it privileges concern for your well-being over that of others to an extreme degree. Perhaps this is a good strategy from a prudential point of view, but it certainly doesn't seem like remotely the right strategy from a moral point of view. According to ZAP, if I see a thug assaulting a young child, I should refrain from violent intervention. On the other hand, if someone shoves me during an argument in a bar, I should respond with violence.
The problem is that by focusing on the IPD you have restricted the ethical arena to situations where your own reward is at stake. I would think this is precisely the wrong set of circumstances to focus on when developing moral principles. Adam Smith was on to something when he wrote that moral reasoning involves adopting the perspective of an impartial spectator, not the perspective of an agent whose interests are involved in the scenario.
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-04T04:42:27.091Z · LW(p) · GW(p)
The main interpretations of the ZAP I've seen described include the idea of using force when acting in the 'common defense' - that it can be reasonable to assume that someone suffering an attack would ask you to defend them if they could.
Another aspect of the ZAP seems to be that when force is initiated against you, what changes is that you now have the /option/ of using force without moral qualm, not that you are automatically required to use it.
comment by pleeppleep · 2012-06-04T12:20:06.310Z · LW(p) · GW(p)
If your opinion of a person or character would go down if they did something in your place, don't do it.
If doing something will probably lead to an outcome that feels wrong, don't do it.
The latter takes very slight precedence over the former. If doing something is likely to lead to the opposite of either of these, do it.
This is really the only concept of morality that applies to me. It has the effect of making typically amoral actions justifiable if done in a likeable enough manner, and traditionally moral acts despicable if committed in a particularly pretentious or otherwise obnoxious way.
comment by shokwave · 2012-06-03T21:45:22.023Z · LW(p) · GW(p)
The most fundamental rule of thumb, the best overall guideline:
Play tit-for-tat.
That is, on the first encounter cooperate, and from then on scrupulously cooperate or defect as per the other's last move. Advice for this rule would be to take some time to ensure you communicate your cooperation moves clearly (make sure the other knows you're cooperating when you're cooperating, and let them figure out defect moves by themselves).
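As a rough sketch of why that signaling advice matters: if cooperation can be misread as defection, two tit-for-tat players fall into long echoes of mutual retaliation. The noise model and the numbers below are assumptions for illustration:

```python
# Two tit-for-tat players whose moves are occasionally misperceived.
# Illustrates why clearly communicating cooperation matters: a single
# misread 'C' echoes back and forth as alternating retaliation.
import random

def noisy_tft_match(noise=0.05, rounds=10_000, seed=0):
    """Return the fraction of rounds with mutual cooperation."""
    rng = random.Random(seed)
    seen_by_a = seen_by_b = 'C'  # what each believes the other played last
    mutual = 0
    for _ in range(rounds):
        a = seen_by_a  # tit-for-tat: copy the opponent's last perceived move
        b = seen_by_b
        if a == 'C' and b == 'C':
            mutual += 1
        # each move is flipped in the other's perception with probability `noise`
        seen_by_b = a if rng.random() >= noise else ('D' if a == 'C' else 'C')
        seen_by_a = b if rng.random() >= noise else ('D' if b == 'C' else 'C')
    return mutual / rounds

print(noisy_tft_match(noise=0.0))   # 1.0: perfect signaling, permanent cooperation
print(noisy_tft_match(noise=0.05))  # far lower: misread moves echo as retaliation
```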
comment by buybuydandavis · 2012-06-03T20:56:59.233Z · LW(p) · GW(p)
Evaluate consequences at multiple orders of abstraction.
Some people evaluate only the concrete, in-your-face anecdote. Some people evaluate only from a God's-eye perspective. Evaluate at multiple levels, and see if they agree.
Have less confidence the higher your level of abstraction goes.
Maybe the overarching principle is a good one, but your ability to make accurate generalizations is less than your ability to determine whether being smashed in the head with a hammer will be unpleasant.
Along these lines, have respect for your own ignorance.
Always ask whether it's likely you have enough information and experience to be confident in your answer. For any hypothesis testing problem, include the hypothesis "I don't know what I'm talking about", and take it seriously.
comment by DavidAgain · 2012-06-05T18:00:13.882Z · LW(p) · GW(p)
I think that despite being based on repetition, ZAP and Proportional Force both tend to be too much based on individual cases rather than on interaction with other /people/. I think what people should do (and often do) is based more on their longer-term experience of the other person, or sometimes of other people in the same situation.
So I find it's good to assume cooperation with most people, most of the time, especially when you can clearly signal that you are taking that approach, and helpful to forgive rather than punish (with the proviso that I'm psychologically inclined to this approach anyway, and therefore biased). However, with some people - sometimes people I have stereotyped as a certain 'kind' (e.g. 'jocks'), sometimes people in a certain situation (e.g. on London public transport) - I assume less cooperation from the beginning.
PS: have you considered the virtue ethics approach? I.e. 'do what a person who was like X would do' or 'act like the generous man would act' etc.
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-05T20:35:46.562Z · LW(p) · GW(p)
PS: have you considered the virtue ethics approach? I.e. 'do what a person who was like X would do' or 'act like the generous man would act' etc.
Now /there/ is an easily-remembered heuristic, which is fairly easy to think of in the heat of the moment, and which I hadn't consciously considered at all. I'm definitely going to add this one to my set for comparison with the others.
The trickiest part would seem to be selecting an appropriate role-model, or at least a decent archetype. Even if the person being used for comparison is fictional, such as HPMOR's Rationalist!Harry, or even meta-fictional, such as GrownUp!Rationalist!Harry, I don't think the standard LessWrong advice against generalizing from fictional evidence would apply - in this case, we wouldn't be trying to construct a model of reality from evidence that didn't actually happen; we'd be constructing a model of ethical behaviour, a rather different sort of thing.
Replies from: DavidAgain
↑ comment by DavidAgain · 2012-06-07T07:46:23.170Z · LW(p) · GW(p)
Agreed: my take on virtue ethics would be that you are following your image of 'the good man', not trying to find out empirically what some specific good man did. So people can ask 'What would Jesus do?' and if they found out Jesus was actually really mean it shouldn't change their ethics. For what it's worth, I think Aristotle and Hume are both genuinely worth reading on this sort of thing: they've got some very useful folk-psychology insights. Though anyone who's seen how much this community uses the essentially Aristotelean concept of akrasia shouldn't be too surprised by that.
Another slightly different take (I don't know if it's been identified and named in academic philosophy) is the principle of transparency/scrutiny. The idea 'assume you're being watched' obviously exists in lots of religious contexts and can come in neurotic forms. But the principle that you should act as if all your actions are open to scrutiny has been suggested by various people (Zen teachers and Stoics amongst them, I think), and has some merit.
Professionally, as a civil servant, I actually rely on this to some degree. The fundamental codes of good practice when dealing with external bodies (papers, powerful trade bodies, political groups, the public...) are often best followed not through attention to the minutiae but by asking 'how would this look if it were subject to a Freedom of Information request?' Similarly, you can shortcut internalised excuses about a convenient decision that you know isn't really justified by asking 'What would happen if this went to judicial review?'
The benefit of this approach is that it's not only a rule of thumb you can apply at the time, but a principle you can develop by logging when you've been uncertain of how to act and making a habit of actually opening the decision to scrutiny by others (a trusted friend, a rationalist community...). Knowing that it will actually be scrutinised makes it harder to push through a decision in your own mind while ignoring the objections that arise. It might also make it easier to take decisions that seem dubious but that you truly believe are justified.
PS: I first thought of the actual-scrutiny idea in the quasi-ethical context of personal goals. Decisions on whether a strict revision routine or diet can be interrupted by the exceptional circumstances of the Best Party Ever, or the birthday cake your loving mother just made, are not reliably made by the motivated individual involved. Submitting to external judgement on what's justified would lead most people to a more realistic assessment of what's a good reason and what's just an excuse.
comment by Will_Newsome · 2012-06-03T06:34:11.016Z · LW(p) · GW(p)
Go meta. If that doesn't work, go meta. If it does work, go meta. (This is especially useful for ethics but applies everywhere.)
Replies from: faul_sname, Jayson_Virissimo, Eugine_Nier
↑ comment by faul_sname · 2012-06-03T06:43:47.252Z · LW(p) · GW(p)
Ah, the LW approach. I would argue exactly the opposite: look for examples of successful decision heuristics and emulate those. Check consequentialism only when your rules of thumb disagree.
Replies from: Karmakaiser
↑ comment by Karmakaiser · 2012-06-04T14:58:06.106Z · LW(p) · GW(p)
A more nuanced view of going meta might be the Hansonian method of collecting a large number of puzzles and only going meta to find explanations that leave the fewest mysteries and yield the greatest number of accurate predictions. The exhortation to wait until you have a large collection of mysteries that may have common threads seems to be essential to the way he thinks.
↑ comment by Jayson_Virissimo · 2012-06-06T09:45:38.717Z · LW(p) · GW(p)
This depends largely on how many cycles you have to burn before you have to make a "moral decision". If you are in a dark alleyway and someone is walking towards you brandishing a knife, then it probably isn't a good time to "go meta" (unless climbing up the fire escape is "going meta").
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2012-06-06T12:22:01.571Z · LW(p) · GW(p)
Personally, my "self" would not be called upon to try to solve that decision problem; decisions would be made by only semi-self-like cognitive processes. There may be more precise examples.
↑ comment by Eugine_Nier · 2012-06-05T03:12:42.986Z · LW(p) · GW(p)
Don't get so caught up going meta that you lose sight of the object level.
comment by Karmakaiser · 2012-06-04T17:24:46.534Z · LW(p) · GW(p)
You are not disturbed by things, but by the impressions you have of those things.
comment by wedrifid · 2012-06-04T04:26:03.242Z · LW(p) · GW(p)
I'm hoping to learn of rules-of-thumb at least as useful as the ZAP. I know and respect certain people who base their own ethics on the ZAP but reject the idea of proportional force, and I'd like additional alternatives so I can have a better idea of the range of available options.
ZAP describes my own ethics. Proportional force is situational. Sometimes more than proportional force is the most useful approach. Especially when "that which does not kill him makes him stronger".
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-04T04:39:07.341Z · LW(p) · GW(p)
Out of curiosity, do the points raised in the message I quoted in http://lesswrong.com/r/discussion/lw/cu8/seeking_ethical_rulesofthumb_for_comparison/6qrj significantly affect your estimate of how likely it is that proportional force is a better rule-of-thumb than the ZAP?
Replies from: wedrifid
↑ comment by wedrifid · 2012-06-04T05:07:11.718Z · LW(p) · GW(p)
Out of curiosity, do the points raised in the message I quoted in http://lesswrong.com/r/discussion/lw/cu8/seeking_ethical_rulesofthumb_for_comparison/6qrj significantly affect your estimate of how likely it is that proportional force is a better rule-of-thumb than the ZAP?
Not at all, to be honest, inasmuch as they were already accounted for. A strict proportionate-force policy is naive and gives worse outcomes as well as worse incentives for potential defectors. The best degree of response is situational, and it would be worse even for the tribe if everyone were limited to only proportionate responses.
Replies from: DataPacRat
↑ comment by DataPacRat · 2012-06-04T05:22:07.851Z · LW(p) · GW(p)
While the 'best' degree of response may be situational, I've been looking for rules which may not be the best, but which, as cached thoughts, may produce a better response than could be thought up in situations with limited time to think.
May I ask what reasoning you're basing your preference for the ZAP over proportional response on, or perhaps for some explicit examples of situations which demonstrate the ZAP's superiority?
Replies from: wedrifid
↑ comment by wedrifid · 2012-06-04T05:40:56.288Z · LW(p) · GW(p)
May I ask what reasoning you're basing your preference for the ZAP over proportional response on, or perhaps for some explicit examples of situations which demonstrate the ZAP's superiority?
A few points:
- People doing violence against me bad.
- Expectation of violence done upon them in retaliation makes people less likely to do violence to me.
- Sometimes expectation of greater amounts of violence is more disincentive.
- Less violence done against me good.
- People who initiate violence against me (or against anyone I would prefer violence not be done against) sacrifice their rights to a corresponding, greater-than-linear degree. Refraining from extreme force against them is done for practical reasons, not ethical ones.
- It is not always possible or practical to give a proportionate response. If, when it is possible to respond, that response is artificially capped at 'proportionate', the expected retaliation is less than the original attack. If the game is sufficiently close to zero-sum, that means the other has an incentive to attack (a toy illustration follows this list).
- Crippled enemies or rivals are not a threat. Bitter rivals are a significant ongoing threat.
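A toy expected-value sketch of that incentive point; every number here is an assumption chosen only to make the arithmetic visible:

```python
# Toy arithmetic for the incentive argument: if retaliation is capped at
# 'proportionate' and is only sometimes feasible, an attacker in a
# near-zero-sum game expects to come out ahead. All numbers are assumptions.

def attacker_expected_value(damage, response_prob, cap_ratio):
    """Attacker gains `damage` (zero-sum transfer) and expects retaliation
    of response_prob * cap_ratio * damage."""
    return damage - response_prob * cap_ratio * damage

print(attacker_expected_value(10, 0.7, 1.0))  # 3.0 > 0: a strict proportionate cap still invites attack
print(attacker_expected_value(10, 0.7, 2.0))  # -4.0 < 0: a harsher-than-proportionate norm deters
```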
comment by [deleted] · 2012-06-04T01:46:13.723Z · LW(p) · GW(p)
Make new mistakes.
comment by Andreas_Giger · 2012-06-04T00:29:29.415Z · LW(p) · GW(p)
Don't derive rules for your everyday life from carefully designed mathematical problems like the IPD, especially not ethical rules.
Don't assume that labels used in a mathematical context correspond to what they're usually applied to in non-mathematical contexts.
comment by TwistingFingers · 2012-06-03T04:47:55.040Z · LW(p) · GW(p)
If it has fur, it might have rabies.
Replies from: faul_sname