Put Yourself in Manual Mode (aka Shut Up and Multiply)

post by lukeprog · 2011-03-27T06:13:20.780Z · LW · GW · Legacy · 25 comments

Joshua Greene manages to squeeze his ideas about 'point and shoot morality vs. manual mode morality' into just 10 minutes. For those unfamiliar, his work is a neuroscientific approach to recommending that we shut up and multiply.

Greene's 10-minute video lecture.


comment by [deleted] · 2011-03-27T07:28:37.675Z · LW(p) · GW(p)

I would be interested to hear, from those who regard themselves as very rational and not afraid to boast about it, how confused people are about these issues.

What is the rational response to all of the scientific proof that our moral intuition is inconsistent? Is it definitely necessary to resolve the inconsistencies? If we can describe some resolutions as "in favor of idealism" and others as resolutions "in favor of cynicism," which kind is best supported by rationality?

Many common life experiences also reveal inconsistencies in your moral intuitions. I have a feeling that there's a general trend of people resolving inconsistencies on the side of cynicism as they age -- for instance, older people are more right-wing than younger people.

"Shut up and multiply" I think summarizes a resolution on the side of idealism.

Who's right? Or tell me why this isn't a useful way to look at the problem.

Replies from: Vladimir_M, atucker
comment by Vladimir_M · 2011-03-27T08:50:19.039Z · LW(p) · GW(p)

What is the rational response to all of the scientific proof that our moral intuition is inconsistent?

In my opinion, the trolley problems and the Singerian arguments discussed in this video are far from sufficient to show that our moral intuition is actually inconsistent (though it probably is). These examples lead to problematic results only under naive utilitarian assumptions, which are not only unrealistic, but also turn out to be much more incoherent than human folk morality on closer examination.

Our moral intuitions in fact do a very good job of arbitrating and coordinating human action given the unpredictability of the real world and the complexity of the game-theoretic issues involved, which utilitarianism is usually incapable of handling. (This is one of the main reasons why attempting explicit utilitarian calculations about real-life problems is one of the surest ways to divorce one's thinking from reality.) Singerians and other fervent utilitarians are so enamored with their system that they see human deviations from it as ipso facto pathological and problematic, but as with other ideologues, when the world fails to conform to their system, it's usually a problem with the latter, not the former.

Replies from: cousin_it, None
comment by cousin_it · 2011-03-27T10:36:10.876Z · LW(p) · GW(p)

Our moral intuitions in fact do a very good job of arbitrating and coordinating human action given the unpredictability of the real world and the complexity of the game-theoretic issues involved, which utilitarianism is usually incapable of handling. (This is one of the main reasons why attempting explicit utilitarian calculations about real-life problems is one of the surest ways to divorce one's thinking from reality.)

If you have many convincing examples of this, you should write a post and sway a lot of people away from utilitarianism.

Replies from: Kaj_Sotala, Vladimir_M, None, XiXiDu
comment by Kaj_Sotala · 2011-03-28T09:25:25.479Z · LW(p) · GW(p)

This isn't the exact thing Vladimir_M was talking about, but: An Impossibility Theorem for Welfarist Axiologies seems rather worrying for utilitarianism in particular, though you could argue that no ethical system fully escapes its conclusions.

In brief, the paper argues that if we adopt the following three reasonable-sounding premises for an ethical system:

  • The Dominance Principle: If population A contains the same number of people as population B, and every person in A has higher welfare than any person in B, then A is better than B.
  • The Addition Principle: If it is bad to add a number of people, all with welfare lower than the original people, then it is at least as bad to add a greater number of people, all with even lower welfare than the original people.
  • The Minimal Non-Extreme Priority Principle: There is a number n such that an addition of n people with very high welfare and a single person with negative welfare is at least as good as an addition of the same number of people but with very low positive welfare.

then we cannot help but accept one of the following:

  • The Repugnant Conclusion: For any perfectly equal population with very high positive welfare, there is a population with very low positive welfare which is better.
  • The Sadistic Conclusion: When adding people without affecting the original people's welfare, it can be better to add people with negative welfare rather than positive welfare.
  • The Anti-Egalitarian Conclusion: A population with perfect equality can be worse than a population with the same number of people, inequality, and lower average (and thus lower total) positive welfare.

This seems rather bad. I haven't had a chance to work through the proof to make sure it checks out, however.
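
To make the Repugnant Conclusion concrete, here is a minimal sketch in Python (my own illustration; the populations and welfare numbers are invented, not taken from the paper):

```python
# Sketch: the Repugnant Conclusion under total utilitarianism.
# Populations are modeled as lists of per-person welfare levels.

def total_welfare(population):
    """Total-utilitarian value of a population: sum of everyone's welfare."""
    return sum(population)

A = [100.0] * 1_000      # 1,000 people, each with very high welfare
Z = [0.2] * 1_000_000    # 1,000,000 people, each barely above zero

print(total_welfare(A))  # 100000.0
print(total_welfare(Z))  # 200000.0 -> total utilitarianism ranks Z above A
```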

Unfortunately I couldn't find an ungated version of the paper to link to. However, I did find this paper by the same author, where he argues that if we define

  • The Egalitarian Dominance Condition: If population A is a perfectly equal population of the same size as population B, and every person in A has higher welfare than every person in B, then A is better than B, other things being equal.
  • The General Non-Extreme Priority Condition: There is a number n of lives such that for any population X, and any welfare level A, a population consisting of the X-lives, n lives with very high welfare, and one life with welfare A, is at least as good as a population consisting of the X-lives, n lives with very low positive welfare, and one life with welfare slightly above A, other things being equal.
  • The Non-Elitism Condition: For any triplet of welfare levels A, B, and C, A slightly higher than B, and B higher than C, and for any one-life population A with welfare A, there is a population C with welfare C, and a population B of the same size as A U C and with welfare B, such that for any population X consisting of lives with welfare ranging from C to A, B U X is at least as good as A U C U X, other things being equal.
  • The Weak Non-Sadism Condition: There is a negative welfare level and a number of lives at this level such that an addition of any number of people with positive welfare is at least as good as an addition of the lives with negative welfare, other things being equal.
  • The Weak Quality Addition Condition: For any population X, there is a perfectly equal population with very high positive welfare, and a very negative welfare level, and a number of lives at this level, such that the addition of the high welfare population to X is at least as good as the addition of any population consisting of the lives with negative welfare and any number of lives with very low positive welfare to X, other things being equal.

then no axiology can satisfy all of these criteria. (I have not worked through this logic either.)
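
For flavor, here is a sketch of how one might encode and spot-check such conditions in Python (my own construction, not from the paper; since the conditions quantify over all possible populations, finite checks like this can refute an axiology but never verify one):

```python
# Sketch: encoding one of the conditions and testing a candidate axiology
# on a single concrete pair of populations.

def is_perfectly_equal(pop):
    return len(set(pop)) <= 1

def egalitarian_dominance(value, A, B):
    """If A is perfectly equal, the same size as B, and everyone in A is
    better off than everyone in B, then value must rank A above B."""
    antecedent = (is_perfectly_equal(A)
                  and len(A) == len(B)
                  and min(A) > max(B))
    return (not antecedent) or value(A) > value(B)

total = sum  # total utilitarianism as the candidate axiology
print(egalitarian_dominance(total, [5.0, 5.0, 5.0], [1.0, 2.0, 3.0]))  # True
```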

Replies from: cousin_it, Ghatanathoah
comment by cousin_it · 2011-03-28T09:44:19.632Z · LW(p) · GW(p)

The premises sound much less intuitive than the conclusions, and I accept the Repugnant Conclusion anyway.

Replies from: torekp
comment by torekp · 2011-04-03T02:07:46.844Z · LW(p) · GW(p)

Me too. I think the "Repugnancy" comes from picturing a very low but positive quality of life as some kind of dull gray monotone, instead of the usual ups and downs, and then feeling enormous boredom, and then projecting that boredom onto the scenario.

comment by Ghatanathoah · 2013-11-17T21:17:51.388Z · LW(p) · GW(p)

then we cannot help but accept one of the following:

  • The Repugnant Conclusion: For any perfectly equal population with very high positive welfare, there is a population with very low positive welfare which is better.
  • The Sadistic Conclusion: When adding people without affecting the original people's welfare, it can be better to add people with negative welfare rather than positive welfare.
  • The Anti-Egalitarian Conclusion: A population with perfect equality can be worse than a population with the same number of people, inequality, and lower average (and thus lower total) positive welfare.

This seems rather bad. I haven't had a chance to work through the proof to make sure it checks out, however.

I don't think this reveals any inconsistencies in moral reasoning at all. Upon reflection it seems obvious to me that the vast majority of the human population accepts the Sadistic Conclusion and considers it morally obvious. And I think that they are right to do so.

What makes me say this? Well, let's dissect the Sadistic Conclusion. Basically, it is a specific variant of another, broader conclusion, which can be stated thusly:

If the addition of a person or persons with positive welfare can sometimes be bad, then it is sometimes preferable to do other bad things than to add that person or persons. Examples of these other bad things include harming existing people to avoid adding that person, failing to increase the welfare of existing people in order to avoid adding that person, or the Sadistic Conclusion.

What would a world where people accepted this conclusion look like? It would be a world where people refrained from doing pleasurable things in order to avoid adding another person to the world (for instance, abstaining from sex out of fear of getting pregnant). It would be a world where people spent money on devices to prevent the addition of more people, instead of on things that made them happy (for example, using money to buy condoms instead of candy). It would be a world where people chose to risk harm upon themselves rather than add more people to the world (by having surgical procedures like vasectomies, which have a nonzero risk of complications). In other words, it's our world.

So why does the Sadistic Conclusion seem unpalatable, even though it's obvious that pretty much everyone accepts the principle it is derived from? I think it's probably the same reason that people reject the transplant variant of the trolley problem, even though nearly everyone accepts the normal version of it. The thought of directly doing something awful to other people makes us squeamish and queasy, even if we accept the abstract principle behind it in other situations.

But how could the addition of more people with positive welfare be bad? I think we probably have some sort of moral principle that smaller populations with higher welfare per person are better than larger populations with lower welfare per person, even if the total amount of welfare is larger overall in the larger population. (If you don't believe in the concept of "personal identity", just replace the word "person" with the phrase "sets of experiences that are related in certain ways"; it doesn't change anything.)
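
One way to cash out such a principle is a sketch like the following (my own formalization, not Ghatanathoah's; average utilitarianism is just one way to make the principle precise, and it has well-known problems of its own):

```python
# Sketch: average utilitarianism ranks a small, thriving population above
# a vast population of lives barely worth living, even when the total
# welfare of the larger population is greater.

def average_welfare(population):
    """Average-utilitarian value: mean welfare per person."""
    return sum(population) / len(population)

A = [100.0] * 1_000      # small population, very high welfare each
Z = [0.2] * 1_000_000    # huge population, welfare barely positive

print(average_welfare(A))  # 100.0 -> A ranked above Z
print(average_welfare(Z))  # 0.2
```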

A helpful way of looking at it would be to consider this principle on an individual level, rather than a population-wide one. Suppose, as in Parfit's classic example, someone gets me addicted to a drug that causes me to have a burning desire to take it, and then gives me a lifetime supply of that drug. I have more satisfied desires than I used to have, but my life has not been made any better. This is because I have a set of higher-order preferences about what sort of desires I want to have; if giving me a new desire conflicts with those higher-order preferences, then my life has been made worse or left the same, not better.

Similarly if adding more people conflicts with higher-order moral principles about how the world should be, adding them makes the world worse, not better. Before I understood this there was a short, dark time where I actually accepted the Repugnant Conclusion, rather than the Sadistic one. Fortunately those dark days are over.

Incidentally, I think the Sadistic Conclusion is poorly named, as it still considers the addition of people with negative welfare to be a bad thing.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-11-29T19:18:04.805Z · LW(p) · GW(p)

Well, let's dissect the Sadistic Conclusion. Basically, it is a specific variant of another, broader conclusion, which can be stated thusly:

If the addition of a person or persons with positive welfare can sometimes be bad, then it is sometimes preferable to do other bad things than to add that person or persons.

Wait what? That's the direct opposite of the Sadistic Conclusion. If the Sadistic Conclusion was commonly accepted, then people would abstain from using contraception if they thought that they could create new suffering-filled lives that way. And if they thought their kids were about to live happy lives, they might try to arrange it so that the kids would live miserable lives instead.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-11-30T01:24:15.927Z · LW(p) · GW(p)

That's not the Sadistic Conclusion as presented by Arrhenius. Arrhenius' Sadistic Conclusion is that, if it is bad to add more people with positive welfare, then it might be less bad to add someone with negative welfare instead of a large number of people with positive welfare. Obviously the number of people with negative welfare must be considerably smaller than the number of people with positive welfare in order for the math to check out.

Under Arrhenius' Sadistic Conclusion, adding unhappy, miserable lives is still a very bad thing. It makes the world a worse place, and adding no one at all would be preferable. Adding a miserable life isn't good; it's just less bad than adding a huge number of lives barely worth living. Personally, I think the conclusion is misnamed, since it doesn't consider adding suffering people to be good.
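
A minimal worked example of that math (my own numbers; critical-level utilitarianism is one family of axiologies standardly taken to face this conclusion):

```python
# Sketch: the Sadistic Conclusion under critical-level utilitarianism.
# Value contributed by an addition = sum over added people of (welfare - c),
# where c > 0 is the critical level.

C = 10.0  # critical welfare level (illustrative choice)

def addition_value(added_welfares, c=C):
    """Change in value from adding people: sum of (welfare - c)."""
    return sum(w - c for w in added_welfares)

many_marginal = [1.0] * 100   # 100 lives barely worth living
one_miserable = [-5.0]        # a single life of negative welfare

print(addition_value(many_marginal))  # -900.0
print(addition_value(one_miserable))  # -15.0: still bad, but less bad
# Adding no one at all (value 0.0) remains the best option of the three.
```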

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-11-30T11:10:05.168Z · LW(p) · GW(p)

Okay, you're right that the Sadistic Conclusion does consider it better to avoid adding any people at all, and says that it's better to add people with negative welfare only if we are in a situation where we have to add someone.

So you're saying that by spending resources on not creating the new lives, people are essentially choosing the "create a life with negative welfare" option, but instead of creating a new life with negative welfare, an equivalent amount is subtracted from their own welfare. Am I understanding you correctly?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-12-06T12:41:51.820Z · LW(p) · GW(p)

So you're saying that by spending resources on not creating the new lives, people are essentially choosing the "create a life with negative welfare" option, but instead of creating a new life with negative welfare, an equivalent amount is subtracted from their own welfare. Am I understanding you correctly?

Yes, that's what I was trying to say.

comment by Vladimir_M · 2011-03-27T20:16:02.043Z · LW(p) · GW(p)

Yes, that is actually one of my writing ideas on which I keep procrastinating.

Replies from: None
comment by [deleted] · 2012-09-09T18:38:09.758Z · LW(p) · GW(p)

Please stop procrastinating and just publish the draft as it is in Discussion; you can then improve it. :)

comment by [deleted] · 2011-03-27T13:45:41.095Z · LW(p) · GW(p)

Many people's goals on this website are sufficiently grandiose to make utilitarianism unsettling. How repugnant a thing would you be willing to do to increase, by a small but measurable amount, the probability of human civilization birthing an astronomical utopia? It would be easy to write a post with lots of variations on this, but I think it would be in bad taste (not a utilitarian reason not to do it, though).

Replies from: Richard_Kennaway, benelliott
comment by Richard_Kennaway · 2011-03-27T18:24:59.730Z · LW(p) · GW(p)

A few ideas have been given elsewhere:

'No, it is real. The Brotherhood, we call it. You will never learn much more about the Brotherhood than that it exists and that you belong to it. I will come back to that presently.' He looked at his wrist-watch. 'It is unwise even for members of the Inner Party to turn off the telescreen for more than half an hour. You ought not to have come here together, and you will have to leave separately. You, comrade'--he bowed his head to Julia--'will leave first. We have about twenty minutes at our disposal. You will understand that I must start by asking you certain questions. In general terms, what are you prepared to do?'

'Anything that we are capable of,' said Winston.

O'Brien had turned himself a little in his chair so that he was facing Winston. He almost ignored Julia, seeming to take it for granted that Winston could speak for her. For a moment the lids flitted down over his eyes. He began asking his questions in a low, expressionless voice, as though this were a routine, a sort of catechism, most of whose answers were known to him already.

'You are prepared to give your lives?'

'Yes.'

'You are prepared to commit murder?'

'Yes.'

'To commit acts of sabotage which may cause the death of hundreds of innocent people?'

'Yes.'

'To betray your country to foreign powers?'

'Yes.'

'You are prepared to cheat, to forge, to blackmail, to corrupt the minds of children, to distribute habit-forming drugs, to encourage prostitution, to disseminate venereal diseases--to do anything which is likely to cause demoralization and weaken the power of the Party?'

'Yes.'

'If, for example, it would somehow serve our interests to throw sulphuric acid in a child's face--are you prepared to do that?'

'Yes.'

'You are prepared to lose your identity and live out the rest of your life as a waiter or a dock-worker?'

'Yes.'

'You are prepared to commit suicide, if and when we order you to do so?'

'Yes.'

'You are prepared, the two of you, to separate and never see one another again?'

'No!' broke in Julia.

It appeared to Winston that a long time passed before he answered. For a moment he seemed even to have been deprived of the power of speech. His tongue worked soundlessly, forming the opening syllables first of one word, then of the other, over and over again. Until he had said it, he did not know which word he was going to say. 'No,' he said finally.

comment by benelliott · 2011-03-27T14:17:03.146Z · LW(p) · GW(p)

I suggest you read through EY's ethics posts; they provide an interesting answer to this question (if you have already read them, then I apologise).

Replies from: None
comment by [deleted] · 2011-03-27T14:23:37.215Z · LW(p) · GW(p)

An answer literally to the question "How repugnant a thing should you be willing to do...?" If you gave me a link I would be grateful.

Replies from: benelliott
comment by benelliott · 2011-03-27T14:42:22.314Z · LW(p) · GW(p)

The way he phrased the question was "would you kill babies if it was the right thing to do?", but the intended meaning is the same.

It's a short sequence of posts, not all directly related to that question: 1 2 3 4 5 6

There may be others related to it that I missed; the sequences are long and I don't have time to trawl through them all.

comment by XiXiDu · 2011-03-27T17:47:56.808Z · LW(p) · GW(p)

Not directly about moral intuitions but intuitions in general:

Sozou’s idea is that uncertainty as to the nature of any underlying hazards can explain time-inconsistent preferences. Suppose there is a hazard that may prevent the pay-off from being realised. This would provide a basis (beyond impatience) for discounting a pay-off in the future. But suppose further that you do not know the specific probability of that hazard being realised (although you know the probability distribution). What is the proper discount rate?

Sozou shows that as time passes, one can update one's estimate of the probability of the underlying hazard. If after a week the hazard has not occurred, this would suggest that the probability of the hazard is not very high, which would allow the person to reduce the rate at which they discount the pay-off. When offered a choice of one or two bottles of wine 30 or 31 days into the future, the person applies a lower discount rate in their mind than for the short period, because they know that as each day passes in which there has been no hazard preventing the pay-off, their estimate of the hazard’s probability will drop.

This example provides a nice evolutionary explanation of the shape of time preferences. In a world of uncertain hazards, it would be appropriate to apply a heavier discount rate to a short-term pay-off. It is rational, and people who applied that rule would not have had lower fitness than those who applied a constant discount rate.

Evolution and irrationality — Evolving Economics
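
A minimal numerical sketch of Sozou's model (my own illustration, assuming an exponential prior over the unknown hazard rate): the expected survival probability to time t works out to k/(k+t), a hyperbolic rather than exponential discount curve, which reproduces the 30-versus-31-day preference reversal described above.

```python
# Sketch: an uncertain hazard rate yields hyperbolic discounting.
# If the unknown hazard rate lam has prior density k * exp(-k * lam),
# the chance the pay-off survives to time t is
#   E[exp(-lam * t)] = k / (k + t).

K = 0.5  # rate of the exponential prior over the hazard (illustrative)

def discount(t, k=K):
    """Expected probability that the pay-off survives to day t."""
    return k / (k + t)

# One bottle of wine today vs. two bottles tomorrow:
print(1 * discount(0), 2 * discount(1))    # 1.00 vs 0.67 -> take one now

# One bottle on day 30 vs. two bottles on day 31:
print(1 * discount(30), 2 * discount(31))  # 0.016 vs 0.032 -> wait for two
```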

In Study 1, college students’ preferences for different brands of strawberry jams were compared with experts’ ratings of the jams. Students who analyzed why they felt the way they did agreed less with the experts than students who did not. In Study 2, college students’ preferences for college courses were compared with expert opinion. Some students were asked to analyze reasons; others were asked to evaluate all attributes of all courses. Both kinds of introspection caused people to make choices that, compared with control subjects’, corresponded less with expert opinion. Analyzing reasons can focus people’s attention on nonoptimal criteria, causing them to base their subsequent choices on these criteria. Evaluating multiple attributes can moderate people’s judgments, causing them to discriminate less between the different alternatives.

Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions

And of course Pascal's Mugging: Tiny Probabilities of Vast Utilities:

I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".

comment by [deleted] · 2011-03-27T09:46:58.554Z · LW(p) · GW(p)

Our moral intuitions in fact do a very good job of arbitrating and coordinating human action

I strongly agree with this. This is a great explanation for why people have moral intuition, for why it's good (in an informal sense) that they do and unimportant (in that same sense) that it's not very consistent. But I'm not sure how to treat my own flawed moral intuition in light of my larger goal of holding true beliefs.

I don't expect a rational analysis of this issue to be more useful to me than folk morality but I do expect or at least wish for it to be more interesting.

Utilitarian assumptions turn out to be much more incoherent than human folk morality

This is also great.

comment by atucker · 2011-03-27T14:25:23.777Z · LW(p) · GW(p)

I would be interested to hear, from those who regard themselves as very rational and not afraid to boast about it, how confused people are about these issues.

I'm not super rational, but I might as well say... I'm kind of confused about it, but what I've been doing for the last few months seems to be working so far.

Basically, I think that my moral intuitions are pretty solid at telling me which goals are good, but that my default emotions are pretty bad at weighing the relevant considerations, and lead to a large share of my inconsistency.

Let's take the Opium Wars as an (admittedly obscure, but IMO very demonstrative) example.

The British wanted lots of Chinese goods, like tea. However, the Chinese didn't really want any British goods besides silver. The British didn't want to run out of silver, so they started selling lots of opium in China. Soon, opium became a strategic resource (to the point that the East India Company conquered more territory to grow more of it) which maintained the balance of trade by keeping portions of the Chinese population addicted.

Eventually, China got annoyed and the Emperor banned opium sales. So Britain declared war to keep the ports open. Fighting ensued. People died. The Chinese lost, and opium continued to be forced through their ports.

It's pretty easy to say that the British opium-selling companies were evil, but keep in mind that they were still bankrolled by the British consumer. People died because other people wanted cheaper tea. The whole system took a few less-than-moral people and multiplied their actions until they were strong enough to force a country to do things.

Let's pretend that I'm living in London back then. I think murder is bad. I like tea, and following economic forces, I prefer cheaper tea. So I buy from the East India Company. If I were fully informed, I would probably decide that I don't want to support them. However, my emotions don't make me feel like I'm murdering and pushing drugs on the Chinese when I buy their tea. So I keep doing it.

Even worse, I might (back then) actually just be racist against the Chinese, and consider them subhuman because my emotions make a halo effect around British Civilization, and our inherent superiority to the world. We're helping them! Civilizing the Barbarians! It's the White Man's Burden to help them do reasonable things like trade!

Replies from: None
comment by [deleted] · 2011-03-27T14:51:03.138Z · LW(p) · GW(p)

The opium wars are not obscure!

I bet that your reading of the opium wars is in accord with that of many respectable historians and in discord with that of many others. Your account of the opium wars, like any account of any large-scale historical situation, is tendentious. I don't think history is always a great place to contemplate morality.

Still, when you say

what I've been doing for the last few months seems to be working so far

what do you mean?

Replies from: atucker
comment by atucker · 2011-03-27T15:03:59.362Z · LW(p) · GW(p)

The opium wars are not obscure!

Yay!

I bet that your reading of the opium wars is in accord with that of many respectable historians and discord with many other respectable historians.

Fair enough. I guess it would've been better to start with a more personal example.

What do you mean?

I trust my moral intuitions about whether something is ultimately good or bad, but spend time reflecting on my emotions, which I often act contrary to.

Often when I'm annoyed, it's the result of someone misunderstanding something, or of my not having eaten recently or slept enough. When I'm working with someone on a goal that I've determined is good (like my FIRST team or something) and I feel the urge to snap at someone, I try not to do it. It would feel right, but snapping would probably do things contradictory to my goals.

Suspending emotions is easier when I run through a checklist of why I might be feeling that way. For instance, when I'm tired (often a forerunner to my becoming lazy or irritable), I ask myself if I'm actually just hungry. If I think that's why, I go eat and things are better, and my actions are more consistent.

comment by Marius · 2011-03-27T20:34:29.633Z · LW(p) · GW(p)

Joshua Greene appears to create a false dichotomy: we can either trust our intuitions, or we can "shut up and multiply". (It is interesting that Yudkowsky has already shown (http://lesswrong.com/lw/if/your_strength_as_a_rationalist/) the problem with mistrusting our intuitions and taking our model too seriously by "shutting up and multiplying".) There is a third answer, which is actually the most traditional answer: find the best model, whether by researching models that more closely approximate the hypothetical, or by asking someone who has developed such a model by working in an appropriate field.

So, when we are thinking about donations to charity, we know that it is a task quite dissimilar from helping the wounded in front of us. Using a model designed to work in nearby emergencies is likely to produce poor results when contemplating distant charities tackling ongoing problems. Instead, we are better served by using a model that best approximates the actual situation. If we look at smart, decent people who work for overseas charities, we discover that their models emphasize striking a balance between aid for others, our loved ones, and ourselves. This is different from our behavior in emergencies (the bleeding stranger on the road) -- and it's not that they're wrong. They just have a more appropriate model for that kind of situation.
