Utilitarian doppelgangers vs. making everything smell like bananas

post by Rob Bensinger (RobbBB) · 2021-02-20T23:57:34.724Z · 8 comments

[epistemic status: attempt to quickly summarize how I feel about efforts to promote more diverse moral foundations, push back against cost-benefit comparisons as "utilitarianism", or split the difference between this notion of "utilitarianism" and rival approaches]

 

I think human value is incredibly complicated; and I don't think it's strictly about "harm/care" (to use Haidt's term), or happiness, or suffering. Indeed, I suspect a utopian society would value transhuman versions of "beauty", "sanctity", etc. that sometimes have nothing to do with optimizing anyone's subjective experience.

And yet, in everyday decision-making and applied ethics, I think the utilitarians, effective altruists, welfare-maximizers, etc. are basically always right. In deciding on an ideal initial response to COVID-19, or an ideal budget breakdown for an aid program, or anything else that affects large numbers of people, it would be unimaginably foolish to try to find a "balance" between the simple "prevent as much disability and death as you can" goal and some other moral framework.

Why?

Well, I think the world is pretty messed up. I think that noticing this makes the case for the SLLRNUETSSTAWMC view pretty trivial. (SLLRNUETSSTAWMC = "superficially looks like (reflective, non-two-boxing) utilitarianism even though strictly speaking things are way more complicated".)

Imagine that you lived in a world dominated by a bunch of political coalitions with organizing principles like "all other moral concerns should be subordinate to making everything smell as much like rancid meat as possible" and "our overriding duty is to make everything smell as much like isoamyl acetate as possible".

Imagine further that this has caused people who care about things like insecticide-treated bednets and world peace to self-identify as "oriented toward harm/care", in contrast to the people who are "oriented toward smell".

The take-away from this shouldn't be "there's nothing good or valuable about making things smell nicer". The take-away should be "the people who agitate about smell in today's world are largely optimizing for totally the wrong smells, and their level of concern for smell is radically out of step with what's actually going on in the world". So out of step, in fact, that if you literally just completely ignore smell in all your altruistic activities (at least until after we've ended disease, warfare, hunger, etc.), you'll do way, way better at improving the future than if you tried to optimize some compromise between the coalitions' views.

Or, to quote from Feeling Moral (critiquing those who base humanitarian decisions on "which option feels more just and righteous and pure" rather than "which option actually helps others the most"):

You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.

And, from The "Intuitions" Behind "Utilitarianism":

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know.

But that's for one event.  When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey.  When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them.  And the optimal path to that destination is governed by laws that are simple, because they are math.

Or: there's a difference between cognitive operations that feel affectively "cold", versus acting in ways that are actually "cold-blooded". If your child is dying of cancer and you're staying up late night after night googling research papers to try to figure out how to save their life, the process may not feel as vital and alive as fighting off a hungry lion to save your child, but... who cares how vital and alive it feels?

I'll care about that stuff after the world's immediate catastrophes have been solved. And during my off hours, when I'm actually listening to music.

8 comments


comment by romeostevensit · 2021-02-21T07:00:26.341Z

Maybe moral entrepreneurism implies that values expand and shrink in accordance with the handicap principle.

comment by TAG · 2021-02-21T05:14:49.892Z

And yet, in everyday decision-making and applied ethics, I think the utilitarians, effective altruists, welfare-maximizers, etc. are basically always right. In deciding on an ideal initial response to COVID-19, or an ideal budget breakdown for an aid program, or anything else that affects large numbers of people, it would be unimaginably foolish to try to find a “balance” between the simple “prevent as much disability and death as you can” goal and some other moral framework.

The standard objections to utilitarianism insist that you should find a balance between diffuse benefits and concentrated harms. In fact, the contrarian approach to COVID is to place the burden on older and less healthy people so that the rest don't have to endure lockdowns. And that is typically given a utilitarian justification! So the conventional approach is less utilitarian -- it compromises utilitarianism with a notion of equal rights, instead of offsetting the lives of older people by the number of QALYs they have left.

comment by Gerald Monroe (gerald-monroe) · 2021-02-21T21:08:43.345Z

offsetting the lives of older people by the number of QALYs they have left

Arguably, by failing to account for this we are simply doing the math wrong.  The problem with the second part - "rest don't have to endure lockdowns" - is that you need to somehow equate years in lockdown to years lost when an older person dies.  If someone had n QALYs left, how many years of lockdown equal that?  That seems to be a value judgement: if we knew the ratio we could make the right decision, but how does someone decide what the ratio is?
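
To make the structure of that value judgement concrete, here is a minimal sketch; every number and name in it is a hypothetical assumption, not data:

```python
# Sketch: making lockdown-years and lost QALYs commensurable.
# The conversion ratio `qaly_loss_per_lockdown_year` is precisely the
# value judgement asked about above; all numbers are hypothetical.

def lockdown_cost_in_qalys(population, years, qaly_loss_per_lockdown_year):
    """Total QALYs lost by a population enduring `years` of lockdown."""
    return population * years * qaly_loss_per_lockdown_year

def deaths_cost_in_qalys(deaths, avg_remaining_qalys):
    """Total QALYs lost when `deaths` people die early."""
    return deaths * avg_remaining_qalys

lockdown_cost = lockdown_cost_in_qalys(10_000_000, 1, 0.05)  # 500,000 QALYs
death_cost = deaths_cost_in_qalys(50_000, 8)                 # 400,000 QALYs
print("lockdown" if lockdown_cost < death_cost else "no lockdown")  # -> no lockdown
```

Lower the assumed ratio from 0.05 to 0.03 and the verdict flips to "lockdown": the multiplication is trivial, and the whole dispute lives in choosing the ratio.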

comment by TAG · 2021-02-21T22:21:34.404Z

If you assume that we are doing utilitarianism, then we might be getting the maths wrong. But the same evidence could mean we are not doing utilitarianism. And there is other evidence that we are not doing utilitarianism, such as the existence of laws and rights.

comment by Gerald Monroe (gerald-monroe) · 2021-02-21T23:53:48.660Z

Oh sure. I meant: okay, utilitarianism has a small advantage over other frameworks.  Namely, that it's actually correct.  Saying you care about something like "human rights" and acting according to some list of "principles" doesn't produce the optimal outcome for the thing you said you care about.  The optimal outcome comes from whatever action is predicted (based on unbiased past data) to maximize the actual utility of, say, those human rights.

The advantage of other ethical frameworks is simply that you might work out the math and figure out that going on a killing spree of, say, FDA members maximizes utility.  But you might be wrong.  Sure, new FDA members might actually listen to evidence and approve additional COVID vaccines, but there may be extremely complex, impossible-to-model side effects.  (I am assuming that the "you" doing this is a dictator like Stalin, so you are not personally going to suffer any consequence for purging the FDA.)  From a utilitarian perspective it's correct: even a 50% chance to save 100k lives would be worth 1,000 deaths. But the new bureaucrats might kill even more people (by, for example, giving you reports full of pseudoscience and lying to you, which is usually what happens in dictatorial regimes).
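
For concreteness, the expected-value arithmetic behind that claim, using only the numbers in the comment:

$$\mathbb{E}[\text{lives saved}] = 0.5 \times 100{,}000 = 50{,}000 \gg 1{,}000 \text{ deaths},$$

a net expectation of 49,000 lives - if the model behind the 50% figure can be trusted, which is exactly what the rest of the comment doubts.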

Let me try an example:

Rational consequentialism: "I have reviewed a large amount of the data and, using a rational algorithm, determined that the best action with respect to consequences for my well-being is to murder my grandmother."

Rational utilitarianism: "I have reviewed a large amount of the data and, using a rational algorithm, determined that the best action with respect to good consequences for the majority of my fellow humans is to murder my grandmother."

[Assorted other ethical frameworks]: It's wrong to murder your grandmother because it goes against principle #n.  It's wrong to murder your grandmother because a fair poll of your community members would be against it.  It's wrong to murder your grandmother because the law says it is.

I think I have it right.

Note that for vehicle autonomy, the very same situation can and will come up.  "Using a rational algorithm, trained on a large amount of data, the best action with respect to consequences for the well-being of the driver is to accelerate at maximum throttle into traffic, evading the cross traffic, to prevent collision with the out-of-control truck about to squash this car."

[Assorted other ethical frameworks]: It's wrong to accelerate into traffic because it endangers other drivers.  It's wrong to accelerate into traffic because the law says so.

comment by TAG · 2021-02-22T00:17:58.360Z

Oh sure. I meant: okay, utilitarianism has a small advantage over other frameworks. Namely, that it's actually correct.

That has never been shown.

Saying you care about something like "human rights" and acting according to some list of "principles" doesn't produce the optimal outcome for the thing you said you care about.

That's question-begging. You have to define "optimal outcome" as greatest utility to come to that conclusion. If you define optimal outcome as "greatest utility without violating rights", then it turns out utilitarianism isn't correct.

The advantage of other ethical frameworks is simply that you might work out the math and figure out that going on a killing spree of, say, FDA members maximizes utility. But you might be wrong.

If you can't calculate utility, then you aren't doing utilitarianism.

Like every other defender of utilitarianism, you have switched to defending rule consequentialism.

comment by Gerald Monroe (gerald-monroe) · 2021-02-22T23:09:23.787Z

Consequentialism is a superset of utilitarianism: "only the consequences matter" vs. "we must seek good consequences for the greatest number".

In practice they are identical for actors with good intentions. Under both ethical frameworks, the most despicable action is allowed, and is the right thing to do, IF, based on the data, it will result in the best predicted outcome.

I have built in two assumptions: we don't know ahead of time the consequences of an action, merely what we predict they are; and some consequences are so indirect they can't be modeled, so we are forced to ignore them.

By DEFINITION, though, in a real universe with limited knowledge and cognition, you cannot take a better action than the one predicted to have the "best" outcomes.
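
As a minimal sketch of that decision rule (the action set, predictor, and utility function are hypothetical stand-ins, and the sketch assumes the unmodelable consequences are simply omitted):

```python
# Choose the action whose *predicted* expected utility is highest.
# `predict_outcomes(action)` yields (probability, outcome) pairs over
# the consequences we can model; everything else is ignored, per the
# two assumptions above.

def best_action(actions, predict_outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(outcome)
                   for p, outcome in predict_outcomes(action))
    return max(actions, key=expected_utility)
```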

Let me try an example:

Rational consequentialism: "I have reviewed a large amount of the data and, using a rational algorithm, determined that the best action with respect to consequences for my well-being is to murder my grandmother."

Rational utilitarianism: "I have reviewed a large amount of the data and, using a rational algorithm, determined that the best action with respect to good consequences for the majority of my fellow humans is to murder my grandmother."

[Assorted other ethical frameworks]: It's wrong to murder your grandmother because it goes against principle #n.  It's wrong to murder your grandmother because a fair poll of your community members would be against it.  It's wrong to murder your grandmother because the law says it is.

I think I have it right.

Note that for vehicle autonomy, the very same situation can and will come up.  "Using a rational algorithm, trained on a large amount of data, the best action with respect to consequences for the well-being of the driver is to accelerate at maximum throttle into traffic, evading the cross traffic, to prevent collision with the out-of-control truck about to squash this car."

[Assorted other ethical frameworks]: It's wrong to accelerate into traffic because it endangers other drivers.  It's wrong to accelerate into traffic because the law says so.

You can sort of see how I feel on this. While I also feel a 'shudder' at the thought of murdering someone's grandmother, ultimately, if you actually want to do the greatest good for the greatest number - if your goal is to actually achieve whatever your principles are, rather than merely give the appearance of doing so - it seems pretty clear which algorithm you have to use.

comment by TAG · 2021-02-21T22:36:33.800Z

You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more

Is that objective worth, or a feeling you have?