What ethical thought experiments can be reversed?

post by Mati_Roy (MathieuRoy) · 2021-03-06T15:12:09.826Z · LW · GW · 2 comments

This is a question post.


Some thought experiments follow this template:

  1. We have a moral intuition
  2. We do some computation to see what this intuition implies
  3. We check how we feel about this implication, and it feels counter-intuitive

Then some people bite the (3) bullet. But bullets sometimes (always?) have a counter-bullet.

You can reverse those thought experiments: take ~(3) as your starting moral intuition, and then derive ~(1) which will be counter-intuitive.

For example, you can start with:

  1. I would care about saving a drowning person even if it came at the cost of ruining my suit
  2. There are a lot of metaphorically drowning people in the world
  3. Therefore I should donate all my money to effective poverty alleviation charities

This is called "shut up and multiply [? · GW]".

But you can also use the reverse:

  1. I don't want to donate all my money to effective poverty alleviation charities
  2. A drowning person would cost more to save because it would ruin my suit
  3. Therefore I shouldn't save a drowning person

This is called "shut up and divide [LW · GW]" (also related: Boredom vs. Scope Insensitivity [LW · GW]).

Step (2) might be eliminating a relevant feature which generates the counter-intuition, or it might be a way to open our eyes to something we were not seeing. And maybe for some thought experiments you find both the assumption and the conclusion intuitive, or both counterintuitive. But that's not the object of this post.

Here I'm just interested in seeing what the reverse of ethical thought experiments looks like. I'll put more examples as answers. I would like to know which other ethical thought experiments have this pattern -- that is, a thought experiment that starts with an intuition and derives a counter-intuition, and which can be reversed to instead derive that the initial assumption was the wrong one.

Update: As I was writing some of them, I realized that some ethical thought experiments are presented as a clash of intuitions (so the "reverse" is part of the original presentation), whereas others seem to try to persuade the reader to bite the bullet on a certain counter-intuition, omitting the reverse thought experiment.

Answers

answer by Mati_Roy · 2021-03-06T15:53:06.804Z · LW(p) · GW(p)

The violinist

Original:

  1. We should save the violinist
  2. Fetuses are like violinists
  3. Therefore we should save fetuses

Reverse:

  1. We don't care about fetuses
  2. Violinists are like fetuses
  3. Therefore we don't care about violinists (metaphorically)
answer by Olomana · 2021-03-07T07:02:27.228Z · LW(p) · GW(p)

I would like to know which other ethical thought experiments have this pattern...

Isn't the answer just "all of them"?  An implication and its contrapositive are equivalent.

If (if X then Y) then (if ~Y then ~X).  Any intuitive dissonance between X and Y is preserved by negating them into ~X and ~Y.
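Stated as a formula, this is just contraposition (a minimal restatement, assuming classical propositional logic):

```latex
% An implication and its contrapositive are equivalent, so any clash of
% intuitions between X and Y survives negating both sides.
\[
  (X \rightarrow Y) \;\Longleftrightarrow\; (\lnot Y \rightarrow \lnot X)
\]
```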

comment by Mati_Roy (MathieuRoy) · 2021-03-07T13:59:13.443Z · LW(p) · GW(p)

Yeah that makes sense

answer by Dagon · 2021-03-06T18:31:44.783Z · LW(p) · GW(p)

Many of these calculations get more consistent if you bite just one fairly large bullet: sub-linear scaling (I generally go with logarithmic) of value.  Saving a marginal person at the cost of ruining a marginal suit is a value comparison, and the value of both people and suits can vary pretty widely based on context.  

The hardest part of this acceptance is that human lives are neither infinite nor incomparable in value.  I also recommend accepting that value is personal and relative (each agent has a different utility function, with different coefficients for the value of categories and individual others), but that may not be fully necessary to resolve the simple examples you've given so far.
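To make "sub-linear (logarithmic) scaling of value" concrete, here is a minimal sketch; the log1p aggregation and the unit value are my own illustrative assumptions, not a claim about anyone's actual utility function:

```python
import math

def aggregate_value(n: float, unit_value: float = 1.0) -> float:
    """Illustrative sub-linear (logarithmic) aggregate value of n similar things."""
    return unit_value * math.log1p(n)

# Marginal value of the first thing vs. the thousandth: far from linear.
first = aggregate_value(1) - aggregate_value(0)
thousandth = aggregate_value(1000) - aggregate_value(999)
print(f"{first:.3f} vs {thousandth:.6f}")  # ~0.693 vs ~0.001
```

One consequence of a scaling like this: the marginal value of a life (or a suit) depends on how many are already at stake, which is one way to read the context-dependence described above.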

answer by Mati_Roy · 2021-03-06T15:50:31.067Z · LW(p) · GW(p)

Infanticide

Original:

  1. We don't care about killing a baby before birth
  2. A baby 1 minute after birth is almost the same as a baby 1 minute before birth
  3. Therefore we don't care about killing a 1-minute-old baby

Reversed:

  1. We care about killing a 1-minute-old baby
  2. A baby 1 minute after birth is almost the same as a baby 1 minute before birth
  3. Therefore we care about killing a baby before birth
comment by DanArmak · 2021-03-06T16:18:23.765Z · LW(p) · GW(p)

Isn't the original argument here just the Sorites "paradox"?

  1. We don't care about killing a single fertilized human cell
  2. A human of any age is almost the same as a human of that age minus one minute
  3. Therefore we don't care about killing a human of any age

This proves too much. No ethical system I'm familiar with holds that because (physical) things change gradually over time, no moral rule can distinguish two things.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2021-03-07T05:31:02.080Z · LW(p) · GW(p)

Ah, I actually had just come up with that one (am now realizing "original" wasn't the right word for this one) -- thanks for bringing up this "paradox"!

answer by Mati_Roy · 2021-03-06T15:42:38.897Z · LW(p) · GW(p)

The Non-identity problem

Original:

  1. We only care about things if they are bad/good for someone
  2. Using a lot of resources isn't bad for people in the future, it just changes who lives in the future
  3. Therefore we don't mind that people in the future are having a bad time because of our consumption

Reversed:

  1. We care that people in the future are having a bad time because of our consumption
  2. Consuming isn't bad for specific people in the future, it just changes who lives in the future
  3. Therefore we don't only care about things if they are bad/good for someone, but also about what kind of lives we bring into existence
answer by Mati_Roy · 2021-03-06T15:34:56.098Z · LW(p) · GW(p)

Dust specks vs torture

I feel like this one was presented as a clash of 2 intuitions, so the "reversed" version is also part of the original presentation.

Original:

  1. We prefer X people experiencing Y pain to 1,000 people experiencing 2*Y pain, AND this preference holds for all real numbers X and Y
  2. This can be chained together multiple times (sketched numerically at the end of this answer)
  3. We prefer 1 person experiencing 50 years of torture to a googolplex people having specks of dust in their eyes

Reversed:

  1. We prefer a googolplex people having specks of dust in their eyes to 1 person experiencing 50 years of torture
  2. There's some threshold of pain above which we care lexically more
  3. We care more about 1 person experiencing slightly more pain than this threshold than about a large number of people experiencing slightly less pain than it

keyword to search: lexical threshold negative hedonistic utilitarianism
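For concreteness, here is a small arithmetic sketch of the chaining step, using the corrected trade-off from the comments below (X people at pain Y vs. 1,000*X people at pain Y/2); the normalization of "50 years of torture" to 1.0 and the 100-step cutoff are my own illustrative assumptions:

```python
# Chain the step "X people at pain Y is preferred to 1,000*X people at pain Y/2".
people = 1
pain = 1.0  # normalize "50 years of torture" to pain 1.0
for _ in range(100):  # 100 halvings brings pain down to ~1e-30, dust-speck territory
    people *= 1000
    pain /= 2

print(f"people ~ 10^{len(str(people)) - 1}, pain ~ {pain:.2e}")
# -> people ~ 10^300, pain ~ 7.89e-31
# 10^300 is still vastly smaller than a googolplex (10^(10^100)), so if each
# step preserves the preference, one person tortured is preferred to 10^300
# dust specks, and (by scope) to a googolplex of them.
```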

comment by Measure · 2021-03-06T17:08:09.274Z · LW(p) · GW(p)

The original 1 seems pretty clearly false here if X >> 1000 for basically any value of Y.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2021-03-07T05:37:18.894Z · LW(p) · GW(p)

Woops, I meant 1,000*X

Replies from: seed
comment by seed · 2021-03-07T09:46:32.544Z · LW(p) · GW(p)

And Y/2 pain, probably? (Or the conclusion doesn't follow.)

Replies from: MathieuRoy, MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2021-03-07T14:00:17.717Z · LW(p) · GW(p)

Ahhh, yep, thanks

comment by Mati_Roy (MathieuRoy) · 2022-06-19T02:25:45.643Z · LW(p) · GW(p)

Oops, right!

answer by Mati_Roy · 2021-03-06T15:21:53.030Z · LW(p) · GW(p)

Experience machine

Original:

  1. We only care about our happiness
  2. A hypothetical happiness machine could bring us the most happiness
  3. Therefore we want to live in the happiness machine

Reversed:

  1. We don't want to live in a happiness machine
  2. A happiness machine only brings us happiness
  3. Therefore we care about other things than happiness
answer by Mati_Roy · 2021-03-06T15:17:00.459Z · LW(p) · GW(p)

Trolley problem / transplant

Original:

  1. We want to take actions to save more people
  2. Survival lotteries save more people just like pulling the lever does
  3. Therefore we support survival lotteries

Reversed:

  1. We don't support survival lotteries
  2. Pulling the lever is an action that changes who dies, just like a survival lottery does
  3. Therefore we don't support pulling the lever

Could do the same with pulling a lever vs pushing a person

answer by Mati_Roy · 2021-03-06T15:12:51.009Z · LW(p) · GW(p)

Utility monster

Original:

  1. We care about increasing happiness
  2. If there was a being that had by far the highest capacity for happiness, they might be the best way to increase happiness even at the cost of everyone else
  3. We care about utility monsters the most (which violates the egalitarian intuition)

Reversed:

  1. We care about each being equally
  2. If there was a being that had by far the highest capacity for happiness, we still wouldn't give them more resources
  3. We don't care about increasing total happiness

2 comments

Comments sorted by top scores.

comment by lsusr · 2021-03-06T23:58:57.945Z · LW(p) · GW(p)

This type of argument is called "proof by contradiction". You start by supposing X is true. Then you do a bunch of logic which assumes X is true. If, at the end, you prove something wrong, then X is false. Proofs by contradiction are frequently used in mathematics, where (compared to morality) it's easy to ensure your logic remains ironclad.

Replies from: MathieuRoy
comment by Mati_Roy (MathieuRoy) · 2021-03-07T05:44:55.430Z · LW(p) · GW(p)

I feel like this is something different; X isn't proven true or false here -- we just prove that if X then Y, and then also if ~Y then ~X
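A minimal formal sketch of that difference (Lean 4, with X and Y as abstract propositions; my own framing, not from the exchange above):

```lean
-- Contraposition: from X → Y we get ¬Y → ¬X, and X itself stays undecided.
example (X Y : Prop) (h : X → Y) : ¬Y → ¬X :=
  fun hny hx => hny (h hx)

-- The pattern described above as "proof by contradiction": deriving something
-- false from X does settle the matter, giving ¬X.
example (X : Prop) (h : X → False) : ¬X :=
  h
```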