Utilitarianism is the only option

post by aelwood · 2022-12-03T17:14:19.532Z · LW · GW · 7 comments

comment by Dagon · 2022-12-03T18:30:21.454Z · LW(p) · GW(p)

This doesn't address any of the strong objections to Utilitarianism (around whether and how individual values get aggregated).

No conscious being can deny that suffering is real.

I deny that "real" is a well-defined word in this sentence.  I experience suffering (and joy and other psychological states), but I can't measure them very well, and I can't compare those experiences to what (if anything) any other cluster of spacetime experiences.  I'm willing to stipulate that such things are, in fact, common.  But I don't stipulate that they aggregate in any useful way, nor that they're important to anything except themselves.

comment by quanticle · 2022-12-03T22:00:04.972Z · LW(p) · GW(p)

Any coherent ethical theory must aim to attain a world-state with less suffering.

I think that's a misunderstanding of the word "coherent". A coherent ethical theory is one that aims to attain a world-state that is logically consistent with itself. Maybe that means less suffering. Maybe that means more suffering. Maybe that means extreme suffering for some and very little suffering for others. All of these world-states are logically consistent, and thus it's possible to create coherent ethical theories that justify any of them.

comment by Richard_Kennaway · 2022-12-06T22:03:30.623Z · LW(p) · GW(p)

Any modification to the thought experiment to get around these problems would require a fantastical level of secrecy, which wouldn’t play out in reality.

That sounds like a hope which reality may or may not be benign enough to fulfil. There are philosophers who argue that the surgeon should kill the one to harvest their organs to save five, and who do not hastily back away from the conclusion, but say yes, yes he should, and not only keep the act secret, but keep secret the doctrine of true consequentialism, which is not for the public. See "Secrecy in Consequentialism: A Defence of Esoteric Morality" by Katarzyna de Lazari-Radek and Peter Singer. The surgeon well-placed to save five lives by cutting up one in secret is morally obliged to do so. The best action is always compulsory, and this is the best action.

They characterise the "esoteric morality" of their title by the following tenets:

• There are acts which are right only if no one – or virtually no one – will get to know about them.

• Some people know better, or can learn better, than others what it is right to do in certain circumstances.

• There are at least two different sets of instruction, or moral codes, suitable for the different categories of people.

• Though the consequentialist believes that acts are right only if they have consequences at least as good as anything else the agent could have done, the consequentialist may need to discourage others from embracing consequentialism.

• Paradoxically, it may be the case that philosophers who support esoteric morality should not do so openly, because as Sidgwick said: ‘it seems expedient that the doctrine that esoteric morality is expedient should itself be kept esoteric’

They go on to say that despite various philosophers arguing against it, "Esoteric morality is a necessary part of a consequentialist theory, and all of the points above can be defended." They proceed to defend them.

The reference to Sidgwick is to his book, "The Methods of Ethics", whose thesis is summarised (and agreed with) by the authors:

Sidgwick famously divided society into ‘enlightened utilitarians’ who may be able to live by ‘refined and complicated’ rules that admit exceptions, and the rest of the community to whom such sophisticated rules ‘would be dangerous.’

I've quoted all this just to point out that there are consequentialists, notably Peter Singer, inspiration for EA, who take consequentialism to be absolutely axiomatic and firmly bite every bullet. Although not to the extent of not publishing their esoteric morality.

Eliezer has written, "Go three-quarters of the way from deontology to utilitarianism and then stop. You are now in the right place. Stay there at least until you have become a god." de Lazari-Radek and Singer say: We are sufficiently enlightened to be able to be total utilitarians, and therefore we must. Deontology is a second-best that is all that the less mentally able can handle.

comment by Slider · 2022-12-03T19:30:53.771Z · LW(p) · GW(p)

Knowing your own suffering is on pretty solid footing. But when taking into account how we impact others, we have no direct perception. Essentially, I deploy a theory of mind: that blob over there probably corresponds to the same kind of activity that I am. But this comes nowhere near the bar of self-evidence. Openness or closedness of individualism has no import here. Even if I am that guy over there, if I don't know whether he is a masochist, I don't know whether causing him to experience pain is a good action or not.

The other reason we have to be cautious when following valence utilitarianism is that there’s no way to measure conscious experience. You know it when you have it, but that’s it.

Does this take imply that if you are employing numbers in your application of utilitarianism, you are misapplying it? How can we check that a utility monster does not arise if we are not allowed to compare experiences?
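The utility-monster worry can be made concrete with a toy aggregation. This is a minimal sketch with purely illustrative numbers, and it presupposes exactly the cross-subject comparability the comment calls into question:

```python
def aggregate(utilities):
    # Classical utilitarian aggregation: a simple sum over subjects.
    # Writing these numbers down at all assumes experiences are
    # measurable and commensurable between subjects.
    return sum(utilities)

# One "monster" whose welfare dwarfs everyone else's combined.
ordinary_world = [1] * 1000                  # 1000 people at modest welfare
monster_world = [1_000_000] + [0] * 1000     # one monster, everyone else at zero

# The sum declares the monster-world "better" -- but only because we
# already granted that a 1 and a 1,000,000 live on the same scale.
assert aggregate(monster_world) > aggregate(ordinary_world)
```

The point of the sketch is that you cannot even state the utility-monster objection, let alone rule it out, without first taking a stance on interpersonal comparison.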

The repugnancy avoidance has an issue of representation levels. If you have a repugnant analysis, agreeing with its assumptions while disagreeing with its conclusions is inconsistent. That is, once you write down a number to represent suffering (a move I know the post systematically distanced itself from), the symbol manipulations do not ask permission to pass an "intuition filter". Sure, after reflecting a long time on a particular formula, you can say it is incongruent and "not the true formula". But to get the analysis started at all, you have to take some stance (even if it uses some unusual and fancy maths or whatever). And the soundness of that particular stance is not saved by the fact that we could have chosen another. "If what I said is wrong, then I didn't mean it" is a way to be "always right", but it forfeits meaning anything. If you just use your intuitive feelings on whether a repugnant conclusion should be accepted, and do not refer to the analysis itself at all, then the analysis is not a gear in your decision procedure.

The claim that open individualism bypasses the population-size problem I could not really follow. We still face the problem of generating different experiential viewpoints. Would it not still follow that a world like Game of Thrones, with lots of characters in constantly struggling conditions, is better than a book where a single protagonist is the only character? Sure, both being "books" gives a ground to compare them on, but if comparability preserves addition, it would seem that more points of view lead to more experience. That is, take some world state with some humans and an area of flat space, and contrast it with a state where, instead of being flat, there is some kind of experiencer there (say, a human). Even if we disregard borders between experiencers, this seems a strict improvement in experience. Is it better to be one unified brain, or an equal number of neurons split into separate "mini-experiencers"? Do persons with multiple-personality conditions contribute more experiential weight to the world? Do unconscious persons contribute less? Does each ant contribute as much as a human? Do artists count for more? The repugnant steps can still be taken.

comment by ZT5 · 2022-12-10T00:17:24.474Z · LW(p) · GW(p)

Then, given logarithmic scales of valence and open or empty individualism, it’s always going to be easier to achieve a high utility world-state with a few beings enjoying spectacular experiences, rather than filling the universe with miserable people. This is especially true when you play out a realistic scenario, and take the resources available into account.

In the least convenient possible world [LW · GW] where this isn't the case, do you accept the repugnant conclusion?
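The arithmetic behind the quoted log-scale claim, and the "least convenient world" where it fails, can be sketched with purely illustrative numbers (nothing here comes from the post itself):

```python
def total_welfare(population, reported_intensity, log_scale=False):
    """Aggregate welfare of `population` identical beings.

    On the logarithmic reading, a reported intensity of n stands for an
    experience 10**n times as intense as intensity 0; on the linear
    reading, the report is the intensity itself.
    """
    per_capita = 10 ** reported_intensity if log_scale else reported_intensity
    return population * per_capita

vast_mediocre = (10**9, 1)    # a billion barely-positive lives
few_spectacular = (100, 9)    # a hundred peak experiences

# On the logarithmic scale, the few spectacular lives win outright...
assert total_welfare(*few_spectacular, log_scale=True) > \
       total_welfare(*vast_mediocre, log_scale=True)

# ...but in the "least convenient world" where valence is linear, sheer
# population wins, and the repugnant conclusion bites.
assert total_welfare(*vast_mediocre) > total_welfare(*few_spectacular)
```

So the log-scale escape route is only as good as the empirical claim that valence really does scale that way, which is exactly what the comment is probing.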

I think that preference utilitarianism dissolves a lot of the problems with utilitarianism, including the "repugnant conclusion". It turns out we care about other things than merely a really big Happiness Number. Our values are a lot more complicated than that. Even though, ultimately, all existence is about some sort of conscious experience. Otherwise, what would be the point?

P. S. Thanks for the post and the links! I think this is an important topic to address.

comment by YimbyGeorge (mardukofbabylon) · 2022-12-05T21:48:37.907Z · LW(p) · GW(p)

There is a common idea that there is only one soul experiencing reality. Just as Feynman mentioned that an electron travels back in time as a positron and then starts again as another electron, a single soul may be experiencing all living things. If you are that soul, you may compute your utility according to your taste.

comment by TAG · 2022-12-04T17:56:48.264Z · LW(p) · GW(p)

Utilitarianism is not based on the sole axiom that suffering exists. It also requires suffering to be measurable, to be commensurable between subjects, and so on.

For example, take the rogue surgeon thought experiment. If you only care about maximising the number of living people, it could make sense for surgeons to go around kidnapping healthy people and butchering them for their organs, which can then be transplanted into terminal patients, ultimately saving more people than are killed. However, this doesn't take into account all the collateral effects caused by the fear and insecurity that this kind of practice would unleash on the general population, not to mention the violent deaths of the victims.

A utilitarian society wouldn't have rogue surgeons, but would have organ harvesting. The maximum utility is gained by harvesting organs in some organised, predictable way, removing the fear and uncertainty.