Any Utilitarianism Makes Sense As Policy
post by George3d6 · 2022-08-30T09:55:00.346Z · 6 comments
This is a link post for https://www.epistem.ink/p/any-utilitarianism-makes-sense-as
Contents
i - Negative Utilitarianism
ii - Applied To Other Systems
iii - Reconsideration
iv - Practicality And God
There’s been a lot of discussion around utilitarianism as of late in the blogoblob I hang out in. So fine, I’ll bite:
Most issues with any system of ethics arise when you try to make it absolute. Be it the repugnant conclusion, theodicy, solipsism, a preference toward non-existence, or any other case of driving axioms to infinity until you conclude seagulls should pluck out your eyeballs.
The problem is that all of these issues arise under at least 3 of the following assumptions:
- The system of ethics is to be implemented by an omnipotent being
- The system of ethics is to be agreed upon by all people and acted in accordance with at all times
- The actors implementing the system of ethics have perfect knowledge of the future
- The system of ethics is not to be revisited, modified, or replaced in the future
i - Negative Utilitarianism
Take negative utilitarianism as an example, which can be summarized as something like:
Suffering is the only thing we ought to concern ourselves with; our goal is to minimize suffering for all conscious and potentially conscious beings.
If one applies assumptions (4), (3) and (1) or (2), this leads to the main counter-argument to negative utilitarianism, which is:
Then, our ultimate goal ought to be eliminating all conscious life from the universe in such a way as to cause no suffering in the process.
This sounds horrible, but that’s fine because we are NOT talking about how to implement morality for an omnipotent, omniscient, and unchanging god. We are talking about how to implement morality for… a governing institution of some sort.
So what would a negative utilitarian government’s constitution sound like, starting from the same definition of the ideology?
We, the legislators, hereby agree that our main goal will be reducing suffering for every conscious being we know of, under all currently and commonly agreed-upon definitions: minimizing the infliction of unwanted pain, trying to fulfill all fundamental needs (hunger, shelter, friendship), and fulfilling any needs later identified as fundamental.
In order to ensure this charter gets enacted, we shall prioritize our own constituents, as they are the ones giving us the power to act, but shall direct a portion of our effort towards those beings we think most underprivileged even if they are not under our governance.
We don’t care about preference fulfillment, so all forms of business and recreation will go unregulated by us insofar as they ensure no runoff suffering is caused by their practice. We will collect as much money from these business endeavors as we can, in return providing the stability and infrastructure of our state, and maintaining a fluctuating equilibrium where these businesses are happy to keep operating and paying taxes, since those funds are our main tools for accomplishing our first-order negative-utilitarian goals.
In other words, a negative utilitarian government is one that leaves citizens alone and collects taxes, intervening only to support those most in need and to sustain the shared goods required for communities to form and thrive. Insofar as it ventures to extend its reach, its prime goal will be high-utility actions such as finding ways to reduce or end factory farming, helping non-citizens most in need (e.g. those going through starvation or war), and creating infrastructure that will encourage more businesses to come under it and give it money.
Wait… is this, like, just a very high-functioning implementation of most liberal governments that already exist? Yes.
Like, the kind of implementation most people would want no matter where they align themselves politically? Pretty much. The difference between, say, a US progressive and a libertarian might be that they have slightly different definitions of “suffering”, different levels of trust in government efficiency or corporate ethics, and different conceptions of the tax regime businesses could operate under. But I think either would agree that a charter like the above is, in principle, one they’d endorse.
ii - Applied To Other Systems
Christian morality is really bad: if God were to exist and enforce it, it would be nonsensical and cruel.
Happily enough, to the best of our knowledge, God doesn’t exist and doesn’t enforce it. Instead, it gets implemented by institutions like Churches, which, historically speaking, were actually pretty nifty… funding art, preserving science and knowledge, helping the poor, helping mediate conflicts, and fostering a sense of kinship and community.
Kantian morality is really bad if one tries to follow it directly, but it’s really good if codified in a system of law… say, the common law: established hundreds of years before Kant, still considered a fairly solid system of law, and having as its core tenet trying to reach an approximation of Kantian ethics without assuming what a “moral” action actually is, instead leaving that up to a random sample of the population (the jury), and striving to simply apply it as consistently as possible in all cases (i.e. respecting the categorical imperative).
Preference utilitarianism is horrible if you assume you’ve got God-like powers and a perfect understanding of the brain, then decide to tile the universe with neuronal tissue bathed in endorphins. But, if applied at, say, the policy level of a large corporation, like Amazon… the results are, eh, on the whole pretty ok? (I know it’s popular to hate on Amazon, so feel free to pick any other e-commerce or delivery business that you think gets this better.)
iii - Reconsideration
Also, because systems of ethics apply to individuals and institutions, not to… an omnipotent, omniscient god, we can revisit them.
Assume that 50 years from now we’ve basically eradicated all infectious disease, and war, and hunger, and rampant pollution, and factory farming.
We realize our negative-utilitarian government is now focusing on dumb and silly questions like “are parasitic wasps causing more suffering to ants than killing all parasitic wasps with bioengineered fungi would cause”, since all the low-hanging fruit has been picked.
At that point the government can just say:
Guys, this is getting kind of silly, and nobody’s intuition is aligned with what we're doing anymore. Should we switch to a preference utilitarian charter that will guide us towards colonizing space, unlocking the mysteries of reality, and finding new and amazing peak experiences that people can partake in?
Or its citizens can slowly shift their support towards institutions that do that.
In practice this is less efficient and looks more like:
In the last 200 years the Catholic Church has switched to the business of colonization, taking away women’s rights, hiding pedophilia, and funding extremists… so we should probably start donating to the Against Malaria Foundation and voting for secular politicians.
With loads of “bad” being generated while the change was happening, and loads of “bad” still being generated as the dying remnants of the institution struggle to maintain power, becoming ever more destructive and zero-sum.
This is not ideal, but there are no easy solutions, and it’s no reason to stop trying. Things are imperfect; we need to make them as perfect as we can while keeping the wheels churning. That’s engineering 101.
iv - Practicality And God
Going back to the unstated assumptions under which people judge systems of ethics:
- The system of ethics is to be implemented by an omnipotent being
- The system of ethics is to be agreed upon by all people and acted in accordance with at all times
- The actors implementing the system of ethics have perfect knowledge of the future
- The system of ethics is not to be revisited, modified, or replaced in the future
These are all… provably wrong? I mean, depends on your standard, but roughly so, at least.
(1) Can’t be “proven” wrong, but there’s no proof the other way either, and the existence of omnipotence doesn’t fit with anything else we’ve observed thus far. Even if such an agent could or does exist, there’s no reason to think we could program a system of ethics into it.
(2) Again, we can’t “prove” humans can’t all act in accordance with a system of ethics. But I know of no individual who, of their own accord, can always act consistently with their own ethics, even if I narrow my sample to individuals with well-thought-out ethics who are relatively intelligent and powerful. So, given that we can’t solve this problem for n=1, it seems very hard to solve for n=8,000,000,000.
(3) The theoretical version of this is something like “to compute the next state of the universe you need a machine that’s part of the universe which predicts its own future state, which would include the prediction itself; this is a paradox”. The applied version sounds something like “all the compute in the world can’t get even close to perfectly modeling a single bacterium”, and the universe has trillions of those, they represent 1/<basically infinity> of all things to be modeled, and these things interact.
(4) Is just… dumb? How often do you revisit and revise your theories and behavior? How often do we as a society do that? Maybe every couple of hours. So why are we thinking about a system that holds to infinity with no changes?
I still can’t fathom why people like reasoning under these constraints.
My first reaction is that it’s the “most challenging environment” to model something in: if it works under these assumptions, it always works.
But that’s just not true.
These assumptions make things easy.
Thinking about ethics (or any other philosophical issue) in the real world, with all its unknowns and fuzziness and imperfections, is actually much harder.
So then, using those 4 assumptions we are reasoning under constraints that are both false and impractical for the problem we are trying to model.
Indeed, the constraints make us ignore the “real” problems in implementing a system of ethics, all of which are fairly practical.
You don’t need to come up with a model that’s that good; it just needs to be… better than the Catholic Church and most NGOs. That’s a pretty low bar: something like “don’t kill or physically injure people, and use at least 20% of the money for something other than aggrandizing yourself or perpetuating your growth” is probably enough to get you past it.
On the other hand, getting more people to donate to you, establishing a new country following your more-ethical constitution, or solving the 1001 other coordination problems at hand, those are the hard problems that we should think about and write about.
Nor do I think this is a problem related specifically to effective altruism or “nerds”. Most philosophers from Socrates onwards seem to fall prey to a version of these assumptions, and it poisons their whole thinking, a poison which spills over onto the philosophical tradition as a whole.
Nor is it specific to Abrahamic religions; this sort of nonsense seems even more present in thinking stemming from Buddhist traditions, for example. For all I might want to shit on them, thinkers from the Christian tradition were able to pull their heads out of their asses for a sufficiently long moment to come up with the scientific method.
Maybe it’s just too embarrassing to think about the real world, so we simply gravitate towards a wish-fulfillment universe to do our thinking in, and millennia of accumulated memes have made the above universe a “respectable” candidate to construct theories in. Or maybe it’s just an evolved quirk of how the minute and irrelevant part of our brain that handles symbolic thinking operates, a quirk which, for some reason, was really useful at hunting mammoths or whatever, so it stuck with us.
I say it’s embarrassing to think about the real world because whenever I think about it I get ashamed of how little I can do. It’s things like “get this piece of code to run a few times faster” or “reduce the tax burden by 5%” or “convince Mary she should really pursue that research project and not become an analyst for a political committee because of slightly higher pay” or “find a slightly better alternative to our significance test for these particular datasets” or “increase the AUC of this model by 0.04 or more”.
If you design and popularize a mosquito net that’s 10% cheaper to produce or 5% more efficient for the same price, you’ve already saved millions of lives. Most of us are not smart enough to even begin thinking about something like that, and for those that are… well, it sounds so inglorious, so barbaric and insignificant. “IS THIS WHAT I’VE BEEN TRAINING FOR ALL OF THESE YEARS!?” — screams conscious symbolic cognition — “IT CAN’T BE, I NEED TO DESIGN THE AXIOMS ON WHICH GODS ARE TO SPIN THE WEB OF LIFE ITSELF!”
6 comments
comment by deepthoughtlife · 2022-08-30T14:22:35.558Z
I strongly disagree. It would be very easy for a non-omnipotent, unpopular government that has limited knowledge of the future and will be overthrown in twenty years to do a hell of a lot of damage with negative utilitarianism, or any other imperfect utilitarianism. On a smaller scale, even individuals could do it alone.
A negative utilitarian could easily judge that something that had the side effect of making people infertile would cause far less suffering than not doing it, causing immense real world suffering amongst the people who wanted to have kids, and ending civilizations. If they were competent enough, or the problem slightly easier than expected, they could use a disease that did that without obvious symptoms, and end humanity.
Alternatively, a utilitarian that valued the far future too much might continually make the lives of those around them hell for the sake of imaginary effects on said far future. They might even know those effects are incredibly unlikely, and that they are more likely to be wrong than right due to the distance, but it's what the math says, so... they cause a civil war. The government equivalent would be to conquer Africa (success not necessary for the negative effects, of course), or something like that, because your country is obviously better at ruling, and that would make the future brighter. (This could also be something done by a negative utilitarian to alleviate the long-term suffering of Africans.)
Being in a limited situation does not automatically make Utilitarianism safe. (Nor any other general framework.) The specifics are always important.
comment by George3d6 · 2022-08-31T12:56:04.699Z
A negative utilitarian could easily judge that something that had the side effect of making people infertile would cause far less suffering than not doing it, causing immense real world suffering amongst the people who wanted to have kids, and ending civilizations. If they were competent enough, or the problem slightly easier than expected, they could use a disease that did that without obvious symptoms, and end humanity.
But you're thinking of people completely dedicated to an ideology.
That's why I'm saying a "negative utilitarian charter" rather than "a government formed of people autistically following a philosophy"... much like, e.g. the US government has a "liberal democratic" charter, or the USSR had a "communist" charter of sorts.
In practice these things don't come about, because members in the organization disagree, secrets leak, conspiracies are throttled by lack of consensus, politicians get voted out, and engineered solutions are imperfect (and good engineers and scientists are aware of as much).
comment by deepthoughtlife · 2022-08-31T15:51:45.361Z
It doesn't take many people to cause these effects. If we make them 'the way', following them doesn't take an extremist, just someone trying to make the world better, or some maximizer. Both these types are plenty common, and they don't have to be fanatical at all. The maximizer could just be a small band of petty bureaucrats who happen to have power over the area in question. Each one of them just does their role, with the knowledge that it is to prevent overall suffering. These aren't even the kind of bureaucrats we usually dislike! They are also monsters, because the system has terrible (and knowable) side effects.
comment by Shmi (shminux) · 2022-08-30T21:01:51.565Z
Morality works as a set of guidelines, not rules. Any human ethical system is a set of heuristics that emerges from the need to coexist with one's tribe, and it never works when taken as absolute, so there's no point in trying.
comment by Noosphere89 (sharmake-farah) · 2022-08-31T13:53:39.858Z
This has one important implication for the long-term future (thousands of years from now), assuming no collapse has happened.
The best-case scenario for the long-term future will be very weird, lovecraftian, and surprisingly horrifying for morality, as extremely high-end technology like whole brain emulation, genetic engineering, immortality, and nanotechnology makes concepts like personal identity very weird, very fast. This obviously makes a whole lot of moralities look lovecraftian and weird, but no more so than even good futures do. So the fact that morality gets weird and lovecraftian fast is really a symptom of a larger problem: extreme scenarios do seem to apply to the long-term future.
comment by Nathan Helm-Burger (nathan-helm-burger) · 2022-08-30T17:58:43.985Z
Negative Utilitarianism is quite dangerous to my values in a high-power-imbalance scenario (not omnipotence or future-knowledge, just even a billionaire under our current system, or an unusually intelligent scientist with a well-equipped cross-discipline laboratory). Why? Because I positively value life and sapient experience, much more than I anti-value suffering. I have a preference for on-average moderately suffering humans over no humans. A negative utilitarian can take actions to prevent the suffering of future humans and animals by ending those humans' and animals' ability to reproduce, or removing them from existence. Even a horrible death is worth it if it prevents a great deal of suffering of the target's descendants. I don't want a brilliant negative utilitarian unleashing a horrible plague on the world designed to render as many humans as possible sterile, and not caring if it also killed them. I'm only a mid-tier genetic engineer with a few years of practice making custom viruses for modifying the brains of mammals, and I could come up with a handful of straightforward candidate plagues that I could assemble in a few months' time in a well-equipped lab without tripping any of the current paltry international bio-defense alerts. We are really vulnerable to technological leaps.
Simply a narrow AI made with current technology, with a power for designing custom viruses similar to what DeepFold has at predicting protein folding, would be very dangerous in bad hands. Such a tool would make my simplistic ideas for forcibly sterilizing humanity and all animals much more effective, more reliable, and faster and easier to produce.
One mad scientist with a grad-student-level background in genetic engineering, a few weeks' access to a couple million dollars of lab equipment (as most grad students studying anything involving genetic engineering would be expected to have), and a narrow AI assistant... that's nowhere near the power of a large state or corporate actor, much less an omnipotent being. I don't want veto power over the future of humanity to fall into any single person's hands.