An argument that consequentialism is incomplete
post by cousin_it · 2024-10-07T09:45:12.754Z · LW · GW · 27 comments
I think consequentialism describes only a subset of my wishes. For example, maximizing money is well modeled by it. But when I'm playing with something, it's mostly about the process, not the end result. Or when I want to respect the wishes of other people, I don't really know what end result I'm aiming for, but I can say what I'm willing or unwilling to do.
If I try to shoehorn everything into consequentialism, then I end up looking for "consequentialist permission" to do stuff. Like climbing a mountain: consequentialism says "I can put you on top of the mountain! Oh, that's not what you want? Then I can give you the feeling of having climbed it! You don't want that either? Then this is tricky..." This seems like a lot of work, just to do something I already want to do. There are many reasons to do things - not everything has to be justified by consequences.
There are of course objections. Objection one is that non-consequentialist wishes can make you go in circles, like that Greg Egan character who spent thousands of hours carving table legs, making himself forget the last time so he could enjoy the next. But when pushed to such extremes, a consequentialist goal like maximizing happiness can also lead to weird results (vats of happiness goo...) And if we don't push quite so hard, then I can imagine utopia containing both consequentialist and non-consequentialist stuff, doing things for their own sake and such. So there's no difference here.
Objection two is that our wishes come from evolution, which wants us to actually achieve things, not go in circles. But our wishes aren't all perfectly aligned with evolution's wish (procreate more). They are a bunch of heuristics that evolution came up with, and a bunch of culturally determined stuff on top of that. So there's no difference here either - both our consequentialist and non-consequentialist wishes come from an equally messy process, so they're equally legitimate.
27 comments
Comments sorted by top scores.
comment by cubefox · 2024-10-07T15:03:30.070Z · LW(p) · GW(p)
We often imagine a "consequence" as a state of the world at a particular time. But we could also include processes that stretch out in time under the label "consequence". More generally, we could allow the truth of any proposition as a potential consequence. This wouldn't be restricted to a state, and not even to a single process.
I think this is intuitive. Generally, when we want something, we do wish for something to be true. E.g. I want to climb a mountain: I want it to be true that I climb a mountain.
Replies from: cousin_it, cubefox
↑ comment by cousin_it · 2024-10-07T16:02:49.574Z · LW(p) · GW(p)
Yeah, you can say something like "I want the world to be such that I follow deontology" and then consequentialism includes deontology. Or you could say "it's right to follow consequentialism" and then deontology includes consequentialism. Understood this way, the systems become vacuous and don't mean anything at all. When people say "I'm a consequentialist", they usually mean something more: that their wishes are naturally expressed in terms of consequences. That's what my post is arguing against. I think some wishes are naturally consequentialist, but there are other equally valid wishes that aren't, and expressing all wishes in terms of consequences isn't especially useful.
Replies from: cubefox, ben-livengood
↑ comment by cubefox · 2024-10-07T16:28:48.156Z · LW(p) · GW(p)
This reminds me of the puzzle: why is death bad? After all, when you are dead, you won't be around to suffer from it. Or why worry about not being alive in the future when you weren't alive before birth either? Simple response: We just don't want to be dead in the future for evolutionary reasons. Organisms who hate death had higher rates of reproduction. What matters for us is not a fact about the consequence of dying, but what we happen to want or not want. (Related: this [LW(p) · GW(p)], but also this [EA · GW].)
↑ comment by Ben Livengood (ben-livengood) · 2024-10-07T22:08:43.620Z · LW(p) · GW(p)
I think consequentialism is the robust framework for achieving goals and I think my top goal is the flourishing of (most, the ones compatible with me) human values.
That uses consequentialism as the ultimate lever to move the world but refers to consequences that are (almost) entirely the results of our biology-driven thinking and desiring and existing, at least for now.
↑ comment by cubefox · 2024-10-09T03:56:21.925Z · LW(p) · GW(p)
On reflection, I would rewrite this a bit: What we care about is things being true. Potential facts. So we care about the truth of propositions. And both actions ("I do X.") and consequences of those actions ("Y happens.") can be expressed as propositions. But an action is not itself a consequence of an action. It's directly caused by a decision. So consequentialism is "wrong" insofar as it doesn't account for the possibility that one can care about actions for themselves, not just for their consequences.
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-09T16:44:52.671Z · LW(p) · GW(p)
Yeah. I think consequentialism is a great framing that has done a lot of good in EA, where the desired state of the world is easy to describe (remove X amount of disease and such). And this created a bit of a blindspot, where people started thinking that goals not natively formulated in terms of end states ("play with this toy", "respect this person's wishes" and such) should be reformulated in terms of end states anyway, in more complex ways. To be honest I still go back and forth on whether that works - my post was a bit polemical. But it feels like there's something to the idea of keeping some goals in our "internal language", not rewriting them into the language of consequences.
comment by Vladimir_Nesov · 2024-10-07T14:25:58.172Z · LW(p) · GW(p)
For this argument, consequentialism is like the kinetic theory of gases. The point is not that it's wrong and doesn't work (where it should), but that it's not a relevant tool for many purposes.
I started giving up on consequentialism when thinking about concepts of alignment like corrigibility and then membranes (respect for autonomy). They could in principle be framed as particular preferences, but that doesn't appear to be a natural way of thinking about them, of formulating them more clearly. Even in decision theory, with the aim of getting certain outcomes to pass, my current preferred ontology of simulation-structure of things [LW(p) · GW(p)] points more towards convincing other computations to move the world in certain ways than towards anticipating their behavior before they decide what it should be themselves. It's still a sort of "consequentialism", but the property of preferences being unchanging is not a centerpiece, and the updateless manipulation of everything else is more of a technical error (like two-boxing in ASP) than a methodology.
In human thinking, issues with consequentialism seem to be about losing sight of chasing the void [LW · GW]. Reflectively endorsed hedonistic goals (in a broad sense, which could include enjoyment of achievement) are a bit of a dead end, denying the process of looking for different kinds of aims, sometimes amounting to a cynical reveling in knowing the secrets of human nature.
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-07T16:26:40.295Z · LW(p) · GW(p)
Yeah, I've been thinking along similar lines. Consequentialism stumbles on the richness of other creatures, and ourselves. Stumbles in the sense that many of our wishes are natively expressed in our internal "creature language", not the language of consequences in the world.
comment by Said Achmiz (SaidAchmiz) · 2024-10-08T01:07:02.529Z · LW(p) · GW(p)
Doesn’t rule consequentialism (as opposed to act consequentialism) solve all of these problems (and also all[1] other problems that people sometimes bring up as alleged “arguments against consequentialism”)?
Approximately all. ↩︎
comment by Steven Byrnes (steve2152) · 2024-10-08T00:50:29.256Z · LW(p) · GW(p)
This post strikes me as saying something extremely obvious and uncontroversial, like “I care about what happens in the future, but I also care about other things, e.g. not getting tortured right now”. OK, yeah duh, was anyone disputing that??
non-consequentialist wishes can make you go in circles
I feel like you’re responding to an objection that doesn’t make sense in the first place for more basic reasons. Why is “going around in circles” bad? Well, it’s bad by consequentialist lights—if your preferences exclusively involve the state of the world in the distant future, then going around in circles is bad according to your preferences. But that’s begging the question. If you care about other things too, then there isn’t necessarily any problem with “going around in circles”. See my silly “restaurant customer” example here [LW · GW].
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-08T07:21:57.560Z · LW(p) · GW(p)
This post strikes me as saying something extremely obvious and uncontroversial, like “I care about what happens in the future, but I also care about other things, e.g. not getting tortured right now”. OK, yeah duh, was anyone disputing that??
I'm thinking about cases where you want to do something, and it's a simple action, but the consequences are complex and you don't explicitly analyze them - you just want to do the thing. In such cases I argue that reducing the action to its (more complex) consequences feels like shoehorning.
For example: maybe you want to climb a mountain because that's the way your heuristics play out, which came from evolution. So we can "back-chain" the desire to genetic fitness; or we can back-chain to some worldly consequences, like having good stories to tell at parties as another commenter said; or we can back-chain those to fitness as well, and so on. It's arbitrary. The only "bedrock" is that when you want to climb the mountain, you're not analyzing those consequences. The mountain calls you, it doesn't need to be any more complex than that. So why should we say it's about consequences? We could just say it's about the action.
And once we allow ourselves to take actions that are just about the action, calling ourselves "consequentialists" seems somewhere between wrong and vacuous. Which is the point I was making in the post.
comment by MinusGix · 2024-10-08T06:30:06.537Z · LW(p) · GW(p)
I think there are two parts to the argument here:
- Issues of expressing our values in a consequentialist form
- Whether or not consequentialism is the ideal method for humans
The first I consider not a major problem. Mountain climbing is not what you can put into the slot to maximize, but you do put happiness/interest/variety/realness/etc. into that slot. This then falls back into questions of "what are our values". Consequentialism provides an easy answer here: mountain climbing is preferable along important axes to sitting inside today. This isn't always entirely clear to us, since we don't always think natively in terms of consequentialism, but I disagree with:
There are many reasons to do things - not everything has to be justified by consequences.
We just don't usually think in terms of consequences, we think in terms of the emotional feeling of "going mountain climbing would be fun". This is a heuristic, but is ultimately about consequences: that we would enjoy the outcome of mountain climbing better than the alternatives immediately available to our thoughts.
This segues into the second part. Is consequentialism what we should be considering? There have been posts about this before, about whether our values are actually best represented in the consequentialist framework.
For mountain climbing, despite the heuristic of "I feel like mountain climbing today", if I learned that I would actually enjoy going running for an hour then heading back home more, then I would do that instead. When I'm playing with some project, part of that is driven by in-the-moment desires, but ultimately it comes from a sense that this would be an enjoyable route. This is part of why I view the consequentialist lens as a natural extension of most if not all of our heuristics.
An agent that really wanted to go in circles wouldn't necessarily have to stop, but we humans do care about that.
There's certainly a possible better language/formalization to talk about agents that are mixes of consequentialist parts and non-consequentialist parts, which would be useful for describing humans, but I also am skeptical about your arguments for non-consequentialist elements of human desires.
↑ comment by cousin_it · 2024-10-08T18:26:35.971Z · LW(p) · GW(p)
Here's maybe another point of view on this: consequentialism fundamentally talks about receiving stuff from the universe. An hour climbing a mountain, an hour swimming in the sea, or hey, an hour in the joy experience machine. The endpoint of consequentialism is a sort of amoeba that doesn't really need a body to overcome obstacles, or a brain to solve problems; all it needs to do is receive and enjoy. To the extent I want life to be also about doing something or being someone, that might be a more natural fit for alternatives to consequentialism - deontology and virtue ethics.
Replies from: MinusGix
↑ comment by MinusGix · 2024-10-09T21:49:26.676Z · LW(p) · GW(p)
This reply is perhaps a bit too long, oops.
Having a body that does things is part of your values and is easily described in them. I don't see deontology or virtue ethics as giving any more fundamentally adequate solution to this (beyond the trivial 'define a deontological rule about ...', or 'it is virtuous to do interesting things yourself', but why not just do that with consequentialism?).
My attempt at interpreting what you mean is that you're drawing a distinction between morality about world-states vs. morality about process, internal details, experiencing it, 'yourself'. To give them names, "global"-values (you just want them Done) & "indexical"/"local"-values (preferences about your experiences, what you do, etc.) Global would be reducing suffering, avoiding heat death and whatnot. Local would be that you want to learn physics from the ground up and try to figure out XYZ interesting problem as a challenge by yourself, that you would like to write a book rather than having an AI do it for you, and so on.
I would say that, yes, for Global you should/would have an amorphous blob that doesn't necessarily care about the process. That's your (possibly non-sentient) AGI designing a utopia while you run around doing interesting Local things. Yet I don't see why you think only Global is naturally described in consequentialism.
I intrinsically value having solved hard problems—or rather, I value feeling like I've solved hard problems, which is part of overall self-respect, and I also value realness to varying degrees. That I've actually done the thing, rather than taken a cocktail of exotic chemicals. We could frame this in a deontological & virtue ethics sense: I have a rule about realness, I want my experiences to be real. / I find it virtuous to solve hard problems, even if in a post-singularity world.
But do I really have a rule about realness? Uh, sort-of? I'd be fine playing a simulation where I forget about the AGI world and am in some fake sci-fi game world and solve hard problems. In reality, my value has a lot more edge-cases that will be explored than many deontological rules prefer. My real value isn't really a rule, it is just sometimes easy to describe it that way. Similar to how "do not lie" or "do not kill" is usually not a true rule.
Like, we could describe my actual value here as a rule, but that seems even more alien to the human mind. My actual value for realness is some complicated function of many aspects of my life, preferences, current mood to some degree, second-order preferences, and so on. Describing that as a rule is extremely reductive.
And 'realness' is not adequately described as a complete virtue either. I don't always prefer realness: if playing a first-person shooter game, I prefer that my enemies are not experiencing realistic levels of pain! So there are intricate trade-offs here as I continue to examine my own values.
Another aspect I'm objecting to mentally when I try to apply those stances is that there are two ways of interpreting deontology & virtue ethics that I think are common on LW. You can treat them as actual philosophical alternatives to consequentialism [LW · GW], like following the rule "do not lie". Or you can treat them as essentially fancy words for deontology=>"strong prior for this rule being generally correct and also a good coordination point" and virtue ethics=>"acting according to a good Virtue consistently as a coordination scheme/culture modification scheme and/or because you also think that Virtue is itself a Good".
Like, there's a difference between talking about something using the language commonly associated with deontology and actually practicing deontology. I think conflating the two is unfortunate.
The overarching argument here is that consequentialism properly captures a human's values, and that you can use the basic language of "I keep my word" (deontology flavored) or "I enjoy solving hard problems because they are good to solve" (virtue ethics flavored) without actually operating within those moral theories. You would have the ability to unfold these into the consequentialist statements of whatever form you prefer.
In your reply to cubefox, "respect this person's wishes" is not a deontological rule. Well, it could be, but I expect your actual values don't fulfill that. Just because your native internal language suggestively calls it that, doesn't mean you should shoehorn it into the category of rule!
"play with this toy" still strikes me as natively a heuristic/approximation to the goal of "do things I enjoy". The interlinking parts of my brain that decided to bring that forward is good at its job, but also dumb because it doesn't do any higher order thinking. I follow that heuristic only because I expect to enjoy it—the heuristic providing that information. If I had another part of my consideration that pushed me towards considering whether that is a good plan, I might realize that I haven't actually enjoyed playing with a teddy bear in years despite still feeling nostalgia for that. I'm not sure I see the gap between consequentialism and this. I don't have the brain capacity to consider every impulse I get, but I do want to consider agents other than AIXI to be a consequentialist.
I think there's a space in there for a theory of minds, but I expect it would be more mechanistic or descriptive rather than a moral theory. Ala shard theory.
Or, alternatively, even if you don't buy my view that the majority of my heuristics can be cast as approximations of consequentialist propositions, then deontology/virtue ethics are not natural theories either by your descriptions. They miss a lot of complexity even within their usual remit.
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-13T10:57:04.546Z · LW(p) · GW(p)
No problem about the long reply; I think your arguments are good and give me a lot to think about.
My attempt at interpreting what you mean is that you’re drawing a distinction between morality about world-states vs. morality about process, internal details, experiencing it, ‘yourself’.
I just thought of another possible classification: "zeroth-order consequentialist" (care about doing the action but not because of consequences), "first-order consequentialist" (care about consequences), "second-order consequentialist" (care about someone else being able to choose what to do). I guess you're right that all of these can be translated into first-order. But by the same token, everything can be translated to zeroth-order. And the translation from second to first feels about as iffy as the translation from first to zeroth. So this still feels fuzzy to me, I'm not sure what is right.
comment by Dagon · 2024-10-07T18:14:18.232Z · LW(p) · GW(p)
This may be a complaint about legibilism, not specifically consequentialism. Gödel was pretty clear - a sufficiently powerful formal system is either incomplete or inconsistent. Any moral or decision system that demands that everything important about a decision is clear and well-understood is going to have similar problems. Your TRUE reasons for a lot of things are not accessible, so you will look for legible reasons to do what you want, and you will find yourself a rationalizing agent, rather than a rational one.
That said, consequentialism is still a useful framework for evaluating how closely your analytic self matches with your acting self. It's not going to be perfect, but you can choose to get closer, and you can get better at understanding which consequences actually matter to you.
Climbing a mountain has a lot of consequences that you didn't mention, but probably should consider. It connects you to people in new ways. It gives you interesting stories to tell at parties. It's a framework for improving your body in various ways. If you die, it lets you serve as a warning to others. It changes your self-image (honestly, this one may be the most important impact).
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-07T21:26:41.996Z · LW(p) · GW(p)
Maybe. Or maybe the wish itself is about climbing the mountain, just like it says, and the other benefits (which you can unwind all the way back to evolutionary ones) are more like part of the history of the wish.
Replies from: Dagon
↑ comment by Dagon · 2024-10-07T21:49:11.365Z · LW(p) · GW(p)
Quite possibly, but without SOME framework for evaluating wishes, it's hard to know which wishes (even of oneself) to support and which to fight/deprioritize.
Humans (or at least this one) often have desires or ideas that aren't, when considered, actually good ideas. Also, humans (again, at least this one) have conflicting desires, only a subset of which CAN be pursued.
It's not perfect, and it doesn't work when extended too far into the tails (because nothing does), but consequentialism is one of the better options for judging one's desires and picking which to pursue.
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-07T22:30:42.870Z · LW(p) · GW(p)
This is tricky. In the post I mentioned "playing", where you do stuff without caring about any goal, and most play doesn't lead to anything interesting. But it's amazing how many of humanity's advances were made in this non-goal-directed, playing mode. This is mentioned for example in Feynman's book, the bit about the wobbling plate.
comment by AnthonyC · 2024-10-11T01:04:57.820Z · LW(p) · GW(p)
I think this post is asking questions a lot of people ask, and most of the natural responses are already in the comments.
I'd note that when evaluating a choice or action, a consequentialist should ideally consider all of its effects, throughout the entire future light cone (aka there's no such thing as an end result, only a succession of intermediate states that ripples out forever). That's obviously computationally infeasible, but we also aren't so limited that we have to phrase desires with short sentences that can so trivially be shown not to be what we really want.
Beyond that, consequentialism doesn't tell you what you're allowed to want. There's nothing non-consequentialist about a desire being, "No, I want to actually climb the mountain, the old-fashioned way." But it does give you a framework from which to say, "Given that I want to climb the mountain, the course of action where you put me up there or give me a feeling or false memory of having climbed up there doesn't achieve it." Similarly, there's nothing non-consequentialist about wanting to play, or wanting to respect others' desires. You want what you want. Consequentialism evaluates whether a given choice or action gets you more or less of what you want.
comment by Rafael Harth (sil-ver) · 2024-10-08T06:47:34.735Z · LW(p) · GW(p)
I don't see any limit to or problem with consequentialism here, only an overly narrow conception of consequences.
In the mountain example, well, it depends on what you, in fact, want. Some people (like my 12yo past self) actually do want to reach the top of the mountain. Other people, like my current self, want things like take a break from work, get light physical exercise, socialize, look at nature for a while because I think it's psychologically healthy, or get a sense of accomplishment after having gotten up early and hiked all the way up. All of those are consequences, and I don't see what you'd want that isn't a consequence.
Whether consequentialism is impractical for thinking about everyday things is a question I'd want to keep strictly separate from the philosophical component... but I don't see the impracticality in this example, either. When I debated going hiking this summer, I made a consequentialist cost-benefit analysis, however imperfectly.
Replies from: cousin_it
↑ comment by cousin_it · 2024-10-08T07:31:47.047Z · LW(p) · GW(p)
Some people (like my 12yo past self) actually do want to reach the top of the mountain. Other people, like my current self, want things like take a break from work, get light physical exercise, socialize, look at nature for a while because I think it’s psychologically healthy, or get a sense of accomplishment after having gotten up early and hiked all the way up.
There's plenty of consequentialism debate in other threads, but here I'd just like to say that this snippet is a kind of unintentionally sad commentary on growing up. It's probably not even sad to you; but to me it evokes a feeling of "how do we escape from this change, even temporarily".
Replies from: sil-ver
↑ comment by Rafael Harth (sil-ver) · 2024-10-08T08:34:51.543Z · LW(p) · GW(p)
Reading this reply evoked memories for me of thinking along similar lines. Like that it used to be nice and simple with goals being tied to easily understood achievements (reach the top, it doesn't matter how I get there!) and now they're tied to more elusive things--
-- but they are just memories because at some point I made a conceptual shift that got me over it. The process-oriented things don't feel like they're in a qualitatively different category anymore; yeah they're harder to measure, but they're just as real as the straightforward achievements. Nowadays I only worry about how hard they are to achieve.
comment by James Stephen Brown (james-brown) · 2024-10-11T04:48:10.171Z · LW(p) · GW(p)
This seems to be discounting the consequentialist value of short term pleasure seeking. Doing something because you enjoy the process has immediate positive consequences. Doing something because it is enriching for your life has both positive short term and long term consequences.
To discount short term pleasures as hedonism (as some might) is to miss the point of consequentialism (well of Utilitarianism at least), which is to increase well-being (which can be either short or long term). Well-being can only be measured (ultimately) in terms of pleasure and pain.
Though I agree consequentialism is necessarily incomplete as we don't have perfect predictive powers.
comment by ProgramCrafter (programcrafter) · 2024-10-07T20:43:06.999Z · LW(p) · GW(p)
To use a physics analogy, utility often isn't a potential function over the state of affairs; for many people it depends on the path taken.
However, the state of affairs is but a projection; the state of the world also includes mind states, and you might be indifferent between any quantum paths to worlds involving the same mind state (including memory and beliefs) for you. (As a matter of values, I am not indifferent between paths either; rather, I endorse some integrated utility up to an unspecified pinning point in the future.)
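To spell out the potential-function analogy in symbols (a minimal sketch; the symbols $U$, $V$ and $s_t$ are assumed here, not taken from the comment): if utility were a potential $U$ over world-states, the value of any history would depend only on its endpoints, $\Delta U = U(s_{\text{end}}) - U(s_{\text{start}})$, independent of the path taken. A path-dependent valuation instead scores the whole trajectory, $V(s_0, s_1, \dots, s_T)$, and in general no state function $U$ satisfies $V(s_0, \dots, s_T) = U(s_T) - U(s_0)$ for every trajectory - which is the sense in which such preferences aren't a potential over states of affairs.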
comment by deepthoughtlife · 2024-10-07T22:23:52.156Z · LW(p) · GW(p)
Broadly, consequentialism requires us to ignore many of the consequences of choosing consequentialism. And since that is what matters in consequentialism, it is to that exact degree self-refuting. Other ethical systems like Deontology and Virtue Ethics are not self-refuting and thus should be preferred to the degree we can't prove similar fatal weaknesses. (Virtue Ethics is the most flexible system to consider, as you can simply include other systems as virtues! Considering the consequences is virtuous, just not the only virtue! Coming up with broadly applicable rules that you follow even when they aren't what you most prefer is a combination of honor and duty, both virtues.)