The Categorical Imperative Obscures
post by Gordon Seidoh Worley (gworley) · 2022-12-06T17:48:01.591Z · LW · GW · 17 comments
I've been thinking about the categorical imperative lately and how it obscures more than it illuminates. I'll make the case that the categorical imperative tries to pull a fast one by taking something for granted, and the thing it takes for granted is the thing we care about.
Disclaimer: I'm sure someone has already made this argument elsewhere in more detailed, elegant, and formal terms. Since I'm not a working academic philosopher, I don't really mind that I might be rediscovering something people already know. What I find valuable is thinking about these things for myself. If you think it might be interesting to read my thinking out loud, continue on.
One common English translation of Kant's summary of the categorical imperative is
Act as if the maxims of your action were to become through your will a universal law of nature.
My own translation into more LessWrong-friendly terms:
Act as if you were following norms that should apply universally.
I think an argument could be made—and probably already has—that this is pointing in the same direction as timeless decision theory [? · GW].
The categorical imperative aims to solve the problem of what norms to pick, but goes on to try to claim universality. I think that's the tricky bit where it's attempting an end-run around the hard part of picking good norms.
My (admittedly loose) translation tries to make this attempted end-run explicit by including the word "should"; the typical translation's "law of nature" serves the same purpose, though framed to fit a moral realist worldview. The trick being attempted is to assume the thing we want to prove, namely universality, by assuming that satisfying our judgment of what's best will lead to universality.
Let me give an example to demonstrate the problem.
Suppose a Babyeater [LW · GW] tries to apply the categorical imperative. Since they think eating babies is good, they will act in accordance with the norm of baby-eating and be happy to see others adopt their baby-eating ways.
You, a human, might object that you don't like this, so it can't be universally true, yet a Babyeater would counter that your not eating babies is an outrageous norm violation that will lead to terrible outcomes.
The trouble is that the categorical imperative tries to smuggle in every moral agent's values without actually doing the hard work of aggregating and deciding between them. It's straightforward so long as everyone has the same underlying values, but with even one iota of difference in what people (or animals, or AIs, or thermostats [LW · GW], or electrons) care about, any attempt to follow the categorical imperative is not substantially different from simply following moral intuition.
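To make that concrete, here's a toy sketch. It is not anything Kant (or this post) formally specifies; the agents, value functions, and the `universalize` helper are invented purely for illustration of how the "would I endorse everyone following this?" test behaves when values differ:

```python
# Toy illustration: each agent tests a maxim by imagining a world where
# everyone follows it, then scoring that world by its *own* values.
# This is a cartoon of the universalization step, not Kant's actual procedure.

def universalize(maxim, evaluate_world):
    """Return True if the agent endorses a world where everyone follows the maxim."""
    imagined_world = {"everyone_follows": maxim}
    return evaluate_world(imagined_world) > 0

# Hypothetical value functions: same test, different values.
def human_values(world):
    return -10 if world["everyone_follows"] == "eat babies" else +1

def babyeater_values(world):
    return +10 if world["everyone_follows"] == "eat babies" else -1

for maxim in ["eat babies", "don't eat babies"]:
    print(maxim,
          "| human endorses:", universalize(maxim, human_values),
          "| Babyeater endorses:", universalize(maxim, babyeater_values))

# Each agent's "universal law" just mirrors its own values;
# the universalization test by itself does no aggregation across agents.
```

Each agent passes its own maxims and fails the other's; the universalization step on its own never confronts the disagreement between them.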
The categorical imperative does force one to refine one's intuitions and better optimize norms for working with others who share similar values. That's not nothing, but it only goes so far as coordinating around a metaethics grounded in "my favorite theory".
Recent events [LW · GW] have some folks rethinking their commitments to consequentialism, and some of those folks are looking more closely at deontological ethics. I think that's valuable, but it's also worth keeping in mind that deontology has its own blind spots. A deontologist might not draw any repugnant conclusions, but they are more likely to ignore what people unlike themselves care [LW · GW] about.
17 comments
Comments sorted by top scores.
comment by Shmi (shminux) · 2022-12-06T23:08:04.541Z · LW(p) · GW(p)
Hmm, unrelated, I guess. Why is it interesting to think about Kant's stab at moral philosophy? We don't bother using Newton's original notation for his laws, because we understand them a lot better by now. Is reading Kant mostly of historical interest, or is there something that can still be useful now?
Replies from: gworley, TAG, TAG
↑ comment by Gordon Seidoh Worley (gworley) · 2022-12-06T23:40:01.629Z · LW(p) · GW(p)
He's on my mind because I've heard folks talking about him in response to SBF's naive utilitarianism and the bad consequences it caused (assuming you think that's what's going on with SBF). Specifically, folks have been taking a fresh look at deontology, and it crystallized just enough to turn into a post instead of a Twitter thread.
Replies from: shminux
↑ comment by Shmi (shminux) · 2022-12-07T00:32:17.123Z · LW(p) · GW(p)
Gotcha, thanks! I guess the SBF debacle has a high enough profile that people question their own normative ethics.
↑ comment by TAG · 2022-12-07T02:27:53.545Z · LW(p) · GW(p)
Why is it interesting to think about Kant’s stab at moral philosophy?
Where is the progress beyond it? The rationalsphere has only recently progressed from naive utilitarianism to something like rule consequentialism or Kantianism... it's playing catch-up.
Replies from: shminux
↑ comment by Shmi (shminux) · 2022-12-07T03:57:12.119Z · LW(p) · GW(p)
That was my question, whether there has been much progress. My guess is that Eliezer eventually converged on the obvious: utilitarianism within reasonable deontological rules inspired by the virtues held by the person. This resolves edge cases like whether to be honest with a killer at the door (no, because it would conflict with your values). This doesn't seem like anything very new, but it certainly departs from pure deontology and pure utilitarianism.
comment by Daniel V · 2022-12-06T19:08:56.025Z · LW(p) · GW(p)
The categorical imperative aims to solve the problem of what norms to pick, but goes on to try to claim universality.
...
The trouble is that the categorical imperative tries to smuggle in every moral agent's values without actually doing the hard work of aggregating and deciding between them.
That would be a problem, if that were what it was doing. The categorical imperative aims to define the obligation an individual has in conducting themselves consistent with the autonomy of the will. Each individual may have a distinct moral code consistent with that obligation, and that is indeed a problem for ethics, but the categorical imperative does not attempt to help people pick specific norms to apply across multiple agents.
Kant lays out a ton of definitions and then leans on them heavily; it's classic old philosophy. Understanding what Kant said means you need to read Kant, not just his conclusions. That's a big strike against the clarity of his writing (it is a real slog to get through), but whether he achieves his intent should be judged vs. his self-professed intent, not against a misunderstanding of his intent.
Replies from: gworley, Dagon
↑ comment by Gordon Seidoh Worley (gworley) · 2022-12-06T19:41:39.730Z · LW(p) · GW(p)
This is a totally fair criticism, but also I don't think it matters for my purposes.
My not entirely secret agenda here is to say something in response to people taking a closer look at deontology after some folks got scared by SBF's naive utilitarianism. Most of them are going to be performing the same surface-level interpretation of Kant that I am here, so that's the thing I want to address.
I'm very sympathetic to your point of view, though. I often feel the same way about people's readings of Husserl, Heidegger, and Sartre among others.
Replies from: Daniel V
↑ comment by Daniel V · 2022-12-07T17:05:59.109Z · LW(p) · GW(p)
Ah, makes more sense now. I'm generally not a fan of that approach though, and here's why.
My comment was that your conclusion about the categorical imperative vis-a-vis its aims was off because the characterization of its aims wasn't quite right. But you're saying it's okay, we're still learning something here, because you meant to do the not-quite-right characterization: most people will do that, and this is the conclusion they will reach from it. But you never tell me up front that's what you're doing, nor do you caution that the characterization is not quite right (you said your purpose was to "make the case that the categorical imperative tries to pull a fast one," not that "laypeople will end up making the case..."). So I'm left thinking you genuinely believe the case you laid out, and my efforts go into addressing the flaws in that. Little did I know you were trying to describe a manner of not-quite-right thinking that people will do, a description we should be interested in not because of its truth value but because of its inaccuracy (which you never pointed out).
I've seen this before in another domain: someone wanted to argue that doing X in modeling would be a bad call. So they did Y, then did X, got bad results, and said voila, X is bad. But they did Y too (which in this particular case was known a priori to be the wrong thing to do and did most of the damage - if they did Y and not X, they'd get good results, and if they had just not done Y, they'd get good results without X and adequate results with X)! When I pointed out that X really wasn't that bad by itself, they said, well, Y is pretty standard practice, so we'd expect people to probably improperly do that in this particular case anyway. You gotta tell me that beforehand! Otherwise it looks like a flaw in your commentary on the true state of the world as opposed to a feature of your analysis of the typical approach. But it also changes the message: the problem isn't with X but with Y, and the problem isn't with the categorical imperative but with analyzing it superficially. Of course, that message is also the main point of my comment.
↑ comment by Dagon · 2022-12-06T19:40:54.103Z · LW(p) · GW(p)
whether he achieves his intent should be judged vs. his self-professed intent, not against a misunderstanding of his intent.
What if I don't care about his intent or whether he achieves it? Is it a useful framework for me to make decisions within, and to judge and cajole others in their actions?
Replies from: Daniel V
↑ comment by Daniel V · 2022-12-07T17:14:58.549Z · LW(p) · GW(p)
1. My comment was contra "the case that the categorical imperative tries to pull a fast one." Evaluating this (the point of the OP) very much requires an understanding of intent.
2. Well sure, if you don't care about the point of the OP, you can care about other things. Whether it is a useful framework for you to judge and cajole others in their actions is a very important question! You might still care about intent though. Is a handsaw the right tool for hammering a nail? I would recommend you look at a handsaw's intended use case and rule it out. If you really want to see how it would do, okay then, I can't really stop you. You can conclude a handsaw is bad at hammering nails, but don't go to the hardware store and complain. Likewise, is the categorical imperative a good way to aggregate preferences? I say it's not for that, so you don't really have to try it. If you really want to see how it would do and find out that it's bad for the job though, great. But don't say you were duped!
comment by Dagon · 2022-12-06T18:15:36.467Z · LW(p) · GW(p)
The idea of a categorical imperative has never really resonated with me, and I think this description helps me understand why that is. I don't believe ANY of my preferences or social-level beliefs are, should be, or can be, universal.
I have pretty strong beliefs about some factors of an equilibrium that works in my extended in-group today, and I get angry at violators, and I'm happy to improve my own and others' compliance. But I have zero claim that it's the only possible one, or the best one, or even that it's still valid in different technological or cultural contexts.
comment by Ben (ben-lang) · 2022-12-07T12:59:51.543Z · LW(p) · GW(p)
I think the categorical imperative is a nice framework. I don't think your counterexample quite works for me. The babyeater applies the imperative, and is a morally upstanding babyeater who eats lots of its own children. Meanwhile the human applies the imperative and is a morally upstanding human who doesn't kill any humans (baby or otherwise).
Both actors are acting morally, according to the imperative. That they are not acting the same way just shows that they are different agents with different values. Conversely, a babyeater and a human could both fail to live up to this imperative (e.g., the babyeater thinks that child-eating is good for the wider world, and wants everyone else to eat their own children to ensure the world stays nice, but makes an exception for itself). For both humans and babyeaters, adopting the imperative might change their policy. It changes it in different ways for the two of them because they are different.
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2022-12-07T23:57:15.513Z · LW(p) · GW(p)
Yep. I'd still say that there's something people often try to pull about universality though, imagining that what's right for group X is right for all groups. The divide is usually not between humans and Babyeaters, but between humans who believe X and humans who believe not-X. For example, if you took the two sides of the US culture war, red and blue tribe, and applied the categorical imperative, you'd get red and blue norms, not universal norms, yet both groups would like to claim for various reasons that their norms are universal. This is something that the categorical imperative, as stated here, doesn't really address; it actually lets you get away with trying to claim universality when you really mean "universal for this group of like-minded folks".
Replies from: ben-lang
↑ comment by Ben (ben-lang) · 2022-12-08T09:03:26.504Z · LW(p) · GW(p)
Yes, I agree that it doesn't solve all morality and provide the one true moral code. However, at least in my mind, it can be useful for working out which actions are acceptable within a certain narrow-ish group.
comment by Erich_Grunewald · 2022-12-06T20:32:21.431Z · LW(p) · GW(p)
The categorical imperative aims to solve the problem of what norms to pick, but goes on to try to claim universality. [...] The trick being attempted is to assume the thing we want to prove, namely universality, by assuming that satisfying our judgment of what’s best will lead to universality.
It sounds wrong to me to say that the categorical imperative "claims universality". I'm not sure exactly what you mean by that, but it sounds as if you mean "the categorical imperative says that if a thing is permitted (according to itself), it's universally permitted, but it also says that a thing is permitted because it's universal or universalizable, so it's circular".
I don't think the categorical imperative says that "satisfying our judgment of what’s best will lead to universality". I think it says that it does so only in so far as we act rationally (in the Kantian sense), that, in so far as people are rational, they act according to the same law, and that this law is universal precisely because it's based in practical reason (which every moral agent shares).
Suppose a Babyeater tries to apply the categorical imperative. Since they think eating babies is good, they will act in accordance with the norm of baby-eating and be happy to see others adopt their baby-eating ways.
You, a human, might object that you don't like this, so it can't be universally true, yet a Babyeater would counter that your not eating babies is an outrageous norm violation that will lead to terrible outcomes.
The Formula of Universal Law, which you quoted ("Act as if the maxims ..."), is one of three formulations Kant gave of the categorical imperative. Another, the Formula of Humanity ("treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means"), clearly prohibits baby-eating, no matter what norm the agent may prefer.
Replies from: MichaelStJules
↑ comment by MichaelStJules · 2022-12-07T00:22:48.712Z · LW(p) · GW(p)
The Formula of Humanity isn't equivalent to the Formula of Universal Law, though, right? Or, at least, that requires further assumptions.
Replies from: Erich_Grunewald
↑ comment by Erich_Grunewald · 2022-12-07T10:32:21.640Z · LW(p) · GW(p)
They are supposedly (according to Kant) equivalent, though most Kant scholars don't seem to treat them as if they were. I do recommend Daniel Kokotajlo's fun decision theory app store post [LW · GW] where he makes the case that they are in fact, sort of, equivalent.