Code Quality and Rule Consequentialism
post by Adam Zerner (adamzerner) · 2022-06-13T03:12:09.861Z · 13 comments
(See also Taking the outside view on code quality)
Code quality. Such a divisive topic. To overgeneralize[1], managers always want things released soon, which means a quick and dirty version of the code. On the other hand, engineers always want to do it the "right way", which means taking longer before releasing.
And that's just one example. Here are some others.
- Engineers want to refactor stuff. Managers say it's not worth it.
- Engineers want to take the time to write tests. Managers need new features to be released.
- Engineers want to upgrade to the new version of the library. Managers say that'll be too costly.
- Engineers want to set up a linter. Managers don't see how that'll actually help the business.
- Engineers want to spend time discussing things during code review. Managers feel like that always just gets in the way of meeting deadlines and doesn't really matter.
And now for the million dollar question: who's right? Are the engineers right in saying that you should take the time to do these sorts of things? Or are the managers right in saying that you should keep your eye on the prize and deploy features that are actually gonna improve the lives of end users?
Just kidding. We shouldn't ask who's right. Instead, let's think about how we'd even go about answering these sorts of questions in the first place.
Two types of questions
I want to pause here for a second. I want to emphasize the distinction. "Who's right?" and "How do we go about thinking about who's right?" are two very different types of questions.
Let's make it more concrete.
- Should we convert those class-based React components into functional components?
- How would we go about deciding whether to convert those class-based React components into functional components?
Do you see the difference? (1) is asking what we should do. (2) is asking how to go about deciding what we should do.
It's the difference between 1) asking what we should eat for dinner and 2) asking how we should go about deciding what to eat for dinner. The answer to (1) might be "pizza". The answer to (2) might be "we should think about how well different options perform along the dimensions of health, convenience, cost, and taste, and pick the option that performs best".
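And to make the React example concrete: the kind of conversion at stake looks roughly like the sketch below (the Counter component and its props are made up for illustration; real components are of course messier).

```tsx
import React, { useState } from "react";

type CounterProps = { label: string };

// Before: a class-based component with this.state and a bound handler.
class CounterClass extends React.Component<CounterProps, { count: number }> {
  state = { count: 0 };

  increment = () => this.setState({ count: this.state.count + 1 });

  render() {
    return (
      <button onClick={this.increment}>
        {this.props.label}: {this.state.count}
      </button>
    );
  }
}

// After: the equivalent functional component using the useState hook.
function CounterFunction({ label }: CounterProps) {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      {label}: {count}
    </button>
  );
}
```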
For these debates about whether to prioritize code quality, I've seen commentary on the first type of question. On the other hand, I can't really think of any commentary on the second type of question: not at companies I've been at, not in the blogosphere, not even in conversations amongst my friends.
I think that second type of question is crucial though. Well, it is for me. It's central to my own opinions and it explains why I lean especially hard in the direction of code quality.
Consequentialism
Let's talk about moral philosophy[2]. In a previous version of this post I was going to talk about the three schools of thought: consequentialism, virtue ethics, and deontology. However, I'm not very good at explaining them, and I think that in the context of a business[3], consequentialism is the predominant perspective. CEOs care about results. Consequences. They're not running their businesses according to the abstract ideals of virtues or Kant's categorical imperative. So let's meet them where they are and assume that consequences are what matter.[4]
There are different types of consequentialism though, and I want to focus[5] on those differences.
I think a good place to start is with act consequentialism. Say your friend Alice approaches you and asks whether you think she looks good in her new dress. You don't think she looks good in it. And to keep things simple, let's suppose that your only options are to say "yes" or "no".
An act consequentialist would think about the consequences of saying yes, the consequences of saying no, and choose the action that leads to the best consequences. Perhaps they think that saying yes would lead to better consequences because it'll make Alice happier, whereas saying no would just hurt her self-esteem.
On the other hand, rule consequentialism takes a different perspective. I'm being a little colorful here, but I think rule consequentialism says something like this.
Look, I agree that consequences are ultimately what matter. In a perfect world, I'd want to choose the action that leads to the better consequences. I'm with you on that.
I just don't think that people are very good at figuring out what actions lead to the best consequences. I think people are hopelessly biased. I think people are weak. They rationalize. They do what will be easy.
So in this example of your friend Alice, you will be biased towards what is easy, which is to tell her that she looks good. I don't trust you to actually perform the calculus and figure out what option leads to the best consequences.
Instead, you could come up with rules ahead of time that lead to good consequences. Like "don't lie". Then when you are actually faced with situations in real life, just follow those rules. Such a strategy will lead to better consequences than the strategy of trying to calculate which actions will lead to the best consequences.
This is a little inflexible though. "Don't lie" might do good most of the time, but it doesn't always work. To use a classic example, what if a friend is hiding in your house and a murderer who is looking for your friend asks you if they are there? Do you lie? On the one hand it seems pretty clear that lying would produce the better consequences. But on the other hand, didn't we just talk about how you can't be trusted to do this calculus and instead should follow the rules that you agreed upon ahead of time?
Fortunately, rule consequentialists recognized this issue and have a pretty good response. Strong rule consequentialism says that rules can't be broken. No matter what. So a strong rule consequentialist would[6] in fact say to follow the "don't lie" rule and reveal that your friend is hiding.
That is pretty dumb though and is where weak rule consequentialism comes in. Weak rule consequentialism says that you can use your judgement about when to follow rules. Rules are there for guidance, but you don't need to be a slave to them.
But that raises the question of how you know when to use your judgement. I didn't really research this, but I'm pretty sure that there are all sorts of different forms of weak rule consequentialism that answer this question differently.
One form is two-level consequentialism. It kinda divides things into 1) everyday situations where you should follow the rules and 2) extreme situations where you can think about deviating. That doesn't really speak to me though. Surely we can trust people to use their judgement a little bit more than that, right?
Here's how I see it. Yes, we are biased. Yes, we lean towards doing what is easy. Yes, it is helpful to have rules set ahead of time to guide us. But... well... "rules" is a bad term. I think "guidelines" is a better one. It is good to have guidelines. It is good to be aware of our biases. But from there, it is all a matter of judgement. Think about your biases. Think about what the guidelines say. Do that act consequentialist calculus. And then, incorporating[7] all of that stuff, make a decision and go with it.
Code quality
Let's bring this back to code quality now. How does all this philosophy stuff relate to code quality? Well, I think that most discussions about whether it is worth investing in code quality approach the conversation like an act consequentialist.
They ask how much rewriting those class-based React components as functional components would actually help. How much easier is it to read that functional code? How much will it improve velocity? How much time will it take?
These are good questions. I think that we should try to answer them. I think it is worth engaging with the decision at the ground level. However, I don't think that we should stop there. I think we need to incorporate some rule consequentialism into the mix.
What would that look like? Well, as an example, maybe you have a rule about how much time it is generally worth spending on refactoring. Or, rather, maybe you have various rules of the form "In a codebase of size X, it is wise to spend Y% of the time refactoring." Maybe one example of this is "In a large codebase, it is wise to spend 30% of the time refactoring."
Ok. Now think about your act consequentialist calculus. Try to extrapolate a bit. What if you performed that calculus on different refactoring decisions? Maybe it would add up to you only spending 5% of the time refactoring. But this violates that rule that said 30%.
I'm not saying that automatically means the decision should be to refactor your class-based components to functional components. I am saying that the 30% rule should influence your decision, causing you to lean more towards refactoring. And, more generally, that such rules need to have a seat at the table.
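To picture what "a seat at the table" could mean, here is a toy sketch. The blending function, the 0.5 weight, and the numbers are all made up; the only point is that the 30% guideline should pull your estimate rather than be ignored.

```ts
// Toy sketch: blend the bottom-up (act consequentialist) tally with the prior
// that the rule of thumb encodes. Illustrative only, not a real decision procedure.
function blendedRefactoringShare(
  bottomUpEstimate: number, // e.g. 0.05: what your case-by-case calculus added up to
  rulePrior: number,        // e.g. 0.30: "in a large codebase, spend ~30% of time refactoring"
  ruleWeight: number = 0.5  // how much weight the guideline gets vs your own calculus
): number {
  return ruleWeight * rulePrior + (1 - ruleWeight) * bottomUpEstimate;
}

// With these made-up numbers you land at roughly 17.5%: well above the raw 5%
// tally, i.e. the guideline pulls you toward more refactoring than your
// case-by-case estimates alone would suggest.
console.log(blendedRefactoringShare(0.05, 0.3)); // ≈ 0.175
```

In practice you probably wouldn't literally compute a weighted average, but the qualitative move is the same: notice the gap between your calculus and the guideline, and let it shift you.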
1. I suspect the things I say below about engineers vs managers might be triggering. It might be tempting to think something like "This is so uncharitable!" or "Hey, I'm a manager and I'm not that short-sighted. What you're describing is a straw man." 1) I did say I'm overgeneralizing. 2) In my experience working for six different companies and talking to various friends, these stereotypes are actually pretty accurate. Not 100% accurate or even 95% accurate, but to make up a number, maybe they're 75% accurate. ↩︎
2. I know, I know. Philosophy? This is a post about software engineering. Moral philosophy sounds like a pretty big detour. How is it relevant? Is this guy one of those quacks who philosophizes too much and overthinks everything? All I can say about those objections right now is to bear with me. IMHO, it will be worth it in the end. ↩︎
3. Not all code is written in the context of business though. It would be interesting to think about the merits of virtue ethics or deontology for something like open source software or a side project. Or even for a business that makes social good a strong priority alongside profits. Or, alternatively, a business like Basecamp that really prioritizes employee happiness as an end in and of itself. ↩︎
4. Confused? Me too. How could something other than consequences actually matter? You could read the arguments made in the Stanford Encyclopedia of Philosophy for virtue ethics and deontology, but for whatever reason they just don't "compute" with me, so I'm not going to try to explain them. ↩︎
5. I'm going to be basing my explanations largely off of this video. I spent a few hours clicking around, reading on different websites and watching other videos. I didn't have much luck though. Other resources felt too confusing. I'd also like to note that I'm not an expert here. I might have made some mistakes, so take this with a grain of salt. ↩︎
6. Well, this is just an instructive example. In practice a real-life strong rule consequentialist would have come up with better, more nuanced rules in the first place. Like "don't lie if X, Y and Z". ↩︎
7. If you are wise you should think about what your track record is for making such decisions, or similar decisions, in the past. ↩︎
13 comments
comment by AnthonyC · 2022-06-14T10:15:27.783Z
One thing that has long surprised me about the strict Kantian rule-following point of view is the seeming certainty that the rule needs to be a short sentence, on the length scale of "Thou shalt not kill." (And yes, I see it as the same sort of error that many people make who think there's a simple utility function we could safely give an AGI.) My POV makes more of a distinction along the lines of axiology/morality/law, where if you want a fundamental principle in ethics, one that should never be violated, it's going to be axiological and also way too complicated for a human mind to consciously grasp, let alone compute and execute in real time. Morality and law are ways of simplifying the fractally complex edges the axiology would have in order to make it possible in principle for a human to follow, or a human society to enforce. (Side note: It looks to me like as society makes moral progress and has more wealth to devote to its ethics, both morals and laws are getting longer and more complicated and harder to follow.)
In short: I think both the engineer and manager classes are making the same sort of choice by simplifying underlying (potentially mutually compatible) ethics models in favor of different kinds of simplified edges. I don't think either is making a mistake in doing so, per se, but I am looking forward to hearing in more detail what kind of process you think they should follow in the cases when their ideas conflict.
↑ comment by Adam Zerner (adamzerner) · 2022-06-14T21:24:39.135Z
> One thing that has long surprised me about the strict Kantian rule-following point of view is the seeming certainty that the rule needs to be a short sentence, on the length scale of "Thou shalt not kill."
I think this is a misconception actually. In an initial draft for this post, I submitted it for feedback and the reviewer, who studied moral philosophy in college, mentioned that real deontologists have a) more sensible rules than that and b) have rules for when to follow which rules. So eg. "Thou shalt not kill" might be a rule, but so would "Thou shalt save an innocent person", and since those rules can conflict, there'd be another rule to determine which wins out.
> In short: I think both the engineer and manager classes are making the same sort of choice by simplifying underlying (potentially mutually compatible) ethics models in favor of different kinds of simplified edges. I don't think either is making a mistake in doing so, per se, but I am looking forward to hearing in more detail what kind of process you think they should follow in the cases when their ideas conflict.
To make sure I am understanding you correctly, are you saying that each class is choosing to simplify things, trading off accuracy for speed? I suppose there is a tradeoff there, but I don't think it falls on the side of simplification. It doesn't actually take much time or effort to think to yourself or to bring up in conversation something like "What would the rule consequentialist rules/guidelines say? How much weight do they deserve here?"
↑ comment by AnthonyC · 2022-06-14T23:26:03.395Z
> real deontologists
I think you're right in practice, but the last formal moral philosophy class I took was Michael Sandel's intro course, Justice, and it definitely left me with the impression that deontologists lean towards simple rules. I do wonder, with the approach you outline here, if there's a highest-level conflict-resolving rule somewhere in the set of rules, or if it's an infinite regress. I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.
> It doesn't actually take much time or effort to think to yourself or to bring up in conversation something like "What would the rule consequentialist rules/guidelines say? How much weight do they deserve here?"
I disagree, mostly. Conscious deliberation is costly, and in practice having humans trust their own reasoning on when to follow which rules doesn't tend to lead to great outcomes, especially when they're doing the reasoning in real-time either in discussion with other humans they disagree with, or when they are under external pressure to achieve certain outcomes like a release timeline or quarterly earnings. I think having default guidelines that are different for different layers of an organization can be good. Basically, you're guaranteeing regular conflict between the engineers and the managers, so that the kind of effort you're calling for happens in discussions between the two groups, instead of within a single mind.
↑ comment by Adam Zerner (adamzerner) · 2022-06-15T01:09:29.845Z
> I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.
Yeah I think so too. I further suspect that a lot of the ethical theories end up looking consequentialist when you dig deep enough. Which makes me wonder if they actually disagree on important, real world moral dilemmas. If so I wish that common intro to ethics types of discussions would talk about it more.
> I disagree, mostly. Conscious deliberation is costly, and in practice having humans trust their own reasoning on when to follow which rules doesn't tend to lead to great outcomes
I suspect we just don't see eye-to-eye on this crux of how costly this sort of deliberation is. But I wonder if your feelings change at all if you try thinking of it as more of a spectrum (maybe you already are, I'm not sure). Ie, at least IMO, there is a spectrum of how much effort you expend on this conscious deliberation, so it isn't really a question of doing it vs not doing it, it's more a question of how much effort is worthwhile. Unless you think that in practice, such conversations would be contentious and drag on (in cultures I've been a part of this happens more often than not). In that scenario I think it'd be best to have simple rules and no/very little deliberation.
comment by Shmi (shminux) · 2022-06-13T04:00:14.768Z
I think it might be useful to step back a bit more and talk about possible worlds with and without the conversion, and assign probabilities to each. Once you have a list, and a consensus on probabilities (probably the hardest part) for each item, one can productively discuss how to make a decision. For example:
- Convert those class-based React components into functional components
  - what are the odds of it taking 1 month? 2 months? 6 months?
  - what is the opportunity cost in terms of other projects?
  - what is the gain, in terms of simplifying maintenance, reducing effort/schedule for new features, etc.?
  - what are the losses, such as unstable API, (re)training people, (re)writing test cases, having critical bugs that will be introduced, etc.?
  - what are the unknowns and how would they impact the project?
This might ease the engineering/management/sales divide, assuming people are honest and earnest and you can get a buy-in to do something like that.
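A toy version of that tally, with invented probabilities and payoffs just to show the shape of the comparison, might look like:

```ts
// Toy expected-value tally with invented numbers (nothing here comes from a
// real project): enumerate outcomes, assign consensus probabilities and net
// payoffs, then compare.
type Outcome = { probability: number; netValue: number }; // netValue in, say, engineer-weeks saved

const convertComponents: Outcome[] = [
  { probability: 0.5, netValue: 8 },  // done in ~1 month, maintenance genuinely easier
  { probability: 0.3, netValue: 2 },  // drags to ~2 months, modest gain
  { probability: 0.2, netValue: -6 }, // ~6 months, plus critical bugs introduced
];

const keepAsIs: Outcome[] = [
  { probability: 1.0, netValue: 0 },  // baseline: spend the time on other projects instead
];

const expectedValue = (outcomes: Outcome[]) =>
  outcomes.reduce((sum, o) => sum + o.probability * o.netValue, 0);

console.log(expectedValue(convertComponents)); // roughly 3.4
console.log(expectedValue(keepAsIs));          // 0
```

All numbers above are invented; getting consensus on the probabilities is still the hard part.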
↑ comment by Adam Zerner (adamzerner) · 2022-06-13T04:10:40.168Z
That sounds like a good exercise. However, that sort of exercise falls in the act consequentialist bucket, and I think that rule consequentialist stuff also deserves a seat at the table.
↑ comment by Shmi (shminux) · 2022-06-13T06:15:36.870Z
Right, it's way faster and probably accurate enough for many purposes.
comment by Dan Weinand (dan-weinand) · 2022-06-13T18:06:16.441Z
You gave the caveats, but I'm still curious to hear what companies you felt had this engineer vs manager conflict routinely about code quality. Mostly, I'd like to know so I can avoid working at those companies.
I suspect the conflict might be exacerbated at places where managers don't write code (especially if they've never written code). My managers at Google and Waymo have tended to be very supportive of code health projects. The discussion of how to trade off code debt and velocity is also very explicit. We've gotten pretty clear guidance in some quarters along the lines of 'We are sprinting and expect to accumulate debt' vs 'We are slowing down to pay off tech debt'. This makes it pretty easy to tell if a given code health project is something that company leadership wants me to be doing right now.
comment by MSRayne · 2022-06-13T15:18:04.297Z
I'm probably something like a rule consequentialist (which feels like a mixture of consequentialism and deontology), in that while I want to maximize the weighted total utility of all sentient beings, I want to do it while obeying strict moral rules in most cases (the ends do not automatically justify the means in every case).
Specifically I think the foundational rules are "don't affect another sentient being in a way they didn't give you permission to" and "don't break your promises", with caveats (which I am not sure how to specify rigorously) for the fact that there are situations where it is necessary and reasonable to break those rules - and nearly every other moral principle falls out of those two. Really they're the same rule stated in two different ways - "take only actions which everyone affected deems acceptable", given that your past self is affected by your present self's actions and can thus influence which ones are acceptable by making promises.
Then my consequentialism could be restated as "maximize the degree to which this moral principle is followed in the universe."
↑ comment by Adam Zerner (adamzerner) · 2022-06-13T17:22:12.932Z
Do you see those rules as ends in and of themselves, or do you see them as the most effective means to achieving the end of "maximize the weighted total utility of all sentient beings"? Or maybe just guidelines you use in order to achieve the end of "maximize the weighted total utility of all sentient beings"?
↑ comment by MSRayne · 2022-06-13T21:33:52.768Z
I think that the "rights" idea is the starting point - it is good in itself for a sentient being (an entity which possesses qualia, such as all animals and possibly some non-animal life forms and AIs, depending on how consciousness works) to get what it wants - to have its utility function maximized - and if it cannot verbally describe its desires, a proxy for this is pleasure versus pain, the way the organism evolved to live, the kinds of decisions the organism can be observed generally making, etc.
The amount of this right a being possesses is proportional to its capacity for conscious experience - the intensity and perhaps also variety of its qualia. So humans would individually score only slightly higher than most other mammals, due to having equally intense emotions but more types of qualia due to our being the basis of a memetic ecosystem - and the total amount of rights on the planet belonging to nonhumans vastly outweighs the amount of rights collectively owned by humans. (Many people on LessWrong likely disagree with me on that.)
Meanwhile, the amount of responsibility to protect this right a being possesses is proportional to its capacity to influence the future trajectory of the world times its capacity to understand this concept - meaning humans have nearly all the moral responsibility which exists on the planet currently, though AIs will soon have a hefty chunk of it and will eventually far outstrip us in responsibility levels. (This implies that the ideal thing to do is to uplift all other life forms so they can take care of themselves, find and obtain new sources of subjective value far beyond what they could experience as they are, and in the process relieve ourselves and our AIs of the burden of stewardship.)
The "consequentialism" comes from the basic idea that every entity with such responsibility ought to strive to maximize its total positive impact on that fundamental right in all beings, weighted by their ownership of that right and by its own responsibility to uphold it - such as by supporting AI alignment, abolitionism, etc. (This could be described as, I think we have the responsibility to implement the coherent extrapolated volition of all living things. Our own failure to align to that to me rather obvious ethical imperative suggests a gloomy prospect for our AI children!)
↑ comment by Adam Zerner (adamzerner) · 2022-06-13T22:11:35.399Z
That all sounds pretty reasonable :)