The Real Rules Have No Exceptions
post by Said Achmiz (SaidAchmiz) · 2019-07-23T03:38:45.992Z · LW · GW · 57 comments
(This is a comment [LW(p) · GW(p)] that has been turned into a post.)
From Chris_Leong’s post, “Making Exceptions to General Rules [LW · GW]”:
Suppose you make a general rule, i.e. “I won’t eat any cookies”. Then you encounter a situation that legitimately feels exceptional, “These are generally considered the best cookies in the entire state”. This tends to make people torn between two threads of reasoning:
Clearly the optimal strategy is to make an exception this one time and then follow the rule the rest of the time.
If you break the rule this one time, then you risk dismantling the rule and ending up not following it at all.
How can we resolve this? …
This is my answer:
Consider even a single exception to totally undermine any rule. Consequently, only follow rules with no exceptions.[1] When you do encounter a legitimate exception to a heretofore-exceptionless rule, immediately discard the rule and replace it with a new rule—one which accounts for situations like this one, which, to the old rule, had to be exceptions.
This, of course, requires a meta-rule (or, if you like, a meta-habit):
Prefer simplicity in your rules. Be vigilant that your rules do not grow too complex; make sure you are not relaxing the legitimacy criteria of your exceptions. Periodically audit your rules, inspecting them for complexity; try to formulate simpler versions of complex rules.
So, when you encounter an exception, you neither break the rule once but keep following it thereafter, nor break it once and risk breaking it again. If this is really an exception, then that rule is immediately and automatically nullified, because good rules ought not have exceptions. Time for a new rule.
And if you’re not prepared to discard the rule and formulate a new one, well, then the exception must not be all that compelling; in which case, of course, keep following the existing rule, now and henceforth.
But why do I say that good rules ought not have exceptions? Because rules already don’t have exceptions.
Exceptions are a fiction. They’re a way for us to avoid admitting (sometimes to ourselves, sometimes to others) that the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
The approach I describe above merely consists of making this fact explicit.
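To make the structure concrete, here is a minimal, purely illustrative sketch (in Python; the function names and the toy “rare experience” test are stand-ins of my own devising, not a claim that the real legitimacy criteria are this simple or even expressible in code):

# Illustrative only: the "real rule" is the stated rule plus the criteria
# for judging exceptions, made explicit as a single decision procedure.

def stated_rule(situation):
    # The rule as stated, e.g. "No Cookies": always refuse.
    return "refuse"

def is_legitimate_exception(situation):
    # Stand-in for the (often illegible) intuition that generates exceptions,
    # e.g. "this is a rare experience that will probably never recur".
    return situation.get("rare_experience", False)

def real_rule(situation):
    # The exceptionless rule: stated rule + exception criteria.
    if is_legitimate_exception(situation):
        return "permit"
    return stated_rule(situation)

print(real_rule({"rare_experience": True}))   # best cookies in the state -> "permit"
print(real_rule({"rare_experience": False}))  # an ordinary cookie -> "refuse"

The point of the sketch is only that real_rule, not stated_rule, is what actually governs behavior; it has no exceptions, because anything that would have been an exception is already part of it.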
By which I mean “only follow rules to which no legitimate exception will ever be encountered”, not “continue following a rule even if you encounter what seems like a legitimate exception”. ↩︎
57 comments
Comments sorted by top scores.
comment by Said Achmiz (SaidAchmiz) · 2019-07-23T05:18:54.696Z · LW(p) · GW(p)
Playing the devil’s advocate:
Consider this tumblr post by nostalgebraist, the contents of which I entirely concur with and endorse. It would seem to contradict, or at least undermine the applicability of, the approach I describe in this post.
More generally, the contradiction arises because, while this is entirely true—
the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
—the difficulty is that in some cases, the stated rule may be straightforward and legible, but the criteria for evaluating the legitimacy of exceptions are complex and illegible (and, in many or even most such cases, attempting to make the criteria legible will inevitably result in discarding important information).
Thus, e.g., in the sort of scenario described by nostalgebraist, the “actual rule” is “these are the explicit rules, but I also reserve the right to apply my own, fundamentally irreducible[1] judgment to make exceptions, and I admit of no formal/explicit rule which stands above that right”. In this case, spelling out the “actual rule” seems to have gained us very little.
Yet I think that the approach I describe withstands this challenge—because it remains the best approach, despite not being perfect; all the other solutions to the question (of what to do about apparently-compelling exceptions to apparently-reasonable rules) do no better, in such cases.
And while we gain little by spelling out the “actual rule” in these “complex and/or illegible exception-judging criteria” situations, nevertheless we do gain something—namely, making explicit (and therefore salient) the fact that unexpected exceptions (driven by irreducible judgment) are a possibility. What is explicit, can be better prepared for, and can be discussed, and problems addressed; so this is a benefit, if not a very great one.
Why do I say “fundamentally irreducible”? Suppose that you offer some operationalization of my judgment criteria—one which appears to account for all of the judgments I’ve made, to instantiate any principles that seem to stand behind my judgment criteria, not to leave unaddressed any cases I can imagine, etc. You may be tempted to call this a successful reduction—to identify my judgment with your reduction of it. Yet recall that, by construction, I have retained the right to “call bullshit” on any application of an explicit rule which I feel goes against the rule’s spirit; which means that I remain free to, e.g., reject the output of your operationalization of my judgment criteria, in any future case, no matter how closely that output has matched my judgment thus far. Since this applies to any operationalization you can construct—which must, by definition, be explicit—the “personal judgment” rule is a meta-rule of a higher order than any explicit rule, and operationalizing it is impossible. ↩︎
↑ comment by quanticle · 2019-07-23T14:18:23.656Z · LW(p) · GW(p)
Paul Scharre, in his excellent book about the application of AI to military technology, Army of None, has an anecdote which I think is relevant. In the book, he talks about leading a patrol up an Afghan hillside. As he and the troops under his command ascend the hillside, they're spotted by a farmer. Realizing that they've been spotted, the patrol hunkers down and awaits the inevitable attack by Afghan insurgent forces. However, before the attackers arrive, something unexpected happens. A little girl, about 5 or 6 years of age, comes up to the position, with some goats and a radio. She reports the details of the Americans' deployment to the attacking insurgents and departs. Shortly thereafter, the insurgent attack begins in earnest, and results in the Americans being driven off the hillside.
After the failed patrol, Scharre's troop held an after-action briefing where they discussed what they might have done differently. Among the things they discussed was potentially detaining the little girl, or at least relieving her of her radio so as to limit the information being passed back to the attackers. However, at no point did anyone suggest the alternative of shooting the girl, even though they would have been perfectly justified, under the laws of war and rules of engagement, in doing so. Under the laws of war, anyone who acts like a soldier is a soldier, and this includes 5-year-old girls conducting reconnaissance for insurgents. However, everyone understood, on a visceral level, that there was a difference between permissible and correct, and that the choice of shooting the girl, while permissible, was morally abhorrent to the point where it was discarded at an unconscious level.
That said, no one in the troop then said, "Okay, well, we need to amend our rules of engagement to say, 'Shooting at people conducting reconnaissance is permissible... except when the person is a cute little 5-year-old girl.'" Everyone recognized, again at an unconscious level, that there was value in having a legible rule ("Shooting at people behaving in a soldierly manner is acceptable") with illegible exceptions ("Except when that person is a 5-year-old girl leading goats"). The drafters of rules cannot anticipate every circumstance in which the rule might be applied, and thus having some leeway about the specific obligations (while making the intent of the rule clear) is valuable insofar as it allows people to take actions without being paralyzed by doubt. This applies as much to rules governing an organization as it does to rules that you make for yourself.
The application to AI is, I hope, obvious. (Unfriendly) AIs don't make a distinction between permissible and correct. Anything that is permissible is an option that can be taken, if it furthers the AI's objective. Given that, I would summarize your point about having illegible exceptions as, "You are not an unfriendly AI. Don't act like one."
↑ comment by Matt Goldenberg (mr-hire) · 2019-07-24T00:11:39.325Z · LW(p) · GW(p)
'Shooting at people conducting reconnaissance is permissible... except when the person is a cute little 5-year-old-girl.'
At least in the old war movies I've seen, that rule used to come with a general "except for women and children" clause.
↑ comment by quanticle · 2019-07-24T02:42:01.179Z · LW(p) · GW(p)
That's something you see in movies, yes, but as I understand what Paul Scharre is saying, it's not something that's actually true. According to him, the laws of war "care about what you do, not who you are." If you are behaving in a soldierly fashion, you are a soldier, whether you are a young man, old man, woman, or child.
↑ comment by ryan_b · 2019-07-24T18:05:37.879Z · LW(p) · GW(p)
I affirm Scharre's interpretation.
Anecdote: during deployment, when we arrive in country, we are given briefings about the latest tactics being employed in the area where we will be operating. When I went to Iraq in 2008, one of these briefings was about young girls wearing suicide vests, which was unprecedented at the time.
The tactic consisted of taking a family hostage, and telling the girl that if she did not wear this vest and go to X place at Y time, her family would be killed. Then they would detonate the vest by remote.
We caught on to the tactic because sometimes we had jammers on which prevented the detonation, and one of the girls told us what happened. Of course, we didn't have jammers everywhere. Then the calculus changes from whether we can take the hit in order to spare the child, to a choice between one child and many (suicide bombings target crowds).
The obvious wrongness of killing children does not change; nor that of allowing children to die. So one guy eats the sin, and the others feel ashamed for letting him.
↑ comment by jmh · 2019-07-24T12:18:31.086Z · LW(p) · GW(p)
On a more depressing note, one might look into events in the Korean War where "except for women and children" was not applied. The movie A Little Pond (it was available on Amazon Prime a year or so back, though I'm not sure if it's there now) depicts the events at Nogunri.
Now, near the end, the movie also depicts the more human side of a soldier when confronted directly with that act -- rather than with the impersonal shapes seen from hundreds of meters away.
I would also add, regarding the whole permissible-versus-exception question, that I suspect it is even grayer than suggested. The 5-year-old with a radio is hardly any less a part of the fighting force than the civilians providing all the logistics and production supporting any of the military actions. So where is that line?
I'm not sure the AI will do much worse or much better than those making the plans and issuing the orders far from the battleground, not exposed to the bloodshed and human carnage.
↑ comment by orthonormal · 2019-07-23T22:17:16.796Z · LW(p) · GW(p)
Meta: I approve of the practice of arguing against your own post in a comment.
↑ comment by Matt Goldenberg (mr-hire) · 2019-07-23T18:02:11.191Z · LW(p) · GW(p)
See also You Don't Get To Know What You're Fighting For, which makes this sort of situation more explicit.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-23T18:38:49.440Z · LW(p) · GW(p)
Indeed. In particular I want to note Nate Soares’ point about how one of the reasons you don’t necessarily know what you’re fighting for is that your goal(s) may change as you learn more, grow, etc. Similarly, illegible complex judgment criteria may shift over time (and for that reason will not be amenable to formalization, which is of necessity static), while still always being “my own judgment”; it is precisely that freedom to alter the criteria which I protect by resisting any proffered formalization.
↑ comment by Gurkenglas · 2019-07-23T12:40:28.254Z · LW(p) · GW(p)
re your footnote: The explicit version of your judgement allows you an override, yet by construction you will never take it. So the crux behind whether the versions are semantically the same is whether we define rules to allow or disallow actions, or timelines.
comment by jbay · 2019-07-24T19:16:26.873Z · LW(p) · GW(p)
In spirit I agree with "the real rules have no exceptions". I believe this applies to physics just as well as it applies to decision-making.
But, while the foundational rules of physics are simple and legible, the physics of many particles -- which is needed for managing real-world situations -- includes emergent behaviours like fluid drag and turbulence. The notoriously complex behaviour of fluids can be usefully compressed into rules that are simple enough to remember and apply, such as inviscid or incompressible flow approximations, or tables of drag coefficients. But these simple rules are built on top of massively complex ones like the Navier-Stokes equation (which is itself still a simplifying assumption over quantum physics and relativity).
It is useful to remember that the equations of incompressible flow are not foundational and so will have exceptions, or else you will overconfidently predict that nobody can fly supersonic airplanes. But that doesn't mean you should discard those simplified rules when you reach an exception and proceed to always use Navier-Stokes, because the real rules might simply be too hard to apply the rest of the time and give the same answer anyway, to three significant figures. It might just be easier in practice to remember the exceptions.
Hence, when making predictive models, even astrophysicists will think of gravity in terms of "stars move according to Newton's inverse square law, except when dealing with black holes or gravitational lensing". They know that it's really relativity under the hood, but only draw on that when they know it's necessary.
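To put a number on the gravity example (a standard back-of-the-envelope criterion, not anything specific to this thread): Newton's F = G m1 m2 / r^2 is trustworthy roughly wherever the dimensionless quantity G M / (r c^2) is much less than 1; near a black hole's horizon that quantity approaches 1/2, so relativity is required. Stated together, the inverse square law plus the condition delimiting where it applies is an exceptionless rule in exactly the post's sense.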
OK, that's enough of an analogy. When might this happen in real life?
One case could be multi-agent, anti-inductive systems... like managing a company. As soon as anyone identifies a complete and compact formula for running a successful business, it either goes horrifyingly [LW · GW] wrong [LW · GW], or the competitive landscape adapts to nullify it, or else it was too vague a rule to allow synthesizing concrete actions. ("Successful businesses will aim to turn a profit").
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-24T19:25:29.911Z · LW(p) · GW(p)
This is a very good point, thank you. I have some tentative thoughts in response, but I will have to think about it carefully.
Here’s a question in the meantime: do you think that what you say is addressed in / is essentially the same as what I write in this comment elsethread [LW(p) · GW(p)]? Or is this something else entirely?
↑ comment by jbay · 2019-07-24T21:44:47.318Z · LW(p) · GW(p)
Thanks!
I think my point is different, although I have to admit I don't entirely grasp your objection to Nostalgebraist's objection. I think Nostalgebraist's point about rules being gameable does overlap with my example of multi-agent systems, because clear-but-only-approximately-correct rules are exploitable. But I don't think my argument is about it being hard to identify legitimate exceptions. In fact, astrophysicists would have no difficulty identifying when it's the right time to stop using Newtonian gravity.
But my point with the physics analogy is that sometimes, even if you actually know the correct rule, and even if that rule is simple (Navier-Stokes is still just one equation), you still might accomplish a lot more by using approximations and just remembering when they start to break down.
That's because Occam's-razor-simple rules like "to build a successful business, just turn a huge profit!" or "air is perfectly described by this one-line equation!" can be very hard to apply: hard to synthesize into specific new business plans or airplane designs, or even to use for making predictions about existing business plans or airplane designs.
I guess a better example is: the various flavours of utilitarianism each convert complex moral judgements into simple, universal rules to maximize various measures of utility. But even with a firm belief in utilitarianism, you could still be stumped about the right action in any particular dilemma, just because it might be really hard to calculate the utility of each option. In this case, you don't feel like you've reached an "exception" to utilitarianism at all -- you still believe in the underlying principle -- but you might find it easier to make decisions using an approximation like "try not to kill anybody", until you reach edge-cases where that might break down, like in a war zone.
You might not even know if eating a cookie will increase or decrease your utility, so you stick to an approximation like "I'm on a diet" to simplify your decision-making process until you reach an exception like "this is a really delicious-looking / unusually healthy cookie", in which case you decide it's worth dropping the approximation and reaching for the deeper rules of utilitarianism to make your choice.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-24T23:15:34.234Z · LW(p) · GW(p)
I think my point is different, although I have to admit I don’t entirely grasp your objection to Nostalgebraist’s objection.
Oh, I don’t object to what nostalgebraist says! I think it’s entirely right. (Also, to be clear, his post was written some time before my comment, so it’s not in any way a response to the latter.)
I say only that despite what he says seemingly being a serious challenge to (or even contradiction of) my post, nonetheless the post’s thesis survives the challenge intact, if not unscathed—mostly because no alternative approach to mine deals with the challenge any better.
I guess a better example is: the various flavours of utilitarianism …
Actually… I think this is a much worse example—because, in fact, I think such difficulties are entirely fatal to utilitarianism! (In fact I think that utilitarianism’s inadequacy as a moral theory is overdetermined—that is, that there are several reasons to reject it, each one sufficient on its own—but the sorts of problems you mention are certainly among those reasons.)
But let me return to your original examples—physics and business. Having thought about the matter a bit, it now seems to me that the position you are arguing against, which you (by implication) ascribe to me, is somewhat of a strawman.
The sort of situation I am referring to is one where you have (a) a rule that is applicable to a given class of situations, and (b) some phenomenon by which exceptions to the rule [i.e., specific situations where you don’t follow the rule, but instead do something else] arise. The claim I am making at the end of the post is that (b) is not some unfathomable black box from which, unexpectedly and unpredictably, exceptional cases spring, but rather a comprehensible set of criteria; and that (a) and (b) together constitute the actual “rule”—which, by construction, lacks exceptions. (And then there is the additional claim that there’s a benefit to making all of this explicit, and basing your decisions on it; this is the primary subject of the post.)
Now, it seems to me (and please correct me if I’m wrong here) that you are misreading me in two ways.
Firstly, it seems as if you are reading me as saying that (a) and (b) actually should be, or are, not two separate things but actually just one thing (and perhaps even that this one thing is, or should be, a simple thing). But I’m not saying anything of the sort! For instance, you say:
In fact, astrophysicists would have no difficulty identifying when it’s the right time to stop using Newtonian gravity.
Well and good! This is entirely consistent with my point. Here the “actual rule” would be something like: “relativity, plus whatever criteria we use to determine when to use Newtonian physics instead”. Clearly, this rule has no exceptions! (And if it does, well, whence those exceptions? How do physicists decide those are exceptions? However they did, whatever criteria they used—into the rule they go…)
Secondly, the situations I am referring to are, as I said, those where you have a rule that’s applicable to a given class of situations. By this I mean that you have some rule that tells you precisely what to do, but sometimes instead of doing that thing, you do (or, at least, are tempted to do) a different thing (i.e., you sometimes encounter [potentially] exceptional cases).
For example, if you have the rule “don’t eat cookies”, and you encounter a cookie, your rule is very clear on what you are to do: don’t eat the cookie. There’s no ambiguity here, no confusion or uncertainty. Should you eat this cookie? The rule says: no. You should not eat the cookie. End of story. That you are sometimes tempted to ignore, a.k.a. break, the rule, does not change the fact that the rule unambiguously dictates your actions. (The question, then, is why you’re tempted to make the exception, and exactly in what sorts of cases, etc.)
But note that this is not the case in your examples! If the rule, supposedly, is “use the Navier-Stokes equation”, but that equation is, in practice, impossible to calculate, then the rule doesn’t actually dictate your actions! It’s not that you know exactly what the answer is but you are unwilling to accept it; you just don’t have the answer! The supposed “rule” isn’t really any such thing. And in business it’s even worse: yes, “just turn a huge profit”, but what do I actually do? Specifically? I don’t know! I’m not tempted to break the rule, not at all; actually, I’d love to follow it, if only I knew how… but I don’t have any idea how! So, I have to use something other than this purported “rule”, in order to decide what to do.
comment by mingyuan · 2020-12-02T19:48:29.683Z · LW(p) · GW(p)
This post made me realize some things I was doing wrong in my life, and I sometimes still find myself explicitly saying in my head, "the real rules have no exceptions", when I'm considering breaking a self-imposed rule. I find it a very useful concept handle.
comment by Wei Dai (Wei_Dai) · 2020-01-31T22:48:12.425Z · LW(p) · GW(p)
Can anyone give some examples of "rules to which no legitimate exception will ever be encountered"? This post is being referred to in subsequent discussions, and I realized that I never really understood it due to lack of examples.
Also, examples of "When you do encounter a legitimate exception to a heretofore-exceptionless rule, immediately discard the rule and replace it with a new rule—one which accounts for situations like this one, which, to the old rule, had to be exceptions." would also be appreciated.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-02-01T09:03:10.679Z · LW(p) · GW(p)
Can anyone give some examples …
Certainly.
First, let me note that the key to understanding the post is this part:
But why do I say that good rules ought not have exceptions? Because rules already don’t have exceptions.
Exceptions are a fiction. They’re a way for us to avoid admitting (sometimes to ourselves, sometimes to others) that the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
The approach I describe above merely consists of making this fact explicit.
Once again, for emphasis:
… the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
And this is summarized by the title of the post: “The Real Rules Have No Exceptions”.
Now for some examples. I will give three: dietary restrictions, ethical injunctions, and criminal justice systems. We’ll examine each, and see how they fit into the concept I describe in the OP.
Personal dietary restrictions
This is the example in the quoted bit of Chris Leong’s post. You have a rule: “I won’t eat any cookies”. (You have decided on this rule, one imagines, to curb your sugar intake. Or something.) You’ve held strong for a while; you’ve turned down your friend’s signature chocolate chip cookies, and those wonderful black-and-white cookies they sell at the corner deli. But! You now find yourself faced with a bakery that sells what are, by all accounts of the cookie cognoscenti, the best cookies in the state. This, it seems to you, is a legitimate exception to your no-cookies rule. You eat the cookies. (They are delicious.)
The naïve view of this scenario is: “I am following a simple rule: No Cookies. But, sometimes, there are legitimate exceptions. Like, say, if the cookies are the best cookies in the state. Or… some similar situation. No Cookies is still the rule! Exceptions are just… exceptions.”
And I am saying that this view is both mistaken and imprudent. (More on this in a bit.)
Now, the obvious question to ask of the naïve account is: just what is this business of “legitimate exceptions”? What makes an exception “legitimate”, anyway? This is the crux of the matter. Chris Leong’s description of such scenarios says “you encounter a situation that legitimately feels exceptional”—but what makes one exception “feel” legitimate, and another “feel” illegitimate?
Generally, in such scenarios, there is some underlying intuition—which may or may not be easily verbalized or even teased out from examples. Nevertheless, there is (in my experience) always some pattern, some “generator” (to use the local parlance) of the intuition, some regularity—and this regularity sorts situations wherein the stated rule is applicable into the categories of “legitimate exception” and “not a legitimate exception”.
And so the core insight (such as it is) of my post is just this: whatever the stated rule may be, nevertheless the actual rule—the complete, fully described rule that governs situations of the given category—is constituted by the stated rule, plus whatever is the underlying pattern, dynamic, generator, etc., which determines which situations are legitimate exceptions to the stated rule.
Let’s return to our “No Cookies” example. Despite being a fairly trivial matter, this happens to be one of those cases where the underlying intuition behind judgments of exception legitimacy is hard to verbalize. It’s hard to say what may motivate someone to treat this particular situation (“best cookies in the state”) as a legitimate exception to a No Cookies rule… but consider this as one plausible account (out of potentially many other such accounts):
“If I encounter a situation where I have the opportunity to have an interesting, fun, or pleasant experience which is rare, or even unique, and which opportunity I can expect will not repeat itself often, or ever, then it is permissible to suspend certain rules which otherwise would be in effect at all times. This is because, firstly, the benefit to me of having such a rare positive experience outweighs the downside of undermining a generally-unbreakable rule, and secondly, if I do not expect such a situation to recur often, then I run relatively little risk of permanently undermining the rule to an extent that makes following it infeasible.”
Now, again, such an intuition will, for the overwhelming majority of people, not be a consciously held belief. If you ask them to tell you what is their policy vis-a-vis cookies, they will say: “my policy is No Cookies”. If you press them, they will confess that their policy admits of exceptions, in some legitimately exceptional situations. If you ask them to explain just what situations are “legitimately exceptional”, they will be unable to oblige you in any coherent way. Yet this does not, of course, mean that the above-described intuition (or something along those general lines) does not govern their behavior and their thinking on the subject of cookies.
So, what I am saying is: the real rule in this case is not No Cookies, but something more like: No Cookies, Unless Consumption Of Some Particular Cookies Constitutes A Rare Opportunity To Have An Unusual, Or Even Unique, Experience, Which I Expect Will Not Recur Often, Or Perhaps Ever. (Or something along these lines.)
I said earlier that the naïve view (“My rule is No Cookies. But, yes, sometimes there are legitimate exceptions.”) is both mistaken and imprudent. What I meant by “mistaken” should now be clear: the naïve view is substantially less accurate than the fully-informed view; it does not really let you make accurate predictions about your own behavior (not without the aid of that non-verbalized intuition). And what I mean by “imprudent” is this: if you hold the naïve view, then you really have no opportunity to examine that exception-generating intuition of yours, and to endorse it, or revise it, or reject it. On the other hand, if you are fully cognizant of what the real rule is, then you can give it due consideration—and perhaps tweak it to your liking!
Note two things. First: this fully formulated, a.k.a. “real”, rule—is it a “rule to which no legitimate exception will ever be encountered”? Here I must admit that this wording was a bit of shorthand on my part. What I was referring to was something a bit like the notion of conservation of expected evidence; that is, while it is not all that probable that any given rule will survive the rest of your life without having to be updated, nevertheless you should not ever expect to encounter exceptions, any more than you should ever expect to encounter evidence in some specific direction from your current belief. If you do expect to encounter evidence in a specific direction from your current belief, then you should update immediately, because this indicates that you already have some not-yet-integrated evidence (which is the source of your expectation). Similarly, if you have some specific reason to believe that you’ll encounter legitimate exceptions to some rule, then you should revise your rule, because the real rule you’re already following is your stated rule plus whatever is causing you to expect to encounter exceptions.
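(For reference, conservation of expected evidence is just the law of total probability read as a constraint on anticipation: P(H) = P(E)·P(H|E) + P(¬E)·P(H|¬E), so your current credence already equals your probability-weighted expected future credence, and any predictable direction of update means you should have updated already. The analogy I intend is exactly that: any predictable exception should already have been folded into the rule.)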
Second: what role does encountering what seems to you to be a legitimate exception play in this whole framework? Simply, it is a demonstration that your real rule is not the same as your stated rule, and that there are some hidden parts to it (which are the source of your sense of the given exception’s legitimacy). So, in our cookie example, suppose that you thought (and would have said, if asked) that your policy on the subject of cookies is simple: No Cookies, No Exceptions. Then you encounter the best cookies in the state, and say: “OK, well… no cookies or exceptions… except for these cookies, which are clearly legitimately exceptional”. Your rule, which you thought had no exceptions, turns out to have exceptions—and is thereby revealed not to have been the real rule all along. You should now (I claim) discard your (stated) “No Cookies” rule, and adopt—no! wrong! not “adopt”, because you are already using it!… and (consciously) accept the fully formulated, real rule. (Or, of course, reject the fully formulated rule, and thus also reject your judgment of the given exception’s legitimacy.)
Ethical injunctions
Suppose you have a rule of personal conduct: “no lying; always tell the truth”. Then you find yourself sheltering an innocent person from a tyrannical government, whose agents accost you and inquire about whether you’re doing any such thing. “Clearly,” you think, “this is a legitimate exception to that whole ‘no lying’ business; after all, an innocent person’s life is at stake, and anyhow, these guys are, like, super evil.” You lie, and thereby save a life.
You have now discovered (if you will but admit it to yourself) that your “no lying” rule wasn’t the real rule after all. If you’re now asked whether you have any specific reason to expect that you might encounter exceptions to this “no lying” rule, you will surely say “yes”. The real rule was something more like: “no lying, unless it’s necessary to save a life”. (There might also be some intuition about whether the person(s) you’re lying to are, in some sense, deserving of honesty; but that is more complex, and anyway, overdetermines your behavior—the innocent person’s life quite suffices.) You should (I claim) admit all this to yourself, discard the “no lying, ever” rule (which, if you decide to lie in this scenario, was never truly operative in the first place) and replace it with the fully formulated version. (Of course, as with the cookies, you also have the option of endorsing the simple rule—even after reflecting on the source of your intuition that this is a legitimate exception—and discarding instead your judgment of the exception’s legitimacy; and, of course, then telling the truth to the jackbooted thugs at your door.)
Once you have reflected thus, and either endorsed the fully formulated rule, or rejected it along with your judgment of the exception’s legitimacy, whatever stated rule you now follow is one to which you do not expect ever to find exceptions.
Criminal justice systems
We have (or so we are told in our middle-school civics class) a justice system where everyone has the right to a fair trial with a jury of their peers, and all are equal before the law. Yet even a cursory glance at a news source of your choice reveals that our system of criminal justice routinely finds all sorts of legitimate exceptions to this very just and simple rule.
Clearly, it would be altogether utopian to suggest that our government “should” discard the simple stated rule, and instead either explicitly adopt some rule along the lines of “everyone’s entitled to a fair trial with a jury of their peers, unless of course our courts are swamped with cases (which is most of the time) or it’s an election year and we’re trying to be ‘tough on crime’, or any number of various other things; and everyone’s equal before the law, except of course that if you have money you can hire a good lawyer and that makes people unequal, [… etc.; insert the usual litany of entirely legal, non-corruption-related exceptions to the ostensible fairness of the criminal justice system]”, or (the still more starry-eyed scenario) reject all the exceptions and actually administer the law as fairly as in the civics class fantasy. These things will not happen. But if you were elected Absolute Dictator of America, with the power to make any social or political changes with a wave of your hand, you would (I hope) consider either of these (preferably, of course, the latter) to be good candidates for early implementation.
The point, in any case, is that, once more, the real rules have no exceptions. The real social, political, and economic forces that determine who gets treated fairly by the criminal justice system and who does not, and what the outcomes are—these forces, these dynamics, do not have exceptions (at least, not ones we can ever expect or predict). They operate at all times. They are a constant source of legitimate (which is to say, endorsed, de facto, by the simple fact of being the status quo, and of not changing even if brought to light) exceptions to the stated rules (“all are equal before the law” and so forth) precisely because the stated rules are not the real rules, and the dynamics which determine actual outcomes are the real rules.
↑ comment by Wei Dai (Wei_Dai) · 2020-02-02T03:22:19.123Z · LW(p) · GW(p)
nevertheless you should not ever expect to encounter exceptions, any more than you should ever expect to encounter evidence in some specific direction from your current belief.
What does the first "expect" mean, in a technical sense? (The second "expect" does have a technical meaning which makes the statement sensible.)
The problem I see is that if I keep thinking about it, I can find an ever growing (but increasingly unlikely) list of exceptions to any rule. Do you just use an arbitrary probability threshold to define "expect", or what? For example with the No Cookies rule:
- except if someone invents a cookie that's good for my health
- except if someone points a gun at my head and orders me to eat a cookie
- except if a doctor prescribes cookies because of some medical or psychiatric reason
- except if I'm in a social situation where not eating a cookie is a serious faux pas (e.g., it will seriously offend the person offering me a cookie)
- except if I'm diagnosed with a terminal disease so I have no reason to care about my long term health anymore
- except if I'm presented with convincing evidence that I'm living in a simulation and eating cookies has no real negative consequences
etc. etc. When should I stop and say this is the real rule? (I could just go to full consequentialism and say the real rule is "no cookies except if the benefits of eating a cookie outweigh the costs" but presumably that's not the point of this post?)
↑ comment by Said Achmiz (SaidAchmiz) · 2020-02-02T08:13:00.822Z · LW(p) · GW(p)
The question of formalization (a.k.a. “what does ‘expect’ mean in a technical sense”) is a good one; I don’t have an answer for you. (As I said, the idea which I have in mind is like the idea of “conservation of expected evidence”, but, as you say, it’s not quite the same thing.) My mathematical skills do not suffice to provide any technical characterization of the term as I am using it.
It seems to me that the informal sense of the word suffices here; a formalization would be useful, no doubt (and if someone can construct one, more power to them)… but I do not see that the lack of one seriously undermines the concept’s validity or applicability.
In particular, your list of examples is composed almost entirely of cases which quite miss the point. Before going through them, though, I’ll note two things:
First—the purpose of the exercise is to construct more effective rules with which to govern our own behavior (as individuals), and the behavior of groups or organizations in which we participate. This general goal is often threatened by the existence of so-called “exceptions” to our ostensible rules, which can easily turn some apparently clear and straightforward rule against itself, and against the ostensible intent of the rule’s formulator(s). My aim in the OP is to provide a conceptual tool that counteracts this threat, by pointing out that the existence of “exceptions” is, in fact, a sign that there actually exists some real rule which is not identical to the stated rule (and which is the generator for the “exceptions”).
Second—the point I make in the OP is twofold: descriptive and prescriptive. The descriptive component is “the real rules have no exceptions”. The prescriptive component is “here is how you ought to deal with encountered apparent ‘legitimate exceptions’”. You seem to be objecting, here, to the prescriptive component. I do not think your objection holds (as I’ll try to demonstrate shortly), but note that even if you continue to find my prescription unconvincing, nevertheless the description remains true! There is some underlying pattern which is generating “legitimate exceptions”, and it will continue, unseen, to govern your behavior (and to undermine the predictability thereof)… unless you identify it, and either integrate or alter it.
We’ll do well to remember these two points as we consider the examples you offer. You propose that the following seem like potentially expectable exceptions to the “No Cookies” rule:
- except if someone invents a cookie that’s good for my health
Once again, recall that the point of the rule in the first place is to effectively govern your own behavior. The difficulty, after all, is what? It’s that you know that you shouldn’t eat cookies all the time (or perhaps, almost ever), but you also know that without some device with which to restrain yourself, you’ll eat lots of cookies, because they’re delicious. (We can express this in terms of first- and second-order desires, or “goals” vs. “urges”, or some framework along such lines, but I think that the point here ought to be simple enough in any case.) A No Cookies rule is such a device. Its purpose is to enforce upon yourself some rule which you wish enforced upon yourself, in the service of achieving, and maintaining, some goal of yours.
Now, what happens when you encounter the best cookies in the state, and they seem to you to be a legitimate exception to your No Cookies rule? Roughly, what you have discovered thereby is that in addition to your goal of maintaining your health, you also have some other goal(s), which compete with it (such as, perhaps, “avoid turning life into a joyless existence, devoid entirely of sensory pleasures”, or “don’t let rare experiences pass you by, as they are precious and enriching”). Any explicit rule meant to govern the given class of situations, which purports to embody your goals and preferences, must capture this competing goal, along with the “maintain health” goal.
But under this view, the quoted example of a purported exception isn’t any such thing after all! The purpose of the No Cookies rule was to stop yourself from eating lots of cookies and thereby harming your health in the pursuit of momentary pleasures… but this hypothetical “health cookie” doesn’t interfere at all with the “health maintenance” goal, and is entirely consonant with the purpose of the existing rule. If you like, you can say that we take “cookies” to be a stand-in for “delicious but unhealthy sweets”—and “health cookies” don’t fit the bill. (Indeed, such a broad interpretation is needed anyway, as otherwise we would have the absurd situation of abjuring cookies but gorging on brownies—thus utterly ruining the purpose of the rule—and having to engage in philosophical debates about whether “bar cookies” are cookies or a distinct culinary product called bars, etc.)
- except if someone points a gun at my head and orders me to eat a cookie
Well, first of all, should you encounter such a conundrum, you really have bigger problems than how best to formulate a rule governing your dietary practices.
Nothing I wrote in the OP (indeed, you may assume, nothing I ever write) is intended to replace common sense. I am not Eliezer; I do not write with the ultimate aim of applying my points to AI design. My prescription is meant for people—not for robots.
That having been said, there is, in fact, a non-ad-hoc way of handling just such cases; one prominent example of such an approach is seen in Jewish religious law, in the concept of pikuach nefesh. Briefly, the point is that there is no need to write into every rule a clause to the effect that “this rule shall be suspended if someone’s pointing a gun at my head”; instead, you have a general rule that if your life’s in danger, almost all other rules are suspended. Whatever goals and purposes your rules serve, they’re not so important as to be worth your life. (This doesn’t apply to all rules, just most of them… but certainly that “most” includes dietary restrictions.)
- except if a doctor prescribes cookies because of some medical or psychiatric reason
Essentially the same response applies as that for example #1. If the cookies in question are, in fact, necessary to maintain your health, then eating them serves the goal for which the No Cookies rule was formulated. There is no question, here, of whether this is a “legitimate exception”; no uncertainty, no temptation.
Again, remember that the No Cookies rule is made by you, to serve your goals, to guard those goals against your impulses and your weaknesses. Consider again the notion of “legitimate exceptions”. We have already covered the meaning of legitimate exceptions (they are manifestations of underlying intuitions which serve competing goals), but what about illegitimate exceptions? The possibility of such is implied, isn’t it? But what are they? Well, they’re the manifestations, not of competing goals, but of precisely the impulses or urges which the rule is aimed at restraining in the first place! The question of “legitimacy” of an exception is, then, the question: “I have an intuition that I ought to except this situation from application of the rule, but does that intuition spring from a competing goal which I endorse, or does it spring from the desire I am trying to restrain?”
But in the given example, the question does not arise, because the exception is not generated by your intuition, but by an entirely exogenous factor: your doctor. (And, it must be noted, the question of how the health-related goal of your No Cookies rule stacks up to whatever medical reason your doctor has for prescribing you cookies, can, and should, be discussed with your doctor!)
- except if I’m in a social situation where not eating a cookie is a serious faux pas (e.g., it will seriously offend the person offering me a cookie)
(Skipping this one for now; see below.)
- except if I’m diagnosed with a terminal disease so I have no reason to care about my long term health anymore
Well, then you can drop the No Cookies rule entirely, and need no longer worry about what does, or does not, constitute an exception to it.
- except if I’m presented with convincing evidence that I’m living in a simulation and eating cookies has no real negative consequences
The same response applies as to example #5.
Now, let’s return to the example I skipped:
- except if I’m in a social situation where not eating a cookie is a serious faux pas (e.g., it will seriously offend the person offering me a cookie)
Ah! Now, here we have a genuine difficulty—and it is precisely the sort of difficulty which the concept I describe in the OP is intended to handle.
First, a note. In my post and my comments, I have talked about “encountering” various situations (and, relatedly, “expecting” to encounter them). Yet as you demonstrate, one can imagine encountering all sorts of situations, before ever actually encountering them.
Well, and what is the problem with that? This, it seems to me, is a feature, not a bug. Surely it’s a good thing, and not at all a bad thing, to think through the implications of your rules, and to consider how they may be applied in this or that situation you might run into. Suppose, after all, that you run into such a social situation (where refusing an offered cookie is a faux pas), having never before considered the possibility of doing so. You are likely to experience some indecision; you may act in a way you will later come to regret; you will, in short, handle the situation more poorly than you might’ve, had you instead given the matter some thought in advance.
You may think of this, if you like, as “encountering” the situation in your mind, which (assuming that your imagined scenario contains no gross distortions of the likely reality) may stand in for encountering the situation in fact. If the imagined scenario contains an apparently legitimate exception to your rule, you can then apply the same approach I describe in my post (i.e., analyze the generator of the exception, then either integrate the exception by updating the rule, or keep the rule and judge the exception to be illegitimate after all).
(Of course, such things shouldn’t be overdone. It’s no good to be paralyzed into anxiety by the constant contemplation of all possible situations you may ever encounter. But this problem is, I think, beyond the scope of this discussion.)
Now, to the specific example. You have, we have said, a rule: No Cookies. But you find yourself in some social situation where applying this rule has negative social consequences. This would seem to be one of those legitimate exceptions. And why is this? Well, we may suppose that you’ve got (as most people have) a general goal along the lines of “maintain good social standing”; or, perhaps, the operative goal is something more like “maintain a good relationship with this specific individual”.
The question before you, then, is how to weigh this social goal of yours against the health goal served by the No Cookies rule. That is something you (that is, our hypothetical person with the No Cookies rule) must answer for yourself; there is no a priori correct answer. In some cases, for some people, the social goal overrides the health goal. But for others, the health goal takes precedence. In such a case, it is a very good idea to have considered such situations in advance, and to have decided, in advance, to stand firm—to reject, in other words, the intuitive judgment of the exception’s legitimacy, having analyzed it and given due consideration to its source (i.e., the goal of maintaining social status or a personal relationship).
Such advance consideration is valuable not only because it saves you from making on-the-spot decisions you would later regret, but also because it allows you to take steps to mitigate the effects of choosing one way or the other—to turn an “either way, I lose something important” situation into a win-win.
Take the case of a No Cookies rule which is challenged by the refusal of an offered cookie being a social faux pas. Suppose you decide that in such a case (or in a specific such case), you will give precedence to your social goal(s), and eat the cookie. What steps might you take to mitigate the effects of this? For one, you might consider the impact of this violation of your No Cookies rule on the goal the rule serves, and compensate by reducing your sugar intake for the day / week / month. Alternatively, you might anticipate the possibility of entirely ruining your diet by frequent encounters of such socially challenging cookie-related situations, and proactively ensure that you only rarely find yourself at cookie-tasting parties (or whatever).
Conversely, suppose you decided that in such a case (or in a specific such case), you will give precedence to your health goal(s), and refuse the cookie. What steps might you take to mitigate the seriousness of the faux pas? Well, you might warn your cookie-offering acquaintance in advance that you are on a No Cookies diet, apologize in advance for refusing their offer of a cookie, and assure them (and solicit credible witnesses to bolster your assurance) that your refusal isn’t a judgment on their cookie-baking skills, but rather is forced by your dietary needs.
I could just go to full consequentialism and say the real rule is “no cookies except if the benefits of eating a cookie outweigh the costs” but presumably that’s not the point of this post?
The point, as I say above, is to provide a conceptual tool with which to better govern your own behavior, and that of organizations and groups in which you participate. Consequentialism is very well and good, and I have no quarrel with it; but act consequentialism is impractical (for humans). Consider my post to be a suggestion for a certain sort of rule-consequentialist “implementation detail” for your consequentialist principles.
↑ comment by Said Achmiz (SaidAchmiz) · 2020-02-02T19:39:46.483Z · LW(p) · GW(p)
In keeping with my habit of illustrating things [LW · GW] using World of Warcraft [LW · GW], here is an additional, real-world (… more or less) example of applying the concept I describe in the OP.
Note that the case I’m about to describe has two interesting features which make it a useful case study for the concept. First, the rule in question is a rule meant to bind an organization, rather than an individual (in contrast to, e.g., the No Cookies rule we’ve thus far been discussing in this comment thread). Second, the challenge to the rule (which arose from the apparent existence of “legitimate exceptions”) was, in this case, resolved not by integrating the exceptions and updating the rule, but by rejecting the apparent legitimacy of the exceptions, identifying and repudiating the generator of those exceptions, and retaining the original rule.
Now, to the example. With the release of World of Warcraft: Classic (a.k.a. WoW), I’ve started playing the game once more, and so once more I routinely encounter the challenges of raiding, loot distribution, and everything else I described in my post about incentives and rewards in WoW [LW · GW]. (See that post, and the one before it [LW · GW], for explanations of all the WoW jargon I use here.) The following happened to a guild with which I’m familiar.
This guild had wisely chosen the EP/GP loot distribution system [LW · GW] (without question, the most rational of loot systems) for use in their raids. The system worked well at first, but soon situations such as the following began to arise: some raid member would receive a piece of gear (having the highest priority ratio among all those who wanted this item), but—so the sentiment among many of the raiders went—it would have gone to better use in the hands of a different raid member. Or: some item of loot—quite powerful, and potentially beneficial to the raid in the hands of one or another specific raid member—was discarded, and went to waste, because no one wanted to “spend points” (that is, to sacrifice their loot priority) on that item.
The raid leadership began to talk of legitimate exceptions… which, of course, stirred up anxiety and discontent among the raiders. (After all, if the rules only apply until the raid leader decides they don’t apply, then the rules don’t really apply at all… and the benefits of having a known, predictable system of loot distribution—raid member satisfaction and empowerment, the delegation of optimization tasks, etc.—are lost.) Seeing this, the guild’s officers held a public discussion, and analyzed the situation as follows.
Two competing goals, they said, together generate our intuitions (and yours) about how loot should be distributed. On the one hand, we desire that there be equity, fairness, and freedom of choice in the process; those who contribute, should be rewarded, and they should be free to choose how to spend the currency of those fairly allocated rewards. On the other hand, we also strive for raid progression, and to effectively defeat the challenges of raid content [i.e., killing powerful “raid boss” monsters—which are the source of loot]. Certain allocations of loot items, and certain allocation systems, may serve the former goal more than they serve the latter, and vice versa.
However (continued the guild officers), fairness is one of the stated values of this guild—and it takes precedence over optimization of raid progression. Our chosen loot distribution system (EP/GP) is meant to be the fairest system, and to provide an environment where our raid members can reliably expect to be rewarded for their contributions—and that is our top priority. This will, indeed, sometimes result in a less-than-optimal result from the standpoint of whole-raid optimization. We accept this consequence. We say that any apparent “legitimate exceptions” to EP/GP-based loot distribution, whose seeming legitimacy stems from the intuition generated by the “optimize the raid’s overall performance” goal, are not, in fact, legitimate, in our eyes. We recognize this goal, the source of such intuitions, and while we do not in the least disclaim it, we nonetheless explicitly place it below the goal of fairness, in our goal hierarchy. There will (the guild officers concluded) be no exceptions, after all. The rule will stand.
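(A toy rendering of that resolution, purely for illustration—the goal names and the ranking below are invented for the example, not a transcript of the guild’s actual policy:)

# Goals ranked by the guild, highest priority first (illustrative names only).
GOAL_PRIORITY = ["fairness", "raid_optimization"]

def exception_is_legitimate(rule_goal, exception_goal):
    # An exception generated by some competing goal is accepted only if that
    # goal outranks the goal the rule serves; here, fairness outranks
    # raid optimization, so optimization-driven exceptions are rejected.
    return GOAL_PRIORITY.index(exception_goal) < GOAL_PRIORITY.index(rule_goal)

print(exception_is_legitimate("fairness", "raid_optimization"))  # False: the rule stands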
comment by Unnamed · 2021-01-25T10:06:19.306Z · LW(p) · GW(p)
It seems like the core thing that this post is doing is treating the concept of "rule" as fundamental.
If you have a general rule plus some exceptions, then obviously that "general rule" isn't the real process that is determining the results. And noticing that (obvious once you look at it) fact can be a useful insight/reframing.
The core claim that this post is putting forward, IMO, is that you should think of that "real process" as being a rule, and aim to give it the virtues of good rules such as being simple, explicit, stable, and legitimate (having legible justifications).
An alternative approach is to step outside of the "rules" framework and get in touch with what the rule is for - what preferences/values/strategy/patterns/structures/relationships/etc. it serves. Once you're in touch with that purpose, then you can think about both the current case, and what will become of the "general rule", in that light. This could end up with an explicitly reformulated rule, or not.
It seems like treating the "real process" as a rule is more fitting in some cases than others, a better fit for some people's style of thinking than for other people's, and also something that a person could choose to aim for more or less.
I think I'd find it easier to think through this topic if there was a long, diverse list of brief examples.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2021-01-25T19:52:14.118Z · LW(p) · GW(p)
This comment [LW(p) · GW(p)] discusses a class of situations where what you say seems likely to be true.
In most other cases, I think the sort of attitude you describe is likely to be a way to avoid admitting (to yourself or others) what the “real rules” are. Once you start saying stuff like…
… treating the “real process” as a rule is more fitting in some cases than others, a better fit for some people’s style of thinking than for other people’s, and also something that a person could choose to aim for more or less.
… then the usefulness of the concept/approach described in the OP is destroyed.
The request for more examples (note that I give three extended ones downthread [LW(p) · GW(p)]) is not unreasonable, but if the existing examples don’t convince, I’m not entirely sure more would, either. What is your take on the examples I’ve provided so far?
comment by ryan_b · 2019-07-23T15:51:37.440Z · LW(p) · GW(p)
And if you’re not prepared to discard the rule and formulate a new one, well, then the exception must not be all that compelling
This feels similar to the Beware Trivial Inconveniences [LW · GW] and The Amish, and Strategic Norms Around Technology [LW · GW] posts; the inconvenience of having to update the rule by itself serves as a disincentive to violate the rule.
comment by Zvi · 2020-12-04T17:18:49.830Z · LW(p) · GW(p)
I endorse this perspective and have since well before this post, and it was great to have it said explicitly and cleanly by someone else. This is especially true because I believe most people disagree with it. I've linked back to this a few times.
comment by Raemon · 2019-09-04T20:49:34.191Z · LW(p) · GW(p)
Curated.
I've found this post useful for crystallizing my own thinking, both about rules I follow (as a human taking actions), and even a bit helpful for grokking the overall Law vs Toolbox [LW · GW] distinction.
Looking over the comments, I see some people seem to have found it less crystallizing than I did. I have a sense that there's a version of this post that could have bridged some inferential gulfs better. But also, there's not necessarily such a thing as a universally good explanation.
I have some sense that the post could be improved if it were given a second draft whose goal was specifically to find someone who didn't grok the first version of the post, and to explore various different explanations until one clicks.
But, also there's no such thing as a universally compelling explanation and maybe this is just a case where it was useful to add one more road to Rome [LW · GW] that was helpful for at least some people.
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-01-07T00:28:29.202Z · LW(p) · GW(p)
I think the point this post makes is right, both as a literal definition of what a rule is and as a description of how you should respond to the tendency to make "exceptions." I prefer the notion of a "framework" to a rule, because it suggests that the rules can be flexible, layered, and operating only in specific contexts (where appropriate). For example, I'm trying to implement a set of rules about when I take breaks from work, but the rule "25 minutes on, 5 minutes off" is valid only when I'm actually at work.
My point of disagreement is the conclusion - that exceptions are primarily a form of self-signaling, a way to avoid being honest about the real rule.
I think instead people have a mistaken belief that you should be able to just declare a rule and stick to it, right away and without modification. They want to be healthy, so the first thought their mind floats is "don't eat any cookies... EVER AGAIN!"
Well, the real rule they want to follow might be much more nuanced. But instead, they just observe themselves making a rule, then breaking it. "Exception" is just the word they use for a modification of the rule. That and "lack of willpower."
So the issue isn't so much one of honesty or self-signaling as one of a kludgy, ill-thought-out perspective on willpower.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2021-01-07T02:15:45.345Z · LW(p) · GW(p)
My point of disagreement is the conclusion—that exceptions are primarily a form of self-signaling, a way to avoid being honest about the real rule.
I did not say “self-signaling”.
Note well that the idea of “the real rules have no exceptions” applies to rules that govern social groups and organizations and subcultures and societies just as much (if not more!) as it applies to rules made by a person to govern their own actions.
In that light, the signaling is not to oneself, but to others; and it is of great importance (as the rule-as-stated, clean and exception-free as it is, creates legitimacy, and the appearance of explicit structure and order). And for this reason also, the insight described in the OP is, in such contexts, subversive to the group and to those in power within it, because it is corrosive to the beliefs and behaviors that maintain the group’s cohesion and stability.
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-01-07T02:40:20.782Z · LW(p) · GW(p)
Let me rephrase my objection on this point. You explain the rules/exceptions dynamic as motivated by signaling. It's hard to give examples, because you don't actually explain the specific function of not stating the "real rule" in any particular case.
My explanation is different. I think it's that it's difficult and contrary to common sense to explicitly state the real rule, with all its nuances and layers.
For example, it's very easy to say "I'm gonna stop eating cookies." Then it's two months later, you eat a cookie, and you make an "exception" that it was OK because you'd been good for so long, or because they're really good cookies, or whatever. It feels, in the moment, like an appropriate action, even though it violates your original rule.
Then you continue under the assumption that the rule is "I'm gonna stop eating cookies..." until you feel the time is right again to eat another cookie.
The reason you don't specify the specific circumstances or timing for when it's OK to eat cookies isn't necessarily because you want to show yourself what a good dieter you are, or show other people.
It's just that the idea that you'd explicitly specify the complete rule set seems hard, weird, and just doesn't occur to most people.
I do think that signaling can enter in here, if people consider what it would look like to others to have some elaborate, constantly modified explicit ruleset for cookie-eating. That might be a factor weighing against it. People want to come across as having willpower, not being neurotic, having good healthy habits already, and being effortlessly successful.
There's a big difference between explaining why it's common sense not to create big complicated rulesets for behavior, and explaining why any individual person avoids creating those rulesets.
I think for individuals, the reason is that it's not common sense, and that even if it were, it's often hard to think the problem through.
I think the reason why it's not common sense in the first place does have more to do with signaling and attendant coordination problems. Is it polite to offer alcohol at parties? What if we know that one of the attendees is secretly a recovering alcoholic? Is it polite to refuse dessert if we're trying to diet? Is it impolite to offer if we know somebody's trying to diet?
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2021-01-07T03:55:09.448Z · LW(p) · GW(p)
It’s hard to give examples, because you don’t actually explain the specific function of not stating the “real rule” in any particular case.
By no means is it hard to give examples. Indeed, I did give several examples in an earlier comment [LW(p) · GW(p)].
As for reasons to keep the real rule unstated, they seem clear enough to me. I did not state them because I considered them too obvious to belabor… of course, it’s possible that I was wrong about this!
I can make my views of this explicit, if you like, but I don’t think I will be adding much to the understanding of signaling already common in this forum. In fact, I wonder if anyone else (perhaps one of the folks who liked or benefited from this post) would like to try their hand at explaining this? It would give me useful info about whether readers of this post understood it as I intended it to be understood (and would help to clarify the post for anyone confused, of course).
Replies from: AllAmericanBreakfast
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-01-07T09:37:19.505Z · LW(p) · GW(p)
Personally, I hear the phrase “signaling” used often, but it’s starting to sound a little hollow. Who is signaling what, to whom, why, how do they know how it’s being perceived, how do we know this, and what else is going on? I demand specifics!
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2021-01-07T11:02:19.747Z · LW(p) · GW(p)
Quite reasonable. In that case, yes, I invite readers who enjoyed (and believe that they did properly understand) this post to say what they believe the answer to this question is.
If there aren’t any responses in, let us say, two weeks, then I will post my own explanation.
comment by Dagon · 2019-07-23T16:57:36.335Z · LW(p) · GW(p)
One point in favor of biasing toward non-exceptions (I still won't say "none at all") is that some parts of me are adversarial with the parts who are identifying rules. They can be very persuasive that this is a time for an exception, so it makes sense to have a pretty high bar (mostly: it contradicts another rule in the same ring) for making exceptions.
comment by Dagon · 2019-07-23T04:59:49.344Z · LW(p) · GW(p)
Alternate approach: recognize that rules (as opposed to physical laws) are always and only guidelines, or defaults, or lossy summaries of one's intent. There's no such thing as a complete and consistent ruleset, and even if we could get close, it wouldn't fit in our brains.
Rules are like models: none are true (none are fully binding or complete descriptions of desired behavior), many are useful (in that they can give good defaults and heuristics for common cases, where deeper computation is undesirable or infeasible).
There are no real rules. Exceptions may be fiction, but that's because rules are fiction in the first place.
Rules don't exist in the territory, they're just fuzzy areas on maps.
Replies from: Raemon, kithpendragon
↑ comment by Raemon · 2019-07-23T05:06:54.679Z · LW(p) · GW(p)
Something something Law Thinking vs Toolbox Thinking [LW · GW]. My read is that Said's post here is meant to help people think about Lawful thinking, noticing that there *is* some actual optimal rule you can follow (even if it's computationally intractable and you don't know what the rule is).
Replies from: Dagon, SaidAchmiz
↑ comment by Dagon · 2019-07-23T14:38:19.291Z · LW(p) · GW(p)
I don't think I can pass an ITT for strict lawful thinking. I'm absolutely supportive of discovering and creating summaries of future decision intent, and of being somewhat rigorous in doing so. But I can't ignore the fundamental complexity of the real world, and the fact that these are ONLY extremely compressed expressions of a set of beliefs.
I may be stuck in toolbox thinking, though I'll definitely use lawful models as some of my tools. Or I may simply not be smart enough to identify and make legible the incredible variety of decisions I face over time. Rules (and habits, which are basically unconscious rules) make this tolerable, as I can spend very little energy on most of them. But there are daily choices where I see conflicts among rules and have to choose among rules that might apply, and also among the meta-rules to pick the right rule, and meta-meta-rules to weigh across different meta-rules, etc.
I kind of wonder if we have actual different felt experiences on the topic. I can only think of stated rules as porous and directional, and I feel good when I violate one for a good purpose. Take that, over-simplistic, condescending non-agent worldview! I also feel good when I recognize a new context in which a rule applies and find that the rule is stronger than I previously thought, so I'm not anti-rule in general, just that I think they're a convenience rather than a truth.
I've talked with other people who are horrified when they find a case that an accepted rule interferes with doing the best thing, and work hard to reconcile the situation with patches or meta-rules (and get angry when I use the word "rationalization"). They seem to feel near-physical pain from violating (some) rules without a lot of justification. I have sometimes been guilty of thinking they just need to find the right Manic Pixie Dream Person to break them out of the bonds of propriety, but I also wonder if there's something deeper in the way the world actually feels day-to-day to them and to me.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-23T05:20:22.754Z · LW(p) · GW(p)
The comment I wrote just now [LW(p) · GW(p)] is relevant.
↑ comment by kithpendragon · 2019-07-23T10:37:11.304Z · LW(p) · GW(p)
I like to formulate this as "I intend to be the kind of person who mostly X" or "I plan to X on a [TIME]ish basis". Using these formulations removes the friction and stress I experience from "I must X every day" or "I must never X". I've found this makes habits easier to assimilate since they are intentions and not hard rules.
comment by Dacyn · 2019-09-13T13:46:34.119Z · LW(p) · GW(p)
the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
The approach I describe above merely consists of making this fact explicit.
This would be true were it not for your meta-rule. But the criteria for deciding whether something is a legitimate exception may be hazy and intuitive, and not amenable to being stated in a simple form. This doesn't mean the criteria are bad, though.
For example, I wouldn't dream of formulating a rule about cookies that covered the case "you can eat them if they're the best in the state", but I also wouldn't say that just because someone is trying to avoid eating cookies means they can't eat the best-in-state cookies. It's a judgement call. If you expect your judgement to be impaired enough that following rigid explicitly stated rules will be better than making judgement calls, then OK, but it is far from obvious that this is true for most people.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-09-13T15:21:04.614Z · LW(p) · GW(p)
For example, I wouldn’t dream of formulating a rule about cookies that covered the case “you can eat them if they’re the best in the state”
Why?
This seems like a case that is entirely amenable to formalization (and without any great difficulty, either).
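For instance, here is a purely hypothetical sketch of one way such a rule could be written down; the specific criteria, the 90-day spacing, and the tracked state are invented for illustration, not a claim about how anyone actually formalizes it.

```python
# Purely hypothetical sketch of formalizing the "no cookies, except truly
# exceptional ones" rule. Criteria and thresholds are invented for illustration.

from datetime import date, timedelta

MIN_DAYS_BETWEEN_EXCEPTIONS = 90

def may_eat_cookie(best_in_state, today, last_exception):
    """Allow a cookie only if it is credibly best-in-state AND the previous
    exception was long enough ago that exceptions remain rare."""
    rare_enough = (today - last_exception) >= timedelta(days=MIN_DAYS_BETWEEN_EXCEPTIONS)
    return best_in_state and rare_enough

last_exception = date(2019, 1, 1)  # state the rule-follower would track
print(may_eat_cookie(True,  date(2019, 7, 23), last_exception))  # True
print(may_eat_cookie(True,  date(2019, 1, 15), last_exception))  # False: too soon
print(may_eat_cookie(False, date(2019, 7, 23), last_exception))  # False: not exceptional
```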
If you expect your judgement to be impaired enough that following rigid explicitly stated rules will be better than making judgement calls
“Judgment calls” are not irreducible.
One of the great insights that comes from the informal canon of best practices for GMing TTRPGs (e.g.) is that “rules” and “judgment calls” need not be contrasted with each other; on the contrary, the former can, and often does, assist and improve the latter. In other words, it’s not that following explicitly stated rules is better than making judgment calls, but rather that following explicitly stated rules is how you do better at making judgment calls.
comment by Donald Hobson (donald-hobson) · 2019-07-26T11:35:51.558Z · LW(p) · GW(p)
When making your own decisions, only a full description of your own utility function and decision theory will tell you what to do in every situation. And (work out what you would do if you were maximally smart, then do that) is a useless rule in practice. When deciding your own actions, you don't need to use rules at all.
If you are in any kind of organization that has rules, you have to use your own decision theory to work out which decision is best. To do this would involve weighing up the pros and cons of rule breaking, with one of the cons being any punishment the rule enforcers might apply.
Suppose you are in charge, you get to write the rules and no one else can do anything about rules they don't like.
You are still optimizing for more than just being correct. You want rules that are reasonably enforceable: the decision of whether or not to punish can depend only on things the enforcers know. You also want the rules to be short enough and simple enough for the rule followers to comprehend.
The best your rules can hope to do when faced with a sufficiently weird situation is not apply any restrictions at all.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-26T15:05:52.472Z · LW(p) · GW(p)
When deciding your own actions, you don’t need to use rules at all.
Even a rudimentary level of knowledge of how people behave is enough to know that this is entirely false. Act consequentialism doesn’t work for human psychology.
Suppose you are in charge, you get to write the rules and no one else can do anything about rules they don’t like.
This, too, bears no resemblance to reality. People can do all sorts of things about rules they don’t like.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-07-26T16:00:14.457Z · LW(p) · GW(p)
Act consequentialism doesn’t work for human psychology.
In what sense does it "not work"? I feel like I use act consequentialism all the time, for example when deciding what restaurant to go to for dinner (which can't be made into a rule since it depends on so many variables like where I am located on a particular day, what the weather is like, and what foods I've eaten recently), or to decide whether to say X to person Y or hold my tongue (which similarly depends on many variables). I may not be doing expected utility computations in a conscious or explicit way (at least not in most cases), but I'm guessing the neural networks implementing my intuitions have been trained to do something like that. (ETA: Because the choices I make usually respond to changing circumstances in a way that seems consistent with doing something like EU maximization.) Do you have some reason to think otherwise?
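To make “something like that” concrete, here is a toy sketch of an explicit expected-utility computation over restaurant options; the options, probabilities, and utilities are invented, and this is not claimed to be anyone’s actual decision procedure.

```python
# Toy illustration of explicit expected-utility maximization over restaurant
# choices. Every option, probability, and utility here is made up.

options = {
    "thai_place":  [(0.7, 8.0), (0.3, 3.0)],  # (probability, utility) pairs
    "pizza_place": [(0.9, 6.0), (0.1, 2.0)],
    "new_bistro":  [(0.5, 9.0), (0.5, 1.0)],
}

def expected_utility(outcomes):
    """Probability-weighted average of the utilities of an option's outcomes."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.2f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)  # thai_place (EU 6.50 vs. 5.60 and 5.00)
```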
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-26T21:13:33.565Z · LW(p) · GW(p)
I feel like I use act consequentialism all the time, for example when deciding what restaurant to go to for dinner …
Really? When deciding what restaurant to go to for dinner, you examine all possible consequences of all the choices at your disposal (and the probability distributions across them), evaluate or rank them all, and select one? You don’t use any rules at all?
… which can’t be made into a rule since it depends on so many variables like where I am located on a particular day, what the weather is like, and what foods I’ve eaten recently
Why do you think that there needs to be a rule, instead of, say, multiple rules (some combination of which may bear on any given situation)? And why can’t rules depend on variables? (Or contain heuristics, etc.?)
… I’m guessing the neural networks implementing my intuitions have been trained to do something like [expected utility computations]
This seems stupendously unlikely. My reason for thinking otherwise is that this just isn’t consistent with anything we know about how people make decisions.
Because the choices I make usually respond to changing circumstances in a way that seems consistent with doing something like EU maximization.
Are you just saying that the preferences revealed in your choices conform to the VNM axioms (or some other formalism—if so, which?)? (If you are, then you know that this implies nothing at all about whether your brain is actually doing any expected utility computations.) Or are you making some stronger claim? If so, what is it?
Replies from: Wei_Dai, Dagon
↑ comment by Wei Dai (Wei_Dai) · 2019-07-27T01:37:27.441Z · LW(p) · GW(p)
Really? When deciding what restaurant to go to for dinner, you examine all possible consequences of all the choices at your disposal (and the probability distributions across them), evaluate or rank them all, and select one? You don’t use any rules at all?
No, I guess I examine some subset of consequences that seem relevant to each decision (i.e., might differ across my choices in a predictable way, and the differences make a difference for my values). I can't confidently say that I don't use any rules at all (maybe I'm using some rules in some subconscious way, or I'm doing something that counts as "using a rule") but neither can I say what those rules are.
Why do you think that there needs to be a rule, instead of, say, multiple rules (some combination of which may bear on any given situation)? And why can’t rules depend on variables? (Or contain heuristics, etc.?)
I wasn't intending to make a point about single vs multiple rules (but since you ask, having multiple rules seems to require some meta-rule to tell you which rules to use in which circumstances and how to adjudicate conflict between them, so that meta-rule would be "the rule"). My point was more that I don't see what rule(s) I could be using that would seemingly take into account so many variables in such a fluid and dynamic way, and can seemingly handle new unforeseen circumstances/variables without me having to think "how should I change my rules to handle this?"
This seems stupendously unlikely. My reason for thinking otherwise is that this just isn’t consistent with anything we know about how people make decisions.
Can you list some such inconsistencies, so I can have a better idea of what you mean?
Are you just saying that the preferences revealed in your choices conform to the VNM axioms (or some other formalism—if so, which?)?
No, I mean things like when one of my choices would predictably cause some bad consequences (and doesn't cause enough good consequences to compensate) I seem to fairly reliably avoid making that choice, even when there's enough novelty involved that it seems unlikely I would have created a rule to cover the situation ahead of time, and without having to think "how should I change my rules to handle this?"
↑ comment by Dagon · 2019-07-26T22:26:31.648Z · LW(p) · GW(p)
You may need to taboo "rule" to get much further on this. I can't speak for Wei, but I use plenty of heuristics, cached ideas, and non-legible estimates of effect in choosing a dinner location. None of these are "rules" in the sense I get from this post, and I don't abandon nor reformulate them when I choose a different food than previously.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-26T22:33:46.264Z · LW(p) · GW(p)
To clarify, “rule” as used in the grandparent and “rule” as used in the OP are different concepts. (Namely, in the grandparent I was referring—following what I took to be Wei Dai’s usage—to rule consequentialism.)
Replies from: Dagon
↑ comment by Dagon · 2019-07-28T00:03:24.253Z · LW(p) · GW(p)
Can you delineate a bit of the difference between these uses of "rule", and how rule consequentialism avoids any of the problems that the post (and comments/objections to it) talks about?
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-28T00:30:21.461Z · LW(p) · GW(p)
… how rule consequentialism avoids any of the problems that the post (and comments/objections to it) talks about?
It… doesn’t? Who said that it does? I’m not even sure what that would mean; it seems like an almost entirely orthogonal issue…
(As for the uses of “rule”, that’s a fine question but I hesitate to write any lengthy commentary on it, because it seems like we have some sort of weird misunderstanding, and it may not even be relevant…)
comment by jmh · 2019-07-24T12:02:09.747Z · LW(p) · GW(p)
At one level I can find agreement with your position. On another I find it difficult.
Well-defined rules should clearly apply (much like a function with a clean mapping from domain to range) to known settings. However, that raises the question of how we know we have clearly defined or perceived all the possible settings to which the rule might seem to apply.
Is there a presumption of perfect knowledge when making the rule?
[Edit to add: Has anyone here read the article "Origins of Predictable Behavior" (AER 1984, I think), by Ron Heiner? If not, I think it may offer some additional insights for this discussion. It's been over 20 years since I read it, so I don't even want to try summarizing the impressionistic memory I have.]
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-24T14:05:23.009Z · LW(p) · GW(p)
Is there a presumption of perfect knowledge when making the rule?
No. Of course not.
If you learn something new, or encounter some new situation, that makes some existing rule no longer make sense—you discard that rule, and make a new rule. This is no different from encountering a “legitimate exception”.
… how do we know we have clearly defined or perceived all the possible setting the rule might seem to apply
We don’t. We try our best, but we have no guarantees. That is life.
This is, in fact, the point of all that stuff about discarding and re-formulating rules when you encounter “exceptions”, periodically auditing your rules, etc. It is a way—indeed, the only sane way—of dealing with imperfect knowledge, and the inevitability of surprises.
Replies from: jmh
↑ comment by jmh · 2019-07-24T18:32:05.049Z · LW(p) · GW(p)
If you learn something new, or encounter some new situation, that makes some existing rule no longer make sense—you discard that rule, and make a new rule. This is no different from encountering a “legitimate exception”.
I think walking through that door puts us right back at "there's an exception to every rule" -- which has always implied that the exception was in fact legitimate.
Exceptions are a fiction. They’re a way for us to avoid admitting (sometimes to ourselves, sometimes to others) that the rule as stated, together with the criteria for deciding whether something is a “legitimate” exception, is the actual rule.
This requires that we already had the criteria for deciding whether the as-yet-unknown exception arising from the new information was in fact well-formed and able to deal with the unknown information.
If one wants to argue that rules are inherently context/informationally bound, and that within those bounds we can define where the rule applies and where it does not, I agree. But that seems a lot different from saying that we can update rules as situations arise and that this somehow allows us to escape the trap, or temptation, of claiming "exception" to escape holding ourselves to the rule.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-24T19:19:41.121Z · LW(p) · GW(p)
Requires that we already had the criteria for deciding if the as yet unknown exception arising from the know information was in fact well formed and able to deal with the unknown information.
No. It does not. Nothing that I wrote requires this.
comment by Jimdrix_Hendri · 2019-07-24T02:10:40.656Z · LW(p) · GW(p)
A good rule is an objective procedure that can be applied to derive a response to any foreseeable situation.
A look-up table is not a rule, for the same reason that a detailed table of planetary ephemerides is not a substitute for the law of gravity.
Nostalgebraist's suggestion cannot be considered a rule at all. It is not objective.
In the realm of psychology and politics, rules gain legitimacy when they are adhered to over a long period of time and when they are seen to consistently protect against bad outcomes.
There is a case for flexible interpretation, but an agent who abandons rules too frequently, and with only slight incentive, will eventually lose confidence in his ability to abide by rules. This was only hinted at in the original post, but it is a point worth making explicit.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-24T04:33:12.642Z · LW(p) · GW(p)
What is an “objective procedure”? How does it differ from a procedure which is not “objective”?
Replies from: Jimdrix_Hendri
↑ comment by Jimdrix_Hendri · 2019-07-24T21:32:04.566Z · LW(p) · GW(p)
One criterion for a procedure to be objective is that it can be carried out equally by anyone.
A procedure which includes the codicil "Sometimes, I will step in and overturn the arrangement" fails on three counts:
1. It fails to explicitly define the criteria for making interventions.
2. Nothing is said about the range of interventions that will be entertained.
3. It does not specify the means by which the type of intervention will be determined.
The name for this is diktat, and it is almost always inappropriate and dangerous.
There are other ways of building in flexibility. For instance:
In many cases where the environment (causes) is very unpredictable, it is still possible to establish guidelines with reference to effects.
At the same time, the "rule" can explicitly state criteria for turning off the intervention, thereby reducing the risk that the intervention become a new normal.
Types of interventions can also be limited to a pre-existing list of alternatives, which can be criticised and vetted before the emergency is triggered.
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-24T23:47:50.323Z · LW(p) · GW(p)
There are other ways of building in flexibility. For instance:
In many cases where the environment (causes) is very unpredictable, it is still possible to establish guidelines with reference to effects.
At the same time, the “rule” can explicitly state criteria for turning off the intervention, thereby reducing the risk that the intervention become a new normal.
Types of interventions can also be limited to a pre-existing list of alternatives, which can be criticised and vetted before the emergency is triggered.
Yes, all of these things can be done. And if you do any or all of them, and do not have a provision for applying judgment that stands above any of these provisions, your system will be exploitable. This is nostalgebraist’s point.
In practice, there will always exist “no, actually, we’re not following that rule in this case” exceptions. Our options are as follows:
1. Pretend strenuously that no exceptions exist. Apply the rules as inflexibly as possible, even (in fact, especially) in cases where it really seems like we shouldn’t apply them, to maintain the illusion that no exceptions exist.
2. Admit the fact that exceptions exist; attempt to make explicit, clarify, and rationalize the criteria for exception-making (in other words, fold them into the explicit rules); nevertheless maintain the option to exercise judgment, in contravention to the rules.
Crucially, in case #1, there will still be exceptions. They will be “snuck in” via creative interpretation of the rules, or via expansion or alteration of the rule set with rules that are bad rules (and exist only to allow a certain class of cases which the rules would otherwise forbid), or via manipulations outside the rule system that make the rule-based decision itself irrelevant, or in any number of other ways. Look to our criminal justice system for examples, and our political system also; and any number of other systems of rules.
Note three things:
First, that nostalgebraist’s advice does not in any way prevent you from formulating guidelines, and attempting to minimize and to circumscribe the scope of your “extralegal” judgments. In fact, you should do this—especially if you find yourself making such “extralegal” exceptions often! You should seek a pattern in them, and see if they (or at least some of them) can be made into a rule; or, perhaps, if the existing rules need to be altered.
Second, the “fail-safe” clause is not meant to be used only by one “side” or party in some relationship—quite the opposite! Consider what nostalgebraist says:
Of course, this option can itself be abused, particularly if it is used freely and unthinkingly. If the rules hold only until the moment you don’t like their consequences, then the rules don’t really hold at all.
But if someone does that, you can take the same option yourself: “yeah, I said you should have this option, but using it like that is wrong.” Because the option does not have specific rules (by construction), no one can use rules lawyering to take it away from you. If someone uses “calling bullshit” for bad ends, you can just call bullshit on them.
If someone is using “sometimes I will just use my judgment” to be a capricious, unpredictable dictator, then you say: “you are acting improperly”. By construction, it will do the would-be tyrant no good to say “ah, but we agreed that I can use my judgment whenever I see fit, so your objection is invalid”!
Third, you say:
In the realm of psychology and politics, rules gain legitimacy when they are adhered to over a long period of time and when they are seen to consistently protect against bad outcomes.
There is a case for flexible interpretation, but an agent who abandons rules too frequently, and with only slight incentive, will eventually lose confidence in his ability to abide by rules.
Here you are speaking first about legitimacy—how rules meant to bind others are perceived by the public—and then, apparently, about self-control and will. These are not the same thing, and it does no good to discuss them both in the same breath.