# Don't Get Distracted by the Boilerplate

post by johnswentworth · 2018-07-26

*Author’s Note: Please don’t get scared off by the first sentence. I promise it's not as bad as it sounds.*

There’s a theorem from the early days of group theory which says that any continuous, monotonic function which does not depend on the order of its inputs can be transformed to addition. A good example is multiplication of positive numbers: f(x, y, z) = x*y*z. It’s continuous, it’s monotonic (increasing any of x, y, or z increases f), and we can change around the order of inputs without changing the result. In this case, f is transformed to addition using a logarithm: log(f(x, y, z)) = log(x) + log(y) + log(z).
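The multiplication example can be checked numerically. Here is a minimal sketch in Python (the function `f` and the sample inputs are just illustrative):

```python
import math

def f(x, y, z):
    """A continuous, monotonic, symmetric function of positive inputs."""
    return x * y * z

x, y, z = 2.0, 3.0, 4.0

# The logarithm transforms f into plain addition:
lhs = math.log(f(x, y, z))
rhs = math.log(x) + math.log(y) + math.log(z)
assert math.isclose(lhs, rhs)
```

The symmetry of `f` is what makes a single transform work for every input slot at once; reordering the arguments changes neither side of the equation.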

Now, at first glance, we might say this is a very specialized theorem. “Continuous” and “monotonic” are very strong conditions; they’re not both going to apply very often. But if we actually look through the proof, it becomes clear that these assumptions aren’t as important as they look. Weakening them does change the theorem, but the core idea remains. For instance, if we remove monotonicity, then our function can still be written in terms of *vector* addition.

Many theorems/proofs contain pieces which are really just there for modelling purposes. The central idea of the proof can apply in many different settings, but we need to pick one of those settings in order to formalize it. This creates some mathematical boilerplate. Typically, we pick a setting which keeps the theorem simple - but that may involve stronger boilerplate assumptions than are strictly necessary for the main idea.

In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.

Unfortunately, the boilerplate occasionally distracts people who aren’t familiar with the full idea underlying the proof. For some reason, I see this problem most with theorems in economics, game theory and decision theory - the sort of theorems which say “either X, or somebody is giving away free money”. People will come along and say “but wait, the theorem assumes Y, which is completely unrealistic!” But really, Y is often just boilerplate, and the core ideas still apply even if Y is relaxed to something more realistic. In fact, in many cases, the confusion is over the *wording* of the boilerplate! Just because we use the word “bet”, doesn’t mean people need to be at a casino for the theorem to apply.

A few examples:

- “VNM utility theorem is unrealistic! It requires that we have preferences over every possible state of the universe.” Response: Completeness is really just there to keep the math clean. The core ideas of the proof still show that, if we don’t have a utility function over some neighborhood of world-states, then we can be exploited using only those world-states.
- “All these rationality theorems are unrealistic! They’re only relevant to worlds where evil agents are constantly running around looking to exploit us.” Response: We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. Whether an evil person manipulates us into it, or we stumble into it, isn’t really relevant. The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.
- “VNM utility theorem is unrealistic! It assumes we’re willing to accept either a trade or its opposite (or both) - rather than just ignoring offers.” Response: We face trade-offs in the real world where “ignore” is not an option, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily.
- “Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous. Alternatively, we can build bid-ask spreads into the model, and most of the structure remains.
- “Dutch Book Theorems are unrealistic! They assume we’re constantly making bets on everything possible.” Response: every time we make a decision under uncertainty, we make a bet. Do so inconsistently, and we throw away resources unnecessarily.
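To make the “throwing away resources” point concrete, here is a minimal sketch of the classic Dutch book construction (the agent, its credences, and the ticket prices are all hypothetical): an agent whose probabilities for “rain” and “not rain” sum to more than 1 pays more for a pair of bets than they can possibly pay out.

```python
# Hypothetical agent whose credences in "rain" and "not rain" sum to 1.2.
p_rain, p_no_rain = 0.6, 0.6

# The agent treats a ticket paying $1 if event E occurs as worth $P(E),
# so it willingly buys both tickets at those prices.
cost = p_rain + p_no_rain   # $1.20 paid up front

# Exactly one of the two events occurs, so the tickets pay out $1 total.
payout = 1.0

loss = cost - payout        # positive regardless of whether it rains
assert loss > 0
```

Nothing here requires a casino or an adversary: any sequence of decisions whose implicit prices sum the same way leaks resources in exactly the same fashion.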

In closing, one important note: I definitely do not want to claim that *all* objections to the use of the VNM utility theorem, Dutch Book theorems, etc. make this kind of mistake.

## 10 comments


I think your post needs a counterpoint: to deserve that kind of trust, a result needs to also have good empirical reputation. Not all theoretical results are like that. For example, Aumann agreement makes perfect sense in theory and is robust to small changes, but doesn't happen in reality. A big part of studying econ is figuring out which parts have empirical backing and how much.

Is Aumann robust to untrustworthiness?

I really liked this post, because I have been on both sides of the coin here: that is to say, I have been the person who thought a theory was irrelevant because its assumptions were too extreme, and I have been the person trying to apply the core insights of the theory, and been criticized because the situation to which I was applying it did not meet various assumptions. I was confused each time, and I am pretty sure I have even been on both sides of the *same theory* at least once.

It is practically inevitable that either side is the correct answer depending on the situation, and possible that I was closer to correct than the person I was disagreeing with. But then, I was confused each time; the simpler explanation by far is that I was confused about the theories under discussion.

When I am trying to learn about a particular theory or field, I now set as the first priority the historical context for its development. This is very reliable for communicating the underlying intuitions, and also can be counted on to describe the real-life situations that inspired them or to which they were applied.

I don’t think this post passes the Intellectual Turing Test for people (like me) who object to the sorts of theorems you cite.

You say:

In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.

But in most such cases, whether the “weaker forms” of the theorems do, in fact, “maintain the central concepts”, is *exactly what is at issue*.

Let’s go through a couple of examples:

The core ideas of the proof [of the VNM theorem] still show that, if we don’t have a utility function over some neighborhood of world-states, then we can be exploited using only those world-states.

This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really *my* primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.

We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

“VNM utility theorem is unrealistic! It assumes we’re willing to accept either a trade or its opposite (or both) - rather than just ignoring offers.” Response: We face trade-offs in the real world where “ignore” is not an option, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily.

Ditto.

“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

“Dutch Book Theorems are unrealistic! They assume we’re constantly making bets on everything possible.” Response: every time we make a decision under uncertainty, we make a bet. Do so inconsistently, and we throw away resources unnecessarily.

Once again, please provide some *real-world* examples of when this applies.

In summary: you seem to think, and claim, that people simply *aren’t aware* that there’s a weaker form of the theorem, which is still claimed to be true. I submit to you that if your interlocutor is intelligent and informed, then this is almost always *not* the case. Rather, people *are* aware of the “weaker form”, *but do not accept it as true*!

(After all, the “strong form” has a *proof*, which we can, like, look up on the internet and so on. The “weak form” has… what? Usually, nothing but hand-waving… or that’s how it seems, anyway! In any case, making a serious, convincing case for the “weak form”, *with real-world examples*, that engages with doubters and addresses objections, etc., is where the meat of this sort of argument has to be.)

This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.

VNM is used to show why you need to have a utility function if you don't want to get Dutch-booked. It's not something the OP invented; it's the whole point of VNM. One wonders what you thought VNM was about.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

That we face trade-offs in the real world is a claim under dispute?

Ditto.

Another way of phrasing it is that we can model "ignore" as a choice, and derive the VNM theorem just as usual.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

Ditto.

Once again, please provide some *real-world* examples of when this applies.

OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.

VNM is used to show why you need to have utility functions if you don’t want to get Dutch-booked. It’s not something the OP invented, it’s the whole point of VNM. One wonder what you thought VNM was about.

This is a confused and inaccurate comment.

The von Neumann-Morgenstern utility theorem states that *if* an agent’s preferences conform to the given axioms, *then* there exists a “utility function” that will correspond to the agent’s preferences (and so that agent can be said to behave as if maximizing a “utility function”).

We may then ask whether there is any *normative* reason for our preferences to conform to the given axioms (or, in other words, whether the axioms are justified by anything).

If the answer to this latter question turned out to be “no”, the VNM theorem would continue to hold. The theorem is entirely agnostic about whether any agent “should” hold the given axioms; it only tells us a certain mathematical fact about agents that *do* hold said axioms.

It so happens to be the case that for at least some[1] of the axioms, an agent that violates that axiom will agree to a Dutch book. Note, however, that the truth of this fact is independent of the truth of the VNM theorem.

Once again: if the VNM theorem were false, it could still be the case that an agent that violated one or more of the given axioms would agree to a Dutch book; and, conversely, if the latter were not the case, the VNM theorem would remain as true as ever.

[1] It would be rather audacious to claim that this is true for *each* of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

That we face trade-offs in the real world is a claim under dispute?

Your questions give the impression that you’re being deliberately dense.

Obviously it’s true that we face trade-offs. What is not so obvious is *literally the entire rest of the section I quoted*.

“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.

As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a *non sequitur*.

More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an *actual example*. In fact, it should not be onerous to provide—let’s say—*three* examples, yes? Please be *specific*.

[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

How can an agent not conform to the completeness axiom? It literally just says "either the agent prefers A to B, or B to A, or prefers neither". Offer me an example of an agent that doesn't conform to the completeness axiom.

Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.

The entire rest of the section is a straightforward application of the theorem. The objection is that X doesn't happen in real life, and the counter-objection is that something like X does happen in real life, meaning the theorem does apply.

As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.

Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement makes sense if you replace "VNM" with "Dutch book"? Your behavior does not really send the vibe of someone who wants to approach this complicated issue honestly, and more the vibe of someone looking for Internet debate points.

More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.

- If I cross the street, I make a bet about whether a car will run over me.
- If I eat a pizza, I make a bet about whether the pizza will taste good.
- If I'm posting this comment, I make a bet about whether it will convince anyone.
- etc.

*(Note: I ask that you not take this as an invitation to continue arguing the primary topic of this thread; however, one of the points you made is interesting enough on its own, and tangential enough from the main dispute, that I wanted to address it for the benefits of anyone reading this.)*

[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or prefers neither”. Offer me an example of an agent that doesn’t conform to the completeness axiom.

This turns out to be an interesting question.

One obvious counterexample is simply an agent whose preferences are not totally deterministic; suppose that when choosing between A and B (though not necessarily in other cases involving other choices), the agent flips a coin, preferring A if heads, B otherwise (and thenceforth behaves according to this coin flip). However, until they actually have to make the choice, they have no preference. How do you propose to construct a Dutch book for this agent? Remember, the agent will only determine their preference *after* being provided with your offered bets.

A less trivial example is the case of bounded rationality. Suppose you want to know if I prefer A to B. However, either or both of A/B are outcomes that I have not considered yet. Suppose also (as is often the case in reality) that whenever I do encounter this choice, I will at once perceive that to fully evaluate it would be computationally (or otherwise cognitively) intractable given the limitations of time and other resources that I am willing to spend on making this decision. I will therefore rely on certain heuristics (which I have inherited from evolution, from my life experiences, or from god knows where else), I will consider certain previously known data, I will perhaps spend some small amount of time/effort on acquiring information to improve my understanding of A and B, and then form a preference.

My preference will thus depend on various contingent factors (what heuristics I can readily call to mind, what information is easily available for me to use in deciding, what has taken place in my life up to the point when I have to decide, etc.). Many, if not most, of these contingent factors, are not known to you; and even were they known to you, their effects on my preference are likely to be intractable to determine. You therefore are not able to model me as an agent whose preferences are complete. (We might, at most, be able to say something like “Omega, who can see the entire manifold of existence in all dimensions and time directions, can model me as an agent with complete preferences”, but certainly not that *you*, nor any other realistic agent, can do so.)

Finally, “Expected Utility Theory without the Completeness Axiom” (Dubra et al., 2001) is a fascinating paper that explores some of the implications of completeness axiom violation in some detail. Key quote:

Before stating more carefully our goal and the contribution thereof, let us note that there are several economic reasons why one would like to study incomplete preference relations. First of all, as advanced by several authors in the literature, it is not evident if completeness is a fundamental rationality tenet the way the transitivity property is. Aumann (1962), Bewley (1986) and Mandler (1999), among others, defend this position very strongly from both the normative and positive viewpoints. Indeed, if one takes the psychological preference approach (which derives choices from preferences), and not the revealed preference approach, it seems natural to define a preference relation as a potentially incomplete preorder, thereby allowing for the occasional "indecisiveness" of the agents. Secondly, there are economic instances in which a decision maker is in fact composed of several agents each with a possibly distinct objective function. For instance, in coalitional bargaining games, it is in the nature of things to specify the preferences of each coalition by means of a vector of utility functions (one for each member of the coalition), and this requires one to view the preference relation of each coalition as an incomplete preference relation. The same reasoning applies to social choice problems; after all, the most commonly used social welfare ordering in economics, the Pareto dominance, is an incomplete preorder. Finally, we note that incomplete preferences allow one to enrich the decision making process of the agents by providing room for introducing to the model important behavioral traits like status quo bias, loss aversion, procedural decision making, etc.

I encourage you to read the whole thing (it’s a mere 13 pages long).

*P.S.* Here’s the aforementioned “Aumann (1962)” (yes, that very same Robert J. Aumann)—a paper called “Utility Theory without the Completeness Axiom”. Aumann writes in plain language wherever possible, and the paper is *very* readable. It includes this line:

Of all the axioms of utility theory, the completeness axiom is perhaps the most questionable.[8] Like others of the axioms, it is inaccurate as a description of real life; but unlike them, we find it hard to accept even from the normative viewpoint.

The full elaboration for this (perhaps quite shocking) comment is too long to quote; I encourage anyone who’s at all interested in utility theory to read the paper.

Though there’s a great deal more I could say here, I think that when accusations of “looking for Internet debate points” start to fly, that’s the point at which it’s best to bow out of the conversation.