Don't Get Distracted by the Boilerplate
post by johnswentworth · 2018-07-26T02:15:46.951Z · LW · GW · 19 comments
Author’s Note: Please don’t get scared off by the first sentence. I promise it's not as bad as it sounds.
There’s a theorem from the early days of group theory which says that any continuous, monotonic function which does not depend on the order of its inputs can be transformed to addition. A good example is multiplication of positive numbers: f(x, y, z) = x*y*z. It’s continuous, it’s monotonic (increasing any of x, y, or z increases f), and we can change around the order of inputs without changing the result. In this case, f is transformed to addition using a logarithm: log(f(x, y, z)) = log(x) + log(y) + log(z).
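(A quick numerical check of the multiplication example, written out as a sketch. The general shape being asserted is presumably f(x, y, z) = g⁻¹(g(x) + g(y) + g(z)) for some invertible g, with g = log in this case.)

```python
import math

def f(x, y, z):
    return x * y * z   # continuous, monotonic, symmetric in its inputs

x, y, z = 2.0, 3.0, 5.0
g = math.log

# log turns the product into a sum...
assert abs(g(f(x, y, z)) - (g(x) + g(y) + g(z))) < 1e-9

# ...and f can be recovered from the additive form via the inverse transform:
assert abs(f(x, y, z) - math.exp(g(x) + g(y) + g(z))) < 1e-9
```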
Now, at first glance, we might say this is a very specialized theorem. “Continuous” and “monotonic” are very strong conditions; they’re not both going to apply very often. But if we actually look through the proof, it becomes clear that these assumptions aren’t as important as they look. Weakening them does change the theorem, but the core idea remains. For instance, if we remove monotonicity, then our function can still be written in terms of vector addition.
Many theorems/proofs contain pieces which are really just there for modelling purposes. The central idea of the proof can apply in many different settings, but we need to pick one of those settings in order to formalize it. This creates some mathematical boilerplate. Typically, we pick a setting which keeps the theorem simple - but that may involve stronger boilerplate assumptions than are strictly necessary for the main idea.
In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.
Unfortunately, the boilerplate occasionally distracts people who aren’t familiar with the full idea underlying the proof. For some reason, I see this problem most with theorems in economics, game theory and decision theory - the sort of theorems which say “either X, or somebody is giving away free money”. People will come along and say “but wait, the theorem assumes Y, which is completely unrealistic!” But really, Y is often just boilerplate, and the core ideas still apply even if Y is relaxed to something more realistic. In fact, in many cases, the confusion is over the wording of the boilerplate! Just because we use the word “bet” doesn’t mean people need to be at a casino for the theorem to apply.
A few examples:
- “VNM utility theorem is unrealistic! It requires that we have preferences over every possible state of the universe.” Response: Completeness is really just there to keep the math clean. The core ideas of the proof still show that, if we don’t have a utility function over some neighborhood of world-states, then we can be exploited using only those world-states.
- “All these rationality theorems are unrealistic! They’re only relevant to worlds where evil agents are constantly running around looking to exploit us.” Response: We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. Whether an evil person manipulates us into it, or we stumble into it, isn’t really relevant. The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.
- “VNM utility theorem is unrealistic! It assumes we’re willing to accept either a trade or its opposite (or both) - rather than just ignoring offers.” Response: We face trade-offs in the real world where “ignore” is not an option, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily.
- “Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous. Alternatively, we can build bid-ask spreads into the model, and most of the structure remains.
- “Dutch Book Theorems are unrealistic! They assume we’re constantly making bets on everything possible.” Response: every time we make a decision under uncertainty, we make a bet. Do so inconsistently, and we throw away resources unnecessarily.
In closing, one important note: I definitely do not want to claim that all objections to the use of the VNM utility theorem, Dutch Book theorems, etc. make this kind of mistake.
19 comments
comment by Said Achmiz (SaidAchmiz) · 2018-07-26T14:46:07.625Z · LW(p) · GW(p)
I don’t think this post passes the Intellectual Turing Test for people (like me) who object to the sorts of theorems you cite.
You say:
In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.
But in most such cases, whether the “weaker forms” of the theorems do, in fact, “maintain the central concepts”, is exactly what is at issue.
Let’s go through a couple of examples:
The core ideas of the proof [of the VNM theorem] still show that, if we don’t have a utility function over some neighborhood of world-states, then we can be exploited using only those world-states.
This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.
We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.
Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.
“VNM utility theorem is unrealistic! It assumes we’re willing to accept either a trade or its opposite (or both) - rather than just ignoring offers.” Response: We face trade-offs in the real world where “ignore” is not an option, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily.
Ditto.
“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.
Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.
“Dutch Book Theorems are unrealistic! They assume we’re constantly making bets on everything possible.” Response: every time we make a decision under uncertainty, we make a bet. Do so inconsistently, and we throw away resources unnecessarily.
Once again, please provide some real-world examples of when this applies.
In summary: you seem to think, and claim, that people simply aren’t aware that there’s a weaker form of the theorem, which is still claimed to be true. I submit to you that if your interlocutor is intelligent and informed, then this is almost always not the case. Rather, people are aware of the “weaker form”, but do not accept it as true!
(After all, the “strong form” has a proof, which we can, like, look up on the internet and so on. The “weak form” has… what? Usually, nothing but hand-waving… or that’s how it seems, anyway! In any case, making a serious, convincing case for the “weak form”, with real-world examples, that engages with doubters and addresses objections, etc., is where the meat of this sort of argument has to be.)
↑ comment by Paperclip Minimizer · 2018-08-14T19:33:25.450Z · LW(p) · GW(p)
This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.
VNM is used to show why you need to have utility functions if you don't want to get Dutch-booked. It's not something the OP invented, it's the whole point of VNM. One wonders what you thought VNM was about.
Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.
That we face trade-offs in the real world is a claim under dispute?
Ditto.
Another way of phrasing it is that we can model "ignore" as a choice, and derive the VNM theorem just as usual.
Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.
Ditto.
Once again, please provide some real-world examples of when this applies.
OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.
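(A minimal numeric sketch of the standard Dutch-book arithmetic, with made-up prices; this illustrates what "derived as usual" cashes out to, rather than being one of the requested real-world examples.)

```python
# Suppose my acceptable prices for $1 tickets on an event E and on not-E sum to more than 1,
# i.e. my implied probabilities are incoherent: 0.60 + 0.55 = 1.15.
price_E = 0.60        # what I'd pay for a ticket paying $1 if E happens
price_not_E = 0.55    # what I'd pay for a ticket paying $1 if E doesn't happen

cost_to_me = price_E + price_not_E   # I buy both tickets: 1.15
payout_to_me = 1.0                   # exactly one of E / not-E occurs, so exactly one ticket pays

print("guaranteed loss:", round(cost_to_me - payout_to_me, 2))   # 0.15, whatever happens
```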
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-14T20:32:53.562Z · LW(p) · GW(p)
VNM is used to show why you need to have utility functions if you don’t want to get Dutch-booked. It’s not something the OP invented, it’s the whole point of VNM. One wonders what you thought VNM was about.
This is a confused and inaccurate comment.
The von Neumann-Morgenstern utility theorem states that if an agent’s preferences conform to the given axioms, then there exists a “utility function” that will correspond to the agent’s preferences (and so that agent can be said to behave as if maximizing a “utility function”).
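(For concreteness, the guaranteed representation can be stated as follows; this is a standard textbook formulation, not a quote from the post. If the agent's preference relation ⪰ over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a function u on outcomes such that, for lotteries L = (p1, x1; …; pn, xn) and M = (q1, x1; …; qn, xn),

L ⪰ M   if and only if   p1·u(x1) + … + pn·u(xn) ≥ q1·u(x1) + … + qn·u(xn),

and u is unique up to positive affine transformation, i.e. up to replacing u with a·u + b for a > 0.)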
We may then ask whether there is any normative reason for our preferences to conform to the given axioms (or, in other words, whether the axioms are justified by anything).
If the answer to this latter question turned out to be “no”, the VNM theorem would continue to hold. The theorem is entirely agnostic about whether any agent “should” hold the given axioms; it only tells us a certain mathematical fact about agents that do hold said axioms.
It so happens to be the case that for at least some[1] of the axioms, an agent that violates that axiom will agree to a Dutch book. Note, however, that the truth of this fact is independent of the truth of the VNM theorem.
Once again: if the VNM theorem were false, it could still be the case that an agent that violated one or more of the given axioms would agree to a Dutch book; and, conversely, if the latter were not the case, the VNM theorem would remain as true as ever.
[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.
Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.
That we face trade-offs in the real world is a claim under dispute?
Your questions give the impression that you’re being deliberately dense.
Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.
“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.
Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.
Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.
As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.
More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.
↑ comment by Paperclip Minimizer · 2018-08-15T17:08:25.483Z · LW(p) · GW(p)
[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
How can an agent not conform to the completeness axiom? It literally just says "either the agent prefers A to B, or B to A, or has no preference". Offer me an example of an agent that doesn't conform to the completeness axiom.
Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.
The entire rest of the section is a straightforward application of the theorem. The objection is that X doesn't happen in real life, and the counter-objection is that something like X does happen in real life, meaning the theorem does apply.
As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.
Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement makes sense if you replace "VNM" with "Dutch book"? Your behavior does not really send the vibe of someone who wants to approach this complicated issue honestly, and more the vibe of someone looking for Internet debate points.
More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.
- If I cross the street, I make a bet about whether a car will run over me.
- If I eat a pizza, I make a bet about whether the pizza will taste good.
- If I'm posting this comment, I make a bet about whether it will convince anyone.
- etc.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-16T18:51:37.017Z · LW(p) · GW(p)
(Note: I ask that you not take this as an invitation to continue arguing the primary topic of this thread; however, one of the points you made is interesting enough on its own, and tangential enough from the main dispute, that I wanted to address it for the benefits of anyone reading this.)
[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or has no preference”. Offer me an example of an agent that doesn’t conform to the completeness axiom.
This turns out to be an interesting question.
One obvious counterexample is simply an agent whose preferences are not totally deterministic; suppose that when choosing between A and B (though not necessarily in other cases involving other choices), the agent flips a coin, preferring A if heads, B otherwise (and thenceforth behaves according to this coin flip). However, until they actually have to make the choice, they have no preference. How do you propose to construct a Dutch book for this agent? Remember, the agent will only determine their preference after being provided with your offered bets.
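(A toy sketch of such an agent, purely to make the "no preference exists until the choice is actually faced" point concrete; the class and names here are illustrative, not anything from the comment.)

```python
import random

class LazyAgent:
    """Preference between two options is only fixed the first time the agent
    actually faces that choice (by a coin flip), and is consistent thereafter."""
    def __init__(self):
        self._prefs = {}                                 # frozenset({a, b}) -> chosen option

    def choose(self, a, b):
        key = frozenset({a, b})
        if key not in self._prefs:                       # no preference exists yet...
            self._prefs[key] = random.choice([a, b])     # ...until the choice is actually posed
        return self._prefs[key]

agent = LazyAgent()
first = agent.choose("A", "B")
assert agent.choose("A", "B") == first                   # consistent once determined
```

The relevant feature is that a bookie who has to commit to a package of trades in advance does not yet know which way the preference will resolve.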
A less trivial example is the case of bounded rationality. Suppose you want to know if I prefer A to B. However, either or both of A/B are outcomes that I have not considered yet. Suppose also (as is often the case in reality) that whenever I do encounter this choice, I will at once perceive that to fully evaluate it would be computationally (or otherwise cognitively) intractable given the limitations of time and other resources that I am willing to spend on making this decision. I will therefore rely on certain heuristics (which I have inherited from evolution, from my life experiences, or from god knows where else), I will consider certain previously known data, I will perhaps spend some small amount of time/effort on acquiring information to improve my understanding of A and B, and then form a preference.
My preference will thus depend on various contingent factors (what heuristics I can readily call to mind, what information is easily available for me to use in deciding, what has taken place in my life up to the point when I have to decide, etc.). Many, if not most, of these contingent factors, are not known to you; and even were they known to you, their effects on my preference are likely to be intractable to determine. You therefore are not able to model me as an agent whose preferences are complete. (We might, at most, be able to say something like “Omega, who can see the entire manifold of existence in all dimensions and time directions, can model me as an agent with complete preferences”, but certainly not that you, nor any other realistic agent, can do so.)
Finally, “Expected Utility Theory without the Completeness Axiom” (Dubra et al., 2001) is a fascinating paper that explores some of the implications of completeness axiom violation in some detail. Key quote:
Before stating more carefully our goal and the contribution thereof, let us note that there are several economic reasons why one would like to study incomplete preference relations. First of all, as advanced by several authors in the literature, it is not evident if completeness is a fundamental rationality tenet the way the transitivity property is. Aumann (1962), Bewley (1986) and Mandler (1999), among others, defend this position very strongly from both the normative and positive viewpoints. Indeed, if one takes the psychological preference approach (which derives choices from preferences), and not the revealed preference approach, it seems natural to define a preference relation as a potentially incomplete preorder, thereby allowing for the occasional "indecisiveness" of the agents. Secondly, there are economic instances in which a decision maker is in fact composed of several agents each with a possibly distinct objective function. For instance, in coalitional bargaining games, it is in the nature of things to specify the preferences of each coalition by means of a vector of utility functions (one for each member of the coalition), and this requires one to view the preference relation of each coalition as an incomplete preference relation. The same reasoning applies to social choice problems; after all, the most commonly used social welfare ordering in economics, the Pareto dominance, is an incomplete preorder. Finally, we note that incomplete preferences allow one to enrich the decision making process of the agents by providing room for introducing to the model important behavioral traits like status quo bias, loss aversion, procedural decision making, etc.
I encourage you to read the whole thing (it’s a mere 13 pages long).
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-16T21:45:20.248Z · LW(p) · GW(p)
P.S. Here’s the aforementioned “Aumann (1962)” (yes, that very same Robert J. Aumann)—a paper called “Utility Theory without the Completeness Axiom”. Aumann writes in plain language wherever possible, and the paper is very readable. It includes this line:
Of all the axioms of utility theory, the completeness axiom is perhaps the most questionable.[8] Like others of the axioms, it is inaccurate as a description of real life; but unlike them, we find it hard to accept even from the normative viewpoint.
The full elaboration for this (perhaps quite shocking) comment is too long to quote; I encourage anyone who’s at all interested in utility theory to read the paper.
↑ comment by abramdemski · 2020-09-22T18:17:30.497Z · LW(p) · GW(p)
I happened upon this old thread, and found the discussion intriguing. Thanks for posting these references! Unless I'm mistaken, it sounds like you've discussed this topic a lot on LW but have never made a big post detailing your whole perspective. Maybe that would be useful! At least I personally find discussions of applicability/generalizability of VNM and other rationality axioms quite interesting.
Indeed, I think I recently ran into another old comment of yours in which you made a remark about how Dutch Books only hold for repeated games? I don't recall the details now.
I have some comments on the preceding discussion. You said:
It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
For me, it seems that transitivity and completeness are on an equally justified footing, based on the classic money-pump argument.
Just to keep things clear, here is how I think about the details. There are outcomes. Then there are gambles, which we will define recursively. An outcome counts as a gamble for the sake of the base case of our recursion. For gambles A and B, pA+(1-p)B also counts as a gamble, where p is a real number in the range [0,1].
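(Written as a quick code sketch of the same recursion, with illustrative names; nothing here goes beyond the definition just given.)

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Outcome:
    name: str                          # base case: an outcome counts as a gamble

@dataclass(frozen=True)
class Mix:
    p: float                           # probability of taking branch a, with p in [0, 1]
    a: "Gamble"
    b: "Gamble"                        # represents the gamble pA + (1-p)B

Gamble = Union[Outcome, Mix]

# e.g. a 50/50 gamble between an outcome and a further 30/70 gamble:
g = Mix(0.5, Outcome("apple"), Mix(0.3, Outcome("banana"), Outcome("cherry")))
```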
Now we have a preference relation > on our gambles. I understand its negation to be ≤; saying A≤B is the same thing as ¬(A>B). The indifference relation, A~B, is just the same thing as A≤B and B≤A.
This is different from the development on Wikipedia, where ~ is defined separately. But I think it makes more sense to define > and then define ~ from that. A>B can be understood as "definitely choose A when given the choice between A and B". ~ then represents indifference as well as uncertainty like the kind you describe when you discuss bounded rationality.
From this starting point, it's clear that either A<B, or B<A, or A~B. This is just a way of saying "either A<B or B<A or neither". What's important about the completeness axiom is the assumption that exactly one of these holds; this tells us that we cannot have both A<B and B<A.
But this is practically the same as circular preferences A<B<C<A, which transitivity outlaws. It's just a circle of length 2.
The classic money-pump against circularity is that if we have circular preferences, someone can charge us for making a round trip around the circle, swapping A for B for C for A again. They leave us in the same position we started, less some money. They can then do this again and again, "pumping" all the money out of us.
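(A bare-bones simulation of that round trip, with toy numbers, just to make the pumping mechanical.)

```python
# Agent with circular preferences A < B < C < A: it will pay a small fee to swap
# whatever it holds for the thing it locally prefers, one step around the circle.
prefers = {"A": "B", "B": "C", "C": "A"}   # right-hand side preferred to left-hand side
fee = 1.0

holding, money = "A", 100.0
for _ in range(9):                         # three full trips around the circle
    offer = prefers[holding]               # the pumper offers the locally preferred swap
    holding, money = offer, money - fee    # the agent accepts, paying the fee each time

print(holding, money)                      # 'A' again, but 9.0 poorer
```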
Personally I find this argument extremely metaphysically weird, for several reasons.
- The money-pumper must be God, to be able to swap arbitrary A for B, and B for C, etc.
- But furthermore, the agent must not understand the true nature of the money-pumper. When God asks about swapping A for B, the agent thinks it'll get B in the end, and makes the decision accordingly. Yet, God proceeds to then ask a new question, offering to swap B for C. So God doesn't actually put the agent in universe B; rather, God puts the agent in "B+God", a universe with the possibility of B, but also a new offer from God, namely, to move on to C. So God is actually fooling the agent, making an offer of B but really giving the agent something different than B. Bad decision-making should not count against the agent if the agent was misled in such a manner!
- It's also pretty weird that we can end up "in the same situation, but with less money". If the outcomes A,B,C were capturing everything about the situation, they'd include how much money we had!
I have similar (but less severe) objections to Dutch-book arguments.
However, I also find the argument extremely practically applicable, so much so that I can excuse the metaphysical weirdness. I have come to think of Dutch-book and money-pump arguments as illustrative of important types of (in)consistency rather than literal arguments.
OK, why do I find money-pumps practical?
Simply put, if I have a loop in my preferences, then I will waste a lot of time deliberating. The real money-pump isn't someone taking advantage of me, but rather, time itself passing.
What I find is that I get stuck deliberating until I can find a way to get rid of the loop. Or, if I "just choose randomly", I'm stuck with a yucky dissatisfied feeling (I have regret, because I see another option as better than the one I chose).
This is equally true of three-choice loops and two-choice loops. So, transitivity and completeness seem equally well-justified to me.
Stuart Armstrong argues that there is a weak money pump for the independence axiom [LW · GW]. I made a very technical post [LW · GW] (not all of which seems to render correctly on LessWrong :/) justifying as much as I could with money-pump/dutch-book arguments, and similarly got everything except continuity.
I regard continuity as not very theoretically important, but highly applicable in practice. IE, I think the pure theory of rationality should exclude continuity, but a realistic agent will usually have continuous values. The reason for this is again because of deliberation time.
If we drop continuity, we get a version of utility theory with infinite and infinitesimal values. This is perfectly fine, has the advantage of being more general, and is in some sense more elegant. To reference the OP, continuity is definitely just boilerplate; we get a nice generalization if we want to drop it.
However, a real agent will ignore its own infinitesimal preferences, because it's not worth spending time thinking about that. Indeed, it will almost always just think about the largest infinity in its preferences. This is especially true if we assume that the agent places positive probability on a really broad class of things, which again seems true of capable agents in practice. (IE, if you have infinities in your values, and a broad probability distribution, you'll be Pascal-mugged -- you'll only think of the infinite payoffs, neglecting finite payoffs.)
So all of the axioms except independence have what appear to me to be rather practical justifications, and independence has a weak money-pump justification (which may or may not translate to anything practical).
↑ comment by abramdemski · 2020-09-22T20:40:34.633Z · LW(p) · GW(p)
Correction: I now see that my formulation turns the question of completeness into a question of transitivity of indifference. An "incomplete" preference relation should not be understood as one which allows strict preferences to go in both directions (which is what I interpret them as, above) but rather, a preference relation in which the ~ relation (and hence the ≥ relation) is not transitive.
In this case, we can distinguish between ~ and "gaps", IE, incomparable A and B. ~ might be transitive, but this doesn't bridge across the gaps. So we might have a preference chain A>B>C and a chain X>Y>Z, but not have any way to compare between the two chains.
In my formulation, which lumps together indifference and gaps, we can't have this two-chain situation. If A~X, then we must have A>Y, since X>Y, by transitivity of ≥.
So what would be a completeness violation in the wikipedia formulation becomes a transitivity violation in mine.
But notice that I never argued for the transitivity of ~ or ≥ in my comment; I only argued for the transitivity of >.
I don't think a money-pump argument can be offered for transitivity here.
However, I took a look at the paper by Aumann which you cited, and I'm fairly happy with the generalization of VNM therein! Dropping uniqueness does not seem like a big cost. This seems like more of an example of John Wentworth's "boilerplate" point, rather than a counterexample.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-15T20:04:52.761Z · LW(p) · GW(p)
Though there’s a great deal more I could say here, I think that when accusations of “looking for Internet debate points” start to fly, that’s the point at which it’s best to bow out of the conversation.
comment by cousin_it · 2018-07-26T09:23:47.201Z · LW(p) · GW(p)
I think your post needs a counterpoint: to deserve that kind of trust, a result needs to also have good empirical reputation. Not all theoretical results are like that. For example, Aumann agreement makes perfect sense in theory and is robust to small changes, but doesn't happen in reality. A big part of studying econ is figuring out which parts have empirical backing and how much.
↑ comment by tailcalled · 2022-05-30T20:25:30.266Z · LW(p) · GW(p)
For example, Aumann agreement makes perfect sense in theory and is robust to small changes, but doesn't happen in reality.
I don't think this is true, it happens all of the time in reality. E.g. if my girlfriend tells me that her university is having a special event, then we disagreed beforehand (in the sense that I put low probability on it while she put high probability on it) but I immediately agree afterwards, for exactly the reasons Aumann's agreement theorem says (she has no reason to lie about her university having a special event, i.e. she's honest; and she is correctly able to know that universities don't usually have special events but that if they announce an event they probably do have them, i.e. she's rational).
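(A toy version of the update in that example; the numbers are made up, just to show the mechanics of moving most of the way to her view after one message.)

```python
# Hypothetical numbers for "she tells me the university is having a special event".
p_event = 0.05               # my prior that there's a special event
p_say_if_event = 0.95        # she'd mention it if it were happening (honest, informed)
p_say_if_no_event = 0.01     # she'd very rarely say so otherwise

# Bayes' rule: P(event | she says so)
posterior = (p_say_if_event * p_event) / (
    p_say_if_event * p_event + p_say_if_no_event * (1 - p_event)
)
print(round(posterior, 3))   # ~0.833 -- most of the disagreement vanishes after one message
```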
Any time you learn something simply from someone telling you about it, it's an application of Aumann's agreement theorem.
↑ comment by TAG · 2023-03-13T02:20:53.197Z · LW(p) · GW(p)
Would you believe her if she said God was talking to her?
Aumann's theorem can apply in reality, when the "boilerplate" conditions are approximately met, i.e. when there is some mutual trust. It still doesn't apply across deep ideological divides, because people with strongly different object-level beliefs don't trust each other[*]. And of course, those situations are where it would be philosophically significant. So the boilerplate does matter.
[*] Actually, Yudkowsky was inclined to distrust Aumann's rationality because of his theism.
↑ comment by tailcalled · 2023-03-13T08:02:05.719Z · LW(p) · GW(p)
It's an important challenge, and if we generalize to ideologies in general rather than focusing uniquely on a person who says God is talking to her, it's something I've thought a bunch about.
I think most of the incompatible beliefs people come up with are not directly from people's own experiences, but rather from Aumann-agreeing with other members of the ideologies who push those ideas. This isn't specific to false ideologies; it also applies to e.g. evolution where it took most of human history until Charles Darwin came up with the idea and principles for evolution. It's not something most people have derived for themselves from first principles.
So I tend to think of the object-level differences as being heavily originating in the differences in who one is Aumanning, and Aumann's theorem suggest that agreeing is dependent on trust, so I see ideologies as being constituted by networks of trust between people, where ideas can flow within the networks due to the trust, but they might not flow between the networks due to people not trusting each other there.
The case of someone saying that God is talking to her is somewhat different from this, since it is a personal experience and since it is not based on ideological trust (in fact I am under the impression that a lot of religions would agree that you are crazy if you think God is talking to you?).
I have once encountered this sort of situation - someone close to me who was very mentally ill and had been on lots of psychiatric medications started seeing God in many places and talked about how God wanted her to eat spiders and that the local equivalent of the CIA was spying on her. I think the establishment medical ideology calls this phenomenon "psychosis", and claims that it is due to a brain malfunction. The context and general outcome of this event (where she definitely didn't save the world despite supposedly having God as an advisor) would make me prone to agreeing with that assessment. So while I don't agree that God was actually talking to her, I do agree that she had the perception as if God was talking to her, and that it was just because the rationality assumption in Aumann's theorem was failing that I shouldn't update on it more generally.
Going back to the case of ideologies, I think a few different things are happening. First, ideological networks can embed rationality failures like the above or honesty failures deep into the ideological network, and allow them to spread their ideas to others, corrupting the entire network. Especially if the flawed beliefs they are spreading are sufficiently abstract, they might not have any good way of getting noticed and corrected.
This is not limited to religion; science bundles accurate beliefs like Darwinian evolution together with inaccurate beliefs like that IQ test scores don't depend on test-taking effort. This propensity of science to spread tons of falsehoods suggests that one should not trust science too much. But that also makes it difficult to untangle what should be believed from what shouldn't be believed.
This is getting a bit long and rambly so I'll end my comment here so I can hear what you think in response to this.
↑ comment by TAG · 2023-03-14T16:01:58.973Z · LW(p) · GW(p)
I think most of the incompatible beliefs people come up with are not directly from people’s own experiences, but rather from Aumann-agreeing with other members of the ideologies who push those ideas.
It's trust rather than trust in rationality. There's very strong evidence that people get most of their beliefs from their social background, but explicitly irrational ideologies operate the same way, so there's little evidence that social trust is an Aumann mechanism.
Rationality has this thing where it does ignore the "boilerplate", the annoying details, in multiple cases. That leads to making claims that are too broad -- or diluting the meanings of terms: it's often hard to say which. Bayesian probability is treated as some kind of probabilistic reasoning, not necessarily quantitative; Aumann's theorem just means reasonable people should agree, etc.
↑ comment by Paperclip Minimizer · 2018-08-14T18:10:58.664Z · LW(p) · GW(p)
Is Aumann robust to untrustworthiness?
comment by Dmitry Vaintrob (dmitry-vaintrob) · 2024-06-29T19:04:56.135Z · LW(p) · GW(p)
Nitpick, but I don't think the theorem you mention is correct unless you mean something other than what I understand. For the statement I think you want to be true, the function also needs to be a group law, which requires associativity. (In fact, if it's monotonic on the reals, you don't need to enforce commutativity, since all continuous group laws on R are isomorphic.)
↑ comment by Dmitry Vaintrob (dmitry-vaintrob) · 2024-06-29T19:43:23.385Z · LW(p) · GW(p)
I also wouldn't give this result (if I'm understanding which result you mean) as an example where the assumptions are technicalities / inessential for the "spirit" of the result. Assuming monotonicity or commutativity (either one is sufficient) is crucial here, otherwise you could have some random (commutative) group with the same cardinality as the reals.
Generally, I think math is the wrong comparison here. To be fair, there are other examples of results in math where the assumptions are "inessential for the core idea", which I think is what you're gesturing at. But I think math is different in this dimension from other fields, where often you don't lose much by fuzzing over technicalities (in fact the question of how much to fuss over technicalities like playing fast and loose with infinities or being careful about what kinds of functions are allowed in your fields is the main divider between math and theoretical physics).
In my experience in pure math, when you notice that the "boilerplate" assumptions on your result seem inessential, this is usually for one of the following reasons:
1. In fact, a more general result is true and the proof works with fewer/weaker assumptions, but either for historical reasons or because some of the results used (lemmas, etc.) are harder in more generality, it's stated in this form.
2. The result is true in more generality, but proving the more general result is genuinely harder or requires a different technique, and this can sometimes lead to new and useful insights.
3. The result is false (or unknown) in more generality, and the "boilerplate" assumptions are actually essential, and understanding why will give more insight into the proof (despite things seeming inessential at first).
4. The "boilerplate" assumptions the result uses are weaker than what the theorem is stated with, but it's messy to explain the "minimal" assumptions, and it's easier to compress the result by using a more restrictive but more standard class of objects (in this way a lot of results that are true for some messy class of functions are easier to remember and use for a more restrictive class: most results that use "Schwartz spaces" are of this form; often results that are true for distributions are stated, for simplicity, for functions, etc.).
5. Some assumptions are needed for things to "work right," but are kind of "small": i.e., trivial to check or mostly just controlling for degenerate edge cases, and can be safely compressed away in your understanding of the proof if you know what you're doing (a standard example is checking for the identity in group laws: it's usually trivial to check if true, and the "meaty" part of the axiom is generally associativity; another example is assuming rings don't have 0 = 1, i.e., aren't the degenerate ring with one element).
6. There's some dependence on logical technicalities, or on what axioms you assume (especially relevant in physics- or CS/cryptography-adjacent areas, where different additional axioms like P != NP are used, and can have different flavors which interface with proofs in different ways, but often don't change the essentials).
I think you're mostly talking about 6 here, though I'm not sure (and not sure math is the best source of examples for this). I think there's a sort of "opposite" phenomenon also, where a result is true in one context but in fact generalizes well to other contexts. Often the way to generalize is standard, and thus understanding the "essential parts" of the proof in any one context is sufficient to then be able to recreate them in other contexts, with suitably modified constructions/axioms. For example, many results about sets generalize to topoi, many results about finite-dimensional vector spaces generalize to infinite-dimensional vector spaces, etc. This might also be related to what you're talking about. But generally, I think the way you conceptualize "essential vs. boilerplate" is genuinely different in math vs. theoretical physics/CS/etc.
comment by ryan_b · 2018-07-26T15:01:36.091Z · LW(p) · GW(p)
I really liked this post, because I have been on both sides of the coin here: that is to say, I have been the person who thought a theory was irrelevant because its assumptions were too extreme, and I have been the person trying to apply the core insights of the theory, and been criticized because the situation to which I was applying it did not meet various assumptions. I was confused each time, and I am pretty sure I have even been on both sides of the same theory at least once.
It is practically inevitable that either side is the correct answer depending on the situation, and possible that I was closer to correct than the person I was disagreeing with. But then, I was confused each time; the simpler explanation by far is that I was confused about the theories under discussion.
When I am trying to learn about a particular theory or field, I now set as the first priority the historical context for its development. This is very reliable for communicating the underlying intuitions, and also can be counted on to describe the real-life situations that inspired them or to which they were applied.