The Mechanics of Disagreement
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-10T14:01:44.000Z · LW · GW · Legacy · 26 comments
Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem. If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?
The obvious interpretation of such theorems is that if you know a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.
If you design an AI and the AI says "This fair coin came up heads with 80% probability", then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads - because the AI only emits that statement under those circumstances.
It's not a matter of charity; it's just that this is how you think the other cognitive machine works.
And if you tell an ideal rationalist, "I think this fair coin came up heads with 80% probability", and they reply, "I now think this fair coin came up heads with 25% probability", and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood ratio of 1:12 - that is, 12:1 favoring tails.
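To make the bookkeeping explicit, here is a minimal sketch of that odds arithmetic in Python (the helper functions are illustrative, not anything from the post itself):

```python
def prob_to_odds(p):
    """Probability of heads -> heads:tails odds ratio."""
    return p / (1 - p)

def odds_to_prob(o):
    """Heads:tails odds ratio -> probability of heads."""
    return o / (1 + o)

my_evidence = prob_to_odds(0.80)              # your 80% corresponds to odds of 4:1

# The other mind reports 25% heads after folding in your evidence,
# so the evidence it saw on its own must have had likelihood ratio:
their_evidence = prob_to_odds(0.25) / my_evidence   # (1:3) / (4:1) = 1:12

print(their_evidence)                               # ~0.083, i.e. 1:12 (12:1 favoring tails)
print(odds_to_prob(my_evidence * their_evidence))   # 0.25 -> the verdict you accept
```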
But this assumes that the other mind also thinks that you're processing evidence correctly, so that, by the time it says "I now think this fair coin came up heads, p=.25", it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.
If, on the other hand, the other mind doesn't trust your rationality, then it won't accept your evidence at face value, and the estimate that it gives won't integrate the full impact of the evidence you observed.
So does this mean that when two rationalists trust each other's rationality less than completely, they can agree to disagree?
It's not that simple. Rationalists should not trust themselves entirely, either.
So when the other mind accepts your evidence at less than face value, this doesn't say "You are less than a perfect rationalist", it says, "I trust you less than you trust yourself; I think that you are discounting your own evidence too little."
Maybe your raw arguments seemed to you to have a strength of 40:1, and you discounted for your own irrationality down to a strength of 4:1; but the other mind thinks you still overestimate yourself, and assumes the actual force of the argument was only 2:1.
And if you believe that the other mind is discounting you in this way, and is unjustified in doing so, then you need not take its verdict at face value. When it says "I now think this fair coin came up heads with 25% probability", its reported odds of 1:3 decompose into your evidence discounted to 2:1, times further evidence of 1:6 that it must have seen on its own. Reinstating your evidence at its full strength of 4:1 and combining it with that 1:6 gives final odds of 4:6, so you might bet on the coin at 40% in favor of heads - if you even fully trust the other mind's further evidence of 1:6.
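The same decomposition as a sketch, under the reading above (the numbers are just the post's hypothetical 4:1, 2:1, and 25%):

```python
my_true_strength = 4.0               # what you think your evidence is worth (4:1)
their_discount_of_me = 2.0           # what the other mind credits your evidence with (2:1)
their_posterior_odds = 0.25 / 0.75   # the 25% heads they report, as odds of 1:3

# Evidence the other mind must have seen on its own:
their_own_evidence = their_posterior_odds / their_discount_of_me   # 1:6

# Your corrected estimate: reinstate your evidence at full strength,
# while fully trusting the other mind's own evidence of 1:6.
corrected_odds = my_true_strength * their_own_evidence             # 4:6, i.e. 2:3
print(corrected_odds / (1 + corrected_odds))                       # 0.4 -> bet 40% on heads
```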
I think we have to be very careful to avoid interpreting this situation in terms of anything like a reciprocal trade, like two sides making equal concessions in order to reach agreement on a business deal.
Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world. I am, generally speaking, a Millie-style altruist; but when it comes to belief shifts I espouse a pure and principled selfishness: don't believe you're doing it for anyone's sake but your own.
Still, I once read that there's a principle among con artists that the main thing is to get the mark to believe that you trust them, so that they'll feel obligated to trust you in turn.
And - even if it's for completely different theoretical reasons - if you want to persuade a rationalist to shift belief to match yours, you either need to persuade them that you have all of the same evidence they do and have already taken it into account, or that you already fully trust their opinions as evidence, or that you know better than they do how much they themselves can be trusted.
It's that last one that's the really sticky point, for obvious reasons of asymmetry of introspective access and asymmetry of motives for overconfidence - how do you resolve that conflict? (And if you started arguing about it, then the question wouldn't be which of these were more important as a factor, but rather, which of these factors the Other had under- or over-discounted in forming their estimate of a given person's rationality...)
If I had to name a single reason why two wannabe rationalists wouldn't actually be able to agree in practice, it would be that, once you trace the argument to the meta-level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.
And if you look at what goes on in practice between two arguing rationalists, it is probably mostly a trade of object-level arguments; the most meta it gets is trying to convince the other person that you've already taken their object-level arguments into account.
Still, this does leave us with three clear reasons that someone might point to, to justify a persistent disagreement - even though the frame of mind of justification, of having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements - but even so:
- Clearly, the Other's object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.
- Clearly, the Other is not taking my arguments into account; there's an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.
- Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.
Since we don't want to go around encouraging disagreement, one might do well to ponder how all three of these arguments are used by creationists to justify their persistent disagreements with scientists.
That's one reason I say "clearly" - if it isn't obvious even to outside onlookers, maybe you shouldn't be confident of resolving the disagreement there. Failure at any of these levels implies failure at the meta-levels above it, but the higher-order failures might not be clear.
26 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-10T15:19:11.000Z · LW(p) · GW(p)
Ergh, yeah, modified it away from 90% and 9:1 and was just silly, I guess. See, now there's a justified example of object-level disagreement - if not, perhaps, common knowledge of disagreement.
comment by Jef_Allbright · 2008-12-10T15:38:43.000Z · LW(p) · GW(p)
Coming from a background in scientific instruments, I always find this kind of analysis a bit jarring with its infinite regress involving the rational, self-interested actor at the core.
Of course two instruments will agree if they share the same nature, within the same environment, measuring the same object. You can map onto that a model of priors, likelihood function and observed evidence if you wish. Translated to agreement between two agents, the only thing remaining is an effective model of the relationship of the observer to the observed.
↑ comment by kragensitaker · 2011-02-26T06:11:37.213Z · LW(p) · GW(p)
The crucial difference here is that the two "instruments" share the same nature, but they are "measuring" different objects — that is, the hypothetical rationalists do not have access to the same observed evidence about the world. But by virtue of "measuring", among other things, one another, they are supposed to come into agreement.
comment by RobinHanson · 2008-12-10T15:43:59.000Z · LW(p) · GW(p)
Of course if you knew that your disputant would only disagree with you when one of these three conditions clearly held, you would take their persistent disagreement as showing one of these conditions held, and then back off and stop disagreeing. So to apply these conditions you need the additional implicit condition that they do not believe that you could only disagree under one of these conditions.
comment by Selfreferencing · 2008-12-10T16:49:17.000Z · LW(p) · GW(p)
There's also an assumption that ideal rationality is coherent (and even rational) for bounded agents like ourselves. Probability theorist and epistemologist John Pollock has launched a series challenge to this model of decision making in his recent 06 book Thinking About Acting.
comment by Selfreferencing · 2008-12-10T16:50:21.000Z · LW(p) · GW(p)
Should be 'a serious challenge'
comment by Selfreferencing · 2008-12-10T16:55:43.000Z · LW(p) · GW(p)
You'll find the whole thing pretty interesting, although it concerns decision theory more than the rationality of belief; the two are deeply connected, though (the connection is an interesting topic for speculation in itself). Here's a brief summary of the book. I'm pretty partial to it.
Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006).
The objective of this book is to produce a theory of rational decision making for realistically resource-bounded agents. My interest is not in "What should I do if I were an ideal agent?", but rather, "What should I do given that I am who I am, with all my actual cognitive limitations?"
The book has three parts. Part One addresses the question of where the values come from that agents use in rational decision making. The most common view among philosophers is that they are based on preferences, but I argue that this is computationally impossible. I propose an alternative theory somewhat reminiscent of Bentham, and explore how human beings actually arrive at values and how they use them in decision making.
Part Two investigates the knowledge of probability that is required for decision-theoretic reasoning. I argue that subjective probability makes no sense as applied to realistic agents. I sketch a theory of objective probability to put in its place. Then I use that to define a variety of causal probability and argue that this is the kind of probability presupposed by rational decision making. So what is to be defended is a variety of causal decision theory.
Part Three explores how these values and probabilities are to be used in decision making. In chapter eight, it is argued first that actions cannot be evaluated in terms of their expected values as ordinarily defined, because that does not take account of the fact that a cognizer may be unable to perform an action, and may even be unable to try to perform it. An alternative notion of "expected utility" is defined to be used in place of expected values. In chapter nine it is argued that individual actions cannot be the proper objects of decision-theoretic evaluation. We must instead choose plans, and select actions indirectly on the grounds that they are prescribed by the plans we adopt. However, our objective cannot be to find plans with maximal expected utilities. Plans cannot be meaningfully compared in that way. An alternative, called "locally global planning", is proposed. According to locally global planning, individual plans are to be assessed in terms of their contribution to the cognizer's "master plan". Again, the objective cannot be to find master plans with maximal expected utilities, because there may be none, and even if there are, finding them is not a computationally feasible task for real agents. Instead, the objective must be to find good master plans, and improve them as better ones come along. It is argued that there are computationally feasible ways of doing this, based on defeasible reasoning about values and probabilities.
comment by JamesAndrix · 2008-12-10T17:08:19.000Z · LW(p) · GW(p)
Shouldn't your updating also depend on the relative number of trials? (experience)
Part of this disagreement seems to be what kinds of evidence are relevant to the object level predictions.
comment by HalFinney · 2008-12-10T19:12:12.000Z · LW(p) · GW(p)
Interesting essay - this is my favorite topic right now. I am very happy to see that you clearly say, "Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world." That is the key idea here. However I am not so happy about some other comments:
"if you want to persuade a rationalist to shift belief to match yours"
You should never want this, not if you are a truth-seeker! I hope you mean this to be a desire of con artists and other criminals. Persuasion is evil; it is in direct opposition to the goal of overcoming bias and reaching the truth. Do you agree?
"the frame of mind of justification and having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements"
Such an attitude is not merely opposed to the spirit of resolving disagreements, it is an overwhelming obstacle to your own truth seeking. You must seek out and overcome this frame of mind at all costs. Agreed?
And what do you think would happen if you were forced to resolve a disagreement without making any arguments, object-level or meta; but merely by taking turns reciting your quantitative estimates of likelihood? Do you think you could reach an agreement in that case, or would it be hopeless?
How about if it were an issue that you were not too heavily invested in - say, which of a couple of upcoming movies will have greater box office receipts? Suppose you and a rationalist-wannabe like Robin had a difference of opinion on this, and you merely recited your estimates. Remember your only goal is to reach the truth (perhaps you will be rewarded if you guess right). Do you think you would reach agreement, or fail?
comment by Z._M._Davis · 2008-12-10T19:36:11.000Z · LW(p) · GW(p)
"How about if it were an issue that you were not too heavily invested in [...]"
Hal, the sort of thing you suggest has already been tried a few times over at Black Belt Bayesian; check it out.
comment by Tim_Tyler · 2008-12-10T21:15:33.000Z · LW(p) · GW(p)
Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem.
To quote from "Agreeing to Disagree", by Robert J. Aumann:
If two people have the same priors, and their posteriors for a given event A are common knowledge, then these posteriors must be equal. This is so even though they may base their posteriors on quite different information. In brief, people with the same priors cannot agree to disagree. [...]
The key notion is that of 'common knowledge.' Call the two people 1 and 2. When we say that an event is "common knowledge," we mean more than just that both 1 and 2 know it; we require also that 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on. For example, if 1 and 2 are both present when the event happens and see each other there, then the event becomes common knowledge. In our case, if 1 and 2 tell each other their posteriors and trust each other, then the posteriors are common knowledge. The result is not true if we merely assume that the persons know each other's posteriors.
So: the "two ideal Bayesians" also need to have "the same priors" - and the term "common knowledge" is being used in an esoteric technical sense. The implications are that both participants need to be motivated to create a pool of shared knowledge. That effectively means they need to want to believe the truth, and to purvey the truth to others. If they have other goals "common knowledge" is much less likely to be reached. We know from evolutionary biology that such goals are not the top priority for most organisms. Organisms of the same species often have conflicting goals - in that each wants to propagate their own genes, at the expense of those of their competitors - and in the case of conflicting goals, the situation is particularly bad.
So: both parties being Bayesians is not enough to invoke Aumann's result. The parties also need common priors and a special type of motivation which it is reasonable to expect to be rare.
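For reference, here is the formal statement behind the quoted passage, in the notation usually given for Aumann (1976); this is a paraphrase, so treat the symbols as illustrative rather than as Aumann's exact wording:

```latex
% Setup: a common prior p on a state space \Omega; agents 1 and 2 have
% information partitions \mathcal{P}_1, \mathcal{P}_2 of \Omega.
% An event E is common knowledge at \omega iff E contains the member of the
% meet \mathcal{P}_1 \wedge \mathcal{P}_2 (finest common coarsening) containing \omega.
%
% Posterior of agent i for a fixed event A at state \omega:
\[
  q_i(\omega) \;=\; p\bigl(A \mid \mathcal{P}_i(\omega)\bigr), \qquad i = 1, 2.
\]
% Theorem (Aumann 1976): if at some \omega it is common knowledge
% that q_1 = \eta_1 and q_2 = \eta_2, then
\[
  \eta_1 \;=\; \eta_2 .
\]
```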
comment by PK · 2008-12-11T04:00:13.000Z · LW(p) · GW(p)
Um... since we're on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow then I suspect the discussion would be much more productive.
comment by Cameron_Taylor · 2008-12-11T04:27:20.000Z · LW(p) · GW(p)
PK: Unfortunately, no. Arguing isn't about being informed. If they both actually 'Overcame Bias' we'd supposedly lose all respect for them. They have to trade that off with the fact that if they stick to stupid details in the face of overwhelming evidence we also lose respect.
Of the '12 virtues' Eliezer mentions, that 'argument' one is the least appealing. The quality of the independent posts around here is far higher than the argumentative ones. Still, it does quite clearly demonstrate the difficulties of Aumann's ideas in practice.
comment by HalFinney · 2008-12-12T03:54:35.000Z · LW(p) · GW(p)
Let me break down these "justifications" a little:
Clearly, the Other's object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.

This points to the fact that the other is irrational. It is perfectly reasonable for two people to disagree when at least one of them is irrational. (It might be enough to argue that at least one of the two of you is irrational, since it is possible that your own reasoning apparatus is badly broken.)

Clearly, the Other is not taking my arguments into account; there's an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.

This would not actually explain the disagreement. Even an Other who refused to study your arguments (say, he didn't have time), but who nevertheless maintains his position, should be evidence that he has good reason for his views. Otherwise, why would your own greater understanding of the arguments on both sides (not to mention your own persistence in your position) not persuade him? Assuming he is rational (and thinks you are, etc) the only possible explanation is that he has good reasons, something you are not seeing. And that should persuade you to start changing your mind.

Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.

Again this is basically evidence that he is irrational, and reduces to case 1.
The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way. The key idea is not whether the two of you can understand each other's arguments, but that refusal to change position sends a very strong signal about the strength of the evidence.
If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.
↑ comment by Wei Dai (Wei_Dai) · 2009-12-06T21:58:02.107Z · LW(p) · GW(p)
If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.
Hal, is this still your position, a year later? If so, I'd like to argue against it. Robin Hanson wrote in http://hanson.gmu.edu/disagree.pdf (page 9):
Since Bayesians with a common prior cannot agree to disagree, to what can we attribute persistent human disagreement? We can generalize the concept of a Bayesian to that of a Bayesian wannabe, who makes computational errors while attempting to be Bayesian. Agreements to disagree can then arise from pure differences in priors, or from pure differences in computation, but it is not clear how rational these disagreements are. Disagreements due to differing information seem more rational, but for Bayesians disagreements cannot arise due to differing information alone.
Robin argues in another paper that differences in priors really are irrational. I presume that he believes that differences in computation are also irrational, although I don't know if he made a detailed case for it somewhere.
Suppose we grant that these differences are irrational. It seems to me that disagreements can still be "reasonable", if we don't know how to resolve these differences, even in principle. Because we are products of evolution, we probably have random differences in priors and computation, and since at this point we don't seem to know how to resolve these differences, many disagreements may be both honest and reasonable. Therefore, there is no need to conclude that the other disagreer must be irrational (as an individual), or is lying, or is not truth seeking.
Assuming that the above is correct, I think the role of a debate between two Bayesian wannabes should be to pinpoint the exact differences in priors and computation that caused the disagreement, not to reach immediate agreement. Once those differences are identified, we can try to find or invent new tools for resolving them, perhaps tools specific to the difference at hand.
↑ comment by RobinHanson · 2009-12-06T22:10:42.431Z · LW(p) · GW(p)
My Bayesian wannabe paper is an argument against disagreement based on computation differences. You can "resolve" a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure "irrational".
↑ comment by Wei Dai (Wei_Dai) · 2009-12-06T22:29:53.569Z · LW(p) · GW(p)
You can "resolve" a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure "irrational".
Do you have a suggestion for how much one should move one's opinion in the direction of the other opinion, and an argument that doing so would improve average accuracy?
If you don't have time for that, can you just explain what you mean by "average"? Average over what, using what distribution, and according to whose computation?
comment by Nick_Tarleton · 2008-12-12T04:24:46.000Z · LW(p) · GW(p)
Don't you think it's possible to consider someone irrational or non-truthseeking enough to maintain disagreement on one issue, but still respect them on the whole?
If you regard persistent disagreement as disrespectful, and disrespecting someone as bad, this is likely to bias you towards agreeing.
comment by steven · 2008-12-12T20:45:12.000Z · LW(p) · GW(p)
Hal, it also requires that you see each other as seeing each other that way, that you see each other as seeing each other as seeing each other that way, that you see each other as seeing each other as seeing each other as seeing each other that way, and so on.
comment by Tim_Tyler · 2008-12-12T23:15:06.000Z · LW(p) · GW(p)
The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way.
You also have to have the time, motivation and energy to share your knowledge. If some brat comes up to you and tells you that he's a card-carrying Bayesian initiate - and that p(rise(tomorrow,sun)) < 0.0000000001 - and challenges you to prove him wrong - you would probably just think he had acquired an odd prior somehow - and ignore him.
comment by DanielLC · 2010-09-05T07:17:26.902Z · LW(p) · GW(p)
If this looks like a reciprocal trade, you're doing it wrong. Done right, the change in your belief after finding out how much the other person's belief changed would average out to zero. They might change their belief more than you expected, leading to you realizing that they're less sure of themselves than you thought.
comment by Douglas_Reay · 2012-11-21T10:40:00.432Z · LW(p) · GW(p)
Fixed link: Millie-style altruist
comment by Douglas_Reay · 2012-11-21T11:03:37.799Z · LW(p) · GW(p)
Perhaps there does exist a route towards resolving this situation.
Suppose Eliezer has a coin for one week, during which he flips it from time to time. He doesn't write down the results, record how many times he flips it, or even keep a running mental tally. Instead, at the end of the week, relying purely upon his direct memory of particular flips, he makes an estimate: "Hmm, I think I can remember about 20 of those flips fairly accurately and, of those 20 flips, I have 90% confidence that 15 of them came up heads."
The coin is then passed to Robin, who does the same exercise the following week. At the end of that week, Robin thinks to himself "I think I can remember doing about 40 flips, and I have 80% confidence that 10 of them came up heads."
They then meet up and have the following conversation:
- Eliezer: 75% chance of a head
- Robin: 25% chance of a head, not taking your data into account yet, just mine.
- Eliezer: Ok, so first level of complexity is we could just average that to get 50%. But can we improve upon that?
- Robin: My sample size was 40
- Eliezer: My sample size was 20 so, second level of complexity, we could add them together to get 25 heads of out 60 flips, giving 42% chance of a head
- Robin: Third level of complexity, how confident are you about your numbers? I'm 80% confident of mine
- Eliezer: I'm 90% confident of mine. So using that as a weighting would give us (0.9x15+0.8x10)/(0.9x20+0.8x40) which is 21.5 out of 50 which is 43% chance of a head.
- Robin: But Eliezer, you always overestimate how confident you are about your memory, whereas I'm conservative. I don't think your memory is any better than mine. I think 42% is the right answer.
- Eliezer: Ok, let's go to level 4. Can we find some objective evidence? Did you do any of your flips in the presence of a third party? I can remember 5 incidents where someone else saw the flip I did. We could take a random sampling of my shared flips and then go ask the relevant third parties for confirmation, then do the same for a random sample of your shared flips, and see if your theory about our memories is borne out.
In the end, as long as you can trace back at least some (a random sampling) of the facts people are basing their estimates upon to things that can be checked against reality, you should have some basis to move forwards.
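A minimal Python sketch of the three pooling levels in the dialogue above (the weighting scheme is just the one the hypothetical Eliezer and Robin improvise, not a recommended estimator):

```python
# Each party reports (remembered_heads, remembered_flips, confidence_in_memory).
eliezer = (15, 20, 0.9)
robin = (10, 40, 0.8)

def naive_average(a, b):
    """Level 1: average the two reported frequencies."""
    return (a[0] / a[1] + b[0] / b[1]) / 2

def pooled_counts(a, b):
    """Level 2: pool the raw counts, ignoring stated confidence."""
    return (a[0] + b[0]) / (a[1] + b[1])

def confidence_weighted(a, b):
    """Level 3: weight each party's counts by their stated confidence."""
    heads = a[2] * a[0] + b[2] * b[0]
    flips = a[2] * a[1] + b[2] * b[1]
    return heads / flips

print(naive_average(eliezer, robin))        # 0.50
print(pooled_counts(eliezer, robin))        # ~0.417, i.e. about 42%
print(confidence_weighted(eliezer, robin))  # 21.5 / 50 = 0.43
```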