Causes of disagreements

post by JustinShovelain · 2009-07-16T21:51:57.422Z · LW · GW · Legacy · 20 comments


You have a disagreement before you. How do you handle it?

Causes of fake disagreements:

Is the disagreement real? The trivial case is an apparent disagreement occurring over a noisy or low-information channel. Internet chat is especially liable to fail this way because of the lack of tone, body language, and relative location cues. People can also disagree through the use of differing definitions, with their corresponding denotations and connotations. Fortunately, when recognized, this cause of disagreement rarely produces problems; the topic at issue is rarely the definitions themselves. If there is a game-theoretic reason to do so, the agents may also give the appearance of disagreement even though they might well agree in private. The agents could also disagree if they are victims of a man-in-the-middle attack, where someone is intercepting and altering the messages passed between the two parties. Finally, the agents could disagree simply because they are in different contexts. Is the sun yellow, I ask? Yes, say you. No, say the aliens at Eta Carinae.

Causes of disagreements about predictions:

  Evidence
Assuming the disagreement is real, what does that give us? Most commonly the disagreement is about the facts on which our actions are predicated. To handle these we must first consider our relationship to the other person and how they think (à la superrationality); observations made by others may not deserve the same weight we would give them had we made them ourselves. After considering this we must merge their evidence with our own in a controlled way. With people this gets a bit tricky. Rarely do people give us information we can handle in a cleanly Bayesian way (à la Aumann's agreement theorem). Instead we must merge our explicit evidence sets with vague, abstracted probabilistic intuitions that are half speculation and half partially forgotten memories.
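
One crude way to picture this merging — under assumptions the post does not make, namely that the other person's opinion is independent evidence and that we share a common prior — is to treat their stated odds as evidence and discount it by how much we trust their observations and reasoning. A minimal sketch (all numbers and function names are mine, purely for illustration):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def merge(my_probability, their_probability, shared_prior, trust):
    """Combine two opinions that share a common prior.

    Each opinion's departure from the shared prior (in log-odds) is treated
    as that person's evidence; the other person's evidence is scaled by
    `trust` in [0, 1] before both are added back onto the prior.
    """
    my_evidence = logit(my_probability) - logit(shared_prior)
    their_evidence = logit(their_probability) - logit(shared_prior)
    return sigmoid(logit(shared_prior) + my_evidence + trust * their_evidence)

# I think rain is 70% likely, you think 30%, and we both started at 50%.
print(round(merge(0.7, 0.3, 0.5, trust=1.0), 2))  # 0.5: fully trusted, the evidence cancels
print(round(merge(0.7, 0.3, 0.5, trust=0.5), 2))  # ~0.6: their report is discounted
```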

  Priors
If we still have a disagreement after considering the evidence, what now? The agents could have "started" at different locations in prior or induction space. While it is true that a person's "starting" point and the evidence they've seen can be conflated, it is also possible that they really did start at different locations.
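
As a toy illustration (the numbers are hypothetical, not from the post), two calibrated Bayesians can see exactly the same evidence and still end up apart simply because they started from different priors:

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a binary hypothesis."""
    p_evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / p_evidence

# The same evidence: 4x as likely under the hypothesis as under its negation.
lik_true, lik_false = 0.8, 0.2

alice = posterior(prior=0.5, likelihood_if_true=lik_true, likelihood_if_false=lik_false)
bob = posterior(prior=0.1, likelihood_if_true=lik_true, likelihood_if_false=lik_false)

print(f"Alice: {alice:.2f}")  # ~0.80
print(f"Bob:   {bob:.2f}")    # ~0.31
```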

  Resource limitations
The disagreement could also be caused by resource limitations and implementation details. Cognition can have sensitive dependence on initial conditions. For instance, when answering the question "is this red?", slight variations in lighting conditions can make people respond differently on boundary cases. This illustrates both sensitive dependence on initial conditions and the fact that some types of information (exactly what you saw) simply cannot be communicated effectively. Our mental processes are also inherently noisy, leading to differing errors in processing the evidence and increasing the need to rehash an argument multiple times. We suffer from computational space and time limitations, making computational approximations necessary. We learn these approximations slowly across varying situations, and so may disagree with someone even when the prediction-relevant evidence is at hand; the other "evidence" used to develop these approximations may vary and inadvertently leak into our answers. Our approximation methods may also simply differ. Finally, it takes time to integrate all of the evidence at hand, and people differ in the amount of time and resources they can devote to doing so.

  Systematic errors
Sadly, it is also possible that one or the other party simply has a deeply flawed prediction system. They could make systematic errors and have broken or missing corrective feedback loops. They could have disruptive feedback loops that drain the truth from predictions. Their methods of prediction may invalidly vary with what is being considered: their thoughts may shy away from subjects such as death, disease, or flaws in their favorite theory, and may be drawn to what will happen after they win the lottery. Irrationality and biases; emotions and an inability to abstract. Or, even worse, how is it possible to eliminate a disagreement with someone who disagrees with himself and presents an inconsistent opinion?

Other causes of disagreement:
  Goals
I say that dogs are interesting, you say they are boring, and yet we both agree on our predictions. How is this possible? This type of disagreement falls under disagreement about what utility function to apply, and between utilitarian goal-preserving agents it is irresolvable in a direct manner; however, indirect approaches such as trading boring dogs for interesting cats work much of the time. Plus, we are not utilitarian agents (e.g., we have circular preferences); perhaps there are strategies available to us for resolving conflicts of this form that are not available to utilitarian ones?

  Epiphenomenal
Lastly, it is possible for agents to agree on all observable predictions and yet disagree on unobservable predictions. Predictions without consequences aren't predictions at all; how could they be? If the disagreement still exists after realizing that there are no observable consequences, look elsewhere for the cause; it cannot be here. Why disagree over things of no value? The disagreement must be caused by something; look there, not here.


How to use this taxonomy:
I tried to list the above sections in the order in which one should check for each type of cause when using them as a decision tree (by ease of checking and fixing, fit to definition, and probability of occurrence). This taxonomy is symmetric between the disagreeing parties, and many of the sections lend themselves naturally to looping: merging evidence piece by piece, refining calculations iteration by iteration, and so on. The taxonomy can also be applied recursively, to meta-disagreements and to disagreements found in the process of analyzing the original one. What are the termination conditions for analyzing a disagreement? They come in five forms: complete agreement, satisfying agreement, impossible to agree, acknowledgment of conflict, and dissolving the question. Being a third party to a disagreement changes the analysis only in that you are no longer doing the symmetric self-analysis but rather looking in upon a disagreement with the additional distance that entails.
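
If it helps to see that checking order laid out as code, here is a rough sketch; only the ordering and termination conditions come from the post, and the check-and-fix step is a hypothetical placeholder supplied by whoever is analyzing the disagreement:

```python
# The causes above, in the suggested checking order.
CAUSES_IN_CHECK_ORDER = [
    "fake disagreement (noise, definitions, game theory, context)",
    "evidence (merge what each party has seen)",
    "priors (different starting points in prior/induction space)",
    "resource limitations (noise, approximations, limited time)",
    "systematic errors (biases, broken feedback loops)",
    "goals (different utility functions)",
    "epiphenomenal (no observable consequences at all)",
]

def analyze(still_disagree, try_to_fix, max_rounds=3):
    """Walk the candidate causes, looping, until a termination condition is reached."""
    for _ in range(max_rounds):              # many sections lend themselves to looping
        for cause in CAUSES_IN_CHECK_ORDER:
            if still_disagree():
                try_to_fix(cause)
        if not still_disagree():
            return "agreement (complete or satisfying)"
    return "impossible to agree / acknowledgment of conflict"

# Toy usage: a "disagreement" that dissolves after two fix attempts.
fix_attempts = []
print(analyze(lambda: len(fix_attempts) < 2, fix_attempts.append))
# -> agreement (complete or satisfying)
```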



Many thanks to Eliezer Yudkowsky, Robin Hanson, and the LessWrong community for much thought-provoking material.

(P.S. This is my first post and I would appreciate any feedback: what I did well, what I did badly, and what I can do to improve.)

Links:
1. http://lesswrong.com/lw/z/information_cascades/
2. http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/

20 comments


comment by Skylar626 · 2009-08-12T04:15:39.329Z · LW(p) · GW(p)

You did a really good job on your first post. This post is also very good at what it sets out to do, that is, pool some of LW's and OB's best ideas on a topic (disagreement) into a few quick nuggets of wisdom. I would prefer more in-text citations to previous articles that outline why this course of action is ideal. Good job!

comment by RobinHanson · 2009-07-18T13:50:17.267Z · LW(p) · GW(p)

How do you know in what order these should be checked? How do you know which is more common? You might also distinguish between evidence about the case directly and meta-evidence about the info and reasoning of the parties who disagree.

comment by HalFinney · 2009-07-17T17:15:49.273Z · LW(p) · GW(p)

You haven't really addressed the fundamental paradox of disagreement:

Given that two people disagree, which side should you assume is right, a priori? Clearly the symmetry of the situation makes it impossible to give an a priori rule if this is all the information you have.

Now let us add one piece of information: you are one of the two people. Does this give you grounds to assume that your view is the one which is right? And if so, does it not argue equally well that the other person should assume that his own view is right? But this is a contradiction, because both opposing views can't be simultaneously more likely to be right than the other.

Hence, knowing only that you are in a disagreement, you should a priori assume your view is equally likely to be right or wrong. This should be the starting point for any analysis of disagreement, yet it is seldom adopted.

Replies from: Nick_Tarleton, thomblake
comment by Nick_Tarleton · 2009-07-18T23:31:55.106Z · LW(p) · GW(p)

Now let us add one piece of information: you are one of the two people. Does this give you grounds to assume that your view is the one which is right? And if so, does it not argue equally well that the other person should assume that his own view is right?

I may have non-indexical information about my own intelligence, rationality, honesty, expertise, etc., the comparison of which with my prior expectations of those features of the other person might swing me in either direction.

Replies from: HalFinney
comment by HalFinney · 2009-07-19T22:37:48.305Z · LW(p) · GW(p)

Right, but realistically most of the time people start with the assumption that they are right. Also consider that probably more than 50% of people think they're smarter than average, and probably the better advice is to start off most disagreements assuming you're wrong!

Replies from: thomblake, HalFinney
comment by thomblake · 2009-07-19T22:56:06.707Z · LW(p) · GW(p)

probably more than 50% of people think they're smarter than average

And most of the people around here are right!

comment by HalFinney · 2009-07-20T00:44:48.112Z · LW(p) · GW(p)

Maybe a better heuristic is to consider whether your degree of assurance in your position is more or less than your average degree of assurance over all topics on which you might encounter disagreements. Hopefully there would be less of a bias on this question of whether you are more confident than usual. Then, if everyone adopted the policy of believing themselves if they are unusually confident, and believing the other person if they are less confident than usual, average accuracy would increase.
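
A toy simulation (not from the comment; it assumes binary questions, a calibrated judge whose stated confidence equals their actual chance of being right, and an independent second person drawn from the same population) illustrates the claimed accuracy gain:

```python
import random

random.seed(0)
N = 100_000
AVERAGE_CONFIDENCE = 0.75  # each person's long-run average assurance

def calibrated_answer():
    """One calibrated person's attempt: (is_correct, stated_confidence)."""
    confidence = random.uniform(0.5, 1.0)   # averages 0.75 over many questions
    return random.random() < confidence, confidence

keep_own, defer_when_unsure = 0, 0
for _ in range(N):
    my_correct, my_conf = calibrated_answer()
    their_correct, _ = calibrated_answer()
    keep_own += my_correct
    if my_conf >= AVERAGE_CONFIDENCE:        # unusually confident: believe yourself
        defer_when_unsure += my_correct
    else:                                    # less confident than usual: believe them
        defer_when_unsure += their_correct

print(f"always believe yourself:          {keep_own / N:.3f}")           # ~0.75
print(f"defer when less sure than usual:  {defer_when_unsure / N:.3f}")  # ~0.81
```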

comment by thomblake · 2009-07-17T17:24:19.746Z · LW(p) · GW(p)

But this is a contradiction, because both opposing views can't be simultaneously more likely to be right than the other.

As stated, this is not a contradiction. Look:

  • A is arguing against B
  • A should assume that p
  • B should assume that ~p

That A has good reason to assume that p and B has good reason to assume that ~p is not contradictory; if they're both rational, it just entails that they have access to different information.

And the fact that I believe that p gives me a reason to believe that p, all else being equal; after all, I can't be expected to re-evaluate all of my beliefs at all times, and must assume that my reasons for starting to believe that p were good ones to begin with. (That doesn't mean that this isn't a good time to re-evaluate my reasons to believe that p, if they're accessible to me.)

Replies from: HalFinney
comment by HalFinney · 2009-07-17T20:10:02.247Z · LW(p) · GW(p)

But the truth can't be dependent on which person you are, right? If we say you are Bob, or we say you are Carol, that doesn't change whether there is life on Mars. Therefore what one should assume about the truth does not change merely based on which person they are; that was my reasoning. Put more technically, truth (about factual matters where people might disagree) is not "indexical".

Replies from: thomblake
comment by thomblake · 2009-07-17T20:16:24.817Z · LW(p) · GW(p)

No, the truth isn't dependent upon which person I am, but what I should believe isn't directly dependent upon the truth (if it was, then there wouldn't be any disagreement in the first place). Rather, what I believe is dependent upon what constitutes a good reason for me to believe something, and that is indexical.

Replies from: RobinHanson
comment by RobinHanson · 2009-07-18T13:51:59.777Z · LW(p) · GW(p)

But how could you know that they should believe P about X while you should believe Q? You can't think you are both doing what you should if you believe different things, right?

Replies from: thomblake
comment by thomblake · 2009-07-18T17:11:07.045Z · LW(p) · GW(p)

If I were one of the disputants, then I would not know that the other person should believe P. Similarly, as an outside observer I would know that at least one of them is certainly incorrect (assuming your P and Q are inconsistent).

You're changing the context without warrant.

If I'm in the situation then I'll do what I should. I must then assume that the other person either has some reason to believe that Q, or that they're being irrational, or some other handy explanation. By the principle of charity I should then perhaps assume they have a good reason to believe that Q and so I should re-evaluate my reasons for believing that P.

That doesn't change that it's not contradictory for me to prefer beliefs I already hold to beliefs I don't, and to expect other people to follow the same rule of thumb. What alternative could there even be in the absence of a reason to re-evaluate your beliefs?

Replies from: HalFinney
comment by HalFinney · 2009-07-19T22:39:58.912Z · LW(p) · GW(p)

One other thing I'll note as a problem with this as a heuristic ("prefer your own side in a disagreement") is that more or less by definition, it's going to be wrong at least 50% of the time (greater than 50% in the case where you're both wrong). That's not a really great heuristic.

Replies from: thomblake
comment by thomblake · 2009-07-19T22:52:48.016Z · LW(p) · GW(p)

it's going to be wrong at least 50% of the time

You, maybe.

But seriously, it's not just a heuristic for disagreements. "One should prefer the beliefs one already has, all else equal" is a pretty good heuristic (a lot better than "one should have random beliefs" or "One should adopt beliefs one does not have, all else equal").

My point was simply an answer to the question, "Now let us add one piece of information: you are one of the two people. Does this give you grounds to assume that your view is the one which is right?" If I've established that one does have such a reason in general about one's beliefs, then the answer is clearly "yes".

Replies from: HalFinney
comment by HalFinney · 2009-07-20T00:39:45.422Z · LW(p) · GW(p)

I'd agree that "in general, you should believe yourself" is a simpler rule than "in general, you should believe yourself, except when you come across someone else who has a different belief". And simplicity is a plus. There are good reasons to prefer simple rules.

The question is whether this simplicity outweighs the theoretical arguments that greater accuracy can be attained by using the more complex rule. Perhaps someone who sufficiently values simplicity can reasonably argue for adopting the first rule.

ETA: Maybe I am wrong about the first rule: it should be "in general, you should believe yourself, except when you come across evidence that you are wrong". And then the question is how strong a piece of evidence it is to meet someone who came up with a different view. But this brings us back to the symmetry argument that it is actually much stronger evidence than most people imagine.

Replies from: thomblake
comment by thomblake · 2009-07-20T00:43:56.349Z · LW(p) · GW(p)

I think we may have exhausted any disagreement we actually had.

As I noted early on, I agree that coming across someone else with a different belief is a good occasion for re-evaluating one's beliefs. From here, it will be hard to pin down a true substantive difference.

comment by eirenicon · 2009-07-17T15:52:15.624Z · LW(p) · GW(p)

I would be surprised if the aliens of Eta Carinae had an opinion, as their star system suffered a supernova impostor event about 8000 years ago. Besides, it's a relatively new binary system, only a few million years old: probably not old enough to evolve life, even if something could arise in such an unlikely environment. If there is extraterrestrial life there, though, they're in trouble, as Eta Carinae is expected to actually go supernova within the next million years or so ("within" meaning a million years from now... or next week). So that's where my disagreement with you lies ;)

Replies from: James_K
comment by James_K · 2009-07-18T03:00:23.980Z · LW(p) · GW(p)

He didn't say they were from Eta Carinae, he said they were at Eta Carinae. They could easily be from somewhere else.

comment by dclayh · 2009-07-17T05:08:15.175Z · LW(p) · GW(p)

What are the termination conditions for analyzing a disagreement? They come in five forms: complete agreement, satisfying agreement, impossible to agree, acknowledgment of conflict, and dissolving the question.

Could you go into more detail here? In particular I'd like to know the difference between "impossible to agree" and "acknowledgment of conflict".

Replies from: CronoDAS
comment by CronoDAS · 2009-07-17T11:40:30.732Z · LW(p) · GW(p)

My guess is, it has something to do with whether or not you think further discussion would be useful.