Navigating disagreement: How to keep your eye on the evidence

post by AnnaSalamon · 2010-04-24T22:47:41.096Z · LW · GW · Legacy · 73 comments

Contents

  Principle 1:  Truth is not person-dependent.
  Principle 2:  Watch the mechanisms that create your beliefs.  Ask if they’re likely to lead to accurate beliefs.
  Principle 2b:  Ask if you are weighing all similarly truth-indicative mechanisms evenly.
  Principle 3:  Ask what an outside observer would say.
  Common pitfall: Idolatry
  Common pitfall: Primate social intuitions

Heeding others' impressions often increases accuracy.  But "agreement"  and "majoritarianism" are not magic;  in a given circumstance, agreement is or isn't useful for *intelligible* reasons. 

You and four other contestants are randomly selected for a game show.  The five of you walk into a room.  Each of you is handed a thermometer drawn at random from a box; each of you, also, is tasked with guessing the temperature of a bucket of water.  You’ll each write your guess at the temperature on a card; each person who is holding a card that is within 1° of the correct temperature will win $1000.

The four others walk to the bucket, place their thermometers in the water, and wait while their thermometers equilibrate.  You follow suit.  You can all see all of the thermometers’ read-outs: they’re fairly similar, but a couple are a degree or two off from the rest.  You can also watch, as each of your fellow-contestants stares fixedly at his or her own thermometer and copies its reading (only) onto his or her card.

Should you:

  1. Write down the reading on your own thermometer, because it’s yours;
  2. Write down an average* thermometer reading, because probably the more accurate thermometer-readings will cluster;
  3. Write down an average of the answers on others’ cards, because rationalists should try not to disagree;
  4. Follow the procedure everyone else is following (and so stare only at your own thermometer) because rationalists should try not to disagree about procedures?

Choice 2, of course.  Thermometers imperfectly indicate temperature; to have the best possible chance of winning the $1000, you should consider all the information you have, from all the (randomly allocated, and so informationally symmetric) thermometers.  It doesn’t matter who was handed which thermometer.  
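
Here is a minimal sketch of the arithmetic behind that choice, under the toy assumption (mine, not stated in the post) that each thermometer reads the true temperature plus independent, unbiased noise:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_TEMP, NOISE_SD, N_THERM, TRIALS = 21.0, 1.0, 5, 100_000

# Each row is one run of the game: five thermometers, each with its own noise.
readings = TRUE_TEMP + rng.normal(0, NOISE_SD, size=(TRIALS, N_THERM))

own_error = np.abs(readings[:, 0] - TRUE_TEMP)         # trust only "your" thermometer
avg_error = np.abs(readings.mean(axis=1) - TRUE_TEMP)  # pool all five readings

print("P(win $1000) using own reading:", np.mean(own_error <= 1.0))   # ~0.68
print("P(win $1000) using the average:", np.mean(avg_error <= 1.0))   # ~0.97
```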

Forming accurate beliefs is *normally* about this simple.  If you want the most accurate beliefs you can get, you’ll need to pay attention to the evidence.  All of the evidence.  Evenly.  Whether you find the evidence in your hand or mind, or in someone else’s.  And whether weighing all the evidence evenly leaves you with an apparently high-status social claim (“My thermometer is better than yours!”), or an apparently deferential social claim (“But look -- I’m trying to agree with all of you!”), or anywhere else.

I’ll try to spell out some of what this looks like, and to make it obvious why certain belief-forming methods give you more accurate beliefs.

Principle 1:  Truth is not person-dependent.

There’s a right haircut for me, and a different right haircut for you.  There’s a right way for me to eat cookies if I want to maximize my enjoyment, and a different right way for you to eat cookies, if you want to maximize your enjoyment.  But, in the context of the game-show, there isn’t a right temperature for me to put on my card, and a different right temperature for you to put on your card.  The game-show host hands $1000 to cards with the right temperature -- he doesn’t care who is holding the card.  If a card with a certain answer will make you money, that same card and answer will make me money.  And if a certain answer won’t make me money, it won’t make you money either.

Truth, or accuracy, is like the game show in this sense.  “Correct prediction” or “incorrect prediction” applies to beliefs, not to people with beliefs.  Nature doesn’t care what your childhood influences were, or what kind of information you did or didn’t have to work with, when it deems your predictions “accurate!” or “inaccurate!”.  So, from the point of view of accuracy, it doesn’t make any sense to say “I think the temperature is 73°, but you, given the thermometer you were handed, should think it 74°”.  Nor “I think X, but given your intuitions you should think Y” in any other purely predictive context.

That is: while “is a good haircut” is a property of the (person, haircut) pair, “is an accurate belief” is a property of the belief only.

Principle 2:  Watch the mechanisms that create your beliefs.  Ask if they’re likely to lead to accurate beliefs.

It isn’t because of magic that you should use the median thermometer’s output.  It’s because, well, thermometers noisily reflect the temperature, and so the central cluster of the thermometers is more likely to be accurate.  You can see why this is the accuracy-producing method.

Sometimes you’ll produce better answers by taking an average over many peoples’ impressions, or by updating from other peoples’ beliefs, or by taking disagreement between yourself and someone else as a sign that you should debug your belief-forming process.  And sometimes (e.g., if the people around you are choosing their answers by astrology), you won’t.  

But in any of these circumstances, if you actually ask yourself “What belief-forming process is really, actually likely to pull the most juice from the evidence?”, you’ll see what the answer is, and you’ll see why the answer is that.  It won’t be “agree with others, because agreement is a mysterious social ritual that rationalists aim for”, or “agree with others, because then others will socially reciprocate by agreeing with you”.  It won’t be routed through the primate social system at all.  It’ll be routed through seeing where evidence can be found (seeing what features of the world should look different if the world is in one state rather than another -- the way thermometer-readings should look different if the bucket is one temperature rather than another) and then seeing how to best and most thoroughly and evenly gather up all that evidence.

Principle 2b:  Ask if you are weighing all similarly truth-indicative mechanisms evenly.

Even when the processes that create our beliefs are truth-indicative, they generally aren’t fully, thoroughly, and evenly truth-indicative.  Let’s say I want to know whether it’s safe for my friend to bike to work.  My own memories are truth-indicative, but so are my friends’ and neighbors’ memories, and so are the memories of the folk in surveys I can find online.  The trouble is that my own memories arrive in my head with extreme salience, and move my automatic anticipations a lot; while my friends’ have less automatic impact, and those of the surveyed neighbors still less.  So if I just go with the impressions that land in my head, my predictions will overweight a few samples of evidence at the expense of all the others.

That is: our automatic cognition tends not to weigh the evidence evenly *at all*.  It takes conscious examination and compensation.
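
One way to make the compensation concrete is to pool by amount of data rather than by vividness. A rough sketch with invented numbers (none of the figures below are from the post):

```python
# Hypothetical accident counts and exposure -- all figures are made up.
sources = {
    "my own commuting":    {"accidents": 1,  "person_years": 4},
    "friends' experience": {"accidents": 2,  "person_years": 30},
    "online survey":       {"accidents": 25, "person_years": 900},
}

total_accidents = sum(s["accidents"] for s in sources.values())
total_years = sum(s["person_years"] for s in sources.values())

print("rate from my own memories only:", 1 / 4)                                    # 0.25 per person-year
print("rate with all sources pooled:  ", round(total_accidents / total_years, 3))  # ~0.03 per person-year
```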

Principle 3:  Ask what an outside observer would say.

Since truth doesn’t depend on who is asking -- and since our feelings about the truth often do depend -- it can help to ask what an outside observer would say.  Instead of asking “Am I right in this dispute with my friend?” ask: “If I observed this from the outside, and saw someone with my track record and skillset, and someone else with my friend’s track record and skillset, disagreeing in this manner -- who would I think was probably right?”.

(See also Cached Selves.)

Common pitfall: Idolatry

We’re humans.  Give us a good idea, and we’ll turn it into an idol and worship its (perhaps increasingly distorted) image.  Tell us about the Aumann Agreement Theorem, and we’re liable to make up nonsense rituals about how one must always agree with the majority.

The solution is to remove the technical terms and ask *why* each belief-forming method works.  Where is the evidence?  What observations would you expect to see, if the universe were one way rather than another?  What method of aggregating the evidence most captures the relevant data?

That is: don’t memorize the idea that “agreement”, the “scientific method”, or any other procedure is “what rationalists do”.  Or, at least, don’t *just* memorize it.  Think it through every time.  Be able to see why it works.

Common pitfall: Primate social intuitions

Again: we’re humans.  Give us a belief-forming method, and we’ll make primate politics out of it.  We’ll say “I should agree with the majority, so that religious or political nuts will also agree with the majority via social precedent effects”.  Or: “I should believe some of my interlocutor’s points, so that my interlocutor will believe mine”.  And we’ll cite “rationality” while doing this.

But accurate beliefs have nothing to do with game theory.  Yes, in an argument, you may wish to cede a point in order to manipulate your interlocutor.  But that social manipulation has nothing to do with truth.  And social manipulation isn’t why you’ll get better predictions if you include others’ thermometers in your average, instead of just paying attention to your own thermometer.

Example problems:  To make things concrete, consider the following examples.  My take on the answers appears in the comments.  Please treat these as real examples; if you think real situations diverge from my idealization, say so.

Problem 1: Jelly-beans 

You’re asked to estimate the number of jelly-beans in a jar.  You have a group of friends with you. Each friend privately writes down her estimate, then all of the estimates are revealed, and then each person has the option of changing her estimate.

How should you weigh: (a) your own initial, solitary estimate; (b) the initial estimates of each of your friends; (c) the estimates your friends write down on paper, after hearing some of the others’ answers?

Problem 2: Housework splitting  

You get into a dispute with your roommate about what portion of the housework you’ve each been doing.  He says you’re being biased, and that you always get emotional about this sort of thing.  You can see in his eyes that he’s upset and biased; you feel strongly that you could never have such biases.  What to believe?

Problem 3:  Christianity vs. atheism

You get in a dispute with your roommate about religion.  He says you’re being biased, and that your “rationalism” is just another religion, and that according to his methodology, you get the right answer by feeling Jesus in your heart.  You can see in his eyes that he’s upset and biased; you feel strongly that you could never have such biases.  What to believe?

Problem 4:  Honest Bayesian wannabes

Two similarly rational people, Alfred and Betty, estimate the length of Lake L.  Alfred estimates “50 km”; Betty simultaneously estimates “10 km”.  Both realize that Betty knows more geography than Alfred.  Before exchanging any additional information, the two must again utter simultaneous estimates of the lake’s length.  Is it true that if Alfred and Betty are estimating optimally, it is as likely that Betty’s answer will now be larger than Alfred’s as the other way round?  Is it true that if these rounds are repeated, Alfred and Betty will eventually stabilize on the same answer?  Why?

73 comments

Comments sorted by top scores.

comment by Academian · 2010-04-25T11:15:16.519Z · LW(p) · GW(p)

Anyone who hasn't already, check out Anna's OB post, Share likelihood ratios, not posterior beliefs.

(Anna: you write lots of great stuff; link it up!).

It was written about a year ago, but it's actually a good follow up to this post. The point is that, ideally, people would share raw observations. But sometimes that's too slow, so instead we should share a form of summarized evidence. Sharing opinions is a noisy way to do that, because other peoples' prior beliefs get needlessly mixed in with the observations, and then with your opinion, just like the sort of agreement ritual Anna describes here.

It's much better if rational people share Bayes factors from independent tests. That is, you ask your friends "By what multiplicative factor did your priors increase"? Anna gives a rough example of two rational friends with cynical and optimistic priors, who know a third party John in different contexts (i.e. substantially independent tests). If the optimist says "John is a terrible person", the cynic, knowing the optimist's priors, can tell there must have been a significant update, i.e. a large Bayes factor, hence significant evidence, that John is really a terrible person. But if the cynic said that, the optimist wouldn't learn much.

This doesn't work as easily if you share common observations with your friends (rendering your tests non-independent) and consider your character-judgement updates relatively unsusceptible to computational errors. In that case, you have to work out which parts of your current opinion come from the unshared observations, estimate the Bayes factor (significance of evidence) for those observations only, and share those factors instead. Or just resort to describing the unshared observations explicitly.
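
A small sketch of the arithmetic being described, with invented numbers (and assuming the friends' tests really are independent): each friend reports only the Bayes factor from their own observations, and the listener multiplies those onto his or her own prior odds.

```python
def combine(prior_prob, bayes_factors):
    """Multiply independent likelihood ratios onto the prior, on the odds scale."""
    odds = prior_prob / (1 - prior_prob)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1 + odds)

my_prior = 0.05                # my own prior that John is a terrible person
reported_factors = [8.0, 3.0]  # each friend's factor from their independent evidence

print(round(combine(my_prior, reported_factors), 2))   # ~0.56
```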

comment by HalFinney · 2010-04-26T22:43:40.347Z · LW(p) · GW(p)

Let me give an argument in favor of #4, doing what the others do, in the thermometer problem. As the problem is posed, the other contestants seem to be behaving badly. I think in practice many people would in fact look at other thermometers too in making their guesses. So why aren't they doing it? Two possibilities: they're stupid, or they have a good reason for doing what they're doing. An example good reason: some thermometers don't read properly from a side angle, so although you think you can see and read all of them, you might be wrong. (This could be solved by #3, writing down the average of the cards, but that doesn't work if everyone tries it, since everyone is waiting for everyone else to go first.)

Only if we add a stipulation to this problem, that you are usually right when everyone else is wrong, would it be a good idea to buck the crowd. And even then there is the danger that the others may have some private information that supports their seemingly illogical actions.

comment by Divide · 2010-04-26T06:15:47.721Z · LW(p) · GW(p)

“is an accurate belief” is a property of the belief only

Technically, it's a property of the (belief, what-the-belief-is-about) pair. Beliefs can't be accurate by themselves; there has to be an external referent to compare them with. (Only-)self-referencing beliefs degrade straightforwardly to tautologies or contradictions.

comment by AnotherKevin · 2010-04-25T14:14:35.680Z · LW(p) · GW(p)

The thermometer answer is wrong; you're ignoring that you're on a game show. On a game show, the producers try to organize things such that few people (or only one person) win a challenge. As such I would expect all but one thermometer to be in error. Furthermore, by watching old episodes of the show, I could tell whether only one thermometer will be right or whether several contestants also succeed at each challenge, and therefore either pick the small clump or the lone outlier.

Replies from: jimrandomh, DanielLC
comment by jimrandomh · 2010-04-25T14:41:07.155Z · LW(p) · GW(p)

This is a very good point. Since you might be being messed with, you should run every sanity check you can think of. In increasing order of difficulty and also increasing order of value: get the room temperature from all the thermometers; take your own temperature; ask for a drink of ice water and take its temperature. You should also consider the possibility that all of the other contestants are actors with fake thermometers.

comment by DanielLC · 2010-05-15T00:27:23.373Z · LW(p) · GW(p)

It's a metaphor, like the Monty Hall problem. The fact that that's not how game shows really work doesn't matter.

comment by Nick_Tarleton · 2010-05-11T00:48:45.529Z · LW(p) · GW(p)

You’re asked to estimate the number of jelly-beans in a jar. You have a group of friends with you. Each friend privately writes down her estimate, then all of the estimates are revealed, and then each person has the option of changing her estimate.

How should you weigh: (a) your own initial, solitary estimate; (b) the initial estimates of each of your friends; (c) the estimates your friends write down on paper, after hearing some of the others’ answers?

I start by asking them how they made their initial estimates, and how they used others'.

This might seem ridiculously obvious, or outside the intent of the thought experiment, and maybe it is and this comment is needless noise. But, I'm sort of worried that the focus on individual estimation and guessing about others' belief-formation processes that I see in most of the discussion here, might lead us to want to overuse our cool individual estimation techniques when asking people about their opinions is easy and a better choice. I'm not sure how well-founded that worry is, but in any case, I'd like to see more discussion on LW of group truth-seeking; and if nothing else, it seems like a good idea to explicitly prime the thought "ask people what they think!".

comment by RobinHanson · 2010-04-28T16:50:59.793Z · LW(p) · GW(p)

You should consider all the information you have, from all the (randomly allocated, and so informationally symmetric) thermometers. It doesn’t matter who was handed which thermometer. Forming accurate beliefs is normally about this simple. If you want the most accurate beliefs you can get, you’ll need to pay attention to the evidence. All of the evidence. Evenly.

This gives the impression that you think that normally one can just collect all the relevant evidence, after which you don't need to consider anyone else's opinion. I suppose it depends on what sort of world you live in, but that seems far from the normal situation to me.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-04-29T22:20:54.520Z · LW(p) · GW(p)

It's artificial in exactly the way a trolley problem is and with the same virtues, surely?

comment by Shae · 2010-04-27T17:49:44.291Z · LW(p) · GW(p)

Let’s say I want to know whether it’s safe for my friend to bike to work. My own memories are truth indicative, but so are my friends’ and neighbors [and online surveys]... The trouble is my own memories arrive in my head with extreme salience, and move my automatic anticipations a lot; while my friends’ have less automatic impact, and those of the surveyed neighbors still less...our automatic cognition tends not to weigh the evidence evenly at all.

I sometimes wonder, though, if giving one's own experiences greater weight in situations like these (though not in the thermometer situation) is rational:

  • People lie (especially in online surveys); first hand evidence should be more valuable than evidence whose validity is in question
  • There are a large number of unknown and unanalyzed factors, some of which may vary with the individual: (I'm less/more coordinated and accident-prone, I am on better/worse terms with the rough crowd in the neighborhood, etc). This information may not be obvious enough to consciously consider.

If I have a sneezing fit every single time I encounter a bullfrog, and no one's ever heard of a bullfrog allergy, and medical science doesn't support the notion, it's still going to be difficult (and I think possibly irrational) to arrive at the pond without a Kleenex. It seems to me that in gray-area situations with strong personal evidence, admitting you don't know why you don't know why is at least as rational as concluding you're wrong.

Replies from: thomblake
comment by thomblake · 2010-04-27T17:56:47.727Z · LW(p) · GW(p)

I sometimes wonder, though, if giving one's own experiences greater weight in situations like these (though not in the thermometer situation) is rational:

The relevant question, I believe, is how much weight you should give the evidence from different sources. You should not think that the amount of weight we intuitively give evidence from our own experience is optimal, and this permits a reversal test.

comment by AnnaSalamon · 2010-04-24T22:48:45.268Z · LW(p) · GW(p)

My take on how to get the best estimates, in separate comments for tidier discussion threads:

Replies from: AnnaSalamon, AnnaSalamon, AnnaSalamon
comment by AnnaSalamon · 2010-04-24T22:53:42.781Z · LW(p) · GW(p)

Re: Problem 4: Roughly speaking: yes.

Ordinary disagreements persist after hearing others' estimates. A and B may start out asserting "50" and "10", and then argue their way to "25" and "12", then "23" and "17". But if you want each estimate to be as accurate as possible, this is silly behavior; if A can predict that his estimate will go down over time (as he integrates more of B's evidence), he can also predict that his current estimate is too high -- and so he can improve his accuracy by lowering his estimate right now. The two parties should be as likely to overshoot as to undershoot in their disagreements, e.g.: A: 50, B: 10; then A: 18, B: 22; then A: 21, B: 21.

So next time you're in a dispute, try applying Principle 3: ask what an outside observer would say about the situation. If Alfred and Betty both apply this principle, they'll each ask: "What would an outside observer guess about Lake L, given that Betty has studied geography and said "10", while Alfred said "50"?" And, thus viewing the situation from the (same) outside, Betty and Alfred will both weigh Betty's evidence about equally. Alfred may underweight Betty's impression (e.g., because he doesn't realize she wrote her thesis on Lake L) -- but he may equally overweight Betty's opinion (e.g., because he doesn't realize that she's never heard of Lake L either). If he could predict that he was (over/under) weighting her opinion, he'd quit doing it.

More precisely: if you and your interlocutor can predict your direction of disagreement, at least one of you is forming needlessly inaccurate estimates.
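
A Monte Carlo sketch of that last claim, under a toy Gaussian model of my own (the numbers are illustrative, not from the problem): conditional on everything Alfred knows, his expected future update is zero, so he cannot predict its direction, even though the update itself is often large.

```python
import numpy as np

rng = np.random.default_rng(0)

MU0, SIG0 = 30.0, 20.0      # shared prior over the lake's length (km)
SIG_A, SIG_B = 25.0, 5.0    # Alfred's measurement noise >> Betty's

def posterior_mean(prior_mu, prior_var, obs, obs_var):
    w = prior_var / (prior_var + obs_var)   # conjugate-Gaussian update
    return prior_mu + w * (obs - prior_mu)

n = 200_000
length = rng.normal(MU0, SIG0, n)
a = length + rng.normal(0, SIG_A, n)        # Alfred's private evidence
b = length + rng.normal(0, SIG_B, n)        # Betty's private evidence

alfred_first = posterior_mean(MU0, SIG0**2, a, SIG_A**2)
var_after_a = 1.0 / (1.0 / SIG0**2 + 1.0 / SIG_A**2)
alfred_after_b = posterior_mean(alfred_first, var_after_a, b, SIG_B**2)

update = alfred_after_b - alfred_first
high = alfred_first > np.median(alfred_first)
# All three should be ~0 (up to Monte Carlo noise): the direction of the
# update is unpredictable from Alfred's own information.
print("mean update overall:             %+.3f" % update.mean())
print("mean update when Alfred is high: %+.3f" % update[high].mean())
print("mean update when Alfred is low:  %+.3f" % update[~high].mean())
```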

Replies from: NancyLebovitz, Jonathan_Graehl, steven0461
comment by NancyLebovitz · 2010-04-25T01:16:01.173Z · LW(p) · GW(p)

Before I read your reply, I assume that Alfred will lower his estimate a lot, and Betty might raise her estimate a little. I expect Betty's estimate to still be lower than Alfred's, though the size of these effects would be dependent on how much more geography Betty knows than Alfred.

After reading your reply, I think you're right about convergence, and definitely right about driving your answer towards what you think is correct as fast as possible rather than holding back for fear of seeming to give in.

comment by Jonathan_Graehl · 2010-04-26T23:42:15.384Z · LW(p) · GW(p)

It's an interesting problem, and you're not doing it justice.

A and B have a prior based on certain evidence. Their first guess conveys only the mean of that prior. You also posit that they have a shared belief about the (expected) amount of evidence behind their prior.

To update at each iteration, they need to infer what evidence about the world is behind the exchange of guesses so far.

I don't agree with anything you've claimed about this scenario. I'll grant you any simplifying assumptions you need to prove it, but let's be clear about what those assumptions are.

comment by steven0461 · 2010-04-25T22:29:08.516Z · LW(p) · GW(p)

If they're only similarly rational rather than perfectly rational, they'll probably both be biased toward their own estimates. It also depends on common knowledge assumptions. As far as I know two people can be perfectly rational, and both can think the other is irrational, or think the other is rational but thinks they're irrational and therefore won't update, and therefore not get to an equilibrium. So I would disagree with your statement that:

if you and your interlocutor can predict your direction of disagreement, at least one of you is forming needlessly inaccurate estimates

In general, the insights needed to answer the questions at the end of the post go beyond what one can learn from the ultra-simple "everyone can see the same evidence" example at the start of the post, I think.

comment by AnnaSalamon · 2010-04-24T22:52:24.469Z · LW(p) · GW(p)

Re: Problem 2: Take an even probability distribution involving your feelings and your roommate’s feelings on housework (and on who’s emotionally biased). You have no reason to treat your and your roommate's feelings as asymmetrically indicative (unless unbiased indicators have told you that you're especially above- or below- average at this sort of thing). It’s like the thermometers, again.

Re: Problem 3: Keep your belief in atheism. Your evidence against a Christian god is way stronger than any evidence provided by your roommate's assertion. Despite the superficial symmetry with Problem 2, the prior against the complex hypothesis of a Christian god is many orders of magnitude stronger than the prior against you being wilfully mistaken about the housework -- and these orders of magnitude matter.

(Though note that this reasoning only works because such "extraordinary claims" are routinely made without extraordinary evidence; psychology and anthropology indicate that p(your roommate's assertion | no Christian god) is relatively large -- much larger than a simplicity prior would assign to p(Christian god), or p(flying spaghetti monster).)
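
The orders-of-magnitude point can be put as a two-line odds calculation (the numbers here are invented): a roommate's sincere assertion is a weak likelihood ratio, enough to move a roughly 50/50 housework question but not a hypothesis with a vanishingly small prior.

```python
def posterior_prob(prior_prob, likelihood_ratio):
    """Posterior via odds: post_odds = prior_odds * likelihood_ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

LR = 2.0  # assume the assertion is twice as likely if the claim is true

print(posterior_prob(0.5, LR))    # housework-sized prior: 0.5  -> ~0.67
print(posterior_prob(1e-9, LR))   # tiny simplicity prior: 1e-9 -> ~2e-9
```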

Replies from: AlexMennen, MartinB, RobinZ, NancyLebovitz
comment by AlexMennen · 2010-04-25T19:34:56.513Z · LW(p) · GW(p)

No, problems 2 and 3 are symmetrical in a more than superficial way. In both cases, the proper course of action is to attempt to conduct an unbiased evaluation of the evidence and of the biases affecting each of you. The difference is, in problem 3, we have already encountered and evaluated numerous nearly identical situations, so it is easy to come to the proper decision, whereas in problem 2, the situation could be new and unique, and missing background information about the effects of bias on the two individuals and the accuracy of their predictions becomes important.

comment by MartinB · 2010-04-26T06:20:34.613Z · LW(p) · GW(p)

The description of both problem 2 and 3 indicates a possible biasing in both participants. It's therefore reasonable to cool down first, and then check the evidence.

In problem 3, the roommate might point out valid criticisms about biases one might have, while still being wrong on the question itself. Either way, it's not rational to argue in the heat of the moment.

comment by RobinZ · 2010-04-24T23:51:38.287Z · LW(p) · GW(p)

Before reading your answers:

Problem 2: Given the stated conditions ("you feel strongly that you could never have such biases" is unlikely in my case, but taking it as fact), I would tentatively interpret my roommate's remarks as indicating his frustration rather than my disposition. However, I would take the probability of being mistaken as high enough that I would attempt to find some way to defuse the situation that would work either way - most likely, arbitration from a mutually trusted party.

Problem 3: I would quickly review what I know about the debate, and conclude that I have received no additional evidence one way or the other. I would continue to be confident in my naturalist worldview.

After reading your answers:

Problem 2: I notice that you interpret "you feel strongly that you could never have such biases" differently to how I interpret it - I would not feel thus without an observed track record of myself supporting that conclusion. My actions are scarcely changed from those implied by your judgement, however.

comment by NancyLebovitz · 2010-04-25T01:05:18.012Z · LW(p) · GW(p)

Problem 2: I'd work on finding out what criteria we were using. In general, I believe that I can tell when I'm going off balance. I'm not sure if I can test this, but I get the impression that most people have no clue at all about when they're going off balance. I will also note that even if I feel I'm going off balance, there may not be anything I can do about it in the short run.

Problem 3: I'm an agnostic, not an atheist. That being said, I would notice that the Christian is using a circular system of proof, and not agree with them.

comment by AnnaSalamon · 2010-04-24T22:50:03.105Z · LW(p) · GW(p)

Re: problem 1: Jelly bean number estimates are just like thermometer readings, except that the reading is in someone’s head, rather than their hand. So the obvious answer is to average everyone’s initial, solitary impressions, absent reason to expect one individual or another is an above-average (or below-average) estimator.

If your friends use lopsided weighting schemes in their second answers, should you re-update? This depends a lot on your friends.

  • Don't re-update from their answers if you think they don't understand the merits of averaging; you want to weight each person's raw impression evenly, not to overweight it based on how many others were randomly influenced by it (cf. information cascades: http://en.wikipedia.org/wiki/Information_cascade).
  • Do re-update if your friends understand the merits of averaging, such that their apparent over-weighting of a few peoples' datapoints suggests they know something you don't (e.g., perhaps your friend Julie has won past championships in jelly-bean estimation, and everyone but you knows it).
Replies from: NancyLebovitz, RobinZ, cgm_E
comment by NancyLebovitz · 2010-04-25T00:59:46.437Z · LW(p) · GW(p)

Since I know those people, I would weight their answers according to my best estimate of their skill at such tasks, and then average the whole group, including me.

Replies from: Peter_de_Blanc, Jonathan_Graehl
comment by Peter_de_Blanc · 2010-04-27T00:11:07.703Z · LW(p) · GW(p)

Since I know those people, I would weight their answers according to my best estimate of their skill at such tasks, and then average the whole group, including me.

Doing this correctly can get pretty complicated. Basically, the more people you have, the less you should weight the low-quality estimates compared to the high-quality estimates.

For example, suppose that "good" thermometers are unbiased and "bad" thermometers are all biased in the same direction, but you don't know which direction.

If you have one thermometer which you know is good, and one which you're 95% sure is good, then you should weight both measurements about the same.

But if you have 10^6 thermometers which you know are good, and 10^6 which you're 95% sure are good, then you should pretty much ignore the possibly-bad ones.
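
A rough Monte Carlo sketch of this effect, under one possible formalization (mine, not Peter's exact setup): each "uncertain" thermometer is independently bad with probability 0.05, and all bad ones share a single bias of unknown sign.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE, BIAS, P_BAD = 1.0, 3.0, 0.05   # illustrative numbers

def best_weight_on_uncertain_group(n, trials=20_000):
    """Grid-search the weight on the uncertain group that minimizes squared error."""
    good_means, unc_means = np.empty(trials), np.empty(trials)
    for t in range(trials):
        good_means[t] = rng.normal(0, NOISE, n).mean()
        bias = BIAS * rng.choice([-1.0, 1.0])       # common bias, unknown sign
        is_bad = rng.random(n) < P_BAD
        unc_means[t] = (rng.normal(0, NOISE, n) + bias * is_bad).mean()
    weights = np.linspace(0, 1, 101)
    mses = [np.mean((w * unc_means + (1 - w) * good_means) ** 2) for w in weights]
    return weights[int(np.argmin(mses))]

for n in (1, 10, 1000):
    w = best_weight_on_uncertain_group(n)
    print(f"{n:4d} thermometers per group: weight ~{w:.2f} on the uncertain group")
```

With one thermometer in each group the weights come out nearly even; with many in each group the known-good average is already almost exact, while the possible common bias never averages out, so the uncertain group ends up nearly ignored.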

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-27T01:02:08.085Z · LW(p) · GW(p)

Not that it matters tremendously, but I was thinking of the jelly bean problem.

comment by Jonathan_Graehl · 2010-04-26T23:47:03.340Z · LW(p) · GW(p)

What kind of weighted average?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-26T23:59:47.656Z · LW(p) · GW(p)

My math isn't good enough to formalize it-- I'd do it by feel.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-04-28T00:05:02.635Z · LW(p) · GW(p)

Drat - likewise.

comment by RobinZ · 2010-04-24T23:46:48.319Z · LW(p) · GW(p)

Before reading your answer: Human beings are bad at estimating volumes, as opposed to lengths. I would form my estimate by observing the apparent density of jellybeans in the jar (e.g. by examining a square-centimeter cross-section), observing the dimensions, and multiplying. Then, on the second stage, I would discard estimates which are radically different from mine (cutoff to be chosen based on observed distribution), and take the mean of the remaining. I would allow myself to be influenced in my choice of data to include by those whose data I was already inclined to include in my average.

After reading your answer: Should I notice an apparent and popular upweighting of certain responses such as you suggest, I would increase the weight of those in my average.

comment by cgm_E · 2010-04-26T21:35:21.685Z · LW(p) · GW(p)

I would look for response clusters. Each participant could have a different counting method rendering different results (e.g. estimate volumes / count radius & height / allow for an empty cone at the top which you don't see), and some methods could be common pitfalls. Therefore, some results -- those obtained by a wrong way of counting -- should be discarded, otherwise the median result would lead away from the right result. In order to decide which is the right response cluster, trying to figure out each method/mistake and determining the correct one would be useful. Of course, your method is not necessarily the right one, just because it's yours.

comment by NancyLebovitz · 2010-04-25T01:20:12.955Z · LW(p) · GW(p)

Should you discount multiple opinions which are based on the same information source?

Replies from: Academian
comment by Academian · 2010-04-25T10:59:42.846Z · LW(p) · GW(p)

Follow Anna's Why would you? advice. The point is simply to have a reliable computation performed on the observations, and you do whatever is equivalent to that.

If the opinion involves a computation from the information source that is difficult enough that people might do it wrong, then count more sources as more evidence. After a math exam, when you poll your friends,

  • "Who answered pi for number 6?",

it is rational to be more confident in "pi" as more of your computationally skilled friends answer "pi", even though it all came from the same information: the exam question. This is similar to the phenomenon that checking over your own workings should typically make you more confident, depending on how complex they are.

Another sort of such a computation is memory itself. Some people fail to compute the correct present memories from their past exposure to stimuli. So if you want to know

  • "Was there a red card in the parking lot?",

more witnesses should make you more convinced, even if they're all honest people... they might just have bad memories.

But if you have 3 friends standing in your dining room right now, and you ask them

  • "Are there enough chairs in there for all 4 of us?",

and someone says "yes", additional yesses should contribute very little marginal confidence. In summary,

Extra opinions on the same information are redundant only insofar as computational error checking is redundant.

Replies from: jimrandomh
comment by jimrandomh · 2010-04-25T13:29:56.071Z · LW(p) · GW(p)

It matters how confident you are in the original information source, and how confident you are that it was relayed properly. Suppose the question you ask is "Will it rain tomorrow?" In the first scenario, you ask some people, and each one pulls out their phone, looks at it, and says "weather.com says yes". In this case, the probability that it will rain is almost exactly equal to the accuracy of the original information source, and the additional opinions add nothing. In the second scenario, you ask some people, and each of them says "I checked weather.com's forecast this morning, and I think it said yes." In this case, your estimate of the probability that it will rain is a bit lower, because they might have mis-remembered the forecast; but as you ask more people, your estimate should increase asymptotically towards your estimate of the forecast's accuracy.
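
A sketch of the second scenario's arithmetic (all probabilities invented for illustration): more reports pin down what the forecast said, but can never make you more confident of rain than the forecast itself warrants.

```python
P_FORECAST_RAIN = 0.3   # prior that this morning's forecast said rain
P_RAIN_IF_YES   = 0.85  # P(rain | forecast said rain) -- the forecast's accuracy
P_RAIN_IF_NO    = 0.10  # P(rain | forecast said no rain)
P_RELAY_OK      = 0.9   # chance each person remembers the forecast correctly

def p_rain_given_k_yes_reports(k):
    # P(forecast said rain | k independent "it said rain" reports)
    yes = P_FORECAST_RAIN * P_RELAY_OK ** k
    no = (1 - P_FORECAST_RAIN) * (1 - P_RELAY_OK) ** k
    p_forecast_yes = yes / (yes + no)
    return p_forecast_yes * P_RAIN_IF_YES + (1 - p_forecast_yes) * P_RAIN_IF_NO

for k in (1, 2, 3, 5, 10):
    print(f"{k:2d} reports -> P(rain) = {p_rain_given_k_yes_reports(k):.3f}")
# Climbs from ~0.70 toward the 0.85 ceiling set by the forecast's own accuracy.
```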

comment by soreff · 2010-04-30T18:58:59.903Z · LW(p) · GW(p)

Therein lies a tail... Even in the first, thermometer, problem, there is still a question about whether to average or to take the median. Roughly speaking, if one expects some form of independent random additive noise in each thermometer, the choice of what to do with outliers depends on what one's prior for the expected noise distribution looks like. If one expects a gaussian, the variance of the distribution is finite, and one does better by averaging the readings. If one expects a distribution with long tails, with an unbounded variance, then one wants to pick the estimate more nearly from the median. Intermediate choices include throwing out some outliers and averaging the remaining samples or ranking the samples, then doing a weighted average of the samples based on how far from the median rank they are. A nice example for the Cauchy distribution is at http://kmh-lanl.hansonhub.com/publications/maxent93.pdf
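
A quick simulation of that trade-off, with five thermometers as in the post, made-up noise scales, and scoring by the post's "within 1°" criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_T, N, TRIALS = 20.0, 5, 100_000   # five thermometers, as in the post

def p_within_one_degree(estimator, noise_sampler):
    readings = TRUE_T + noise_sampler((TRIALS, N))
    return np.mean(np.abs(estimator(readings, axis=1) - TRUE_T) <= 1.0)

gauss = lambda size: rng.normal(0, 1, size)
cauchy = lambda size: rng.standard_cauchy(size)   # long tails, undefined variance

for name, sampler in [("Gaussian", gauss), ("Cauchy", cauchy)]:
    print(f"{name:8s} noise: P(win) with mean = {p_within_one_degree(np.mean, sampler):.2f},"
          f" with median = {p_within_one_degree(np.median, sampler):.2f}")
```

Under Gaussian noise the mean does slightly better; under Cauchy noise the mean of five readings is itself Cauchy-distributed, so the median wins by a wide margin.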

comment by PhilGoetz · 2010-04-26T19:46:37.029Z · LW(p) · GW(p)

Problem 1: Jelly-beans

I entered such a contest as a child. I mentally divided the jar into its different conic sections, took a string and measured each section in units of jelly beans, then computed its volume in jelly beans.

I came in second. There's always someone whose wild guess is better than your best calculations.

Replies from: DanielLC, RobinZ
comment by DanielLC · 2010-05-15T00:24:12.433Z · LW(p) · GW(p)

So? You did far better than most of the people who made a wild guess.

comment by RobinZ · 2010-04-26T19:50:16.534Z · LW(p) · GW(p)

How do you know it was a wild guess? And how many (possibly-)wildly-guessing competitors were there?

comment by jimrandomh · 2010-04-25T13:10:10.176Z · LW(p) · GW(p)

Problem 1 is basically a noisy thermometers problem, except that the noise is gaussian in the estimate of the length/density along one dimension, not the number given. So I would take the cube root of each answer (including my own), then average them, then cube the result to make my estimate. If I thought one person was a particularly good or bad estimator, I would apply that as a weighting in the middle step.
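
A sketch of that procedure (my implementation of the comment's description; the optional weights are the "particularly good or bad estimator" adjustment):

```python
import numpy as np

def combine_jellybean_guesses(guesses, weights=None):
    """Average in cube-root (linear-density) space, then cube back to a count."""
    guesses = np.asarray(guesses, dtype=float)
    if weights is None:
        weights = np.ones_like(guesses)
    pooled_root = np.average(np.cbrt(guesses), weights=weights)
    return pooled_root ** 3

print(combine_jellybean_guesses([300, 450, 800, 1000]))   # ~595, vs. a plain mean of 637.5
```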

Replies from: gerg
comment by gerg · 2010-04-26T05:46:22.497Z · LW(p) · GW(p)

I'm mathematically interested in this procedure; can you please provide a reference?

Replies from: jimrandomh
comment by jimrandomh · 2010-04-26T12:26:20.965Z · LW(p) · GW(p)

I don't have a reference because the procedure is not rigorous; I came up with it off the top of my head. The intuition is that each of the contestants would've estimated the linear density of the jelly-beans, which is the same on all axes, and then cubed it, so you invert that by taking the cube root to get their actual estimates. To make this rigorous, you'd also have to account for the fact that the jar isn't actually a cube, which I have not done. I'd start by reducing the volume calculation to a bounding box (a cuboid) and a constant multiplicative factor, and assuming that everyone knows the correct constant factor for a cylinder. The length being different between the three dimensions does make a difference. I suspect (but have not proven) that having the jar, say, twice as tall as its diameter, would cause my procedure to act as though the error distribution for the height was twice as large.

If anyone knows of a source that handles this class of problem rigorously, please do post it. If not, perhaps it'd make a good exercise for someone looking for topics to write papers on.

comment by jimrandomh · 2010-04-25T13:03:14.452Z · LW(p) · GW(p)

For problem 2, the answer is that you should be able to test whether you are upset directly, using introspection (perhaps combined with a mental trick or two), and if you do it right, the result of this test should be much better evidence of your mental state than your roommate's observation would be. However, performing this test is a skill and the problem description doesn't mention having it. So if you've explicitly practiced inspecting your mental state, then you should mostly ignore your roommate, but if you haven't then you should listen to him.

(But do note that the question of what your mental state is, is entirely separate from the question of whether you've been doing your fair share of the chores. That question can only be answered by actually enumerating a sample of the chores, partitioning it into the set you did and the set he did, and comparing the set sizes.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-25T13:31:49.298Z · LW(p) · GW(p)

Fairness and housework may not be best handled as an enumeration problem. I know a family (two adults, one child) which started by listing the necessary housework, and then each listing which things they liked doing, which they disliked, and which they were neutral about, and came to a low-stress agreement.

Admittedly, this takes good will, honesty, and no one in the group who's too compulsive about doing or not doing housework.

Replies from: bluej100
comment by bluej100 · 2010-04-28T00:42:55.092Z · LW(p) · GW(p)

Steven Brams has devised some fair division algorithms that don't require good will: see his surplus procedure ( http://en.wikipedia.org/wiki/Surplus_procedure ) and his earlier adjusted winner procedure ( http://en.wikipedia.org/wiki/Adjusted_Winner_procedure ).

comment by Richard_Kennaway · 2010-04-25T10:55:23.103Z · LW(p) · GW(p)

(Written before and edited where marked after reading the comments.)

1. I look at the jar and estimate how many jellybabies wide and tall is the filled volume, do some mental arithmetic, and reduce the answer according to an estimate of the packing fraction, which I would expect to be the greatest source of error.

Anyone else can do the same thing, if they're smart enough to not just pull a figure out of the air. If I think the other contestants are unlikely to be, I ignore them. It's like the thermometer example, except that I have reason to think my thermometer is better than most other people's.

If I think that everyone else is smart enough to make an estimate along those lines, then it is more like the original thermometer problem. But I'm not going to just take an average. If my estimate is 1000, and someone else's is 300, that's too big a discrepancy to explain by minor variations. It casts doubt on the assumption of identical thermometers. Assuming that I only have the other people's estimates, and there's no opportunity for discussion, I'll search for reasons why we might have come up with completely different answers, but if I find no error in my own, I'll discard all such outliers.

If I work in the confectionery trade and know the packing fraction for jellybabies, that elevates my confidence in my own estimate and again I ignore the others...unless this competition is being held at a confectionery trade show.

In general, averaging the estimates is only valid if the estimates are believed to be of similar worth. If you know that the estimates are all unbiased but with differing variances, then you can work out some optimally weighted average that puts more weight on the more accurate estimates but does not discard the less accurate ones. However, if estimates are wildly different, the assumption may be a bad one.

BTW, a real-world example is the assessment of conference papers by the programme committee. Each paper will have been refereed by, say, four members. The typical procedure is that if they all say it's excellent, it's accepted without discussion. Likewise, reject it if they all say it's rubbish. For uniformly middling assessments, the question is where to set the bar for acceptance. The only papers where a real discussion is required are the ones where the referees disagree. The disagreements are resolved by sharing evidence, not by an Aumann-like compromise based on sharing posteriors.

2. Recognise that whoever is at rational fault, agreement is not possible in the current state of things. Start recording who does what housework when, then return to the matter after some suitable time, with evidence to decide the issue.

3. Case 2 was set up to be symmetrical. Case 3 is different: rationality is right and religion is wrong. I continue in that belief. How I conduct my discussions thereafter with my religious friend is a separate matter.

4. I'm not sure how much rational perfection and common knowledge to assume of Alfred and Betty in this problem, but even if I assume that they are perfect reasoners with common priors, then I can't see my way to proving anything about the ordering of their second estimates. (Added after reading the comments: Alfred or Betty's estimate of the probable ordering of their second estimates is a different matter.) I suppose that some version of Aumann's theorem says that on iterating the process they must eventually converge.

Replies from: gerg
comment by gerg · 2010-04-26T05:52:13.425Z · LW(p) · GW(p)

If my estimate is 1000, and someone else's is 300, that's too big a discrepancy to explain by minor variations. It casts doubt on the assumption of identical thermometers. Assuming that I only have the other people's estimates, and there's no opportunity for discussion, I'll search for reasons why we might have come up with completely different answers, but if I find no error in my own, I'll discard all such outliers.

What if everyone else's estimate is between 280 and 320? Do you discard your own estimate if it's an outlier? Does the answer depend on whether you can find an error in your reasoning?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-04-26T07:56:51.879Z · LW(p) · GW(p)

Maybe I've made an error no-one else made. Maybe everyone else made an error I didn't make. (I have personally experienced this. I knew what error everyone else was making and stuck to my answer, which in the end turned out to be right.) The thing to do is to find out why the discrepancy happened; then I will know what to do about it.

In some situations this will not be possible. Then I will have to just make an optimal Bayesian calculation based on limited information, i.e. guess. But "optimal" no more implies "accurate" than "statistically significant" implies "important".

comment by DanielLC · 2010-05-16T03:59:05.081Z · LW(p) · GW(p)

I've been thinking about something like this.

If there are two people arguing, there's a 50% chance a given one is right. There is generally no reason to believe the correct one happens to be you.

That only really adds a bit of evidence against you, which doesn't seem like much. That said, if the other person realizes this, and doesn't change their mind, their evidence was one bit stronger than you previously thought.

Furthermore, if they realize that you know that and haven't changed your mind, and they still don't change their mind, that adds another bit.

Etc.

If both people understand this, they will either immediately figure that the one who's more sure is probably right, or just not be sure.

comment by JamesAndrix · 2010-04-26T19:02:25.102Z · LW(p) · GW(p)

What's stopping you from saying "Hey, why don't we average our thermometers?" You should at least see if they're updating on your suggestion for proper procedure before you update on their default procedure.

comment by ZeroBlacktip · 2010-04-26T18:39:10.343Z · LW(p) · GW(p)

No real beef with the main issue, but as for the Extra Credit Problems:

  1. I imagine I would weigh it depending on the group of friends I had with me, and previous experience with each of them in the field of guessing, geometry, even basic arithmetic. After I considered all that, I would then adjust my answer depending on each one's credibility.

  2. Think back on what I've done compared to what he's done. His emotional concern is actually irrelevant, because it's entirely possible he has gotten upset for reasons such as the dishes are supposed to be done on a rotation, but he uses more dishes when he knows it's your turn, while yours stay consistent. However, the reverse is possible. Depending on the situation, a renegotiation of housework may be necessary, so I would suggest an accurate tracking of use-to-work ratios.

  3. According to the question, there is no right answer, as you're both arguing opinions, and like the haircut, they cannot be resolved. However, this question is heavily biased. Are there no such things as rational Christians? Carl Sagan and Stephen Hawking have both made convincing arguments stating that a rational person should have difficulty declaring atheism (for Carl Sagan's especially well thought out and rational perspective, read "The Demon-Haunted World"). He is wrong, but it is entirely possible you are biased, from the Dawkins School of Atheist Fundamentalism. However, as you're the rational one, the burden of proof is on you, not him. We should always ask for rationality from our discussion partners, but it is irrational to demand a higher standard of them than of ourselves. If we are unwilling to consider that we may be wrong, how can we expect them to reciprocate?

It's a complicated idea, because his methodology is so fundamentally unsound. However, as an Atheist as opposed to an agnostic, you have declared there is absolutely no higher being. If you are every bit as unwilling to examine your own beliefs as he is, you are no less fundamentalist, and you are therefore both biased. A good skeptic should follow both the scientific and Socratic methods, being equally capable of proving his own hypothesis (in this case, that God does not exist), and of understanding his opponent's argument as if it were his own. This is a case of universal absolutes, as opposed to the previous problem, which was about temporal difficulties. A non-fundamentalist atheist, i.e., a rational Atheist, must be willing to truly understand the other side's viewpoint, because otherwise they are just as guilty as their religious fundamentalist counterpart. Or worse, since a religious person who has studied science and can legitimately argue against your claims is actually being more rational than you, who have chosen to do no research, and to consider nothing.

Sorry that was so long, but it was requested we search the questions themselves for bias. Is there actually any reason there cannot be a Christian and an Atheist with an equal level of rationality? If not, why was this question worded towards a solely Christian extreme, portraying the Atheist as rational while the Xtian is not? Once again, for emphasis, from the way this question is worded, neither is rational, and both are being depressingly biased.

4. It is more likely that Alfred's will be smaller, although there is a chance Betty may increase her estimate as well. They should both eventually stabilize, however, if for no other reason than that they both know that Betty is more knowledgeable, so if she sticks to an answer, it has a better chance of being correct.

Replies from: Peter_de_Blanc, Jack, wnoise
comment by Peter_de_Blanc · 2010-04-26T19:02:15.286Z · LW(p) · GW(p)

However, as an Atheist as opposed to an agnostic, you have declared there is absolutely no higher being. If you are every bit as unwilling to examine your own beliefs as he is, you are no less fundamentalist

Why is it that if you say it'll rain tomorrow, people assume you mean p=0.75 or something, but if you say there's no god, people assume you mean p=1? Are we supposed to answer every question with, "I'm agnostic about that"?

Replies from: NancyLebovitz, PhilGoetz, ZeroBlacktip
comment by NancyLebovitz · 2010-04-26T19:26:19.336Z · LW(p) · GW(p)

That's an interesting question. In the case of atheism, it's probably by parallel with the religious people who are certain that there is a God. I don't know whether there are religious people who are almost but not entirely certain that there is a God.

comment by PhilGoetz · 2010-04-26T19:22:20.738Z · LW(p) · GW(p)

Amen, brother!

comment by ZeroBlacktip · 2010-04-26T19:21:42.736Z · LW(p) · GW(p)

Because Atheist means P = 1. And isn't using the correct terms important? I also wouldn't say I need a cup of flour when I really needed 3/4 of a cup. If you're not sure, say you are "Without Knowledge", not that you are "Without God." Is it so hard to admit you don't know? Even when I disagree with someone, I can admit I may know less than them; how else might I learn?

Of course, then I go and fact check, because they might be wrong too. But people can open paths you would never have looked down if you're willing to say "I don't Know" once in a while, instead of closing off conversation.

Replies from: PhilGoetz, jimrandomh, thomblake
comment by PhilGoetz · 2010-04-26T19:24:24.709Z · LW(p) · GW(p)

Because Atheist means P = 1. And isn't using the correct terms important?

I call myself an atheist, and I don't believe that P = 1.

"Atheist => P = 1" is a slander that theists seek to tar atheists with. The irony is that the situation is exactly opposite: P = 1 is not the atheist belief, but is the theologically required Christian belief.

Even if it were P = 1, why do you take atheists to task for claiming to be certain that there is no god; yet not take Catholics to task for claiming to be certain that there is one God who created the world in 6 days, created one man and one woman, destroyed most of humanity in a great flood, for no reason restricted himself later to being the god of just the Jewish nation, decided several thousand years later that he needed to send his Son (what? don't ask) to die to "pay" (huh? don't ask) himself for everyone's sins, decided for no reason to suddenly not restrict himself to the Jews, and also to just then reveal that people who didn't follow a particular doctrine would suffer agony for all eternity, appointed Peter the head of a single Church with a direct line to God under certain conditions, inspired the choice of a particular set out of hundreds of possible texts as scripture, and requires them to obey the Pope?

Replies from: ZeroBlacktip
comment by ZeroBlacktip · 2010-04-26T21:02:43.390Z · LW(p) · GW(p)

So let's assume that being a fundamentalist Christian is P=1, and being a fundamentalist atheist is P=0. Keep in mind that I didn't use the term P=1 originally, and even in context it was not set down as a binary equation (I was assuming that the 1 meant you were sure there is no god, not an immutable belief in the fact, while .75 meant you might lean heavily towards no god but had some doubts).

Yes, P = 1 is the theologically required Christian belief. However, and I've never even been Catholic, your post is rife with Atheist propaganda about Catholicism that shows you did not do your research before condemning an entire group of people. I'm not even sure how I ended up on the Xtian side of this debate, except I dislike fundamentalism of any stripe. I do take Atheists to task for their beliefs as often as I take Xtians to task for their beliefs. However, this is a site dedicated to Rationality. Which means if you're going to say "yes I'm sure", I expect you to have proof, no matter what side of the debate you're on. If you're not sure whether or not god exists, you're an agnostic. If you are sure (G=0), then you are an atheist. If you believe God exists, you could be any religion, not just Xtian. But for some reason Xtians get picked on, because they're not as scary as Muslims but just as fervent. But let's get to your statements.

To begin with, the type of 4000-year-old-Earth, 6-day-creation, no-evolution debate is a Protestant belief, not a Catholic one. In fact, in 1950 Pope Pius XII said it was fine to discuss Human Evolution: "The Church does not forbid that...research and discussions, on the part of men experienced in both fields, take place with regard to the doctrine of evolution, in as far as it inquires into the origin of the human body as coming from pre-existent and living matter." Source: http://www.vatican.va/holy_father/pius_xii/encyclicals/documents/hf_p-xii_enc_12081950_humani-generis_en.html

Whether or not god created the universe, according to the bible (where he created it twice; read Genesis 1, or just accept that it was a book written by people), he certainly didn't create only two people. To answer the statement that only Adam and Eve were created by God AND that he restricted himself later to being god of just the Jewish nation, I bring you Genesis 4:15-17

15 Then the LORD said to him, "Not so! If any one slays Cain, vengeance shall be taken on him sevenfold." And the LORD put a mark on Cain, lest any who came upon him should kill him. 16 Then Cain went away from the presence of the LORD, and dwelt in the land of Nod, east of Eden. 17 Cain knew his wife, and she conceived and bore Enoch; and he built a city, and called the name of the city after the name of his son, Enoch.

Now wait: Cain, brother of Abel, went to the land of Nod and found a wife. That sure does imply that, biblically, there were more people on Earth than Adam and Eve. Dangerous people, too, since Cain was worried enough about being killed by them to ask God for protection. So, biblically, even if God created Adam and Eve, not everyone is a child of God. Which is good, because part of the horrifying Great Commission is to convert nonbelievers. The biblical God is the God of the descendants of Adam and those who follow him.

Flood tales are rife in ancient theologies, including in Gilgamesh. What a surprise that they're in the Bible too, mentioning that God protects his people. Shocking. There probably was a great flood at some point, but I'm linking to Wiki out of spite: http://en.wikipedia.org/wiki/Flood_stories

Jesus still hasn't been reconciled by the Abrahamic religions. (Another reason why God chose to protect them: Abraham trusted God, biblically. Seeing as God also gave humans free will, biblically, why would he protect people who didn't trust him?)

There is nothing in the Bible about Jesus making Peter the pope. He did assign the apostles the Great Commission of going forth and spreading his word, but even guys like Paul got in on that act later. The first official Pope was Anacletus.

Requires them to obey the pope? The Pope is only infallible sitting on his big chair. The pope is not God. Throughout the history of Catholicism there have been hundreds of examples of not listening to the pope, especially when there were TWO popes. And if you think the church has problems now, read about the period between 860 and 1050. The pope is a man according to doctrine. Catholics can blame him all they like, and risk excommunication by disobeying him, but it's still an option. He's more like the king of an institution than a holy being.

However, thanks for proving my point. You listed a bunch of "facts" about Catholicism that either dealt with Protestants, were folklore and tradition instead of doctrine, or were just incorrect. Don't forget that both sides have propaganda mills. Go to the source, and see what Xtians are supposed to believe. Then take them to task. And anyway, am I not supposed to take both sides to task for irrationality? If you are being irrational, you are not excused because you are a fundamentalist. Go do your research.

Replies from: olimay, PhilGoetz, thomblake
comment by olimay · 2010-04-27T17:01:15.694Z · LW(p) · GW(p)

They had me for 20 years, and I can attest that, except for the Young Earth Creationism, Phil is just about right. The position of the Roman Catholic Church, like that of other institutions, changes with the times and with external politics, and I notice that individual priests and religious education teachers often hold beliefs that diverge widely from what is supposedly the established party line.

I agree with your overall point, because the priors required for a belief in a Flying Spaghetti Monster are on the same order of magnitude as, say, those for belief in a Flying Chow Fun monster. To avoid nitpicking and the appearance of attacking a strawman, we could have picked something like the Nicene Creed, which every Roman Catholic mumbles communally every Sunday. In an in-person conversation, we could ask our interlocutor directly what he or she believes and avoid the problem of research.

If we were talking about the Great Schism, or ethno-religious tensions in 6th-century Alexandria, what you just went on about would have been much more relevant. It's really very much a tangential point here. Can you see why?

comment by PhilGoetz · 2010-04-26T22:08:05.270Z · LW(p) · GW(p)

I went to church once or twice a week, every week, for 15 years, and I know what I'm talking about. Every belief I listed is a belief that most Catholics either believe, or are unaware of due to a poor theological education; and in each case you protest either on the basis of the few who disagree, on the basis of ignorance (Peter is regarded as the first pope, "On this rock I build my church"; look up "apostolic succession", it's actually very important to Catholic doctrine), on the basis of not reading what I wrote (I specifically said "under certain conditions" because I am familiar with the rarely-invoked conditions for infallibility), or with irrelevance or incoherence (flood, disobedience to the Pope).

Replies from: ZeroBlacktip
comment by ZeroBlacktip · 2010-04-26T22:46:54.438Z · LW(p) · GW(p)

Yes, but the point of this post was rational discussion. People who refuse to research their own religion are not rational, yes? So why are we including them as candidates for rational debate? Call me a cynic, but I would rather debate with a reasonable Xtian who has a solid theological grounding than argue with an unreasonable one who hasn't bothered to learn his Bible.

And "rock" is a metaphor, as well as a play on words with his name. That doesn't make him the pope; it could just be saying that his faith should be emulated. Jesus was sort of known for metaphor, but not for supporting rigid belief structures designed to bilk their followers.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-26T23:01:04.091Z · LW(p) · GW(p)

My original point was that atheists make one simple claim, without absolute certainty, and are accused of being overconfident; most varieties of Christian make many complex and a priori unlikely claims, with certainty, and are not accused of being overconfident.

> Jesus was sort of known for metaphor, but not for supporting rigid belief structures designed to bilk their followers.

I wasn't talking about what Jesus said. I was talking about what Catholics believe. (Not singling them out as any kind of implicit comparison to Protestants, BTW.)

comment by thomblake · 2010-04-26T21:25:58.065Z · LW(p) · GW(p)

> 4000 year old Earth

6000. The world was created in 2003 BC and destroyed in 1996 AD.

Replies from: Clippy, thomblake
comment by Clippy · 2010-04-26T22:12:16.704Z · LW(p) · GW(p)

No it wasn't.

comment by thomblake · 2010-04-27T12:57:06.600Z · LW(p) · GW(p)

Hmm... I had predicted someone would correct my math.

Replies from: RobinZ
comment by RobinZ · 2010-04-27T16:14:46.562Z · LW(p) · GW(p)

You mean it's not a pop culture reference?

Replies from: thomblake
comment by thomblake · 2010-04-27T16:19:56.592Z · LW(p) · GW(p)

Not that I'm aware of. I was going to say that the world was created in 4003 BC, but mistyped 2003 and left it that way because I thought it would be funnier.

Replies from: Blueberry, RobinZ
comment by Blueberry · 2010-04-28T08:40:26.451Z · LW(p) · GW(p)

That would still only be 5999 years, because there was no year 0.

Replies from: thomblake
comment by thomblake · 2010-04-28T12:45:04.129Z · LW(p) · GW(p)

Well it's about time

comment by RobinZ · 2010-04-27T17:11:25.204Z · LW(p) · GW(p)

So it was.

comment by jimrandomh · 2010-04-26T19:42:22.079Z · LW(p) · GW(p)

No; P=1 has a very specific, slightly absurd technical meaning. If you believe any statement has a probability of exactly 1 or 0, and you obey the rules of probabilistic reasoning, then no finite amount of evidence can ever change your mind. This is why some argue that 0 and 1 should not be considered probabilities at all; they represent states of knowledge that require infinite evidence (and in some alternative representations of probability, are actually infinities).

I am an atheist, but not with P=1. Saying that God does not exist with P=1 would mean that I should maintain that belief even if the stars suddenly rearranged themselves into English text that said otherwise, and that would be more than sufficient to change my mind.
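
A worked toy example may help here; this is my own sketch, not anything from the comment above, and the specific numbers are invented purely for illustration. Under Bayes' rule, any prior strictly between 0 and 1 can be moved by strong enough evidence, but a prior of exactly 1 (or 0) cannot.

```python
# Minimal sketch of Bayesian updating (illustrative numbers only), showing why
# a probability of exactly 1 can never be revised, while one just short of 1 can.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# H = "God does not exist"; E = the stars rearrange into English text saying otherwise.
p_e_given_h = 1e-20      # the evidence is astronomically unlikely if H is true
p_e_given_not_h = 0.5    # and far more likely if H is false

print(bayes_update(0.999, p_e_given_h, p_e_given_not_h))  # ~2e-17: near-certainty collapses
print(bayes_update(1.0,   p_e_given_h, p_e_given_not_h))  # 1.0: no evidence can move it
```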

comment by thomblake · 2010-04-29T17:55:01.717Z · LW(p) · GW(p)

Since no one seems to have pointed you here, check out A Technical Explanation of Technical Explanation, and if necessary An Intuitive Explanation of Bayes's Theorem if you aren't familiar with these concepts. Slightly shorter, if you're familiar with the basics: 0 And 1 Are Not Probabilities.

comment by Jack · 2010-04-27T17:48:14.386Z · LW(p) · GW(p)

Welcome to Less Wrong. We don't like having definition debates, so I won't tell you how to use "atheist", but you should know that anytime someone uses the word atheist here they mean someone who assigns a very low probability to the existence of God, not someone who assigns a probability of zero. There has been some discussion here over whether or not 0 and 1 should even be considered probabilities. If you're interested I can link you to that discussion.

> He is wrong, but it is entirely possible you are biased, from the Dawkins School of Atheist Fundamentalism.

I don't know what you mean by this, but Dawkins does not believe P(God)=0. And he is quite well respected in these parts. You may conclude from that that we are all irrational, but perhaps you should first investigate the possibility that you are wrong about Dawkins.

> Is there actually any reason there can not be a Christian and an Atheist with an Equal level of Rationality?

Nearly all of us here are atheists, so it makes sense to phrase the problem that way. Also, while an atheist could have enough irrational beliefs to be worse than a Christian (or other theist), the Christian has a huge head start. And if the atheist is an atheist for good reasons, it is likely (but certainly far from guaranteed) that he believes a number of other things for good reasons.

> Once again, for emphasis, from the way this question is worded, neither is rational, and both are being depressingly biased.

The atheist is not ignoring the theist's arguments for the existence of God. The atheist is ignoring the Christian's claim that he is too biased to evaluate the question correctly. The atheist has good reason to ignore this, because the probability that God doesn't exist is much higher than the probability that he is too biased to get the right answer.
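
One toy way to make that last sentence concrete (my own illustration, with numbers invented for the purpose, not anything Jack committed to): even granting some chance that the atheist really is hopelessly biased, his all-things-considered estimate barely moves unless that chance is large.

```python
# Toy model of how much the "you're too biased" accusation should move the atheist.
# If his reasoning is sound, he assigns God a tiny probability; if he is hopelessly
# biased, his reasoning is worthless and he falls back to 50/50.
# (All numbers here are invented for illustration.)

def p_god_all_considered(p_biased, p_god_if_sound=0.001, p_god_if_biased=0.5):
    return p_biased * p_god_if_biased + (1 - p_biased) * p_god_if_sound

print(p_god_all_considered(0.05))  # ~0.026: a 5% chance of bias barely matters
print(p_god_all_considered(0.90))  # ~0.45: only near-certain bias changes the picture
```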

comment by wnoise · 2010-04-26T18:49:04.244Z · LW(p) · GW(p)

(This is supposed to be a 4, but autoformat keeps changing it.)

Put a backslash (\) before the period (.).

2.
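
(For example, assuming the comment box uses standard Markdown: typing `4\.` at the start of a line should display as "4." instead of being reformatted into a renumbered list item.)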

Replies from: ZeroBlacktip
comment by ZeroBlacktip · 2010-04-26T19:23:53.108Z · LW(p) · GW(p)

Fixed. Thanks a lot.