Evidence and counterexample to positive relevance

post by fsopho · 2013-05-25T18:40:36.006Z · LW · GW · Legacy · 37 comments

Contents

  The lottery counterexample

I would like to share a doubt with you. Peter Achinstein, in The Book of Evidence, considers two probabilistic views about the conditions that must be satisfied in order for e to be evidence that h. The first one says that e is evidence that h when e increases the probability of h once added to some background information b:

(Increase in Probability) e is evidence that h iff P(h|e&b) > P(h|b).

 

The second one says that e is evidence that h when the probability of h conditional on e is higher than some threshold k:

(High Probability) e is evidence that h iff P(h|e) > k.

 

A plausible way of interpreting the second definition is to set k = 1/2. When k has that fixed value, P(h|e) > k has the same truth-conditions as P(h|e) > P(~h|e) - at least if we assume that P is a function obeying Kolmogorov's axioms of the probability calculus. Now, Achinstein takes P(h|e) > k to be a necessary but insufficient condition for e to be evidence that h - while he claims that P(h|e&b) > P(h|b) is neither necessary nor sufficient for e to be evidence that h. That may seem shocking to those who take the condition fleshed out in (Increase in Probability) to be at least a necessary condition for evidential support (I take it that the claim that it is necessary and sufficient is far from accepted - presumably one also wants to require that e be true, or known, or justifiably believed, etc.). So I would like to examine one of Achinstein's counterexamples to the claim that increase in probability is a necessary condition for evidential support.
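
To make the two conditions concrete, here is a minimal sketch (my own illustration - the joint distribution is invented, with the background b taken as already folded in) that checks each condition against a toy model:

```python
# A toy joint distribution over (h, e) worlds; all numbers are invented.
dist = {
    (True,  True):  0.40,  # h true,  e true
    (True,  False): 0.10,  # h true,  e false
    (False, True):  0.01,  # h false, e true
    (False, False): 0.49,  # h false, e false
}

def p(event):
    """Probability of an event, given as a predicate on (h, e) worlds."""
    return sum(pr for world, pr in dist.items() if event(world))

def p_cond(event, given):
    return p(lambda w: event(w) and given(w)) / p(given)

H = lambda w: w[0]
E = lambda w: w[1]

# (Increase in Probability): e is evidence that h iff P(h|e) > P(h).
print(p_cond(H, E) > p(H))  # True: 0.40/0.41 ~ 0.976 > 0.5

# (High Probability), with k = 1/2: e is evidence that h iff P(h|e) > 1/2.
print(p_cond(H, E) > 0.5)   # True: 0.976 > 1/2
```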

The relevant example is as follows:

 

The lottery counterexample

Suppose one has the following background b and piece of evidence e1:

b:  This is a fair lottery in which one ticket drawn at random will win.

e1:  The New York Times reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.

Further, one also learns e2:

e2:  The Washington Post reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.

So, one has evidence in favor of

h:  Bill Clinton will win the lottery.

 

The point now is that, although it seems right to regard e2 as evidence in favor of h, it fails to increase h's probability conditional on (b&e1) - at least, so says Achinstein. According to the example, the following is true:

 

P(h|b&e1&e2) = P(h|b&e1) = 999/1000.

 

Well, I have my doubts about this counterexample. The problem, it seems to me, is this: e1 and e2 are treated as if they were the same piece of evidence. Let me explain. If e1 and e2 increase the probability of h, that is because they increase the probability of a further proposition:

 

g: Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery,

 

and, as it happens, g increases the probability of h. That The New York Times reports g, assuming that the Times is reliable, increases the probability of g - and the same can be said about The Washington Post reporting g. But the counterexample seems to assume that both e1 and e2 are equivalent to g, and they're not. Now, it is clear that P(h|b&g) = P(h|b&g&g), but this does not show that e2 fails to increase h's probability conditional on (b&e1). So, if e2 increases the probability of g conditional on e1 - that is, if P(g|e1&e2) > P(g|e1) - and if g increases the probability of h, then e2 also increases the probability of h. I may be missing something, but this reasoning sounds right to me - the example wouldn't be a counterexample.
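
One way to test this is to model the two reports as fallible. The sketch below is a toy model of my own - every number in it is an assumption - but it exhibits the pattern just described: a second independent report raises P(g), and with it P(h), even after the first report is in hand.

```python
# Toy model of the lottery example; all numbers are assumptions.
prior_g = 0.01              # prior that Clinton owns 999 of the 1000 tickets
p_report_if_g = 0.9         # chance a (reliable) paper reports g when g is true
p_report_if_not_g = 0.001   # chance of a hoax/erroneous report when g is false
p_h_given_g = 999 / 1000    # from b: fair lottery, he holds 999 tickets
p_h_given_not_g = 1 / 1000  # assumed: he holds one ticket otherwise

def p_g(n_reports):
    """P(g | n independent reports), reports conditionally independent given g."""
    num = prior_g * p_report_if_g ** n_reports
    den = num + (1 - prior_g) * p_report_if_not_g ** n_reports
    return num / den

def p_h(n_reports):
    g = p_g(n_reports)
    return g * p_h_given_g + (1 - g) * p_h_given_not_g

print(p_h(1))  # ~0.900  (after e1, the Times report)
print(p_h(2))  # ~0.999  (after e2 as well: e2 does raise P(h|b&e1))
```

Notice that Achinstein's equality P(h|b&e1&e2) = P(h|b&e1) = 999/1000 holds only in the limit where the first report already makes g certain - which is exactly the assumption that e1 is equivalent to g.

What do you think?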

37 comments

Comments sorted by top scores.

comment by [deleted] · 2013-05-25T18:59:42.412Z · LW(p) · GW(p)

What do you think?

I think philosophers need to spend less time trying to come up with necessary-and-sufficient definitions of English words.

Replies from: fsopho, ozziegooen
comment by fsopho · 2013-05-26T12:49:31.581Z · LW(p) · GW(p)

I agree that some philosophical searches for analyses of concepts end up generating endless, fruitless sequences of counterexamples and new definitions. However, it is not the case that whenever we try to find the truth conditions for something, we are engaged in that kind of unproductive thinking. As long as we care about what it is for something to be evidence for something else (we may care about this because we want to understand what gives support to scientific theories, etc.), it seems legitimate to look for satisfactory truth conditions for 'e is evidence that h'. Trying to make the boundaries of our concepts clear is also part of the project of optimizing our rationality.

comment by ozziegooen · 2013-05-25T19:48:37.091Z · LW(p) · GW(p)

While I find this particular redefinition a bit silly, that doesn't mean that having more succinct definitions isn't, in general, a good thing.

If the second definition of evidence were used, it would mean that "collecting evidence" would be a fundamentally different thing than it would be in the first case. In the second, "evidence" is completely relative to what is already known, and lots of new material would not be included.

So if I go out and ask people to "collect evidence", their actions should be different depending on which definition we collectively used.

In addition, the definition would lead to interesting differences in quantities. If we used the first definition, me having "lots of evidence" could mean having lots of redundant evidence for one small part of something (of course, it would also be helpful to quantify "lots", but I believe that or something similar could be done). In the second definition, I imagine the quantity would be much more useful for what I actually want.

This new definition makes "evidence" much more coupled to resulting probabilities, which in itself could be a good thing. However, it seems like an unintuitive stretch of my current understanding of the word, so rather than redefining the word, I would prefer that a qualifier be used - for example, "updating evidence" for the second definition.

Replies from: fsopho
comment by fsopho · 2013-05-26T13:03:51.896Z · LW(p) · GW(p)

Thanks, that's interesting. The exercise of thinking about how people would act to gather evidence with the two probabilistic definitions in mind gives food for thought. Specifically, I'm thinking that if we were to tell people: "Look for evidence in favor of h and, remember, evidence is that which ...", where we substitute '...' with the relevant definition of evidence, they would gather evidence in a different way from the way we naturally look for evidence for a hypothesis. The agents given that advice would have reflexive access to their own definition of evidence, and they would gather only what falls within the scope of that definition. People given the first definition could balk when looking for evidence that Obama will be involved in an airplane accident: if they find out that Obama will be on an airplane today, they thereby find evidence that Obama will be involved in an airplane accident. And given that these people would have our advice in mind, they might start to wonder whether they had been given silly advice.

comment by Vaniver · 2013-05-25T22:38:31.272Z · LW(p) · GW(p)

Your interpretation about g is correct.

The high probability interpretation is not a useful interpretation of "evidence," and there's a much easier way to discuss why: implication. P("A or ~A"|"My socks are white")=1, because P("A or ~A") is 1, and conditioning on my socks being white cannot make that less true. It is not sensible to describe the color of my socks as evidence for the truth value of "A or ~A".

The increase in probability definition is sensible, and what is used locally for Bayesian evidence.

Replies from: fsopho, jshibby
comment by fsopho · 2013-05-26T13:09:51.974Z · LW(p) · GW(p)

Thanks, Vaniver. Doesn't your example show something unsatisfactory about the High Probability interpretation as well? Given that P(A or ~A|My socks are white) > 1/2, that my socks are white would also count as evidence that A or ~A. Your point seems to suggest that the evidence and the hypothesis must have some content in common.

comment by jshibby · 2013-05-29T17:26:26.187Z · LW(p) · GW(p)

Actually, early empiricists wanted to define tautologies as just those statements that are confirmed by any evidence whatsoever. (This enables an empiricist to have a pure evidential base of only empirical events.) It doesn't sound great, but some folks like the conclusion that anything (or anything possible) is evidence for a tautology.

comment by Richard_Kennaway · 2013-05-25T19:33:43.940Z · LW(p) · GW(p)

Have you read this?

Replies from: fsopho
comment by fsopho · 2013-05-26T13:12:26.820Z · LW(p) · GW(p)

Yes I did - but thanks for the tip anyway.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-05-26T16:20:37.377Z · LW(p) · GW(p)

Well, it's a complete answer to the conundrum.

Replies from: fsopho
comment by fsopho · 2013-05-26T21:04:38.251Z · LW(p) · GW(p)

This is not a case where we have two definitions talking about two sorts of things (like sound waves versus perception of sound waves). This is a case where we have two rival mathematical definitions to account for the relation of evidential support. You seem to think that the answer to questions about disputes over distinct definitions is in that post you are referring to. I read the post, and I didn't find the answer to the question I'm interested in answering - which is not even that of deciding between two rival definitions.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-05-27T07:45:57.000Z · LW(p) · GW(p)

This is not a case where we have two definitions talking about two sorts of things (like sound waves versus perception of sound waves). This is a case where we have two rival mathematical definitions to account for the relation of evidential support.

What is this "relation of evidential support", that is a given thing in front of us? From your paraphrase of Achinstein, and the blurb of his book, it is clear that there is no such thing, any more than "sound" means something distinct from either "vibrations" or "aural perceptions". "Sound" is a word that covers both of these, and since both are generally present when we ordinarily talk of sound, the unheard falling tree appears paradoxical, leading us to grasp around for something else that "sound" must mean. "Evidence" is a word that covers both of the two definitions offered, and several others, but the fact that our use of the word does not seem to match any one of them does not mean that there must be something else in the world that is the true meaning of "evidence".

The analogy with unheard falling trees is exact.

What would you expect to accomplish by discovering whether some particular e really is "evidence" for some h, that would not be accomplished by discovering whether each of the concrete definitions is satisfied? If you know whether e is "fortitudinence" for h (increases its probability), and you know whether e is "veritescence" for h (gives a posterior probability above 1/2), what else do you want to know?

BTW, around here "fortitudinence" is generally called "Bayesian evidence" for reasons connected with Bayes theorem, but again, that's just a definition. There are reasons why that is an especially useful concept, but however strong those reasons, one is not discovering what the word "evidence" "really means".

Replies from: fsopho
comment by fsopho · 2013-05-27T14:02:39.586Z · LW(p) · GW(p)

Thanks. I would say that what we have in front of us are clear cases where someone has evidence for something else. In the example given, we have in front of us that both e1 and e2 (together with the assumption that the NYT and WP are reliable) are evidence for g. So, presumably, there is agreement among people offering truth conditions for 'e is evidence that h' about the range of cases where there is evidence - while there is no such agreement among people answering the question about the sound of the tree, because they don't agree on the range of cases where sound occurs. Otherwise, there would be no counterexamples such as the one Achinstein tried to offer. If I offer some set of truth-conditions for Fa, and one of the data that I use to explain what it is for something to be F is the range of cases where F applies, then if you present to me a case where F applies but the truth-conditions I offered are not satisfied, I will think that there is something wrong with those truth-conditions.

Trying to flesh out truth-conditions for a certain type of sentence is not the same thing as giving a definition. I'm not saying you're completely wrong on this, I just really think that this is not a merely verbal dispute. As for what I would expect to accomplish by finding the best set of truth-conditions for 'e is evidence that h': a concept that is used in law, natural science, and philosophy would then have clear boundaries, and if some charlatan offers an argument in public for some conclusion of his interest, I can argue with him that he has no evidence for his claims.

Thanks for the reference to the fortitudinence concept - I didn't know it yet.

comment by DSherron · 2013-05-29T21:39:02.841Z · LW(p) · GW(p)

Um, sorry, but seriously?! Arguing about definitions of words? This is entirely ridiculous and way below the minimum rationality that should be expected from posts on Less Wrong. Downvoted for proposing serious discussion of a topic that deserves no such thing. Since you seem sincere I'll try and give you a quick overview of the problems here, but you really need to reread the sequence "A Human's Guide to Words" to get a full picture.

First, while I have an answer to what the useful definition of evidence is (in the sense that it describes a useful feature of reality), I will refrain from pointing it out here because it is irrelevant to the topic at hand. If someone really needed the word "evidence" for some reason, including potential hypothesis-favoring-data sufficient to convince me that most people mean something very different from me by the word "evidence", I'd be willing to give up the word. After all, words don't have meanings, they're just mental paintbrush handles for someone else's brain, and if that handle paints the wrong picture then I'll use a different one.

That said, the thrust of the problem with your post is exactly the same as the definitional dispute over a tree in a forest. There is no "true" meaning of evidence, and anyone arguing about one is doing so with an intent to sneak in connotations to win an argument by appealing to basic human fallacies. Definitional Disputes are an indisputably Dark Side tactic; the person engaging in one might be honest, but if so then they are severely confused. Most people couldn't identify the difference between good and bad epistemology if it hit them in the face, and this does not make them evil, but it does make them wrong. Why would anyone care what the "true meaning" of evidence is, when they could just break down the various definitions and use each consistently and clearly? The only reason to care runs along the lines of "evidence is an important concept [hidden inference], this is the true definition, therefore this definition is important", replacing "important" with something specific to some discussion.

Only think about words as paintbrush handles, and the problem goes away. You can then start focusing on the concept behind your handle, and trying to communicate instead of win. Once you and your audience can all understand what is being said - that is, when the pictures you draw in their brain match the pictures in your head - then you're done. If you dispute anything at that point, it will be your true dispute, and it will have a resolution - either you have different priors, one or more of you is irrational, or you will walk away in agreement (or you'll run out of time - humans aren't ideal reasoners after all). Play Rationalist Taboo - what question is this post even asking, when you remove explicit reference to the word "evidence"? You can't ask questions about a concept which you can't even identify.

I feel like I've seen an increasing amount of classical Traditional Rationality bullshit on this site lately, mostly in Discussion. That could just be me starting to notice it more, but I feel like I need to make a full post about it that I can link to whenever this stuff comes up. These are basic errors, explicitly warned against in the Sequences, and the whole point of Less Wrong is supposed to be somewhere where this sort of crap is avoided. Apologies for language.

comment by DanielLC · 2013-05-25T19:01:13.121Z · LW(p) · GW(p)

In the counterexample, e1 and e2 are each evidence of h on their own, but when the other is known, they are not.

(High Probability) e is evidence that h iff P(h|e) > k.

By this definition, if e and h are independent, but h has a prior probability higher than k, then e is evidence for h. For that matter, you could get something like P(h|e) = k+ε, P(h) = 1-ε. By this definition, e is evidence of h, even though e makes h dramatically less likely.
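
For concreteness (all numbers assumed): let k = 1/2, P(h) = 0.99, and P(h|e) = 0.51. Learning e drags h from near-certainty down to barely better than a coin flip, yet P(h|e) > k still holds, so the High Probability definition counts e as evidence that h.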

Also, this means that a confession is not evidence for a crime, because you need to know the language for it to mean anything.

Replies from: pragmatist
comment by pragmatist · 2013-05-27T09:54:57.106Z · LW(p) · GW(p)

By this definition, if e and h are independent, but h has a prior probability higher than k, then e is evidence for h.

No, because in that case Achinstein's first condition won't be satisfied. If I'm reading the post right, both conditions need to be satisfied in order for e to count as evidence for h according to this definition.

Replies from: fsopho
comment by fsopho · 2013-05-27T14:05:36.220Z · LW(p) · GW(p)

Actually, Achinstein's claim is that the first one does not need to be satisfied - the probability of h does not need to be increased by e in order for e to be evidence that h. He gives up the first condition because of the counterexamples.

Replies from: pragmatist
comment by pragmatist · 2013-05-27T16:44:30.490Z · LW(p) · GW(p)

Well, duh. You're right, the post was pretty clear about this. I need to read more carefully. So does he believe that the second condition is both necessary and sufficient? That seems prone to a bunch of counterexamples also.

Replies from: fsopho
comment by fsopho · 2013-05-27T18:25:45.752Z · LW(p) · GW(p)

So, he claims that it is just a necessary condition - not a sufficient one. I didn't reach the point where he offers the further conditions that, together with high probability, are supposed to be sufficient for evidential support.

p.s: still, you earned a point for the comment =|

comment by Decius · 2013-05-25T22:27:30.629Z · LW(p) · GW(p)

I concur. Now consider the case where you observed the ticket sales, and you saw Bill buy 999 tickets (g). Then someone tells you that Bill bought 999 tickets (e1). Is e1 evidence that Bill will win the lottery?

Suppose that you saw the ticket sales, and saw someone other than Bill buy 501 out of 1000 tickets. Is there any possible evidence that Bill will win? (Assume tickets are nontransferable and other loopholes are accounted for - the odds are 501:499 against.)

Suppose that you didn't see the ticket sales, but a reliable source (The New York Times) reports that Bill bought somewhere around 500 tickets, but they don't know the exact number. Would that be evidence that Bill will win? (Assume that there are many other people eligible to buy tickets.)

I will continue using the definition "e is evidence of h iff P(h|e) > P(h)". I don't think that P(h|b) is meaningful unless P(h|~b) is also meaningful.

Replies from: fsopho
comment by fsopho · 2013-05-26T13:27:39.917Z · LW(p) · GW(p)

Thanks. Your first question is pointing to a case where the evidential support of e1 is swamped by the evidential support of g, right? It seems that, if I have g as evidence, e1 doesn't change my epistemic situation with regard to the proposition that Bill will win the lottery. So if we answer that e1 is not evidence that h in this case, we are assuming that if one piece of evidence is swamped by another, it is not evidence anymore. I wouldn't go that way (would you?), because in a situation where I didn't see Bill buying the tickets, I would still have e1 as evidence. As for the question about not knowing the exact number of tickets Bill bought, I don't know what to say besides this: it seems to be a case where Jeffrey conditionalization is welcome, given the 'uncertain' character of the evidence.

Replies from: Decius
comment by Decius · 2013-05-27T05:05:17.950Z · LW(p) · GW(p)

My first case is where g is given - you know it as well as you know anything else, including the other givens. I would not say that whether a particular fact is evidence depends on the order in which you consider them.

Do you concur that P(h|~b) - "It is 75% likely that Bill will win the lottery, given that it is not the case that this is a fair lottery in which one ticket drawn at random will win" - is not a meaningful statement?

Replies from: fsopho
comment by fsopho · 2013-05-27T14:10:36.619Z · LW(p) · GW(p)

All right, I see. I agree that order is not determinative of evidential support relations.

It seems to me that the relevant sentence is either not meaningful or false.

Replies from: Decius
comment by Decius · 2013-05-27T19:33:43.497Z · LW(p) · GW(p)

I think that we agree that neither of the definitions offered in the post are correct.

Can you see any problem with "e is evidence of h iff P(h|e) > P(h)", other than cases where evidence interacts in some complex manner such that P(h|e1) > P(h) and P(h|e2) > P(h), but P(h|e1&e2) < P(h)? (I'm not sure that is even possible, but I think it can be done with three mutually exclusive hypotheses.)

Replies from: fsopho
comment by fsopho · 2013-05-27T21:58:26.919Z · LW(p) · GW(p)

Yes, we agree on that. There is an example that fits the structure you just mentioned. Suppose that

h: I will get rid of the flu

e1: I took Fluminex

e2: I took Fluminalva

b: Fluminex and Fluminalva cancel each other's effect against flu

Now suppose that both Fluminex and Fluminalva are effective against the flu. Given this setting, P(h|b&e1) > P(h|b) and P(h|b&e2) > P(h|b), but P(h|b&e1&e2) < P(h|b). If the use of background b is bothering you, just embed the information about the canceling of effects in each of the pieces of evidence e1 and e2.
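
Here is a quick sketch checking that this structure is probabilistically consistent (all numbers are invented, and the canceling information is folded into the conditional probabilities, as suggested above):

```python
from itertools import product

# Toy model: e1 = took Fluminex, e2 = took Fluminalva, h = getting rid of the flu.
# Invented numbers: each drug is rarely taken and effective alone, but they cancel.
p_e1 = p_e2 = 0.1                   # independent chances of taking each drug
p_recover = {(False, False): 0.4,   # neither drug
             (True,  False): 0.9,   # Fluminex alone
             (False, True):  0.9,   # Fluminalva alone
             (True,  True):  0.2}   # both: effects cancel

def prob(event):
    """P(event): sum over the four drug-taking worlds and the recovery outcome."""
    total = 0.0
    for took1, took2 in product([True, False], repeat=2):
        p_world = (p_e1 if took1 else 1 - p_e1) * (p_e2 if took2 else 1 - p_e2)
        for h in (True, False):
            p_outcome = p_recover[(took1, took2)] if h else 1 - p_recover[(took1, took2)]
            if event(h, took1, took2):
                total += p_world * p_outcome
    return total

def cond(a, b):
    return prob(lambda h, e1, e2: a(h, e1, e2) and b(h, e1, e2)) / prob(b)

H = lambda h, e1, e2: h
print(prob(H))                               # P(h)       ~ 0.488
print(cond(H, lambda h, e1, e2: e1))         # P(h|e1)    = 0.83 > P(h)
print(cond(H, lambda h, e1, e2: e2))         # P(h|e2)    = 0.83 > P(h)
print(cond(H, lambda h, e1, e2: e1 and e2))  # P(h|e1&e2) = 0.2  < P(h)
```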

I see further problems with the Positive Relevance account - for example, it says that the fact that a swimmer is swimming is evidence that she will drown, just because swimming increases the probability of drowning. I see more hope for a combination of these two accounts, but one in which quantification over the background b is very important. We shouldn't require that, in order for e to be evidence that h, it has to increase the probability of h conditional on every background b.

Replies from: Decius
comment by Decius · 2013-05-27T23:35:17.401Z · LW(p) · GW(p)

I don't understand what it would mean to divorce a hypothesis h from the background b.

Suppose you have the flu (background b); there is zero chance that you don't have the flu, so P(~b)=0 and P(x&~b)=0, therefore P(x|~b)=0 (or undefined, but can be treated as zero for these purposes).

Since P(x)=P(x|b)+P(x|~b), P(x)=P(x|b). EDIT: As pointed out below, P(x)=P(x|b)P(b)+P(x|~b)P(~b). This changes nothing else. If we change the background information, we change b and are dealing with a new hypothetical universe (for example, one in which taking both Fluminex and Fluminalva increases the duration of a flu).

In that universe, you need prior beliefs about whether you are taking Fluminex and Fluminalva, (and both, if they aren't independent) as well as their effectiveness separately and together, in order to come to a conclusion.

P, h, and e are all dependent on the universe b existing, and a different universe (even one that only varies in a tiny bit of information) means a different h, even if the same words are used to describe it. Evidence exists only in the (possibly hypothetical) universe that it actually exists in.

Replies from: fsopho
comment by fsopho · 2013-05-28T12:40:34.619Z · LW(p) · GW(p)

Me neither - but I am not thinking that it is a good idea to divorce h from b.

Just a technical point: P(x) = P(x|b)P(b) + P(x|~b)P(~b)

Replies from: Decius
comment by Decius · 2013-05-29T02:10:18.679Z · LW(p) · GW(p)

Given a deck of cards shuffled and arranged in a circle, the odds of the northernmost card being the Ace of Spades should be 1/52. h = the northernmost card is the Ace of Spades (AoS).

Turning over a card at random which is neither the AoS nor the northernmost card is evidence for h.

Omega providing the true statement "The AoS is between the KoD and 5oC" is not evidence for or against, unless the card we turned over is either adjacent to the northernmost card or one of the referenced cards.

If we select another card at random, we can update again - either to 2%, 50%, 1, or 0. (2% if none of the referenced cards are shown, 50% if an adjacent card is picked and it is either the KoD or the 5oC, 1 if the northernmost card is picked and it is the AoS, and 0 if one of the referenced cards turns up where it shouldn't be.)

That seems enough proof that evidence can alter the evidential value of other evidence.
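
For what it's worth, a quick simulation (a sketch of my own; it checks only the base case, before any card has been turned over) agrees that Omega's statement by itself leaves P(h) at 1/52. It samples circular arrangements directly conditioned on the AoS sitting between the KoD and the 5oC:

```python
import random

N, TRIALS = 52, 200_000
AOS, KOD, FOC = 0, 1, 2   # card codes: Ace of Spades, King of Diamonds, 5 of Clubs
hits = 0
for _ in range(TRIALS):
    # Sample an arrangement already conditioned on Omega's statement:
    # place the AoS uniformly on the circle, put the KoD and 5oC on its
    # two sides in random order, then shuffle the other 49 cards.
    pos = [None] * N
    a = random.randrange(N)
    left, right = random.sample([KOD, FOC], 2)
    pos[a], pos[(a - 1) % N], pos[(a + 1) % N] = AOS, left, right
    rest = [c for c in range(N) if c not in (AOS, KOD, FOC)]
    random.shuffle(rest)
    empties = (i for i in range(N) if pos[i] is None)
    for i, c in zip(empties, rest):
        pos[i] = c
    hits += (pos[0] == AOS)   # position 0 is the northernmost card

print(hits / TRIALS)  # ~ 1/52 ~ 0.0192: the statement alone does not move P(h)
```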

comment by elharo · 2013-05-26T12:14:19.220Z · LW(p) · GW(p)

I think your example eviscerates the first "Increase in Probability" definition, at least as presented here, and shows that it doesn't account for non-independent evidence. If I wake up one morning and read in the New York Times that Bill Clinton has bought 99.9% of the tickets in a lottery, this strongly increases my estimate of the probability that Clinton will win the lottery. Reading the same story in the Washington Post does not materially increase my estimate, given the background information from the Times. (I suppose it slightly reduces the probability that the Times story is a hoax or mistaken. Just maybe that's relevant.) Thus by this definition the Post story is not evidence (or at least is very weak evidence) that Bill Clinton will win the lottery.

However, suppose instead I wake up and read the Post story first. This now provides strong evidence that Bill Clinton will win the lottery, and the Times story is weak evidence at best. So depending on the irrelevant detail of which story I read first, one is strong evidence and one is weak? That seems wrong. I don't want the strength of evidence to depend on the irrelevant detail of the order in which I encounter two pieces of evidence. So perhaps what's being defined here is not the quality of the evidence, but the usefulness of new evidence to me given what I already know?

Of course, evidence and probabilities - especially non-independent probabilities - are not additive.

This is not inobvious, so I notice that I am confused. I have to think that I'm misunderstanding this definition; or that there are details in the book that you're not reporting.

Replies from: Vaniver, fsopho
comment by Vaniver · 2013-05-26T17:37:31.729Z · LW(p) · GW(p)

I think your example eviscerates the first "Increase in Probability" definition, at least as presented here, and shows that it doesn't account for non-independent evidence.

There's a deep philosophical point at stake here. Is probability 1) a quantification of a person's uncertainty, or 2) a statement about the universe?

2) is a position that is not well-regarded here, and I would recommend Probability is in the Mind and then possibly Probability is Subjectively Objective as to why.

If 1), then their previous knowledge matters. It's one particular person's uncertainty. You start off very uncertain about the lottery result. Then you read the first newspaper report, which makes you much less uncertain about the lottery result. Then you read the second newspaper report, which makes you very slightly less uncertain about the lottery result. As you watch your uncertainty decrease, you see that the first report has a huge effect (and thus is strong evidence), and the second report has a small effect (and thus is weak evidence). With more background knowledge about correctness, the second report drops to 0 effect (and thus is not evidence to you).

The mathematical way to think about this is that the strength of evidence is a function of both e and b. This is necessary to ensure that there's a probability shift, and if you don't force a shift, you can have even more silly 'evidence', like sock color being evidence for logical tautologies.

One might object that they're not particularly interested in measuring their personal uncertainty, but in affecting the beliefs of others. If you wanted to convince someone else that Bill Clinton is probably going to win the lottery, it seems reasonable to be indifferent to whether they read the Times or the Post, so long as they read one. But your personal measure of evidence is wildly different between the two papers! How do we reconcile your personal measure of uncertainty, and your desire to communicate effectively?

The answer I would give is being more explicit about the background b. It's part of our function, and so let's acknowledge it. When b is "b & e1", then e2 is not significant evidence. When b is just b, e2 is significant evidence, and so if you want to convince someone else of h and their current knowledge is just b, you can be indifferent between e1 and e2 because P(h|b,e1)=P(h|b,e2).

Let's further demonstrate that with a modification to the counterexample like the one Manfred suggested. Suppose I learn e1, that the New York Times reports that Bill Clinton owns all but one of the tickets, by reading that day's copy of the Times, which I'll call r1.

Suppose my background, which I'll call t, is that the New York Times (and my perception of it) is truthful. So P(e1|t,r1)=1. I liked the prose so much, I give the newspaper a second read, which I'll call r2. Is r2 evidence for e1? Well, P(e1|t,r1,r2)=1, which is the same as P(e1|t,r1), which suggests by the "increase in probability" definition that reading the newspaper a second time is not evidence for what the newspaper says. (Note that if we relax the assumption that my reading comprehension is perfect, then reading something a second time is evidence for what I thought it said, assuming I think the same thing after the second read. If we only relax the assumption that the newspapers are perfectly correct, we don't get a change in evidence.)

Does it seem reasonable that, given perfect reading comprehension, considering the same piece of evidence twice should only move your uncertainty once? If so, what is the difference between that and the counterexample, where one newspaper says "Times" on the front and the other says "Post"?

(If you relax the assumption that the newspapers are perfectly correct, then the second newspaper is evidence by the "increase in probability" definition, because of the proposition g discussed in the OP.)
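
To put toy numbers on this (a sketch; the prior and reliability figures are assumptions): in odds form, each piece of evidence multiplies the odds of g by its likelihood ratio, and a literal re-read has a likelihood ratio of 1, since with perfect comprehension r2 is guaranteed given r1 whether or not g is true.

```python
prior_g = 0.01             # assumed prior for g
lr_report = 0.9 / 0.001    # assumed likelihood ratio of one independent report
lr_reread = 1.0            # r2 is determined by r1: P(r2|r1,g) = P(r2|r1,~g) = 1

def update(p, lr):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

after_times = update(prior_g, lr_report)
print(after_times)                      # ~0.90   after reading the Times (r1)
print(update(after_times, lr_reread))   # ~0.90   after re-reading it: no shift
print(update(after_times, lr_report))   # ~0.9999 after the Post: still a shift
```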

Replies from: fsopho
comment by fsopho · 2013-05-26T22:19:04.648Z · LW(p) · GW(p)

Right, so, one thing that is left open by both definitions is the interpretation given to the function P. Is it supposed to be interpreted as a (rational) credence function? If so, the Positive Relevance account would say that e is evidence that h when one is rational in having a higher credence in h when one has e as evidence than when one does not. For some, though, it would seem that in our case the agent who already knows b and e1 wouldn't be rational in having a higher credence that Bill will win the lottery upon learning e2.

But I think we can try to solve the problem without having to settle the interpretation-of-probability issue. One way to go, for the defender of the Positive Relevance account, would be to say that the counterexample assumes a universal quantification over the conditionalizing sentence that was not intended - one would be interpreting Positive Relevance as saying:

  • (For every background b) e is evidence that h iff P(h|e&b) > P(h|b)

But such an interpretation, the defender of Positive Relevance could say, is wrong, and it is wrong precisely because of examples like the one presented in the post. So, in order for e2 to be evidence that h, e2 does not need to increase the probability of h conditional on every conceivable background b. Specifically, it doesn't need to increase the probability of h conditional on b when b contains e1, for example. But what would the definition look like without such quantification? Well, I don't know enough about it yet (this is new to me), but I think that maybe the following would do:

  • (For every tautology b) e is evidence that h iff P(h|e&b) > P(h|b)

The new definition does not require e to increase h's probability conditional on every possible background. How does that sound?
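
(One consequence worth noting: if b is a tautology, then P(h|e&b) = P(h|e) and P(h|b) = P(h), so the revised condition reduces to the bare requirement P(h|e) > P(h) - positive relevance relative to the priors alone, with no substantive background at all.)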

Replies from: Vaniver
comment by Vaniver · 2013-05-27T02:27:26.456Z · LW(p) · GW(p)

It's not clear to me why exactly you want the definition of evidence to not rely on the particular background of the mind where the P resides.

If you limit b to tautologies, you kill its usefulness. "This is a fair lottery in which one ticket drawn at random will win" isn't a tautology.

Replies from: fsopho
comment by fsopho · 2013-05-27T14:16:17.061Z · LW(p) · GW(p)

But the claim that these are the truth conditions for evidential support relations does not mean that only tautologies can be evidence, nor that only sets of tautologies can be one's background. If you prefer, this is supposed to be a 'test' for checking whether particular bits of information are evidence for something else. So I agree that backgrounds in minds are among the things we ought to be interested in, as long as we want to say something about rationality. I just don't think that the usefulness of the test (the new truth-conditions) is killed. =]

comment by fsopho · 2013-05-26T13:40:32.856Z · LW(p) · GW(p)

So, I'll kind of second the observation in the comment above. From the fact that reading the same story in the Washington Post does not improve your epistemic situation, it does not follow that the Post story is not evidence that Bill will win the lottery. That is: from the fact that a certain piece of evidence is swamped by another piece of evidence in a certain situation, it does not follow that the former is not evidence. We can see that it is evidence by following your own steps: we conceive of another situation where I didn't read the Times story but did read the Post story - and the Post story is evidence that Bill will win the lottery in that situation.

I agree that it seems just wrong to grant that what counts as strong or weak evidence is determined by the temporal order in which we access the evidence. But from the fact that one does not gain more justification for believing h by learning e, it does not follow that e is not evidence that h, all things considered.

comment by Manfred · 2013-05-25T20:59:48.636Z · LW(p) · GW(p)

Memorizing a book might make you more knowledgeable, but memorizing the same book twice will just make you more bored.

Similarly, acquiring new information about the world will improve your probability estimates, but acquiring the same information in two different ways will just make your equations more complicated.

comment by ozziegooen · 2013-05-25T19:14:33.792Z · LW(p) · GW(p)

It seems very strange to me to look at e1 and e2 as point values without weights or confidence distributions. Taking no information about e1 and e2 other than the fact that they both indicate a 999/1000 victory is quite limiting and makes meta-analysis impossible. Other information could include what you think of both sources.

If you could get an accurate 90% confidence interval for each (or any accurate probability distribution), this could make a lot more sense. This must encompass the expected error in the New York Times's error margin, especially if they don't report one. For example, you may find that whenever they reference a statistic, 90% of the time it has at most 15% error compared to the true result (this would be really useful, by the way - someone should do this). Even if you only estimate this number, you could still get a workable value.

If their 90% confidence interval was 0% error, and their reported statistics were always exactly true, then I do not believe you would update at all from e2.

I feel like it is possible to combine two 90% confidence intervals, and my guess is that any two with the same mean would result in higher certainty than at least the worse estimate (the one with the wider 90% confidence interval), possibly higher than both. Solving this mathematically is something I'm not too sure about.
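
For what it's worth, if the two estimates are modeled as independent Gaussians, they combine by precision weighting. A sketch of that calculation (my own, and it assumes normality, so a 90% interval is the mean ± 1.645 standard deviations):

```python
Z90 = 1.645   # standard normal quantile for a 90% central interval

def combine(mean1, halfwidth1, mean2, halfwidth2):
    """Precision-weighted combination of two independent Gaussian estimates."""
    s1, s2 = halfwidth1 / Z90, halfwidth2 / Z90
    w1, w2 = 1 / s1**2, 1 / s2**2              # precisions
    mean = (w1 * mean1 + w2 * mean2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return mean, Z90 * sigma                   # combined mean and 90% half-width

# Two estimates with the same mean: the combined 90% interval (~±42) is
# narrower than either input (±50 and ±80) - higher certainty than both.
print(combine(999, 50, 999, 80))
```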

Replies from: fsopho
comment by fsopho · 2013-05-26T13:51:17.369Z · LW(p) · GW(p)

Yeah, one of the problems with the example is that it seems to take for granted that both the NYT and the WP are 100% reliable.