How do Bayesians tell what does and doesn't count as evidence (which, e.g., hypotheses may render more or less probable if true)? Is it possible for something to fuzzily-be evidence?

post by LVSN · 2021-11-30T12:52:37.929Z · LW · GW · 3 comments

This is a question post.


"The basic idea underlying most uses of Bayes’ theorem is that a hypothesis is supported by any evidence which is rendered (either sufficiently or simply) probable by the truth of that hypothesis."
— Bayesian Argumentation: The Practical Side of Probability

Answers

answer by Richard_Kennaway · 2021-11-30T15:29:10.010Z · LW(p) · GW(p)

For the pure, ideal Bayesian, everything is "evidence". Given the probabilities that you currently assign to all possible statements about the world, when you observe that some statement P is true, you update all your probabilities in accordance with the mathematical rules.

If I then ask, "suppose I don't observe that P is true, only something suggesting that P is likely true?", the answer is that in that case I did not observe P. I observed something else, Q. It is then the truth of Q that I should use to update my probabilities for P and everything else.
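
To make this concrete, here is a minimal sketch in Python (the prior and likelihoods are invented for illustration): we condition on Q, the thing actually observed, and P's probability moves as a side effect.

```python
# Update P(P) after observing Q, by Bayes' theorem.
# All numbers are illustrative, not from any real problem.

def posterior_p_given_q(prior_p, p_q_given_p, p_q_given_not_p):
    """P(P | Q): condition on the observation Q, not on P itself."""
    joint_p = prior_p * p_q_given_p
    joint_not_p = (1 - prior_p) * p_q_given_not_p
    return joint_p / (joint_p + joint_not_p)

# Q merely suggests P: it is four times as likely if P is true as if P is false.
print(posterior_p_given_q(prior_p=0.5, p_q_given_p=0.8, p_q_given_not_p=0.2))
# -> 0.8: we never observed P directly, yet its probability rose.
```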

comment by Charlie Steiner · 2021-12-01T03:41:31.687Z · LW(p) · GW(p)

To elaborate rather than adding another answer:

For more human purposes, we can't treat everything as evidence because it's really hard to know what the implications are of every piece of raw data - even if those implications are perfectly deterministic, we haven't got the brain power to figure them all out. And we can sort of turn this argument around to see that when we can figure out some implications of a piece of raw data, then we can treat it as evidence.

And so in everyday life we do the same thing we do when solving all other sorts of problems - we use heuristics to judge when we think we can make interesting use of some information, and then we apply our limited brain power to the task. This can lend some fuzziness to our interpretations, but it's probably better to be careful and say that the fuzziness is in how we're treating things as evidence, not an inherent property of the data (which means that different people might see different levels of fuzz in the same situation).

Replies from: TAG
comment by TAG · 2021-12-01T15:26:13.173Z · LW(p) · GW(p)

It's not just a case of any two agents having fuzzy approximations to the same worldview. In the least convenient case, agents will start off with radically different beliefs, and those beliefs will affect both what they consider to be evidence and how they interpret it. So there is no reason for agents in the least convenient case to ever converge.

Aumann's theorem assumes the most convenient case.

Replies from: JBlack
comment by JBlack · 2021-12-02T00:29:43.328Z · LW(p) · GW(p)

Aumann's theorem assumes rational agents. Such agents consider every observation to be evidence, and update the probability of every hypothesis in the distribution appropriately. That includes agents who start with radically different beliefs, because for rational agents "belief" is just a distribution over possible hypotheses.
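
As a toy sketch of that ideal (the hypotheses and numbers are invented for illustration): a "belief" is just a weighting over hypotheses, and every observation re-weights all of them.

```python
# Toy ideal-Bayesian agent: a belief is a distribution over hypotheses,
# and every observation re-weights every hypothesis. Numbers are illustrative.

beliefs = {
    "fair coin":         {"weight": 0.5, "p_heads": 0.5},
    "heads-biased coin": {"weight": 0.3, "p_heads": 0.8},
    "tails-biased coin": {"weight": 0.2, "p_heads": 0.2},
}

def update_on_flip(beliefs, heads):
    """Multiply each hypothesis by the likelihood of the flip, then renormalize."""
    for h in beliefs.values():
        h["weight"] *= h["p_heads"] if heads else 1 - h["p_heads"]
    total = sum(h["weight"] for h in beliefs.values())
    for h in beliefs.values():
        h["weight"] /= total

update_on_flip(beliefs, heads=True)  # one observation touches every hypothesis
```

With three hypotheses the loop is trivial; the difficulty below is about what happens at realistic scale.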

The problem is that each hypothesis is a massively multidimensional model, and no real person can even properly fit one in their mind. There is no hope whatsoever that anyone can accurately update weightings over an enormous number of hypotheses on every observation.

So we live in an even less convenient world than the "least convenient case" that was proposed. Nobody in the real world is rational in the sense of Aumann's theorem. Not even a superintelligent AGI ever will be, because the space of all possible hypotheses about the world is always enormously more complex than the actual world, and the actual world is more complex than any given agent in it.

answer by TAG · 2021-12-01T15:45:24.549Z · LW(p) · GW(p)

Realistic Bayesians can't treat "everything" as evidence, any more than they can consider every hypothesis.

answer by ChristianKl · 2021-12-01T12:22:18.207Z · LW(p) · GW(p)

Evaluating information as evidence is a skill, and it's learned with practice. Being good at most skills isn't about simply following a set of explicit rules but about learning to execute the skill through real-world practice.

One good practice for that is making forecasts about how likely certain future events are.
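
A minimal sketch of how such practice can be scored (the forecasts below are invented), using the Brier score, where lower means better calibration:

```python
# Score a list of forecasts with the Brier score. Illustrative numbers only.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and what happened."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability you assigned, what actually happened: 1 = yes, 0 = no)
my_forecasts = [(0.9, 1), (0.7, 0), (0.2, 0)]
print(brier_score(my_forecasts))  # -> 0.18
```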

3 comments

comment by Pattern · 2021-12-06T01:47:25.567Z · LW(p) · GW(p)

It's possible for something to be evidence without you being sure of which way it points. Suppose there is a room with two people within: a liar, who always lies, and another who always tells the truth. After you figure out who is who, past info from each may be interpreted.
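
A small sketch of this (the setup is idealized: the honest one always tells the truth, the liar always asserts the negation). The same claim carries opposite evidential weight depending on who made it.

```python
# Two informants: one always truthful, one always lies about yes/no claims.
# Idealized probabilities for illustration.

def p_claim_true(p_speaker_truthful):
    """P(claim is true), given the probability the speaker is the truth-teller."""
    # The truth-teller's claims are certainly true; the liar's are certainly false.
    return p_speaker_truthful * 1.0 + (1 - p_speaker_truthful) * 0.0

print(p_claim_true(0.5))  # 0.5: unsure who spoke, so the claim points nowhere yet
print(p_claim_true(1.0))  # 1.0: it was the truth-teller; the old claim now counts for it
print(p_claim_true(0.0))  # 0.0: it was the liar; the very same claim now counts against it
```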

Replies from: LVSN
comment by LVSN · 2021-12-06T16:30:36.966Z · LW(p) · GW(p)

Seems like "evidence" is a terrible word for the concept! "Data" is better, though "sensory data" is even less misleading while a bit clunkier, and "the set of propositions safely taken for granted" is the least misleading and the clunkiest.

Additionally: imagine the evidence appeared very quickly, and was about an emotionally charged subject. People might misremember the evidence as being one thing when it was actually something similar, but still different, and perhaps critically different. Shouldn't it be regarded as an extremely important Bayesian skill to correctly interpret and remember your experiences, since they will be used to set the correct amount of confidence in explanations?

comment by Pattern · 2021-12-14T02:02:36.693Z · LW(p) · GW(p)
"The basic idea underlying most uses of Bayes’ theorem

In line with that, the obvious answer is, sort of:

Information/occurrences which make the hypothesis more or less likely.
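
A sketch of the standard Bayesian version of that criterion (not spelled out in the thread in this form): E bears on H exactly when E is more or less likely under H than under its negation.

```python
# Likelihood-ratio test for whether E is evidence about H. Illustrative only.

def direction_of_evidence(p_e_given_h, p_e_given_not_h):
    ratio = p_e_given_h / p_e_given_not_h  # the likelihood ratio
    if ratio > 1:
        return "E supports H"
    if ratio < 1:
        return "E counts against H"
    return "E is not evidence about H"

print(direction_of_evidence(0.8, 0.2))  # -> "E supports H"
```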