Deception by cherry picking

post by AlephNeil · 2011-02-20T05:11:54.770Z · LW · GW · Legacy · 4 comments

[This is an idea I've had 'kicking around' for a long time - may as well see what LW makes of it.]

The Bayesian update procedure tacitly presupposes that the event we're updating on was not itself selected for us. Indeed, naively updating on an event whose selection turns out to be correlated with the variable of interest is how people get Monty Hall wrong.
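(Here's a minimal simulation sketch of that pitfall - Python, with names of my own choosing. A naive updater treats the opened door as a random fact and concludes the two remaining doors are 50/50; conditioning on the host's door-opening algorithm gives the correct 2/3 for switching.)

```python
import random

def monty_hall_win_rate(switch, trials=100_000):
    """Estimate the probability of winning the car under a given strategy.

    The host's choice is not a random event: he always opens a door that
    hides no car and wasn't picked, so his action is correlated with the
    car's location. Naive conditioning ignores this and wrongly concludes
    the two closed doors are equally likely.
    """
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        host = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != host)
        wins += (pick == car)
    return wins / trials

print(monty_hall_win_rate(switch=True))   # ~0.667, not the naive 0.5
print(monty_hall_win_rate(switch=False))  # ~0.333
```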

A false statement can often be made to seem plausible if you naively update on a set of misleading, 'cherry-picked' facts.

To make this concrete, imagine a biased coin which we know has probability either 1/3 or 2/3 of landing heads - in fact it's 1/3, but we don't know that. Say it's tossed 2000 times. Then someone who wanted to mislead us could cherry-pick a sample of 100 tosses in which, say, 70 landed heads, and hope we assume they picked their sample randomly. (More insidiously, using Derren Brown's variety of dark arts, they could even trick us into choosing that sample ourselves, believing that we're choosing 'of our own free will'.)
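(To see how badly the naive update goes wrong, here's a toy illustration in Python - my own numbers, nothing precise about the adversary's method. With a true bias of 1/3, roughly 667 of the 2000 tosses land heads, so the adversary can always assemble a 70/30 sample; a victim who treats it as random gains about 28 nats of 'evidence' for the false hypothesis.)

```python
import math
import random

random.seed(0)
tosses = [random.random() < 1/3 for _ in range(2000)]  # true bias: 1/3

# The adversary cherry-picks 70 heads and 30 tails from the full record.
heads = [i for i, t in enumerate(tosses) if t]   # ~667 heads expected
tails = [i for i, t in enumerate(tosses) if not t]
sample = heads[:70] + tails[:30]

# Naive update, treating the sample as if it were randomly drawn:
# log-likelihood ratio for p = 2/3 versus p = 1/3, given 70 heads, 30 tails.
log_lr = 70 * math.log((2/3) / (1/3)) + 30 * math.log((1/3) / (2/3))
print(log_lr)  # = 40·ln 2 ≈ 27.7 nats in favour of the *false* hypothesis
```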

But now here's the thing: that sample of 100 probably has a high minimum description length. If it had a sufficiently low minimum description length - for instance, if it consisted of 100 contiguous tosses - then even if we suspected "Derren Brown" was trying to manipulate us, our sample would still give us evidence that heads has probability 2/3.
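(Some rough numbers behind that intuition - my own back-of-envelope arithmetic. Specifying an arbitrary 100-element subset of 2000 tosses costs about 568 bits, dwarfing the 40 bits of apparent evidence in the 70/30 sample; specifying a contiguous window costs only about 11 bits, far too few to 'pay for' the evidence.)

```python
import math

n, k = 2000, 100

# Bits needed to specify which 100 of the 2000 tosses were selected:
arbitrary_subset = math.log2(math.comb(n, k))  # ≈ 568 bits
contiguous_window = math.log2(n - k + 1)       # ≈ 10.9 bits: just a start index

# Apparent evidence (in bits) from 70 heads / 30 tails,
# for p = 2/3 versus p = 1/3:
apparent_evidence = 70 * math.log2(2) + 30 * math.log2(1/2)  # = 40 bits

print(arbitrary_subset, contiguous_window, apparent_evidence)
```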

I think there should be a theorem which looks like:

"The largest x such that we'd be irrational not to increase our subjective log-odds of event E by at least x, even if our data was provided by an adversary" = log[P(data|E) / P(data|¬E)] - "The minimum description length of the data" - O(1)

Anyone familiar with "the theory of how to update on evidence provided by adversaries" (assuming it exists)?

4 comments

Comments sorted by top scores.

comment by Eugine_Nier · 2011-02-20T05:19:43.909Z · LW(p) · GW(p)

Anyone familiar with "the theory of how to update on evidence provided by adversaries" (assuming it exists)?

Yes, the LW-jargon term for this is filtered evidence. Eliezer's post on the subject is here.

Replies from: AlephNeil
comment by AlephNeil · 2011-02-20T05:24:44.409Z · LW(p) · GW(p)

Ah - thanks for the link. (Thanks to you too, CronoDAS)

comment by CronoDAS · 2011-02-20T05:17:55.184Z · LW(p) · GW(p)

This was touched on in the Sequences, but there doesn't seem to be any particularly good resolution to the issue.

comment by cousin_it · 2011-02-20T07:58:20.175Z · LW(p) · GW(p)

Thanks for reminding me! This seems to be another contradiction in Eliezer's views that he hasn't quite resolved. Beautiful Probability:

And then there's the Bayesian reply: "Excuse you? The evidential impact of a fixed experimental method, producing the same data, depends on the researcher's private thoughts? And you have the nerve to accuse us of being 'too subjective'?"

What Evidence Filtered Evidence:

When someone says, "The 4th coinflip came up heads", we are not conditioning on the 4th coinflip having come up heads - we are not taking the subset of all possible worlds where the 4th coinflip came up heads - rather we are conditioning on the subset of all possible worlds where a speaker following some particular algorithm said "The 4th coinflip came up heads." The spoken sentence is not the fact itself; don't be led astray by the mere meanings of words.
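Here's a toy rejection-sampling sketch of that distinction (my construction, not Eliezer's). A coin has bias 1/3 or 2/3 with equal prior probability, and is flipped 10 times. An honest speaker reports the 4th flip whatever it shows; a clever arguer says "heads" whenever any flip at all came up heads. Hearing "heads" from the first speaker is real evidence; from the second, it's nearly worthless:

```python
import random

def posterior_two_thirds(speaker_says_heads, trials=200_000):
    """Estimate P(bias = 2/3 | the speaker said 'heads') by rejection
    sampling over possible worlds: keep only those worlds in which this
    particular speaker would have uttered that sentence."""
    counts = {1/3: 0, 2/3: 0}
    for _ in range(trials):
        p = random.choice([1/3, 2/3])                # 50/50 prior
        flips = [random.random() < p for _ in range(10)]
        if speaker_says_heads(flips):
            counts[p] += 1
    return counts[2/3] / (counts[1/3] + counts[2/3])

honest = lambda flips: flips[3]    # truthfully reports the 4th flip
clever = lambda flips: any(flips)  # reports a heads if one exists at all

print(posterior_two_thirds(honest))  # ~0.667: genuine evidence
print(posterior_two_thirds(clever))  # ~0.504: almost no update
```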

I think these opinions contradict each other because the stopping rule of the experiment, which Eliezer claimed to be irrelevant in the first example, may act as the "clever arguer" from the second example. I have no idea which of these two opinions is correct. In statistics, this is known as the debate about the likelihood principle.
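Here's a small simulation of that worry (again my own construction, not from either post): an adversarial experimenter flips a fair coin and stops the moment the likelihood ratio for "p = 2/3" over "p = 1/2" reaches 3:1. The stopping rule manufactures that much apparent evidence on roughly 30% of runs; the standard Bayesian reply is that the likelihood ratio is a martingale under the null, which caps the success probability at 1/3.

```python
import math
import random

def adversarial_stop(threshold=3.0, max_flips=1000):
    """Flip a fair coin, stopping as soon as the likelihood ratio for
    'p = 2/3' over 'p = 1/2' reaches the threshold. Returns True if the
    stopping rule succeeded in manufacturing that much apparent evidence."""
    log_lr = 0.0
    for _ in range(max_flips):
        if random.random() < 0.5:                  # heads (the coin is fair)
            log_lr += math.log((2/3) / (1/2))
        else:                                      # tails
            log_lr += math.log((1/3) / (1/2))
        if log_lr >= math.log(threshold):
            return True
    return False

runs = 10_000
hits = sum(adversarial_stop() for _ in range(runs))
print(hits / runs)  # ≈ 0.3, and provably ≤ 1/3 by the martingale property
```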