Bayesian Flame

post by cousin_it · 2009-07-26T16:49:51.120Z · LW · GW · Legacy · 163 comments


There once lived a great man named E.T. Jaynes. He knew that Bayesian inference is the only way to do statistics logically and consistently, standing on the shoulders of misunderstood giants Laplace and Gibbs. On numerous occasions he vanquished traditional "frequentist" statisticians with his superior math, demonstrating to anyone with half a brain how the Bayesian way gives faster and more correct results in each example. The weight of evidence falls so heavily on one side that it makes no sense to argue anymore. The fight is over. Bayes wins. The universe runs on Bayes-structure.

Or at least that's what you believe if you learned this stuff from Overcoming Bias.

Like I was until two days ago, when Cyan hit me over the head with something utterly incomprehensible. I suddenly had to go out and understand this stuff, not just believe it. (The original intention, if I remember it correctly, was to impress you all by pulling a Jaynes.) Now I've come back and intend to provoke a full-on flame war on the topic. Because if we can have thoughtful flame wars about gender but not math, we're a bad community. Bad, bad community.

If you're like me two days ago, you kinda "understand" what Bayesians do: assume a prior probability distribution over hypotheses, use evidence to morph it into a posterior distribution over same, and bless the resulting numbers as your "degrees of belief". But chances are that you have a very vague idea of what frequentists do, apart from deriving half-assed results with their ad hoc tools.

Well, here's the ultra-short version: frequentist statistics is the art of drawing true conclusions about the real world instead of assuming prior degrees of belief and coherently adjusting them to avoid Dutch books.

And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be true to fact afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever. No Bayesian method known today can reliably do the same: the outcome will depend on the priors you assume for each parameter. I don't believe you're going to get lucky with all 100. And even if I believed you a priori (ahem) that don't make it true.

(That's what Jaynes did to achieve his awesome victories: use trained intuition to pick good priors by hand on a per-sample basis. Maybe you can learn this skill somewhere, but not from the Intuitive Explanation.)

How in the world do you do inference without a prior? Well, the characterization of frequentist statistics as "trickery" is totally justified: it has no single coherent approach and the tricks often give conflicting results. Most everybody agrees that you can't do better than Bayes if you have a clear-cut prior; but if you don't, no one is going to kick you out. We sympathize with your predicament and will gladly sell you some twisted technology!

Confidence intervals: imagine you somehow process some sample data to get an interval. Further imagine that, for any given hidden parameter value, this calculation algorithm applied to data sampled under that parameter value yields an interval that covers it with probability 90%. Believe it or not, this perverse trick works 90% of the time without requiring any prior distribution on parameter values.
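To make the trick concrete, here's a minimal simulation sketch (my own illustration, not part of the original argument). The true means below follow no distribution at all, just an arbitrary deterministic sequence, yet the intervals still cover about 90% of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 25, 10_000
half = 1.6449 / np.sqrt(n)  # half-width of a 90% interval, known sigma = 1

hits = 0
for t in range(trials):
    mu = (-1) ** t * (t % 17) * 10.0       # arbitrary "hidden" parameter: no prior anywhere
    xbar = rng.normal(mu, 1.0, size=n).mean()
    hits += (xbar - half <= mu <= xbar + half)

print(hits / trials)  # ~0.90 in the long run, whatever the parameter sequence
```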

Unbiased estimators: you process the sample data to get a number whose expectation magically coincides with the true parameter value.
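Again a small sketch of mine: averaging the estimator over many repeated samples recovers the true parameter, with the familiar n-1 divisor making the sample variance unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 3.0, 2.0, 5, 200_000

data = rng.normal(mu, sigma, size=(trials, n))
print(data.mean(axis=1).mean())         # ~3.0: E[sample mean] = true mean
print(data.var(axis=1, ddof=1).mean())  # ~4.0: E[sample variance] = sigma^2, thanks to ddof=1
```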

Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism allows you to ~~call me a liar and be wrong no more than 10% of the time~~ reject truthful claims no more than 10% of the time, guaranteed, no prior in sight. (Thanks Eliezer for calling out the mistake, and conchis for the correction!)
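One more sketch of mine, a z-test at the 10% level against a truthfully claimed N(0,1) box:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials, z_crit = 100, 10_000, 1.6449   # two-sided test at the 10% level

rejections = 0
for _ in range(trials):
    x = rng.normal(0.0, 1.0, size=n)   # the claim "this box is N(0,1)" happens to be true
    z = x.mean() * np.sqrt(n)          # test statistic under the claimed formula
    rejections += (abs(z) > z_crit)    # calling the claimant a liar

print(rejections / trials)  # ~0.10: truthful claims rejected about 10% of the time
```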

But this is getting too academic. I ought to throw you dry wood, good flame material. This hilarious PDF from Andrew Gelman should do the trick. Choice quote:

Well, let me tell you something. The 50 states aren't exchangeable. I've lived in a few of them and visited nearly all the others, and calling them exchangeable is just silly. Calling it a hierarchical or multilevel model doesn't change things - it's an additional level of modeling that I'd rather not do. Call me old-fashioned, but I'd rather let the data speak without applying a probability distribution to something like the 50 states which are neither random nor a sample.

As a bonus, the bibliography to that article contains such marvelous titles as "Why Isn't Everyone a Bayesian?" And Larry Wasserman's followup is also quite disturbing.

Another stick for the fire is provided by Shalizi, who (among other things) makes the correct point that a good Bayesian must never be uncertain about the probability of any future event. That's why he calls Bayesians "Often Wrong, Never In Doubt":

The Bayesian, by definition, believes in a joint distribution of the random sequence X and of the hypothesis M. (Otherwise, Bayes's rule makes no sense.) This means that by integrating over M, we get an unconditional, marginal probability for f.

For my final quote it seems only fair to add one more polemical summary of Cyan's point that made me sit up and look around in a bewildered manner. Credit to Wasserman again:

Pennypacker: You see, physics has really advanced. All those quantities I estimated have now been measured to great precision. Of those thousands of 95 percent intervals, only 3 percent contained the true values! They concluded I was a fraud.

van Nostrand: Pennypacker you fool. I never said those intervals would contain the truth 95 percent of the time. I guaranteed coherence not coverage!

Pennypacker: A lot of good that did me. I should have gone to that objective Bayesian statistician. At least he cares about the frequentist properties of his procedures.

van Nostrand: Well I'm sorry you feel that way Pennypacker. But I can't be responsible for your incoherent colleagues. I've had enough now. Be on your way.

There's often good reason to advocate a correct theory over a wrong one. But all this evidence (ahem) shows that switching to Guardian of Truth mode was, at the very least, premature for me. Bayes isn't the correct theory to make conclusions about the world. As of today, we have no coherent theory for making conclusions about the world. Both perspectives have serious problems. So do yourself a favor and switch to truth-seeker mode.

163 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-26T17:35:47.832Z · LW(p) · GW(p)

Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism often allows you to call me a liar and be wrong no more than 10% of the time, guaranteed, no priors in sight.

Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|"false") ~ 1.

I'm thinking you still haven't quite understood here what frequentist statistics do.

It's not perfectly reliable, either: frequentists assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)

A Bayesian who wants to report something at least as reliable as a frequentist statistic simply reports a likelihood ratio between two or more hypotheses from the evidence; and in that moment has told another Bayesian just what frequentists think they have perfect knowledge of, but with far less confusion, error, mathematical chicanery, and opportunity for distortion, and with greater ability to combine the results of multiple experiments.

And more importantly, we understand what likelihood ratios are, and that they do not become posteriors without adding a prior somewhere.

Replies from: cousin_it
comment by cousin_it · 2009-07-26T17:45:50.678Z · LW(p) · GW(p)

Thanks for the catch, struck out that part.

Yes, you can get your priors from the same source they get experimental setups: the world. Except this source doesn't provide priors.

ETA: likelihood ratios don't seem to communicate the same info about the world as confidence intervals to me. Can you clarify?

Replies from: conchis
comment by conchis · 2009-07-26T19:54:57.265Z · LW(p) · GW(p)

Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|"false") ~ 1.

Ok, bear with me. cousin_it's claim was that P(wrong|boxes-obey-formulas)<=.1, am I right? I get that P(wrong|"false" & boxes-obey-formulas) ~ 1, so the denial of cousin_it's claim seems to require P("false"|boxes-obey-formulas) > .1? I assumed that the point was precisely that the frequentist procedure will give you P("false"|boxes-obey-formulas)<=.1. Is that wrong?

Replies from: cousin_it
comment by cousin_it · 2009-07-26T21:58:57.123Z · LW(p) · GW(p)

My claim was what Eliezer said, and it was incorrect. Other than that, your comment is correct.

Replies from: conchis
comment by conchis · 2009-07-26T22:17:36.321Z · LW(p) · GW(p)

Ah, I parsed it wrongly. Whoops. Would it be worth replacing it with a corrected claim rather than just striking it?

Replies from: cousin_it
comment by cousin_it · 2009-07-26T22:42:06.684Z · LW(p) · GW(p)

Done. Thanks for the help!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-26T17:39:09.255Z · LW(p) · GW(p)

a good Bayesian must never be uncertain about the probability of any future event

Who? Whaa? Your probability is your uncertainty.

Replies from: orthonormal, marks
comment by orthonormal · 2009-07-26T20:21:36.594Z · LW(p) · GW(p)

Also, didn't we already cover metauncertainty here?

Replies from: Cyan, Nick_Tarleton
comment by Cyan · 2009-07-26T21:24:53.587Z · LW(p) · GW(p)

Yup. Shalizi's point is that once you've taken meta-uncertainty into account (by marginalizing over it), you have a precise and specific probability distribution over outcomes.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-26T21:36:14.873Z · LW(p) · GW(p)

Well, yes. You have to bet at some odds. You're in some particular state of uncertainty and not a different one. I suppose the game is to make people think that being in some particular state of uncertainty corresponds to claiming to know too much about the problem? The ignorance is shown in the instability of the estimate - the way it reacts strongly to new evidence.

Replies from: Cyan
comment by Cyan · 2009-07-26T22:35:19.577Z · LW(p) · GW(p)

I'm with you on this one. What Shalizi is criticizing is essentially a consequence of the desideratum that a single real number shall represent the plausibility of an event. I don't think the methods he's advocating dispense with the desideratum, so I view this as a delicious bullet-shaped candy that he's convinced is a real bullet and is attempting to dodge.

comment by Nick_Tarleton · 2009-07-26T20:29:33.938Z · LW(p) · GW(p)

Shalizi says "Bayesian agents never have the kind of uncertainty that Rebonato (sensibly) thinks people in finance should have". My guess is that this means (something that could be described as) uncertainty as to how well-calibrated one is, which AFAIK hasn't been explicitly covered here.

comment by marks · 2009-07-28T07:06:50.406Z · LW(p) · GW(p)

I think what Shalizi means is that a Bayesian model is never "wrong", in the sense that it is a true description of the current state of the ideal Bayesian agent's knowledge. I.e., if A says an event X has probability p, and B says X has probability q, then they aren't lying even if p!=q. And the ideal Bayesian agent updates that knowledge perfectly by Bayes' rule (where knowledge is defined as probability distributions of states of the world). In this case, if A and B talk with each other then they should probably update, of course.

In frequentist statistics the paradigm is that one searches for the 'true' model by looking through a space of 'false' models. In this case if A says X has probability p and B says X has probability q != p then at least one of them is wrong.

comment by Rune · 2009-07-26T17:55:03.202Z · LW(p) · GW(p)

Can you give a detailed numerical example of some problem where the Bayesian and Frequentist give different answers, and you feel strongly that the Frequentist's answer is better somehow?

I think you've tried to do that, but I don't fully understand most of your examples. Perhaps if you used numbers and equations, that would help a lot of people understand your point. Maybe expand on your "And here's an ultra-short example of what frequentists can do" idea?

Replies from: cousin_it
comment by cousin_it · 2009-07-26T19:41:54.376Z · LW(p) · GW(p)

Short answer: Bayesian answers don't give coverage guarantees.

Long answer: see the comments to Cyan's post.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-26T19:58:58.631Z · LW(p) · GW(p)

"Coverage guarantees" is a frequentist concept. Can you explain where Bayesians fail by Bayesian lights? In the real world, somewhere?

Replies from: Cyan, cousin_it
comment by Cyan · 2009-07-26T22:24:01.795Z · LW(p) · GW(p)

How about this: a Bayesian will always predict that she is perfectly calibrated, even though she knows the theorems proving she isn't.

Replies from: Eliezer_Yudkowsky, wedrifid, cousin_it
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-26T23:56:58.609Z · LW(p) · GW(p)

A Bayesian will have a probability distribution over possible outcomes, some of which give her lower scores than her probabilistic expectation of average score, and some of which give her higher scores than this expectation.

I am unable to parse your above claim, and ask for specific math on a specific example. If you know your score will be lower than you expect, you should lower your expectation. If you know something will happen less often than the probability you assign, you should assign a lower probability. This sounds like an inconsistent epistemic state for a Bayesian to be in.

Replies from: Cyan
comment by Cyan · 2009-07-29T02:32:24.249Z · LW(p) · GW(p)

I spent some time looking up papers, trying to find accessible ones. The main paper that kicked off the matching prior program is Welch and Peers, 1963, but you need access to JSTOR.

The best I can offer is the following example. I am estimating a large number of positive estimands. I have one noisy observation for each one; the noise is Gaussian with standard deviation equal to one. I have no information relating the estimands; per Jaynes, I give them independent priors, resulting in independent posteriors*. I do not have information justifying a proper prior. Let's say I use a flat prior over the positive real line. No matter the true value of each estimand, the sampling probability of the event "my posterior 90% quantile is greater than the estimand" is less than 0.9 (see Figure 6 of this working paper by D.A.S. Fraser). So the more estimands I analyze, the more sure I am that the intervals from 0 to my posterior 90% quantiles will contain less than 90% of the estimands.

I don't know if there's an exact matching prior in this problem, but I suspect it lacks the correct structure.

* This is a place I think Jaynes goes wrong: the quantities are best modeled as exchangeable, not independent. Equivalently, I put them in a hierarchical model. But this only kicks the problem of priors guaranteeing calibration up a level.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-29T04:22:55.317Z · LW(p) · GW(p)

I'm sorry, but the level of frequentist gibberish in this paper is larger than I would really like to work through.

If you could be so kind, please state:

What the Bayesian is using as a prior and likelihood function;

and what distribution the paper assumes the actual parameters are being drawn from, and what the real causal process is governing the appearance of evidence.

If the two don't match, then of course the Bayesian posterior distributions, relative to the experimenter's higher knowledge, can appear poorly calibrated.

If the two do match, then the Bayesian should be well-calibrated. Sure looks QED-ish to me.

Replies from: Cyan
comment by Cyan · 2009-07-29T05:08:56.356Z · LW(p) · GW(p)

The example doesn't come from the paper; I made it myself. You only need to believe the figure I cited -- don't bother with the rest of the paper.

Call the estimands mu_1 to mu_n; the data are x_1 to x_n. The prior over the mu parameters is flat in the positive subset of R^n, zero elsewhere. The sampling distribution for x_i is Normal(mu_i,1). I don't know the distribution the parameters actually follow. The causal process is irrelevant -- I'll stipulate that the sampling distribution is known exactly.

Call the 90% quantiles of my posterior distributions q_i. From the sampling perspective, these are random quantities, being monotonic functions of the data. Their sampling distributions satisfy the inequality Pr(q_i > mu_i | mu_i) < 0.9. (This is what the figure I cited shows.) As n goes to infinity, I become more and more sure that my posterior intervals of the form (0, q_i] are undercalibrated.

You might cite the improper prior as the source of the problem. However, if the parameter space were unrestricted and the prior flat over all of R^n, the posterior intervals would be correctly calibrated.
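A sketch of this calculation (mine, assuming the model exactly as stated): the posterior is N(x, 1) truncated to mu > 0, so its 90% quantile has the closed form q = x + Phi^{-1}(0.9 + 0.1 Phi(-x)), and the sampling probability Pr(q > mu | mu) can be estimated by simulation for any true mu; compare the output with the nominal 0.9.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def q90(x):
    # 90% quantile of N(x, 1) truncated to mu > 0 (flat prior on the positive reals)
    return x + norm.ppf(0.9 + 0.1 * norm.cdf(-x))

for mu_true in (0.1, 1.0, 3.0):
    x = rng.normal(mu_true, 1.0, size=200_000)  # one noisy observation per estimand
    print(mu_true, np.mean(q90(x) > mu_true))   # coverage of (0, q]; nominal is 0.9
```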

But it really is fair to demand a proper prior. How could we determine that prior? Only by Bayesian updating from some pre-prior state of information to the prior state of information (or equivalently, by logical deduction, provided that the knowledge we update on is certain). Right away we run into the problem that Bayesian updating does not have calibration guarantees in general (and for this, you really ought to read the literature), so it's likely that any proper prior we might justify does not have a calibration guarantee.

comment by wedrifid · 2009-07-27T13:04:01.756Z · LW(p) · GW(p)

How about this: a Bayesian will always predict that she is perfectly calibrated, even though she knows the theorems proving she isn't.

Wanna bet? Literally. Have a Bayesian make a whole bunch of predictions and then offer her bets with payoffs based on the apparent calibration the results will reflect. See which bets she accepts and which she refuses.

Replies from: Cyan
comment by Cyan · 2009-07-27T13:22:43.911Z · LW(p) · GW(p)

Are you volunteering?

Replies from: wedrifid
comment by wedrifid · 2009-07-27T13:43:55.696Z · LW(p) · GW(p)

Sure. :)

But let me warn you... I actually predict my calibration to be pretty darn awful.

Replies from: Cyan
comment by Cyan · 2009-07-27T15:00:29.208Z · LW(p) · GW(p)

We need a trusted third party.

Replies from: wedrifid, cousin_it
comment by wedrifid · 2009-07-27T15:23:27.819Z · LW(p) · GW(p)

Find a candidate.

I was about to suggest we could just bet raw ego points by publicly posting here... but then I realised I'd prove my point just by playing.

It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump. There are few cases in which I would expect my calibration to be superior to what you could predict with complete knowledge of the distribution.

Replies from: Cyan, cousin_it
comment by Cyan · 2009-07-27T15:33:34.305Z · LW(p) · GW(p)

It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump.

Phooey. There goes plan A.

Replies from: wedrifid
comment by wedrifid · 2009-07-27T15:56:39.260Z · LW(p) · GW(p)

;)

Replies from: Cyan
comment by Cyan · 2009-07-27T16:11:02.181Z · LW(p) · GW(p)

Plan B involves trying to use some nasty posterior inconsistency results, so don't think you're out of the woods yet.

Replies from: wedrifid
comment by wedrifid · 2009-07-27T16:40:58.471Z · LW(p) · GW(p)

I am convinced in full generality that being offered the option of a bet can only provide utility >= 0. So if the punch line is 'insufficiently constrained rationality' then yes, the joke is on me!

And yes, I suspect trying to get my head around that paper would (will) be rather costly! I'm a goddam programmer. :P

comment by cousin_it · 2009-07-27T15:25:16.509Z · LW(p) · GW(p)

I volunteer, if y'all tell me what to do.

comment by cousin_it · 2009-07-27T15:14:55.506Z · LW(p) · GW(p)

I volunteer.

comment by cousin_it · 2009-07-27T09:52:44.385Z · LW(p) · GW(p)

I think this is incorrect. A Bayesian doesn't predict a variance of zero on their calibration calculated ten samples later.

comment by cousin_it · 2009-07-26T20:47:09.974Z · LW(p) · GW(p)

Of course not. If you choose to care only about the things Bayes can give you, it's a mathematical fact that you can't do better.

Replies from: wedrifid
comment by wedrifid · 2009-07-26T21:22:19.342Z · LW(p) · GW(p)

I didn't like the "by Bayesian lights" phrase either. What I take as the relevant part of the question is this:

Can you provide an example of a frequentist concept that can be used to make predictions in the real world for which a bayesian prediction will fail?

"Bayesian answers don't give coverage guarantees" doesn't demonstrate anything by itself. The question is could the application of Bayes give a prediction equal to or superior to the prediction about the real world implicit in a coverage guarantee?

If you can provide such an example then you will have proved many people to be wrong in a significant, fundamental way. But I haven't seen anything in this thread or in either of Cyan's which fits that category.

Replies from: cousin_it
comment by cousin_it · 2009-07-26T21:32:16.304Z · LW(p) · GW(p)

Once again: the real-world performance (as opposed to internal coherence) of the Bayesian method on any given problem depends on the prior you choose for that problem. If you have a well-calibrated prior, Bayes gives well-calibrated results equal or superior to any frequentist methods. If you don't, science knows no general way to invent a prior that will reliably yield results superior to anything at all, not just frequentist methods. For example, Jaynes spent a large part of his life searching for a method to create uninformative priors with maxent, but maxent still doesn't guarantee you anything beyond "cross your fingers".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-26T21:33:43.509Z · LW(p) · GW(p)

If your prior is screwed up enough, you'll also misunderstand the experimental setup and the likelihood ratios. Frequentism depends on prior knowledge just as much as Bayesianism, it just doesn't have a good formal way of treating it.

Replies from: cousin_it
comment by cousin_it · 2009-07-27T06:34:02.106Z · LW(p) · GW(p)

I give you some numbers taken from a normal distribution with unknown mean and variance. If you're a frequentist, your honest estimate of the mean will be the sample mean. If you're a Bayesian, it will be some number off to the side, depending on whatever bullshit prior you managed to glean from my words above - and you don't have the option of skipping that step, and don't have the option of devising a prior that will always exactly match the frequentist conclusion, because math doesn't allow it in the general case. (I kinda equivocate on "honest estimate", but refusing to ever give point estimates doesn't speak well of a mathematician anyway.) So nah, Bayesianism depends on priors more, not "just as much".

If tomorrow Bayesians find a good formalization of "uninformative prior" and a general formula to devise them, you'll happily discard your old bullshit prior and go with the flow, thus admitting that your careful analysis of my words about "unknown normal distribution" today wasn't relevant at all. This is the most fishy part IMO.

(Disclaimer: I am not a crazy-convinced frequentist. I'm a newbie trying to get good answers out of Bayesians, and some of the answers already given in these threads satisfy me perfectly well.)

Replies from: Cyan, wedrifid
comment by Cyan · 2009-07-27T06:57:19.474Z · LW(p) · GW(p)

The normal distribution with unknown mean and variance was a bad choice for this example. It's the one case where everyone agrees what the uninformative prior is. (It's flat with respect to the mean and the log-variance.) This uninformative prior is also a matching prior -- posterior intervals are confidence intervals.
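A sketch of the matching property (mine, for this specific case): with the prior flat in the mean and log-variance, the marginal posterior for the mean is a Student-t centered at the sample mean, so the 90% posterior interval coincides with the classical t confidence interval and inherits its coverage.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(4)
n, trials, conf = 5, 20_000, 0.90
tcrit = t.ppf(0.5 + conf / 2, df=n - 1)

hits = 0
for _ in range(trials):
    mu, sigma = rng.normal(0, 10), rng.lognormal()  # arbitrary true values
    x = rng.normal(mu, sigma, size=n)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)       # posterior interval == classical t-interval
    hits += (abs(x.mean() - mu) <= half)

print(hits / trials)  # ~0.90
```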

Replies from: cousin_it, cousin_it
comment by cousin_it · 2009-07-27T07:27:33.380Z · LW(p) · GW(p)

I didn't know that was possible, thanks. (Wow, a prior with integral=infinity! One that can't be reached as a posterior after any observation! How'd a Bayesian come by that? But seems to work regardless.) What would be a better example?

ETA: I believe the point raised in that comment still deserves an answer from Bayesians.

Replies from: wedrifid, Erik, prase
comment by wedrifid · 2009-07-27T12:55:57.160Z · LW(p) · GW(p)

ETA: I believe the point raised in that comment still deserves an answer from Bayesians.

Done, but I think a more useful reply could be given if you provided an actual worked example where a frequentist tool leads you to make a different prediction than the application of Bayes would (and where you prefer the frequentist prediction.) Something with numbers in it and with the frequentist prediction provided.

Replies from: Cyan
comment by Cyan · 2009-07-27T14:42:58.461Z · LW(p) · GW(p)

Here's one. There is one data point, distributed according to 0.5*N(0,1) + 0.5*N(mu,1).

Bayes: any improper prior for mu yields an improper posterior (because there's a 50% chance that the data are not informative about mu). Any proper prior has no calibration guarantee.

Frequentist: Neyman's confidence belt construction guarantees valid confidence coverage of the resulting interval. If the datum is close to 0, the interval may be the whole real line. This is just what we want [claims the frequentist, not me!]; after all, when the datum is close to 0, mu really could be anything.
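To see why the whole real line can be the honest answer, one can inspect the likelihood for a single datum (a sketch of mine under the stated model): it never drops below half its maximum, so no mu is ever decisively ruled out, and for a datum near 0 it is almost flat in mu.

```python
import numpy as np
from scipy.stats import norm

def lik(mu, x):
    # likelihood of mu for one datum x under 0.5*N(0,1) + 0.5*N(mu,1)
    return 0.5 * norm.pdf(x) + 0.5 * norm.pdf(x - mu)

mus = np.linspace(-10, 10, 9)
for x in (0.1, 3.0):
    print(x, np.round(lik(mus, x), 4))
# For x = 0.1 the likelihood ratio between any two mu values is at most ~2:
# the datum says almost nothing about mu. For x = 3, distant mu values are
# strongly disfavored relative to mu ~ 3, though never ruled out entirely.
```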

Replies from: PhilGoetz, wedrifid
comment by PhilGoetz · 2009-08-04T17:30:16.395Z · LW(p) · GW(p)

Can you explain the terms "calibration guarantee", and what "the resulting interval" is? Also, I don't understand why you say there is a 50% chance the data is not informative about mu. This is not a multi-modal distribution; it is blended from N(0,1) and N(mu,1). If mu can be any positive or negative number, then the one data point will tell you whether mu is positive or negative with probability 1.

Replies from: Cyan
comment by Cyan · 2009-08-04T19:55:02.544Z · LW(p) · GW(p)

Can you explain the terms "calibration guarantee"...

By "calibration guarantee" I mean valid confidence coverage: if I give a number of intervals at a stated confidence, then relative frequency with which the estimated quantities fall within the interval is guaranteed to approach the stated confidence as the number of estimated quantities grows. Here we might imagine a large number of mu parameters and one datum per parameter.

... and what "the resulting interval" is?

Not easily. The second cousin of this post (a reply to wedrifid) contains a link to a paper on arXiv that gives a bare-bones overview of how confidence intervals can be constructed on page 3. When you've got that far I can tell you what interval I have in mind.

Also, I don't understand why you say there is a 50% chance the data is not informative about mu. This is not a multi-modal distribution; it is blended from N(0,1) and N(mu,1).

I think there's been a misunderstanding somewhere. Let Z be a fair coin toss. If it comes up heads the datum is generated from N(0,1); if it comes up tails, the datum is generated from N(mu,1). Z is unobserved and mu is unknown. The probability distribution of the datum is as stated above. It will be multimodal if the absolute value of mu is greater than 2 (according to some quick plots I made; I did not do a mathematical proof).

If mu can be any positive or negative number, then the one data point will tell you whether mu is positive or negative with probability 1.

If I observe the datum 0.1, is mu greater than or less than 0?

comment by wedrifid · 2009-07-29T19:37:31.861Z · LW(p) · GW(p)

Thanks Cyan.

I'll get back to you when (and if) I've had time to get my head around Neyman's confidence belt construction, with which I've never had cause to acquaint myself.

Replies from: Cyan
comment by Cyan · 2009-07-29T20:46:59.022Z · LW(p) · GW(p)

This paper has a good explanation. Note that I've left one of the steps (the "ordering" that determines inclusion into the confidence belt) undetermined. I'll tell you the ordering I have in mind if you get to the point of wanting to ask me.

Replies from: wedrifid
comment by wedrifid · 2009-07-29T23:33:14.304Z · LW(p) · GW(p)

That's a lot of integration to get my head around.

Replies from: Cyan
comment by Cyan · 2009-07-30T00:05:02.584Z · LW(p) · GW(p)

All you need is page 3 (especially the figure). If you understand that in depth, then I can tell you what the confidence belt for my problem above looks like. Then I can give you a simulation algorithm and you can play around and see exactly how confidence intervals work and what they can give you.

comment by Erik · 2009-07-27T12:39:47.509Z · LW(p) · GW(p)

It's called an improper prior. There's been some argument about their use but they seldom lead to problems. The posteriors usually have much better behavior at infinity, and when they don't, that's the theory telling us that the information doesn't determine the solution to the problem.

The observation that an improper prior cannot be obtained as a posterior distribution is kind of trivial. It is meant to represent a total lack of information w.r.t. some parameter. As soon as you have made an observation you have more information than that.

comment by prase · 2009-07-27T15:26:28.649Z · LW(p) · GW(p)

Maybe the difference lies in the format of answers?

  • We know: set of n outputs of a random number generator with normal distribution. Say {3.2, 4.5, 8.1}.
  • We don't know: mean m and variance v.
  • Your proposed answer: m = 5.26, v = 6.44.
  • A Bayesian's answer: a probability distribution P(m) of the mean and another distribution Q(v) of the variance.

How does a frequentist get them? If he doesn't have them, what's his confidence in m = 5.26 and v = 6.44? What if the set contains only one number - what is the frequentist's estimate for v? Note that a Bayesian has no problem even if the data set is empty; he simply rests on his priors. If the data set is large, the Bayesian's answer will inevitably converge to a delta function around the frequentist's estimate, no matter what the priors are.

Replies from: cousin_it
comment by cousin_it · 2009-07-27T15:36:43.240Z · LW(p) · GW(p)

http://www.xuru.org/st/DS.asp

50% confidence interval for mean: 4.07 to 6.46, stddev: 2.15 to 4.74

90% confidence interval for mean: 0.98 to 9.55, stddev: 1.46 to 11.20

If there's only one sample, the calculation fails due to division by n-1 = 0, so the frequentist says "no answer". The Bayesian says the same if he used the improper prior Cyan mentioned.
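For the curious, those numbers are reproducible with the standard t-interval for the mean and chi-square interval for the standard deviation; a sketch (assuming that's what the linked calculator does):

```python
import numpy as np
from scipy.stats import t, chi2

x = np.array([3.2, 4.5, 8.1])
n, m, s = len(x), x.mean(), x.std(ddof=1)

for conf in (0.50, 0.90):
    tc = t.ppf(0.5 + conf / 2, df=n - 1)
    print(conf, "mean:", m - tc * s / np.sqrt(n), m + tc * s / np.sqrt(n))
    lo = s * np.sqrt((n - 1) / chi2.ppf(0.5 + conf / 2, df=n - 1))
    hi = s * np.sqrt((n - 1) / chi2.ppf(0.5 - conf / 2, df=n - 1))
    print(conf, "stddev:", lo, hi)
# Matches the intervals above up to rounding.
```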

Replies from: prase, Cyan
comment by prase · 2009-07-27T15:59:26.630Z · LW(p) · GW(p)

Hm, should I understand it that the frequentist assumes a normal distribution of the mean value, peaked at the estimated 5.26?

If so, then frequentism = Bayes + flat prior.

Improper priors are, however, quite tricky; they may lead to paradoxes such as the two-envelope paradox.

Replies from: cousin_it, cousin_it
comment by cousin_it · 2009-07-27T16:02:42.308Z · LW(p) · GW(p)

The prior for variance that matches the frequentist conclusion isn't flat. And even if it were, a flat prior for variance implies a non-flat prior for standard deviation and vice versa. :-)

Replies from: prase
comment by prase · 2009-07-27T16:48:39.821Z · LW(p) · GW(p)

Of course, I meant a flat distribution of the mean. The variance, at least, cannot be negative.

comment by cousin_it · 2009-07-27T16:02:19.195Z · LW(p) · GW(p)

In this problem, yes. In the general case no one knows exactly what the flat prior is, e.g. if there are constraints on model parameters.

comment by Cyan · 2009-07-27T15:46:27.465Z · LW(p) · GW(p)

Using the flat improper prior I was talking about before, when there's only one data point the posterior distribution is improper, so the Bayesian answer is the same as the frequentist's.

comment by cousin_it · 2009-07-27T07:19:52.089Z · LW(p) · GW(p)

Yep, I know that. Woohoo, an improper prior!

comment by wedrifid · 2009-07-27T12:48:28.183Z · LW(p) · GW(p)

I give you some numbers taken from a normal distribution with unknown mean and variance. If you're a frequentist, your honest estimate of the mean will be the sample mean. If you're a Bayesian, it will be some number off to the side, depending on whatever bullshit prior you managed to glean from my words above - and you don't have the option of skipping that step, and don't have the option of devising a prior that will always exactly match the frequentist conclusion, because math doesn't allow it in the general case. (I kinda equivocate on "honest estimate", but refusing to ever give point estimates doesn't speak well of a mathematician anyway.) So nah, Bayesianism depends on priors more, not "just as much".

A Bayesian does not have the option of 'just skipping that step' and choosing to accept whichever prior was mandated by Fisher (or whichever other statistician created or insisted upon the use of the particular tool in question). It does not follow that the Bayesian is relying on 'bullshit' more than the frequentist. In fact, when I use the label 'bullshit' I usually mean 'the use of authority or social power mechanisms in lieu of or in direct defiance of reason'. I obviously apply 'bullshit prior' to the frequentist option in this case.

Replies from: cousin_it, orthonormal
comment by cousin_it · 2009-07-27T14:25:13.944Z · LW(p) · GW(p)

A Bayesian does not have the option of 'just skipping that step' and choosing to accept whichever prior was mandated by Fisher

Why in the world doesn't a Bayesian have that option? I thought you were a free people. :-) How'd you decide to reject those priors in favor of other ones, anyway? As far as I currently understand, there's no universally accepted mathematical way to pick the best prior for every given problem and no psychologically coherent way to pick it out of your head either, because it ain't there. In addition to that, here's some anecdotal evidence: I never ever heard of a Bayesian agent accepting or rejecting a prior.

Replies from: wedrifid
comment by wedrifid · 2009-07-27T14:56:50.973Z · LW(p) · GW(p)

That was a partial quote and partial paraphrase of the claim made by cousin_it (hang on, that's you! huh?). I thought that the "we are a free people and can use the frequentist implicit priors whenever they happen to be the best available" claim had been made more than enough times so I left off that nitpick and focussed on my core gripe with the post in question. That is, the suggestion that using priors because tradition tells you to makes them less 'bullshit'.

I think your inclusion of 'just' allows for the possibility that of all possible configurations of prior probabilities the frequentist one so happens to be the one worth choosing.

I never ever heard of a Bayesian agent accepting or rejecting a prior.

I'm confused. What do you mean by accepting or rejecting a prior?

Replies from: cousin_it
comment by cousin_it · 2009-07-27T15:07:42.569Z · LW(p) · GW(p)

Funny as it is, I don't contradict myself. A Bayesian doesn't have the option of skipping the prior altogether, but does have the option of picking priors with frequentist justifications, which option you call "bullshit", though for the life of me I can't tell how you can tell.

Frequentists have valid reasons for their procedures besides tradition: the procedures can be shown to always work, in a certain sense. On the other hand, I know of no Bayesian-prior-generating procedure that can be shown to work in this sense or any other sense.

I'm confused. What do you mean by accepting or rejecting a prior?

Some priors are very bad. If a Bayesian somehow ends up with such a prior, they're SOL because they have no notion of rejecting priors.

Replies from: wedrifid, janos
comment by wedrifid · 2009-07-27T17:14:30.294Z · LW(p) · GW(p)

Some priors are very bad. If a Bayesian somehow ends up with such a prior, they're SOL because they have no notion of rejecting priors.

There are two priors for A that a Bayesian is unable to update from: p(A) = 0 and p(A) = 1. If a Bayesian ever assigns p(A) = 0 or p(A) = 1 and is mistaken, then they fail at life. No second chances. Shalizi's hypothetical agent started with the absolute (and insane) belief that the distribution was not a mix of the two Gaussians in question. That did not change through the application of Bayes' rule.

Bayesians cannot reject a prior of 0. They can 'reject' a prior of "That's definitely not going to happen. But if I am faced with overwhelming evidence then I may change my mind a bit." They just wouldn't write that state as p=0, or imply it through excluding it from a simplified model, without being willing to review the model for sanity afterward.
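A toy sketch (mine) of the p = 0 trap: under Bayes' rule a hypothesis with prior exactly 0 keeps posterior exactly 0, no matter how much evidence piles up.

```python
def update(prior, likelihoods):
    # one step of Bayes' rule over a discrete hypothesis space
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Hypotheses: A = "coin is two-headed", B = "coin is fair". P(A) starts at 0.
belief = [0.0, 1.0]
for _ in range(20):                      # twenty heads in a row
    belief = update(belief, [1.0, 0.5])  # P(heads|A) = 1, P(heads|B) = 0.5
print(belief)  # [0.0, 1.0] -- no amount of evidence moves a prior of exactly 0
```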

comment by janos · 2009-07-27T15:48:23.172Z · LW(p) · GW(p)

I am trying to understand the examples on that page, but they seem strange; shouldn't there be a model with parameters, and a prior distribution for those parameters? I don't understand the inferences. Can someone explain?

Replies from: cousin_it
comment by cousin_it · 2009-07-27T15:52:58.468Z · LW(p) · GW(p)

Well, the first example is a model with a single parameter. Roughly speaking, the Bayesian initially believes that the true model is either a Gaussian around 1, or a Gaussian around -1. The actual distribution is a mix of those two, so the Bayesian has no chance of ever arriving at the truth (the prior for the truth is zero), instead becoming over time more and more comically overconfident in one of the initial preposterous beliefs.
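A simulation sketch of that example (my own numbers): the log posterior odds between the two Gaussians reduce to a cumulative sum of 2x, a zero-drift random walk under the true mixture, so they wander off to huge magnitudes instead of settling down.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
# True generator: an even mixture of N(-1, 1) and N(1, 1).
x = rng.normal(np.where(rng.random(n) < 0.5, -1.0, 1.0), 1.0)

# Hypothesis space: exactly N(1,1) or exactly N(-1,1), prior odds 1:1.
# log p(x | N(1,1)) - log p(x | N(-1,1)) = 2x for unit variance.
log_odds = np.cumsum(2 * x)
for i in (100, 10_000, 100_000):
    print(i, log_odds[i - 1])  # typically grows like sqrt(n): comical overconfidence
```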

comment by orthonormal · 2009-07-27T19:17:32.641Z · LW(p) · GW(p)

Vocabulary nitpick: I believe you wrote "in luew of" in lieu of "in lieu of".

Sorry, couldn't help it. IAWYC, anyhow.

Replies from: wedrifid
comment by wedrifid · 2009-07-27T20:11:59.735Z · LW(p) · GW(p)

Damn that word and its excessive vowels!

comment by Cyan · 2009-07-26T22:15:19.097Z · LW(p) · GW(p)

I didn't mean to rehabilitate frequentism! I only meant to point out that calibration is a frequentist optimality criterion, and one that Bayesian posterior intervals can be proved not to have in general. I view this as a bullet to be bitten, not dodged.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-26T22:23:04.055Z · LW(p) · GW(p)

It's out of your hands now. Overcoming Bayes!

comment by PhilGoetz · 2009-07-27T16:14:08.475Z · LW(p) · GW(p)

Can someone do something I've never seen anyone do - lay out a simple example in which the Bayesian and frequentist approaches give different answers?

Replies from: marks, Cyan
comment by marks · 2009-07-28T06:27:26.400Z · LW(p) · GW(p)

I've had some training in Bayesian and Frequentist statistics and I think I know enough to say that it would be difficult to give a "simple" and satisfying example. The reason is that if one is dealing with finite-dimensional statistical models (this is where the parameter space of the model is finite-dimensional) and one has chosen a prior for those parameters such that there is non-zero weight on the true values, then the Bernstein-von Mises theorem guarantees that the Bayesian posterior distribution and the maximum likelihood estimate converge to the same probability distribution (although you may need to use improper priors). This covers cases where we consider finite outcomes such as a toss of a coin or a roll of a die.
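A toy illustration of that convergence in the simplest finite case (my sketch, with an arbitrarily chosen prior): coin flips under a Beta prior, where the posterior and the maximum likelihood estimate merge as data accumulates.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(6)
p_true = 0.3
flips = rng.random(10_000) < p_true

for n in (10, 100, 10_000):
    heads = int(flips[:n].sum())
    post = beta(2 + heads, 5 + n - heads)  # posterior under a (deliberately skewed) Beta(2,5) prior
    print(n, heads / n, post.mean(), post.interval(0.95))
# As n grows the posterior concentrates around the MLE: the prior washes out,
# as Bernstein-von Mises promises when the true value has non-zero prior weight.
```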

I apologize if that's too much jargon, but for really simple models that are easy to specify you tend to get the same answer. Bayesian stats starts to behave differently from frequentist statistics in noticeable ways when you consider infinite outcome spaces. An example here might be where you are considering probability distributions over curves (this arises in my research on speech recognition). In this case, even with a seemingly sensible prior, you can end up, in the limit of infinite data, with a posterior distribution that is different from the true distribution.

In practice, if I am learning a Gaussian Mixture Model for speech curves and I don't have much data, then Bayesian procedures tend to be a bit more robust, while frequentist procedures end up over-fitting (or being somewhat random). When I start getting more data, frequentist methods tend to be algorithmically more tractable and get better results. So I end up with faster computation time and, say on the task of phoneme recognition, fewer errors.

I'm sorry if I haven't explained it well; the difference in performance wasn't really evident to me until I spent some time actually using them in machine learning. Unfortunately, most of the disadvantages of Bayesian approaches aren't evident for simple statistical problems, but they become all too evident in the case of complex statistical models.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-04T17:22:22.996Z · LW(p) · GW(p)

Thanks much!

and one has chosen a prior for those parameters such that there is non-zero weight on the true values then the Bernstein-von Mises theorem guarantees that the Bayesian posterior distribution and the maximum likelihood estimate converge to the same probability distribution (although you may need to use improper priors)

What do "non-zero weight" and "improper priors" mean?

EDIT: Improper priors mean priors that don't sum to one. I would guess "non-zero weight" means "non-zero probability". But then I would wonder why anyone would introduce the term "weight". Perhaps "weight" is the term you use to express a value from a probability density function that is not itself a probability.

Replies from: marks
comment by marks · 2009-08-05T05:42:21.267Z · LW(p) · GW(p)

No problem.

Improper priors are generally only considered in the case of continuous distributions so 'sum' is probably not the right term, integrate is usually used.

I used the term 'weight' to signify an integral because of how I usually intuit probability measures. Say you have a random variable X that takes values in the real line; the probability that it takes a value in some subset S of the real line would be the integral over S with respect to the given probability measure.

There's a good discussion of this way of viewing probability distributions in the wikipedia article. There's also a fantastic textbook on the subject that really has made a world of difference for me mathematically.

comment by Cyan · 2009-07-27T16:17:09.666Z · LW(p) · GW(p)

How about this?

comment by RolfAndreassen · 2009-07-26T22:07:36.286Z · LW(p) · GW(p)

I had another thought on the subject. Consider flipping a coin; a Bayesian says that the 50% estimate of getting tails is just your own inability to predict with sufficient accuracy; a frequentist says that the 50% is a property of the coin - or to be less straw-making about it, a property of large sets of indistinguishable coin-flips. So, ok, in principle you could build a coin-predictor and remove the uncertainty. But now consider an electron passing through a beam splitter. Here there is no method even in principle of predicting which Everett branch you find yourself in. (Given some reasonable assumptions about locality and such.) The coin has hidden variables like the precise location of your thumb and the exact force your muscles apply to it; if you were smart enough, you could tease a prediction out of them. But an electron has no such hidden properties. Is it not reasonable, then, to say that the 50% chance really is a property of the electron, and not the predictor?

Replies from: pengvado, GuySrinivasan, prase
comment by pengvado · 2009-07-27T07:39:49.870Z · LW(p) · GW(p)

The relevant property of the electron+beamsplitter(+everything else) system is that its wavefunction will be evenly split between the two Everett branches. No chance involved. 50% is how much I care about each branch.

And after performing the experiment but before looking at the result, I can continue using the same reasoning: "I have already decohered, but whatever deterministic decision algorithm I apply now will return the same answer in both branches, so I can and should optimize both outcomes at once." Or I can switch to indexical uncertainty: "I am uncertain about which instance I am, even though I know the state of the universe with certainty." These two methods should be equivalent.

If we ever do find some nondeterministic physical law, then you can have your probability as a fundamental property of particles. Maybe. I'm not sure how one would experimentally distinguish "one stochastic world" from "branch both ways" or from "secure pseudo-random number generator" in the absence of any interference pattern to have a precise theory of; but I'm not going to speculate here about what physicists can or can't learn.

comment by GuySrinivasan · 2009-07-27T07:16:19.813Z · LW(p) · GW(p)

I believe the answer to this question is currently "we don't know". But notice that "the electron" doesn't exist, it's a pattern ("just" a pattern? :)) in the wavefunction. A pattern which happens to occur in lots of places, so we call it an electron.

My intuition, IANAP, is that if anything it is more natural to say the 50% belongs somehow to which branch you find yourself in, not the pattern in the wavefunction we call an electron.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2009-07-27T23:44:09.989Z · LW(p) · GW(p)

Ok, but I don't think that matters for the question of frequentist versus Bayesian. You're still saying that the 50% is a property of something other than your own uncertainty.

Moving the problem to indexical uncertainty seems to me to rely on moving the question in time; you can only do this after you've done the experiment but before you've looked at the measurement. This feels to me like asking a different question.

comment by prase · 2009-07-27T14:14:29.460Z · LW(p) · GW(p)

Finally, the electron is found at some certain polarisation. You just don't know which before actually doing the experiment (same as for the coin), and you can't in principle make any observation which tells you the result with more certainty in advance (at least according to the present model of physics - don't forget that non-local hidden variables are not ruled out), whereas for the coin you can. So, the difference is that the future of a classical system can be predicted with unlimited certainty from its present state, while for a quantum system not so. This doesn't necessarily mean that the future is not determined. One can adopt the viewpoint (I think it was even suggested on OB/LW in Eliezer's posts about timeless physics) that the future is symmetric to the past - it exists in the whole history of the universe, and if we don't know it now, it's our ignorance. I suppose you would agree that not knowing about the electron's past is a matter of our ignorance rather than a property of the electron itself, without regard to whether we are able to calculate it from presently available information, even in principle (i.e. using present theories).

I also think that it has little merit to engage in discussions about terminology, and this one tends in that direction. Practically there's no difference between saying that quantum probabilities are "properties of the system" or "of the predictor". Either we can predict, or not, and that's all that matters. Beware of the clause "in principle", as it often only obscures the debate.

Edit: to formulate it a little bit differently, predictability is an instance of regularity in the universe, i.e. our ability to compress the data of the whole history of the universe into some brief set of laws and a possibly not so brief set of initial conditions - nevertheless a much smaller amount of information than the history of the universe recorded at each point and each instant. As we do not have this huge pack of information and thus can't say to what extent it is compressible, we use theories that are based largely on induction, which itself is a particular bias. We don't even know whether the theories we use apply at any time and place, or for any system universally. Frequentists seem to distinguish this uncertainty - which they largely ignore in practice - from uncertainty as a property of the system. So, as I understand the state of affairs, a frequentist is satisfied with a theory (which is a compression algorithm applicable to the information about the universe) which includes calling a random number generator on some occasions (e.g. when dealing with dice or electrons), and such induced uncertainty he calls a "property of the system". On the other hand, the uncertainty about the theory itself is a different kind of "meta-uncertainty".

The Bayesian approach seems to me more elegant (and Occam-razor friendly) as it doesn't introduce different sorts of uncertainties. It also fits better with the view of physical laws as compression algorithms, as it doesn't distinguish between data and theories with regard to their uncertainty. One may just accept that the history of the universe needn't be compressible to data available at the moment, and use induction to estimate future states of the world in the same way as one estimates the limits of validity of presently formulated physical laws.

comment by AlexaKhan · 2009-07-28T17:49:51.573Z · LW(p) · GW(p)

That's what Jaynes did to achieve his awesome victories: use trained intuition to pick good priors by hand on a per-sample basis.

... as if applying the classical method doesn't require using trained intuition to use the "right" method for a particular kind of problem, which amounts to choosing a prior but doing it implicitly rather than explicitly ...

Our inference is conditional on our assumptions [for example, the prior P(Lambda)]. Critics view such priors as a difficulty because they are 'subjective', but I don't see how it could be otherwise. How can one perform inference without making assumptions? I believe that it is of great value that Bayesian methods force one to make these tacit assumptions explicit.

MacKay, Information Theory, Inference, and Learning Algorithms

Replies from: cousin_it
comment by cousin_it · 2009-08-04T11:57:18.094Z · LW(p) · GW(p)

Frequentist methods often have mathematical justifications, so Bayesian priors should have them too.

comment by janos · 2009-07-27T16:56:36.640Z · LW(p) · GW(p)

Since we're discussing (among other things) noninformative priors, I'd like to ask: does anyone know of a decent (noninformative) prior for the space of stationary, bidirectionally infinite sequences of 0s and 1s?

Of course in any practical inference problem it would be pointless to consider the infinite joint distribution, and you'd only need to consider what happens for a finite chunk of bits, i.e. a higher-order Markov process, described by a bunch of parameters (probabilities) which would need to satisfy some linear inequalities. So it's easy to find a prior for the space of mth-order Markov processes on {0,1}; but these obvious (uniform) priors aren't coherent with each other.

I suppose it's possible to normalize these priors so that they're coherent, but that seems to result in much ugliness. I just wonder if there's a more elegant solution.

Replies from: marks
comment by marks · 2009-07-28T06:40:29.620Z · LW(p) · GW(p)

I suppose it depends what you want to do, first I would point out that the set is in a bijection with the real numbers (think of two simple injections and then use Cantor–Bernstein–Schroeder), so you can use any prior over the real numbers. The fact that you want to look at infinite sequences of 0s and 1s seems to imply that you are considering a specific type of problem that would demand a very particular meaning of 'non-informative prior'. What I mean by that is that any 'noninformative prior' usually incorporates some kind of invariance: e.g. a uniform prior on [0,1] for a Bernoulli distribution is invariant with respect to the true value being anywhere in the interval.

Replies from: janos
comment by janos · 2009-07-28T15:42:44.452Z · LW(p) · GW(p)

The purpose would be to predict regularities in a "language", e.g. to try to achieve decent data compression in a way similar to other Markov-chain-based approaches. In terms of properties, I can't think of any nontrivial ones, except the usual important one that the prior assign nonzero probability to every open set; mainly I'm just trying to find something that I can imagine computing with.

It's true that there exists a bijection between this space and the real numbers, but it doesn't seem like a very natural one, though it does work (it's measurable, etc). I'll have to think about that one.

Replies from: marks
comment by marks · 2009-07-29T04:11:17.605Z · LW(p) · GW(p)

What topology are you putting on this set?

I made the point about the real numbers because it shows that putting a non-informative prior on the infinite bidirectional sequences should be at least as hard as for the real numbers (which is non-trivial).

Usually a regularity is defined in terms of a particular computational model, so if you picked Turing machines (or the variant that works with bidirectional infinite tape, which is basically the same class as infinite tape in one direction), then you could instead begin constructing your prior in terms of Turing machines. I don't know if that helps any.

Replies from: janos
comment by janos · 2009-07-29T06:04:34.689Z · LW(p) · GW(p)

Each element of the set is characterized by a bunch of probabilities; for example there is p_01101, which is the probability that elements x_{i+1} through x_{i+5} are 01101, for any i. I was thinking of using the topology induced by these maps (i.e. generated by preimages of open sets under them).

How is putting a noninformative prior on the reals hard? With the usual required invariance, the uniform (improper) prior does the job. I don't mind having the prior be improper here either, and as I said I don't know what invariance I should want; I can't think of many interesting group actions that apply. Though of course 0 and 1 should be treated symmetrically; but that's trivial to arrange.

I guess you're right that regularities can be described more generally with computational models; but I expect them to be harder to deal with than this (relatively) simple, noncomputational (though stochastic) model. I'm not looking for regularities among the models, so I'm not sure how a computational model would help me.

Replies from: cousin_it, marks
comment by cousin_it · 2009-07-29T07:33:12.810Z · LW(p) · GW(p)

Something about this discussion reminds me of a hilarious text:

Now having no reason to otherwise, I decided to assign each of the 64 sequences a prior probability of 1/64 of occurring. Now, of course, You may think otherwise but that is Your business and not My concern. (I, as a Bayesian, have a tendency to capitalise pronouns but I don't care what You think. Strictly speaking, as a new convert to subjectivist philosophy, I don't even care whether you are a Bayesian. In fact it is a bit of mystery as to why we Bayesians want to convert anybody. But then "We" is in any case a meaningless concept. There is only I and I don't care whether this digression has confused You.) I then set about acquiring some experience with the coin. Now as De Finetti (vol 1 p141) points out, "experience, since experience is nothing more than the acquisition of further information - acts always and only in the way we have just described: suppressing the alternatives that turn out to be no longer possible..." (His italics)

Now of the 64 sequences, 32 end in a head. Therefore, before tossing the coin my prevision of the 6th toss was 32/64. I tossed the coin once and it came up heads. I thus immediately suppressed 32 alternative sequences beginning with a tail (which clearly hadn't occurred) leaving 32 beginning with a head of which 16 ended with a head. Thus my prevision for the 6th toss was now 16/32. (Of course, for a single toss the number of heads can only be 0 or 1 but THINK prevision is not prediction anymore than perversion is predilection.) I then tossed the coin and it came up heads. This immediately eliminated 16 sequences, leaving 16 beginning with 2 heads, 8 of which ended in a head. My prevision of the 6th toss was thus 8/16. I carried on like this, obtaining a head on each of the next three goes and amending my prevision to 4/8, 2/4 and 1/2 which is where I then was after the 5th toss having obtained 5 heads in a row.

The moral of this story seems to be: assume priors over generators, not over sequences. A noninformative prior over the reals will never learn that the digit after 0100 is more likely to be 1, no matter how much data you feed it.
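
To see the difference run, here is a minimal sketch of the 6-toss setup from the quote: a uniform prior over the 64 sequences never moves off 1/2, while a uniform prior over coin biases -- i.e. over generators -- gives Laplace's rule of succession and does learn.

```python
from itertools import product

def predictive_uniform_sequences(observed, length=6):
    """Uniform prior over all 2^length H/T sequences: keep the sequences
    consistent with the observed prefix and see how many continue with H."""
    survivors = [s for s in product("HT", repeat=length)
                 if s[:len(observed)] == tuple(observed)]
    return sum(s[len(observed)] == "H" for s in survivors) / len(survivors)

def predictive_rule_of_succession(observed):
    """Uniform (Beta(1,1)) prior over coin biases: Laplace's rule."""
    return (observed.count("H") + 1) / (len(observed) + 2)

for data in ["", "H", "HH", "HHH", "HHHH"]:
    print(repr(data),
          predictive_uniform_sequences(data),             # stays at 0.5
          round(predictive_rule_of_succession(data), 3))  # climbs toward 1
```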

Replies from: janos
comment by janos · 2009-08-04T14:31:12.310Z · LW(p) · GW(p)

Right, that is a good piece. But I'm afraid I was unclear. (Sorry if I was.) I'm looking for a prior over stationary sequences of digits, not just sequences. I guess the adjective "stationary" can be interpreted in two compatible ways: either I'm talking about sequences such that for every possible string w the proportion of substrings of length |w| that are equal to w, among all substrings of length |w|, tends to a limit as you consider more and more substrings (either extending forward or backward in the sequence); this would not quite be a prior over generators, and isn't what I meant.

The cleaner thing I could have meant (and did) is the collection of stationary sequence-valued random variables, each of which (up to isomorphism) is completely described by the probabilities p_w of a string of length |w| coming up as w. These, then, are generators.

Replies from: cousin_it
comment by cousin_it · 2009-08-07T12:11:08.276Z · LW(p) · GW(p)

Janos, I spent some days parsing your request and it's quite complex. Cosma Shalizi's thesis and algorithm seem to address your problem in a frequentist manner, but I can't yet work out any good Bayesian solution.

comment by marks · 2009-08-05T06:00:25.696Z · LW(p) · GW(p)

One issue with, say, taking a normal distribution and letting the variance go to infinity (which is the improper prior I normally use) is that the posterior distribution is going to have a finite mean, which may not be a desired property of the resulting distribution.

You're right that there's no essential reason to relate things back to the reals, I was just using that to illustrate the difficulty.

I was thinking about this a little over the last few days and it occurred to me that one model for what you are discussing might actually be an infinite graphical model. The entries of the infinite bidirectional sequence here are the values of Bernoulli-distributed random variables. Probably the most interesting case for you would be a Markov random field, as the stochastic 'patterns' you were discussing may be described in terms of dependencies between random variables.

Here are three papers I read a little while back on (and related to) something called the Indian Buffet Process: (http://www.cs.utah.edu/~hal/docs/daume08ihfrm.pdf) (http://cocosci.berkeley.edu/tom/papers/ibptr.pdf) (http://www.cs.man.ac.uk/~mtitsias/papers/nips07.pdf)

These may not quite be what you are looking for, since they deal with a bound on the extent of the interactions; you probably want to think about probability distributions over binary matrices with an infinite number of rows and columns (which would correspond to an adjacency matrix over an infinite graph).
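
For the curious, the IBP's generative story is short enough to sketch. This is just the standard "restaurant" construction described in the linked papers, with arbitrary parameter values; nothing here is specific to janos's problem.

```python
import numpy as np

def sample_ibp(num_customers, alpha, seed=0):
    """One draw from the Indian Buffet Process prior over binary matrices:
    customer n takes each existing dish k with probability m_k / n, then
    samples Poisson(alpha / n) brand-new dishes."""
    rng = np.random.default_rng(seed)
    dish_counts = []                  # m_k: how many customers took dish k
    rows = []
    for n in range(1, num_customers + 1):
        row = [k for k, m in enumerate(dish_counts) if rng.random() < m / n]
        for k in row:
            dish_counts[k] += 1
        new_dishes = rng.poisson(alpha / n)
        row += range(len(dish_counts), len(dish_counts) + new_dishes)
        dish_counts += [1] * new_dishes
        rows.append(row)
    return rows                       # row n lists the dishes customer n took

print(sample_ibp(6, alpha=2.0))
```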

comment by RolfAndreassen · 2009-07-26T18:39:28.576Z · LW(p) · GW(p)

Perhaps we can try an experiment? We have here, apparently, both Bayesians and frequentists; or at a minimum, people knowledgeable enough to be able to apply both methods. Suppose I generate 25 data points from some distribution whose nature I do not disclose, and ask for estimates of the true mean and standard deviation, from a Bayesian and a frequentist? The underlying analysis would also be welcome. If necessary we could extend this to 100 sets of data points, ask for 95% confidence intervals, and see if the methods are well calibrated. (This does probably require some better method of transferring data than blog comments, though.)

As a start, here is one data set:

617.91 16.8539 83.4021 141.504 545.112 215.863 553.168 414.435 4.71129 609.623 117.189 -102.648 647.449 283.57 286.838 710.811 505.826 79.3366 171.816 105.332 540.313 429.298 -314.32 255.93 382.471

It is possible that this task does not have sufficient difficulty to distinguish between the approaches. If so, how can we add constraints to get different answers?
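
For concreteness, here is one standard frequentist baseline for the data above, assuming roughly normal errors (the t-interval is only as trustworthy as that assumption, which is exactly the kind of thing this experiment would probe):

```python
import math

data = [617.91, 16.8539, 83.4021, 141.504, 545.112, 215.863, 553.168,
        414.435, 4.71129, 609.623, 117.189, -102.648, 647.449, 283.57,
        286.838, 710.811, 505.826, 79.3366, 171.816, 105.332, 540.313,
        429.298, -314.32, 255.93, 382.471]

n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample sd
se = sd / math.sqrt(n)                # standard error of the mean
t = 2.064                             # t quantile, 95% two-sided, 24 dof
print(f"mean = {mean:.1f}, sd = {sd:.1f}, "
      f"95% CI = ({mean - t * se:.1f}, {mean + t * se:.1f})")
```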

Replies from: marks, byrnema
comment by marks · 2009-07-28T07:23:17.578Z · LW(p) · GW(p)

There's a difficulty with your experimental setup in that you are implicitly invoking a probability distribution over probability distributions (since your choice of distribution is itself random). The results are going to be highly dependent upon how you construct your distribution over distributions. If your outcome space of probability distributions is infinite (which is what I would expect), and you sampled from a broad enough class of distributions, then 25 data points are not enough to say anything substantive.

A friend of yours who knows what distributions you're going to select from, though, could incorporate that knowledge into a prior and then use that to win.

So, I predict that for your setup there exists a Bayesian who would be able to consistently win.

But if you gave much more data, and you sampled from a rich enough set of probability distributions that priors became hard to specify, a frequentist procedure would probably win out.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2009-07-28T16:37:14.047Z · LW(p) · GW(p)

Hmm. I don't know if I'm a very random source of distributions; humans are notoriously bad at randomness, and there are only so many distributions readily available in standard libraries. But in any case, I don't see this as a difficulty; a real-world problem is under no obligation to give you an easily recognised distribution. If Bayesians do better when the distribution is unknown, good for them. And if not, tough beans. That is precisely the sort of thing we're trying to measure!

I don't think, though, that the existence of a Bayesian who can win, based on knowing what distributions I'm likely to use, is a very strong statement. Similarly there exists a frequentist who can win based on watching over my shoulder when I wrote the program! You can always win by invoking special knowledge. This does not say anything about what would happen in a real-world problem, where special knowledge is not available.

Replies from: marks
comment by marks · 2009-07-29T04:02:44.532Z · LW(p) · GW(p)

You can actually simulate a tremendous number of distributions (and theoretically any, to an arbitrary degree of accuracy) by applying an approximate inverse CDF to a standard uniform random variable; see here for example. So the space of distributions from which you could select to do your test is potentially infinite. We can then think of your selection of a probability distribution as a random experiment and model your selection process using a probability distribution.
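
The trick fits in a screenful; the exponential below is just a worked example, and any distribution with a computable inverse CDF slots into the same function:

```python
import math
import random

def inverse_transform_sample(inv_cdf, n, rng=None):
    """If U ~ Uniform(0,1), then inv_cdf(U) is distributed according to
    whatever CDF was inverted -- this is inverse-transform sampling."""
    rng = rng or random.Random(0)
    return [inv_cdf(rng.random()) for _ in range(n)]

# Worked example: Exponential(rate=1), whose inverse CDF is -log(1 - u).
draws = inverse_transform_sample(lambda u: -math.log(1.0 - u), 5)
print(draws)
```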

The issue is that since the outcome space is the space of all computable probability distributions, Bayesians will have consistency problems (another good paper on the topic is here), i.e. the posterior distribution won't converge to the true distribution. So in this particular setup I think Bayesian methods are inferior unless one could devise a good prior over distributions. I suppose if I knew that you didn't know how to sample from arbitrary probability distributions, I could put that in my prior and might then be able to use Bayesian methods to successfully estimate the probability distribution (the discussion of the Bayesian who knew you personally was meant to be tongue-in-cheek).

In the frequentist case there is a known procedure (kernel density estimation) due to Parzen from the '60s.
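
A sketch of the Parzen-window estimator, assuming a Gaussian kernel; the bandwidth h is a free parameter, and choosing the kernel and h is where the asymptotic theory does its work:

```python
import math

def parzen_density(data, h):
    """Parzen-window density estimate: average a Gaussian bump of
    bandwidth h centered at each observation."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return lambda x: norm * sum(
        math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)

f_hat = parzen_density([0.1, 0.3, 0.35, 0.9], h=0.1)
print(f_hat(0.3))   # estimated density near the cluster of points
```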

All of these are asymptotic results, however; your experiment seems to be focused on very small samples. To the best of my knowledge there aren't many results for that case except under special conditions, so without more constraints on the experimental design I don't think you'll get very interesting results. That said, I am actually very much in favor of such evaluations, because people in statistics and machine learning, for a variety of reasons, don't do them, or don't do them on a broad enough scale. Anyway, if you actually are interested in such things you may want to start looking here, since statistics and machine learning both have the tools to properly design such experiments.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2009-07-29T17:41:29.914Z · LW(p) · GW(p)

The small samples are a constraint imposed by the limits of blog comments; there's a limit to how many numbers I would feel comfortable spamming this place with. If we got some volunteers, we might do a more serious sample size using hosted ROOT ntuples or zipping up some plain ASCII.

I do know how to sample from arbitrary distributions; I should have specified that the space of distributions is limited to those for which I don't have to think for more than a minute or so -- in other words, those for which someone has already coded the CDF in a library I've already got installed. It's not knowledge but work that's the limiting factor. :) Presumably this limits your prior quite a lot already, there being only so many commonly used math libraries.

comment by byrnema · 2009-07-26T18:56:05.597Z · LW(p) · GW(p)

Ha ha -- this is a Bayesian problem drawn from a Bayesian perspective!

Surely a frequentist would have a different perspective and propose a different kind of solution. Instead of designing an experiment to determine which is better, how about extrapolating from the evidence we already have. Humans have made a certain amount of progress in mathematics -- has this mathematics been mainly developed by frequentists or Bayesians?

(Case closed, I think.)

I roughly consider Bayesians the experimental scientists and frequentists the theoretical scientists. Mathematics is theoretical, which is why the frequentists cluster there. Do you disagree with this?

(Nevertheless, the challenge sounds fun.)

Replies from: Nominull
comment by Nominull · 2009-07-26T20:29:02.056Z · LW(p) · GW(p)

You could use the same argument against the use of computers in science - after all, Newton didn't have a computer, and neither did Einstein. Case closed, I think.

Replies from: byrnema
comment by byrnema · 2009-07-26T20:45:21.692Z · LW(p) · GW(p)

This is the comment Nominull was referring to:

Ha ha -- this is a Bayesian problem drawn from a Bayesian perspective!

Surely a frequentist would have a different perspective and propose a different kind of solution. Instead of designing an experiment to determine which is better, how about extrapolating from the evidence we already have. Humans have made a certain amount of progress in mathematics -- has this mathematics been mainly developed by frequentists or Bayesians?

(Case closed, I think.)

I roughly consider Bayesians the experimental scientists and frequentists the theoretical scientists. Mathematics is theoretical, which is why the frequentists cluster there. Do you disagree with this?

(Nevertheless, the challenge sounds fun.)

My response to Nominull: the cases aren't really parallel, but I do need to emphasize that I don't think the Bayesian perspective is wrong; it just hasn't been the perspective, historically, of most mathematicians.

... but, finally, when I think of Bayesian mathematics being a new or under-utilised thing, I see an analogy with computers. Perhaps Bayesian theory could be a workhorse for new mathematics. I guess my perspective was that mathematicians will use whichever tools are available to them, and they used frequentist theory instead. But perhaps they didn't understand Bayesian tools or the time wasn't right for them yet.

Replies from: wedrifid
comment by wedrifid · 2009-07-27T13:35:07.362Z · LW(p) · GW(p)

This is the comment Nominull was referring to:

Voted the courtesy repost back up to zero. I most likely downvoted the original post for blatant silliness but really, why penalise politeness? In fact, I'd upvote the deleted great grandparent for demonstrating changing one's mind (on the applicability of a particular point), in defiance of rather strong biases against doing that.

I roughly consider Bayesians the experimental scientists and frequentists the theoretical scientists. Mathematics is theoretical, which is why the frequentists cluster there. Do you disagree with this?

I consider frequentist experimental scientists to be potentially competent in what they do. After all, available frequentist techniques are good enough that the significant problems with the application of statistics are in the misuse of frequentist tools, more so than in them being used at all. As for theoretical frequentists... I suggest that anyone who makes a serious investigation into developments in probability theory and statistics will not remain a frequentist. I claim that what 'theoretical frequentists' do is orthogonal to theory (but often precisely in line with what academia is really about).

comment by byrnema · 2009-07-26T18:08:53.635Z · LW(p) · GW(p)

I think this was a great post for having both context and links and specifically (rather than generally) questioning assumptions the group hasn't visited in a while (if ever).

comment by AllanCrossman · 2009-07-26T17:22:25.010Z · LW(p) · GW(p)

What does one read to become well versed in this stuff in two days; and how much skill with maths does it require?

Replies from: cousin_it
comment by cousin_it · 2009-07-26T17:28:22.711Z · LW(p) · GW(p)

Ouch! Now I see the two days stuff looks like boasting. Don't worry, all my LW posts up to now have contained stupid mathematical mistakes, and chances are people will find errors in this one too :-)

(ETA: sure enough, Eliezer has found one. Luckily it wasn't critical.)

I have a degree in math and competed at the national level in my teens (both in Russia), but haven't done any serious math since I graduated six years ago. The sources for this post were mostly Wikipedia and Google searches on keywords from Wikipedia.

Replies from: AllanCrossman
comment by AllanCrossman · 2009-07-26T17:41:47.912Z · LW(p) · GW(p)

My comment was an honest question and was not intended as derogatory...

comment by Wei Dai (Wei_Dai) · 2009-07-28T04:31:01.182Z · LW(p) · GW(p)

I'm surprised that nobody has mentioned the Universal Prior yet. Eliezer also wrote a post on it.

comment by Insert_Idionym_Here · 2011-12-03T02:33:44.837Z · LW(p) · GW(p)

... What is it that frequentists do, again? I'm a little out of touch.

comment by Richard_Kennaway · 2009-07-27T09:30:56.390Z · LW(p) · GW(p)

Strong evidence can always defeat strong priors, and vice versa.

Is there anything more to the issue than this?

Replies from: marks
comment by marks · 2009-07-28T06:33:00.159Z · LW(p) · GW(p)

This isn't always the case if the prior puts zero probability weight on the true model. This can be avoided on finite outcome spaces, but for infinite outcome spaces, no matter how much evidence you have, you may not overcome the prior.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-07-28T11:13:19.187Z · LW(p) · GW(p)

I thought that 0 and 1 were Bayesian sins, unattainable +/- infinity on the log-odds scale, and however strong your priors, you never make them that strong.

Replies from: marks
comment by marks · 2009-07-28T15:49:46.132Z · LW(p) · GW(p)

In finite-dimensional parameter spaces, sure, this makes perfect sense. But suppose that we are considering a stochastic process X1, X2, X3, ... where Xn follows a distribution Pn over the integers. Now put a prior on the distribution and suppose that, unbeknown to you, Pn is the distribution that puts 1/2 probability weight on -n and 1/2 probability weight on n. If the prior on the stochastic process does not put increasing weight on integers with large absolute value, then in the limit the prior puts zero probability weight on the true distribution (and may start behaving strangely quite early on in the process).
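
A toy version of the escaping-mass phenomenon (not this exact setup): any fixed predictive distribution q over the integers assigns the pair {-n, n} vanishing probability, while the true Pn always assigns it probability 1. The geometric-tailed q below is an arbitrary choice.

```python
# q is a fixed distribution over the integers with geometric tails;
# the true P_n puts probability 1/2 on each of -n and +n.
def q(k, r=0.5):
    return (1 - r) if k == 0 else (1 - r) / 2 * r ** abs(k)

for n in [1, 5, 20, 100]:
    print(n, q(-n) + q(n))   # probability q assigns to the support of P_n
```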

Another case is that the true probability model may be too complicated to write down or computationally infeasible to do so (say a Gaussian mixture with 10^(10) mixture components, which is certainly reasonable in a modern high-dimensional database), so one may only consider probability distributions that approximate the true distribution and put zero weight on the true model, i.e. it would be sensible in that case to have a prior that may put zero weight on the true model and you would search only for an approximation.

comment by Cyan · 2009-07-26T22:13:51.784Z · LW(p) · GW(p)

I didn't mean to rehabilitate frequentism! I only meant to point out that calibration is a frequentist optimality criterion, and that it's one that Bayesian posterior intervals can be proved not to have in general.

Replies from: cousin_it
comment by cousin_it · 2009-07-26T22:19:56.685Z · LW(p) · GW(p)

Too late. I have already updated to believe that a theory that demands priors can't be complete. Correct, maybe, but not complete. We should work out an approach that works well on more criteria instead of guarding the truth of what we already know.

If Bayes were the complete answer, Jaynes wouldn't have felt the need to invent maxent or generalize the indifference principle. That may be the correct direction of inquiry.

ETA: this was a response to Cyan saying he didn't mean to rehabilitate frequentism. :-)

Replies from: janos
comment by janos · 2009-07-27T15:55:30.208Z · LW(p) · GW(p)

Updated, eh? Where did your prior come from? :)

Replies from: cousin_it
comment by cousin_it · 2009-07-27T15:56:27.228Z · LW(p) · GW(p)

Overcoming Bias. :-)

comment by [deleted] · 2009-07-26T21:42:12.126Z · LW(p) · GW(p)

I'd like to take advantage of frequentism's return to respectability to ask if anyone knows where I can get a copy of "An Introduction to the Bootstrap" by Efron and Tibshirani.

It's on Google books, but I don't like reading things through Google books. It's for sale on-line, but it costs a lot and shipping takes a while. My university's library is supposed to have it, but the librarians can't find it. My local library hasn't heard of it.

I hardly know any statistics or probability; I've just been borrowing bits and pieces as I need them without worrying about Bayesianism vs. frequentism.

There is a little something that's been bothering me in the back of my mind when I see Eliezer waxing poetic about bayesianism. Maybe this is an ignorant question, but here it is:

If bayesians don't believe in a true probability waiting to be approximated, only in probabilities assigned by a mind, how do they justify seeking additional data? The rules require you to react to new data by moving your assigned probability in a certain way, but, without something desirable that you're moving towards, why is it good to have that new data?

Replies from: Cyan
comment by Cyan · 2009-07-26T22:09:49.866Z · LW(p) · GW(p)

If bayesians don't believe in a true probability waiting to be approximated, only in probabilities assigned by a mind, how do they justify seeking additional data? The rules require you to react to new data by moving your assigned probability in a certain way, but, without something desirable that you're moving towards, why is it good to have that new data?

Collecting new data is not justifiable in general -- the cost of the new data may outweigh the benefit to be gained from it. But let's assume that collecting new data has a negligible cost. As a Bayesian, what you desire is the smallest loss possible. For reasonable loss functions, the smaller the region over which your distribution spreads its uncertainty (that is to say, the smaller its variance), the smaller you expect your loss to be. The law of total variance can be interpreted to say that you expect the variance of the posterior distribution to be smaller than the variance of the prior distribution.* So collect more data!

* law of total variance: prior variance = prior expectation of posterior variance + prior variance of posterior mean. This implies that the prior variance is larger than the prior expectation of posterior variance.
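
The identity is easy to check numerically. Here is a sketch for a beta-binomial setup; the prior, toss count, and number of Monte Carlo draws are all arbitrary choices.

```python
import random

rng = random.Random(0)
N, n = 200_000, 10            # Monte Carlo draws; coin tosses per draw

def posterior_mean_var(k):
    """Posterior Beta(1+k, 1+n-k) under a Beta(1,1) prior on the bias f."""
    a, b = 1 + k, 1 + n - k
    return a / (a + b), a * b / ((a + b) ** 2 * (a + b + 1))

means, variances = [], []
for _ in range(N):
    f = rng.random()                              # f ~ Beta(1,1)
    k = sum(rng.random() < f for _ in range(n))   # k | f ~ Binomial(n, f)
    m, v = posterior_mean_var(k)
    means.append(m)
    variances.append(v)

mu = sum(means) / N
print("prior variance:", 1 / 12)                  # Var of Uniform(0,1)
print("E[post. var] + Var[post. mean]:",
      sum(variances) / N + sum((m - mu) ** 2 for m in means) / N)
```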

Replies from: None
comment by [deleted] · 2009-07-26T22:32:52.257Z · LW(p) · GW(p)

So, more data is good because it makes you more confident? I guess that makes sense, but it still seems strange not to care what you're confident in.

Replies from: Cyan
comment by Cyan · 2009-07-26T22:42:28.306Z · LW(p) · GW(p)

In any real problem there is a context and some prior information. Bayes doesn't give this to you -- you give it to Bayes along with the data and turn the crank on the machinery to get the posterior. The things you're confident about are in the context.

Replies from: None
comment by [deleted] · 2009-07-27T00:27:20.688Z · LW(p) · GW(p)

What about changing your mind?

Replies from: Cyan
comment by Cyan · 2009-07-27T01:15:23.812Z · LW(p) · GW(p)

In theory, if you can change your mind about something, you have uncertainty about it, and your prior distribution should reflect that. In practice, you abstract the uncertainty away by making some simplifying assumptions, do the analysis conditional on your assumptions, and reserve the right to revisit the assumptions if they don't seem adequate.

Replies from: None
comment by [deleted] · 2009-07-27T02:53:16.449Z · LW(p) · GW(p)

I didn't mean to ask how a bayesian changes his or her mind. I meant to ask how the thing you believe in can be in the context in situations where you change your mind based on new evidence.

Replies from: Cyan
comment by Cyan · 2009-07-27T03:06:43.006Z · LW(p) · GW(p)

Let's say I'm weighing some acrylamide powder on an electronic balance. (Gonna make me some polyacrylamide gel!) The balance is so sensitive that small changes in air pressure register in the last two digits. From what I know about air pressure variations from having done this before, I create a model for the data. Also because I've done this before, I can eyeball roughly how much powder I've got on the balance; this determines my prior distribution before reading the balance. Then I observe some data from the balance readout and update my distribution.

Replies from: None
comment by [deleted] · 2009-07-27T08:05:26.710Z · LW(p) · GW(p)

I can't tell without more information whether that's an example of what I mean by "changing your mind." Here's one that I think definitely qualifies:

Let's say you're going to bet on a coin toss. You only have a small amount of information on the coin, and you decide for whatever reason that there's a 51% chance of getting heads. So you're going to bet on heads. But then you realize that there's a way to get more data.

At this point, I'm thinking, "Gee, I hardly know anything about this coin. Maybe I'm better off betting on tails and I just don't know it. I should get that data."

What I think you're saying about bayesians is that a bayesian would say, "Gee, 51% isn't very high. I'd like to be at least 80% sure. Since I don't know very much yet, it wouldn't take much more to get to 80%. I should get that data so I can bet on heads with confidence."

Which sort of makes sense but is also a little strange.

Replies from: Cyan
comment by Cyan · 2009-07-27T15:29:39.976Z · LW(p) · GW(p)

Technical stuff: under the standard assumption of infinite exchangeability of coin tosses, there exists some limiting relative frequency for coin toss results. (This is de Finetti's theorem.)

Key point: I have a probability distribution for this relative frequency (call it f) -- not a probability of a probability.

You only have a small amount of information on the coin, and you decide for whatever reason that there's a 51% chance of getting heads. So you're going to bet on heads. But then you realize that there's a way to get more data.

Here you've said that my probability density for f is dispersed, but slightly asymmetric. I too can say, "Well, I have an awful lot of probability mass on values of f less than 0.5. I should collect more information to tighten this up."

"Gee, 51% isn't very high. I'd like to be at least 80% sure. Since I don't know very much yet, it wouldn't take much more to get to 80%. I should get that data so I can bet on heads with confidence."

This mixes up f on the one hand with my distribution for f on the other. I can certainly collect data until I'm 80% sure that f is bigger than 0.5 (provided that f really is bigger than 0.5). This is distinct from being 80% sure of getting heads on the next toss.
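
The distinction is easy to exhibit with a Beta(1,1) prior and some hypothetical data, say 7 heads in 10 tosses; the confidence that f > 0.5 and the predictive probability of the next head are two different numbers. (Assumes scipy is available.)

```python
from scipy.stats import beta

a, b = 1 + 7, 1 + 3     # posterior Beta(8, 4) after 7 heads in 10 tosses
print("P(f > 0.5):", 1 - beta.cdf(0.5, a, b))  # confidence coin favors heads
print("P(heads on next toss):", a / (a + b))   # posterior predictive mean
```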

Replies from: None
comment by [deleted] · 2009-07-27T17:14:09.333Z · LW(p) · GW(p)

I guess I just don't understand the difference between bayesianism and frequentism. If I had seen your discussion of limiting relative frequency somewhere else, I would have called it frequentist.

I think I'll go back to borrowing bits and pieces. (Thank you for some nice ones.)

Replies from: Cyan
comment by Cyan · 2009-07-27T18:54:26.983Z · LW(p) · GW(p)

The key difference is that a frequentist would not admit the legitimacy of a distribution for f -- the data are random, so they get a distribution, but f is fixed, although unknown. Bayesians say that quantities that are fixed but unknown get probability distributions that encode the information we have about them.

comment by byrnema · 2009-07-26T17:52:49.818Z · LW(p) · GW(p)

Being a frequentist who hangs out on a Bayesian forum, I've thought about the difference between the two perspectives. I think the dichotomy is analogous to bottom-up versus top-down thinking; neither one is superior to the other, but the usefulness of each waxes and wanes depending upon the current state of a scientific field. I think we need both to develop any field fully.

Possibly my understanding of the difference between a frequentist and Bayesian perspective is different than yours (I am a frequentist after all) so I will describe what I think the difference is here. I think the two POVs can definitely come to the same (true) conclusions, but the algorithm/thought-process feels different.

Consider tossing a fair coin. Everyone observes that on average, heads comes up 50% of the time. A frequentist sees the coin-tossing as a realization of the abstract Platonic truth that the coin has a 50% chance of coming up heads. A Bayesian, in contrast, believes that the realization is the primary thing ... the flipping of the coin yields the property of having 50% probability of coming up heads as you flip it. So both perspectives require the observation of many flips to ascertain that the coin is indeed fair, but the only difference between the two views is that the frequentist sees the "50% probability of being heads" as something that exists independently of the flips. It's something you discover rather than something you create.

Seen this way, it sounds like frequentists are Platonists and Bayesians are non-Platonists. Abstract mathematicians tend to be Platonists (but not always) and they've lent their bias to the field. Smart Bayesians, on the other hand, tend to be more practical and become experimentalists.

There's definitely a certain rankle between Platonists and non-Platonists. Non-platonists think that Platonists are nuts, and Platonists think that the non-Platonists are too literal.

May we consider the hypothesis that this difference is just a difference in brain hard-wiring? When a Platonist thinks about a coin flipping and the probability of getting heads, they really do perceive this "probability" as existing independently. However, what do they mean by "existing independently"? We learn what words mean from experience. A Platonist has experience of this type of perception and knows what they mean. A non-Platonist doesn't know what is meant, and assumes it means the same thing as when anyone says "a table exists". These types of existence are different, but how can a Bayesian understand the Platonic meaning without the Platonic experience?

A Bayesian should just observe what does exist, and what words the Platonist uses, and redefine the words to match the experience. This translation must be done similarly with all frequentist mathematics, if you are a Bayesian.

Replies from: JGWeissman, antibole, PhilGoetz
comment by JGWeissman · 2009-07-26T18:16:07.972Z · LW(p) · GW(p)

Seen this way, it sounds like frequentists are Platonists and Bayesians are non-Platonists.

Counterexample: I have a Platonic view of mathematical truths, but a Bayesian view of probability.

A frequentist sees the coin-tossing as a realization of the abstract Platonic truth that the coin has a 50% chance of coming up heads.

This does not make sense. For any given coin flip, either the fundamental truth is that the coin will come up heads, or the fundamental truth is that the coin will come up tails. The 50% probability represents my uncertainty about the fundamental truth, which is not a property of the coin.

Replies from: byrnema
comment by byrnema · 2009-07-26T18:40:16.419Z · LW(p) · GW(p)

Counterexample: I have a Platonic view of mathematical truths, but a Bayesian view of probability.

That's interesting. I had imagined that people would be one way or the other about everything. Can anyone else provide datapoints on whether they are Platonic about only a subset of things?

... in order to triangulate closer to whether Platonism is "hard-wired", do you find it possible to be non-Platonic about mathematical truths? Can someone who is non-Platonic think about them Platonically -- is it a choice?

For any given coin flip, either the fundamental truth is that the coin will come up heads, or the fundamental truth is that the coin will come up tails. The 50% probability represents my uncertainty about the fundamental truth, which is not a property of the coin.

See, that's just not the way a frequentist sees it. First, I notice that you are defining "fundamental truth" as what will actually happen in the next coin flip. In contrast, it is more natural to me to think of the "fundamental truth" as being what the probability of heads is, as a property of the coin and the flip, since the outcome isn't determined yet. But that's just asking different questions. So if the question is what is true about the outcome of the next flip, we are talking about empirical reality (an experiment) and my perspective will be more Bayesian.

Replies from: Vladimir_Nesov, JGWeissman, MichaelVassar, gjm, GuySrinivasan
comment by Vladimir_Nesov · 2009-07-26T19:48:23.750Z · LW(p) · GW(p)

since the outcome isn't determined yet

The outcome is determined timelessly, by the properties of the coin-tossing setup. It hasn't happened yet. What came before the coin determines the coin, but in turn is determined by the stuff located further and further in the past from the actual coin-toss. It is a type error to speak of when the outcome is determined.

Replies from: byrnema
comment by byrnema · 2009-07-26T20:17:03.287Z · LW(p) · GW(p)

Whether or not the universe is deterministic is not determined yet. Even if you and I both think that a deterministic universe is more logical, we should accept that certain figures of speech will persist. When I said the toss wasn't determined yet, I meant that the outcome of the toss was not known yet by me. I don't see how your correction adds to the discussion except possibly to make me seem naive, like I've never considered the concept of determinism before.

Replies from: Nick_Tarleton, Vladimir_Nesov
comment by Nick_Tarleton · 2009-07-26T20:38:20.111Z · LW(p) · GW(p)

what the probability of heads is, as a property of the coin and the flip

I meant that the outcome of the toss was not known yet by me

Map/territory distinction. As a property of the actual coin and flip, the probability of heads is 0 or 1 (modulo some nonzero but utterly negligible quantum uncertainty); as a property of your state of knowledge, it can be 0.5.

Replies from: byrnema, Vladimir_Nesov
comment by byrnema · 2009-07-26T21:38:35.437Z · LW(p) · GW(p)

This comment helped things come into better focus for me.

A frequentist believes that there is a probability of flipping heads, as a property of the coin and (yes, certainly) the conditions of the flipping. To a frequentist, this probability is independent of whether the outcome is determined or not and is even independent of what the outcome is. Consider the following sequence of flips: H T T

A frequentist believes that the probability of flipping heads was .5 all along, right? The first 'H' and the second 'T' and the third 'T' were just discrete realizations of this probability.

The reason I've been calling this a Platonic perspective is that I think the critical difference in philosophy is the frequentist idea of this non-empirical "probability" existing independent of realizations. The probability of flipping heads for a set of conditions is .5 whether you actually flip the coins or not. However, frequentists agree you must flip the coin to know that the probability was .5.

You might think this perspective is wrong-headed, and from a strict empirical view where you allow no Platonic entities/concepts, it kind of is. But the question I am really interested in is the following: to what extent is this point of view a choice we can be wrong or right about, or a perspective that some (or most?) people have hard-wired in their physical brain? Further, how can you argue that it isn't useful when it demonstrably has been so useful? Perhaps it facilitates or is necessary for some categories of abstract thought.

Replies from: JGWeissman
comment by JGWeissman · 2009-07-26T21:43:59.236Z · LW(p) · GW(p)

But the question I am really interested in is the following: to what extent is this point of view a choice we can be wrong or right about, or a perspective that most people have hard-wired in their physical brain algorithms?

It could be hard-wired and still be right or wrong.

Replies from: byrnema
comment by byrnema · 2009-07-26T22:13:48.668Z · LW(p) · GW(p)

Correct, generally. But how could a perspective be wrong?

I can think of two ways a perspective can be wrong: either because it (a) asserts a fact about external reality that is not true or (b) yields false conclusions about the external world.

(a) Frequentists don't assert anything extra about the empirical world; they assert the use of (and ostensibly, the "existence" of) something symbolic. From the empiricist perspective, it's not really there. Like a little icon floating above or around the actual thing that your cursor doesn't interact with, so it can't be false in the empirical sense.

(b) It would be fascinating if the frequentist perspective yielded false conclusions, and in such a case, is there any doubt that people would develop and embrace new mathematics that avoided such errors? In fact, we already see this happening where physics at extreme scales seems to defy intuition. If someone wanted to propose a new theory of everything, I don't think anyone would ever criticize it on the grounds of not being frequentist. I guess the point here is just that it's useful or not.

Later edit: Ok, I finally get it. Maybe the reason we don't understand physics at the extreme scales is that the frequentist approach evolved (is hard-wired) for understanding intermediate physical scales and it's (apparently) beginning to fail. You guys are using empirical philosophy to try and develop a brand new mathematics that won't have these inborn errors of intuition. So while I argue that frequentism has definitely been productive so far, you argue that it is intrinsically limited based on philosophical principles.

Replies from: JGWeissman
comment by JGWeissman · 2009-07-26T22:41:47.199Z · LW(p) · GW(p)

A perspective can be wrong if it arbitrarily assigns a probability of 1 to an event that has a symmetrical alternative. Read the intro to My Bayesian Enlightenment for Eliezer's description of a frequentist going wrong in this way with respect to the problem of the mathematician with two children, at least one of which is a boy.

Replies from: byrnema
comment by byrnema · 2009-07-27T00:22:09.086Z · LW(p) · GW(p)

No, Bayesian probability and orthodox statistics give exactly the same answers if the context of the problem is the same. The two schools may tend to have different ideas about what is a "natural" context, but any good textbook will always define exactly what the context is so that there is no guessing and no disagreement.

Nevertheless, which event with a symmetrical alternative were you referring to? (You are given that the woman said she has at least 1 boy, so it would be correct to assign that probability 1 in the context of a given assumption, obviously when applying the orthodox method.) Both approaches work differently, but they both work.

Replies from: JGWeissman
comment by JGWeissman · 2009-07-27T00:44:16.934Z · LW(p) · GW(p)

Nevertheless, which event with a symmetrical alternative were you referring to?

Given that the woman does have a boy and a girl, what is the probability that she would state that at least one of them is a boy? By symmetry, you would expect a priori, not knowing anything about this person's preferences, that in the same conditions she is equally likely to state that at least one of her children is a girl; to assign the conditional probability higher than .5 does not make sense, so it is definitely not right for the frequentist Eliezer was talking with to act as though the conditional probability were 1. (The case could be made that the statement is also evidence that the woman has a tendency to say at least one child is a boy rather than that at least one child is a girl. But this is a small effect, and still does not justify assigning a conditional probability of 1.)

I think the frequentist approach could handle this problem if applied correctly, but it seems that frequentists in practice get it wrong because they do not even consider the conditional probability that they would observe a piece of evidence if a theory they are considering is true.

any good textbook will always define exactly what the context is so that there is no guessing and no disagreement.

If you read the article I cited, Eliezer did explain that this was a mangling of the original problem, in which the mathematician made the statement in response to a direct question, so one could reasonably approximate that she would make the statement exactly when it is true.

However, life does not always present us with neat textbook problems. Sometimes, the conditional probabilities are hard to figure out. I prefer the approach that says figure them out anyways to the one that glosses over their importance.
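
The dependence on the mother's reporting policy can be made explicit. In the sketch below, q is the hypothetical probability that she would mention the boy when she has one of each, under the usual textbook assumption that each child is independently a boy or girl with probability 1/2; q = 1 recovers the textbook 1/3, while a symmetric q = 1/2 gives 1/2.

```python
def p_two_boys_given_statement(q):
    """P(two boys | 'at least one is a boy'), for a mother of two whose
    children are independently boy/girl with probability 1/2 each, and
    who mentions the boy with probability q when she has one of each."""
    p_statement = 0.25 * 1.0 + 0.5 * q   # BB always; BG/GB with prob q
    return 0.25 / p_statement

print(p_two_boys_given_statement(1.0))   # says it whenever true: 1/3
print(p_two_boys_given_statement(0.5))   # symmetric volunteering: 1/2
```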

Replies from: byrnema
comment by byrnema · 2009-07-27T01:12:17.185Z · LW(p) · GW(p)

so to assign the conditional probability higher than .5 does not make sense, so it is definitely not right for the frequentist Eliezer was talking with to act as though the conditional probability were 1

In the "correct" formulation of the problem (the one in which the correct answer is 1/3), the frequentist tells us what the mother said as a given assumption; considering the prior <1 probability of this is rendered irrelevant because we are now working in the subset of probability space where she said that.

it seems that frequentists in practice get it wrong because they do not even consider the conditional probability that they would observe a piece of evidence if a theory they are considering is true.

Considering whether a theory is true is science -- I completely agree science has important, necessary Bayesian elements.

Replies from: wedrifid
comment by wedrifid · 2009-07-27T20:24:12.257Z · LW(p) · GW(p)

Considering whether a theory is true is science

Considering whether a theory is true is not science, although the two are certainly useful to each other.

comment by Vladimir_Nesov · 2009-07-26T20:46:26.687Z · LW(p) · GW(p)

Giving "probably" of actual outcome for the coin flip as ~1 looks like a type error, although it's clear what you are saying. It's more like P(coin is heads|coin is heads), tautologically 1, not really a probability.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-07-26T21:30:28.574Z · LW(p) · GW(p)

Edited to clarify.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-26T22:12:00.506Z · LW(p) · GW(p)

As a property of the actual coin and flip, the probability of heads is 0 or 1 (modulo some nonzero but utterly negligible quantum uncertainty)

This mixes together two different kinds of probability, confusing the situation. There is nothing fuzzy about the events defining the possible outcomes, the fact that there is also indexical uncertainty imposed on your mind while it observes the outcome is from a different problem.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-07-26T22:24:31.144Z · LW(p) · GW(p)

Yeah, it just felt like too much work to add "...randomly sampling from future Everett branches according to the Born probabilities" or the like.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-26T22:29:41.602Z · LW(p) · GW(p)

My point is that most of the time decision-theoretic problems are best handled in a deterministic world.

comment by Vladimir_Nesov · 2009-07-26T20:26:06.570Z · LW(p) · GW(p)

When I said the toss wasn't determined yet, I meant that the outcome of the toss was not known yet by me.

Hence it's your uncertainty, which can just as well be handled in a deterministic world. And in a deterministic world, I don't know how to parse your sentence

it is more natural to me to think of the "fundamental truth" as being what the probability of heads is, as a property of the coin and the flip

comment by JGWeissman · 2009-07-26T19:09:36.634Z · LW(p) · GW(p)

... in order to triangulate closer to whether Platonism is "hard-wired", do you find it possible to be non-Platonic about mathematical truths? Can someone who is non-Platonic think about them Platonically -- is it a choice?

Most of the time I think about math, I do not worry about whether it is platonic or not. It was really only in the context of considering my epistemic uncertainty that 2+2=4 that I needed to consider the nature of the territory I was mapping, and in this context it did not make sense for the territory to be the physical universe.

In contrast, it is more natural to me to think of the "fundamental truth" as being what the probability of heads is, as a property of the coin and the flip, since the outcome isn't determined yet.

You mean, the outcome has not been determined by you, since you have not observed all the physical properties of the coin, the person flipping it, and the environment, and calculated out all the physics that would tell you whether it would land heads or tails. Attaching a probability to the coin is just our way of dealing with the ignorance and lack of computing power that prevent us from finding the exact answer.

Replies from: byrnema
comment by byrnema · 2009-07-26T19:17:30.021Z · LW(p) · GW(p)

What is your point? You iterate the Bayesian perspective, but do you agree that frequentists and Bayesians have different perspectives about this?

I think it boils down to this: you are a frequentist (and I've been using the term Platonist) if you see the 50% probability as a property of the coin and the flip, and you are a Bayesian if you see the 50% probability as just a way of measuring the uncertainty.

(Given your rationale for being Platonic about mathematics, I don't know if you are really a Platonist (in the hard-wired sense).)

Replies from: JGWeissman
comment by JGWeissman · 2009-07-26T19:40:06.478Z · LW(p) · GW(p)

My point is that the view that 50% probability is a fundamental property of the coin is wrong. It is an example of the Mind Projection Fallacy: thinking that because you don't know the result, somehow the universe doesn't either. It is certainly not the case that, when asked about the result of a single coin flip, giving a 50% probability for heads is the best possible answer. One could, in principle, do more investigation, and find that under the current conditions the coin will come up heads (or tails) with 99% probability, and actually be right 99 times out of a hundred.

I don't like to call this view of the probability as a fundamental property of the coin the frequentist view. It makes more sense to describe their perspective as the probability being a combined property of the coin and a distribution of conditions in which it could be flipped. From this perspective, the mistake of attaching the probability to the coin is that it misses the fact that you are flipping the coin in one particular condition, which will have a definite outcome. The probability comes from uncertainty about which condition from the distribution applies in this case, and of course, limits on computational power.

Replies from: byrnema
comment by byrnema · 2009-07-26T20:27:33.083Z · LW(p) · GW(p)

Are you saying that frequentists are wrong, or just me?

If the former, how can you say that and consider the case closed when frequentists arrive at correct conclusions? What I'm suggesting is that Bayesians are committing the mind projection fallacy when they assert that frequentists are "wrong".

Replies from: JGWeissman
comment by JGWeissman · 2009-07-26T21:29:31.758Z · LW(p) · GW(p)

I am saying that you are wrong, and I am not sure there isn't more to the frequentist view than you are saying, so I am not prepared to figure out if it is right or wrong until I know more about what it is saying.

If the former, how can you say that and consider the case closed when frequentists arrive at correct conclusions?

Like in the Monty Hall problem, where the frequentists will agree to the correct answer after you beat them over the head with a computer simulation?
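
For the record, the beating-over-the-head simulation is only a few lines (a sketch; the trial count is arbitrary):

```python
import random

rng = random.Random(0)
trials, wins_stay, wins_switch = 100_000, 0, 0
for _ in range(trials):
    car, pick = rng.randrange(3), rng.randrange(3)
    # Host opens a door that is neither the contestant's pick nor the car.
    opened = next(d for d in rng.sample(range(3), 3)
                  if d != pick and d != car)
    switched = next(d for d in range(3) if d != pick and d != opened)
    wins_stay += (pick == car)
    wins_switch += (switched == car)
print(wins_stay / trials, wins_switch / trials)   # ~1/3 vs ~2/3
```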

What I'm suggesting is that Bayesians are committing the mind projection fallacy when they assert that frequentists are "wrong".

Huh? What property of our minds do you think we are projecting onto the territory?

Replies from: byrnema
comment by byrnema · 2009-07-27T02:14:23.876Z · LW(p) · GW(p)

In the Monty Hall problem, it is intuition that tends to insist on the wrong answer, not valid application of frequentist theory.

Just curious -- is the Monty Hall solution intuitively obvious to a "Bayesian", or do they also need to work through the (Bayesian) math in order to be convinced?

Huh? What property of our minds do you think we are projecting onto the territory?

Oops. I meant the typical mind fallacy.

Replies from: JGWeissman
comment by JGWeissman · 2009-07-27T02:26:14.312Z · LW(p) · GW(p)

Just curious -- is the monty hall solution intuitively obvious to a "Bayesian", or do they also need to work through the (Bayesian) math in order to be convinced?

For me at least, it is not so much that the solution is intuitively obvious as that setting up the Bayesian math forces me to ask the important questions.

I meant the typical mind fallacy.

Then how do you think we are assuming that others think like us? It seems to me that we notice that others are not thinking like us, and that in this case, the different thinking is an error. I believe that 2+2=4, and if I said that someone was wrong for claiming that 2+2=3, that would not be a typical mind fallacy.

Replies from: byrnema
comment by byrnema · 2009-07-27T03:29:46.201Z · LW(p) · GW(p)

If the conclusions about reality were different, then the 2+2=4 versus 2+2=3 analogy would hold. Instead, you are objecting to the way frequentists approach the problem. (Sometimes, the difference seems to be as subtle as just the way they describe their approach.) Unless you show that they do not as consistently arrive at the correct answer, I think that objecting to their methods is the typical mind fallacy.

Asserting that frequentists are wrong is actually very non-Bayesian, because you have no evidence that the frequentist view is illogical. Only your intuition and logic guides you here. So finally, as two rationalists, we may observe a bona fide difference in what we consider intuitive, natural or logical.

I'm curious about the frequency of "natural" Bayesians and frequentists in the population, and wonder about their co-evolution. I also wonder about their lack of mutual understanding.

Replies from: JGWeissman
comment by JGWeissman · 2009-07-27T04:29:16.127Z · LW(p) · GW(p)

From Probability is in the Mind:

You have a coin. The coin is biased. You don't know which way it's biased or how much it's biased. Someone just told you, "The coin is biased" and that's all they said. This is all the information you have, and the only information you have.

You draw the coin forth, flip it, and slap it down.

Now - before you remove your hand and look at the result - are you willing to say that you assign a 0.5 probability to the coin having come up heads?

The frequentist says, "No. Saying 'probability 0.5' means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1. But we know that the coin is biased, so it can have any probability of coming up heads except 0.5."

The frequentists get this exactly wrong, ruling out the only correct answer given their knowledge of the situation.

The article goes on to describe scenarios in which having different partial knowledge of the situation leads to different probabilities. The frequentist perspective doesn't merely lead to the wrong answer for these scenarios, it fails to even produce a coherent analysis. Because there is no single probability attached to the event itself. The probability really is a property of the mind analyzing that event, to the extent that it is sensitive to the partial knowledge of that mind.

Replies from: byrnema
comment by byrnema · 2009-07-27T05:33:39.410Z · LW(p) · GW(p)

I like the response of Constant2:

The competent frequentist would presumably not be befuddled by these supposed paradoxes. Since he would not be befuddled (or so I am fairly certain), the "paradoxes" fail to prove the superiority of the Bayesian approach.

Eliezer responded with:

Not the last two paradoxes, no. But the first case given, the biased coin whose bias is not known, is indeed a classic example of the difference between Bayesians and frequentists.

and in the post he wrote

The frequentist perspective doesn't merely lead to the wrong answer for these scenarios, it fails to even produce a coherent analysis.

But the frequentist does have a coherent analysis for solving this problem, because we're not actually interested in the long-term probability of flipping heads (of which all anyone can say is that it is not .5) but in the expected outcome of a single flip of a biased coin. This is an expected value calculation, and I'll even apply your idea about events with symmetric alternatives. (So I do not have to make any assumptions about the shape of the distribution of possible biases.)

I will calculate my expected value using the assumption that the coin is biased towards heads or biased towards tails with equal probability. Let p be the probability that the coin lands on its favored side (i.e., p > .5).

  • With probability 0.5 the coin is biased towards heads: the probability of heads is p and the probability of tails is (1-p).
  • With probability 0.5 the coin is biased towards tails: the probability of heads is (1-p) and the probability of tails is p.

Thus, the overall probability of heads is p*0.5 + (1-p)*0.5 = 0.5.

So there's no befuddlement, only a change in random variables: from the long-term outcome of many flips to a single flip of a coin whose preferred side is unknown. Which we should expect, since the random variable we are really being asked about has changed with the different contexts.
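
Run as a simulation, with the bias magnitude p = 0.8 as an arbitrary choice (any p > .5 gives the same long-run answer):

```python
import random

rng = random.Random(0)
p, trials, heads = 0.8, 100_000, 0   # p > .5 is the unknown bias magnitude
for _ in range(trials):
    # The direction of the bias is unknown and symmetric.
    p_heads = p if rng.random() < 0.5 else 1 - p
    heads += rng.random() < p_heads
print(heads / trials)                # hovers around 0.5 for any p
```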

Replies from: JGWeissman
comment by JGWeissman · 2009-07-27T06:10:10.937Z · LW(p) · GW(p)

You just pushed aside your notion of an objective probability and calculated a subjective probability reflecting your partial information. Congratulations, you are a Bayesian.

Replies from: byrnema
comment by byrnema · 2009-07-27T12:30:27.809Z · LW(p) · GW(p)

I applied completely orthodox frequentist probability.

I had predicted your objection would be that expected value is an application of Bayes' theorem, but I was prepared to argue that orthodox probability does include Bayes' theorem. It is one of the pillars of any introductory probability textbook.

A problem isn't "Bayesian" or "frequentist". The approach is. Frequentists take the priors as given assumptions. The assumptions are incorporated at the beginning as part of the context of the problem, and we know the objective solution depends upon (and is defined within) a given context. A Bayesian in contrast, has a different perspective and doesn't require formalizing the priors as given assumptions. Apparently they are comfortable with asserting that the priors are "subjective". As a frequentist, I would have to say that the problem is ill-posed (or under-determined) to the extent that the priors/assumptions are really subjective.

Suppose that I tell you I am going to pick up a card randomly and will ask you the probability that it is the ace of hearts. Your correct answer would be 1/52, even if I have looked at the card myself and know with probability 0 or 1 that the card is the ace of hearts. Frequentists have no problem with this "subjectivity"; they understand it as different probabilities for different contexts. This is mainly a response to this comment, but is relevant here.

Yet again, the misunderstanding has arisen from not understanding what is meant by saying the probability is "in" the cards. In this way, Bayesians interpret the frequentist's language too literally. But what does a frequentist actually mean? Just that the probability is objective? But the objectivity results from the preferred way of framing the problem ... I'm willing to consider, and have suggested, the possibility that this "Platonic probability" is an artifact of a thought process that the frequentist experiences empirically (but mentally).

comment by MichaelVassar · 2009-07-27T06:14:09.698Z · LW(p) · GW(p)

I'm Platonistic in general I suppose, but I see Bayesianism as subjectively objective as a Platonistic truth.

comment by gjm · 2009-07-26T20:55:34.806Z · LW(p) · GW(p)

Can anyone else provide datapoints [...]

I am a Platonist about mathematics by inclination, though I strongly suspect that this inclination is one that I should resist taking too seriously. I am a Bayesian about probability (at least in the following sense: it seems to me that the Bayesian approach subsumes the others, when they are applied correctly). I am mostly Bayesian about statistics, but don't see any reason why you shouldn't compute confidence intervals and unbiased estimators if you want to. I don't think "Platonist" and "frequentist" are at all the same thing, so I don't see any of the above as indicating that I'm (inclined to be) Platonist about some things but not about others.

[...] the fundamental truth [...]

This seems to have prompted a debate about whether The Fundamental Truth is one about the general propensities of the coin, or one about what will happen the next time it's flipped. I don't see why there should be exactly one Fundamental Truth about the coin; I'd have thought there would be either none or many depending on what sort of thing one wishes to count as a "fundamental truth".

Anyway: imagine a precision robot coin-flipper. I hope it's clear that with such a device one could arrange that the next million flips of the coin all come up heads, and then melt it down. So whatever "fundamental truth" there might be about What The Coin Will Do has to be relative to some model of what's going to be done to it. The point of coin-flipping is that it's a sort of randomness magnifier: small variations in what you do to it make bigger differences to what it does, so a small patch of possibility-space gets turned into a somewhat-uniform sampling of a larger patch (caution: Liouville, volume conservation, etc.). And the "fundamental truth" about the coin that you're appealing to is that, plus what it implies about its ability to turn kinda-sorta-slightly-random-ish coin flipping actions into much more random-ish outcomes. To turn that into an actual expectation of (more or less) independent p=1/2 Bernoulli trials, you need to add some assumption about how people actually flip coins, and then the magic of physics means that a wide range of such assumptions all lead to very similar-looking conclusions about what the outcomes are likely to look like.

In other words: an accurate version of the frequentist way of looking at the coin's behaviour starts with some assumption (wherever it happens to come from) about how coins actually get flipped, mixes that with some (not really probabilistic) facts about the coin, and ends up with a conclusion about what the coin is likely to do when flipped, which doesn't depend too sensitively on that assumption we made.

Whereas a Bayesian way of looking at it starts with some assumption (wherever it happens to come from) about what happens when coins get flipped, mixes that with some (not really probabilistic) facts about what the coin has been observed to do and perhaps a bit of physics, and ends up with a conclusion about what the coin is likely to do when flipped in the future, which doesn't depend too sensitively on that assumption we made.

Clearly the philosophical differences here are irreconcilable...

comment by GuySrinivasan · 2009-07-26T19:25:03.339Z · LW(p) · GW(p)

As a property of the coin and the flip and the environment and the laws of physics, the probability of heads is either 0 or 1. Just because you haven't computed it doesn't mean the answer becomes a superposition of what you might compute, or something.

What you want is something like this: take a natural generalization of the exact situation - if the universe is continuous and the system is chaotic enough, "round to some precision" works - compute the answer in this parameterized space of situations, and then average over the parameter.

The problem is that "natural generalization" is pretty hard to define.
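
A sketch of that recipe as I read it (hypothetical Python, reusing the toy flip model from above): replace the exact situation with an eps-neighborhood, compute the deterministic answer across it, and average over the parameter.

```python
import random

def face_is_heads(omega, t_flight):
    # Same toy flip as before: heads iff an even number of half-turns.
    return int(2 * omega * t_flight) % 2 == 0

def generalized_probability(omega0, t0, eps, n=100_000):
    """The recipe as I read it: smear the exact situation (omega0, t0)
    over an eps-box, then average the deterministic outcome over it."""
    heads = sum(face_is_heads(omega0 + random.uniform(-eps, eps),
                              t0 + random.uniform(-eps, eps))
                for _ in range(n))
    return heads / n

for eps in (1e-6, 1e-2, 1e-1):
    print(eps, generalized_probability(40.1, 0.5, eps))
# As eps grows, the answer moves from the pinned deterministic value
# (0 or 1) toward 0.5. How big an eps counts as "natural" is exactly
# the hard-to-define part the comment points at.
```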

comment by antibole · 2009-07-27T22:19:58.833Z · LW(p) · GW(p)

Being a Platonist and a frequentist aren't the same thing, but they correlate because they're both errors in thinking.

The objection to frequentism is that it builds the answer into the solution, so the problem actually changes from the original real-world problem. This is fine as long as you can test discrepancies between theory and practice, but that's not always going to be possible.

comment by PhilGoetz · 2009-07-27T16:12:36.126Z · LW(p) · GW(p)

"A Bayesian, in contrast, believes that the realization is the primary thing ... the flipping of the coin yields the property of having 50% probability of coming up heads as you flip it."

Thanks for trying to explain the difference, but I have no idea what this means.

Replies from: byrnema
comment by byrnema · 2009-07-27T17:02:12.256Z · LW(p) · GW(p)

What I was thinking about was this: Bayesians and frequentists both agree that if a fair coin is tossed n times (where n is very large), then a string of heads and tails will result, and that the .5 probability of heads is in some way related to the fact that the number of heads divided by n will approach .5 for large n.
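
That shared fact is the law of large numbers in action; a quick simulation (my own sketch):

```python
import random

# Both camps agree on the raw fact: for a fair coin, the proportion
# of heads in n tosses gets close to .5 as n grows.
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```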

In my mind, the frequentist perspective is that the .5 probability of getting heads exists first, and then the string of heads and tails realizes (i.e., makes a physical manifestation of) this abstract probability lurking in the background. As though there were a bin of heads and tails somewhere with exactly a 1:1 ratio, and each flip picked randomly from this bin. The Bayesian perspective is that there is nothing but the string of heads and tails -- only the string exists; there's no abstract probability that the string is a realization of. No picking from a bin in the sky. Inspecting the string, a Bayesian can calculate the 0.5 probability ... so the 0.5 probability results from the string. So according to me, the philosophical debate boils down to: which comes first, the probability or the string?

I definitely get the impression that the Bayesians in this thread are skeptical of this description of the difference, and seem to prefer characterizing the Bayesian view as treating probability as a measure of your uncertainty. However, probability is also taught as a measure of uncertainty in classical probability, so I'm skeptical of this dichotomy. (In favor of my view, the name "frequentist" comes from the observation that they believe in a notion of "frequency" -- i.e., that there's a hypothetical distribution "out there" that observed data is being sampled from.)

Perhaps the question of whether the correct approach is subjective or objective gets closer to the heart of the difference. I am leaning towards this hypothesis because I can see how a frequentist could confuse something being objective with that something having an independent "existence".

Replies from: bdwolfhound
comment by bdwolfhound · 2009-08-09T14:57:32.608Z · LW(p) · GW(p)

I have a little difficulty with the notion that the probable outcome of a coin toss results from the toss itself, rather like the collapse of a quantum probability into reality when observed. Looking at the coin before the toss, surely three outcomes may be objectively identified - H, T or E (edge) - and the likelihood of the coin coming to rest on its edge dismissed.

Since the coin MUST then end up H or T, the sum of the two probabilities is 1; both outcomes are a priori equally likely and each has the value 1/2 before the toss. Whether one chooses to believe that the a priori probabilities have actual existence is a metaphysical issue.
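
A worked version of that arithmetic (my own spelling-out of the comment's argument):

$$P(H) + P(T) + P(E) = 1, \qquad P(E) \approx 0 \;\Rightarrow\; P(H) + P(T) \approx 1,$$

and the a priori symmetry P(H) = P(T) then gives each the value 1/2.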