Michael Jordan dissolves Bayesian vs Frequentist inference debate [video lecture]

post by Academian · 2011-08-30T01:12:08.851Z


UC Berkeley professor Michael Jordan, a leading researcher in machine learning, has a great reduction of the question "Are your inferences Bayesian or Frequentist?" The reduction is basically "Which term are you varying in the loss function?" He calls this the "decision-theoretic perspective" on the debate, a framing well in keeping with LessWrong interests.
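
To unpack that a bit: the loss L(θ, δ(X)) involves both the unknown parameter θ and the data X, and the two schools differ in which of the two they average over. A minimal sketch in standard decision-theoretic notation (my gloss, not necessarily the notation in the slides):

    R(\theta, \delta) = \mathbb{E}_{X \mid \theta}\left[ L(\theta, \delta(X)) \right]    % frequentist risk: hold \theta fixed, average over possible data X
    \rho(\delta \mid x) = \mathbb{E}_{\theta \mid x}\left[ L(\theta, \delta(x)) \right]  % Bayesian posterior risk: hold the observed x fixed, average over \theta under the posterior

Roughly, the frequentist evaluates a procedure δ by its risk R at every θ before seeing data (e.g. by its maximum over θ), while the Bayesian conditions on the data actually observed and minimizes ρ under the posterior.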

I don't have time to write a top-level post about this (maybe someone else does?), but I quite liked the lecture, and thought I should at least post the link!

http://videolectures.net/mlss09uk_jordan_bfway/

The discussion gets much clearer starting at the 10:11 slide, which you can click on and skip to if you like, but I watched the first 10 minutes anyway to get a sense of his general attitude.

Enjoy! I recommend watching while you eat, if it saves you time and the food's not too distracting :)

Comments

comment by Jack · 2011-08-30T01:21:53.086Z

I will watch this despite being somewhat disappointed the video is not by former NBA superstar and Chicago Bull Michael Jordan.

comment by asr · 2011-09-01T06:36:45.755Z

At Berkeley, he is sometimes referred to as "The 'Michael Jordan' of Machine Learning."

comment by orthonormal · 2011-09-01T00:35:27.290Z

Well, when you think about it properly, the case for Bayesianism really is a slam dunk.

comment by DanielLC · 2011-08-30T06:14:01.655Z

You're disappointed that it's done by someone who actually knows what he's talking about instead?

comment by Desrtopa · 2011-08-30T06:26:18.079Z

If the former basketball star made a video dissolving the Bayesian/Frequentist inference debate, I would expect either a really clever interpretation of a video that's meant to be about something else, or an update of tremendous proportions.

comment by gwern · 2011-08-30T14:27:43.847Z

> of tremendous proportions.

Of Futurama proportions, you mean.

comment by Manfred · 2011-08-30T01:40:55.283Z

Disappointed. Also, I've seen that video linked somewhere else around here. Still interesting, though.

Anyhow, the dichotomy he draws may work for some field or subfield - I don't really know. But it doesn't capture a lot of the differences between perspectives on statistics.

comment by jsteinhardt · 2011-09-03T04:56:25.042Z

Can you elaborate at all? I feel bad for appealing to authority here, but Mike is widely considered the leader of the field of statistical ML, so it is a priori unlikely to me that his dichotomy is limited to a single subfield. It sounds like you think I should update away from his beliefs, and I would like to if he is indeed wrong, but you haven't provided much evidence for me so far.

comment by Manfred · 2011-09-03T07:03:12.010Z

Fortunately, someone else has already done the work for me :)

http://lesswrong.com/r/discussion/lw/7ck/frequentist_vs_bayesian_breakdown_interpretation/

So Mike seems to be talking about (3): whether to use "bayesian" or "frequentist" decision-making methods. However, the distinction I see (and use) most often is something like (2): interpreting probabilities as reflecting a state of incomplete information (bayesian) or as reflecting a fact about the external world (frequentist).

comment by jsteinhardt · 2011-09-03T13:26:47.787Z

Thanks.