The Academic Epistemology Cross Section: Who Cares More About Status?

post by Matt_Simpson · 2009-11-15T19:37:18.935Z · 15 comments

Bryan Caplan writes:

Almost all economic models assume that human beings are Bayesians...  It is striking, then, to realize that academic economists are not Bayesians.  And they're proud of it!

This is clearest for theorists.  Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows - and no intellectually respectable person will say more... 

Empirical economists' deviation from Bayesianism is more subtle.  Their epistemology is rooted in classical statistics.  The respectable researcher comes to the data an agnostic, and leaves believing "whatever the data say."  When there's no data that meets their standards, they mimic the theorists' snobby agnosticism.  If you mention "common sense," they'll scoff.  If you remind them that even classical statistics assumes that you can trust the data - and the scholars who study it - they harumph.

Robin Hanson offers an explanation:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds.  If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis.  And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display.  It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.

I wonder what this looks like in the cross section. In other words, which academic disciplines have the strongest tendency to celebrate difficult work while ignoring sound-yet-unimpressive work? My hunch is that economics, along with most of the other social sciences, would be among the worst offenders, while fields closer to engineering would sit at the other end of the spectrum. Engineers should be more concerned with truth, since whatever they build has to, you know, work. What say you? More importantly, does anyone have any evidence?

15 comments

comment by taw · 2009-11-15T23:20:30.581Z

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds.

What does this theory predict, relative to the theory that people are interested in quality teaching and research and use reputation as a not-terribly-reliable proxy for it, because quality is too hard for most people to measure?

Replies from: Douglas_Knight, Matt_Simpson
comment by Douglas_Knight · 2009-11-16T05:40:39.675Z

What does this theory predict, relative to the theory that people are interested in quality teaching and research and use reputation as a not-terribly-reliable proxy for it, because quality is too hard for most people to measure?

People in the US do not use prestige as a proxy for teaching quality, or at least doing so is quite inconsistent with their other claims. Everyone agrees that large research schools are bad at teaching and that at least some small schools are better. But very few people turn down Harvard to go to Williams, so they seem to admit to having priorities other than teaching.

There is more to learning than teaching, namely one's classmates. It may be a coordination issue: the good students want to be together, but it doesn't matter much where, so there are multiple equilibria; in France the undergrads want to go to different schools than the grad students do. (Robin's theory seems to predict that this shouldn't happen, but not terribly strongly.) ETA: also, in the US, liberal arts schools and state research universities largely flip prestige between the (undergrad) students and the faculty.

(Yes, it makes sense that journalists and grad students should look to prestige as a proxy for research.)

Replies from: taw
comment by taw · 2009-11-16T20:59:35.332Z

Everyone agrees that large research schools are bad at teaching and that at least some small schools are better.

"Everyone agrees" huh? Do you have any evidence for that? As far as I can tell correlation between prestige, research quality, and teaching quality is highly positive in Polish universities' computer science (that's the only kind I know closely, for everything else I would just guess their quality from their prestige).

Replies from: LauraABJ
comment by LauraABJ · 2009-11-16T21:17:08.463Z

I would say there is a general positive correlation between teaching quality, research quality, and prestige, with some exceptions for smaller schools that specifically focus on quality undergraduate education (like Princeton). But don't be fooled by a college claiming that its classes are better because they're smaller; you actually need to attend classes at both and compare. People can be very proud that their school has 'great' lectures in what I would consider high-school-level biology, simply because their professor is 'fun.'

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-11-20T03:53:01.275Z

But don't be fooled by a college claiming that its classes are better because they're smaller ... People can be very proud that their school has 'great' lectures in what I would consider high-school-level biology, simply because their professor is 'fun.'

It seems to me that you are objecting mainly to people caring about teaching quality rather than disagreeing with their assessment of it. Maybe people are fools to care about class size and student evaluations, but they do appear to care, unless I'm confusing consoling lies with actual advice.

Yes, curriculum matters, but that is predicted much more by student quality than by professor quality: the two kinds of prestige diverge.

comment by Matt_Simpson · 2009-11-16T02:34:04.957Z

See the rest of the quote. Also from Robin's post:

Relative to the Bayesians that academic economic theorists typically assume populate the world, real academics over-react or under-react to evidence, as needed to show respect for impressive academic displays. This helps assure the customers of academia that by affiliating with the most respected academics, they are affiliating with very impressive minds.

Replies from: taw
comment by taw · 2009-11-16T03:54:42.207Z

These look more like classical statistics vs. Bayesian statistics than anything status-related.

I haven't seen any science run in a Bayesian way (academic, commercial, or otherwise), and I have no idea what it would really look like, in spite of its theoretical appeal.
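
For concreteness, here is one guess at what a single analysis step "run in a Bayesian way" might look like: a conjugate Beta-Binomial update of a belief about an effect's success rate. The prior and the study counts below are invented purely for illustration; scaling this up to a whole field is of course the hard part.

```python
# A minimal sketch of one analysis step run "in a Bayesian way": a
# conjugate Beta-Binomial update of a belief about an effect's success
# rate. The prior and the study counts are invented for illustration.

a, b = 2.0, 2.0             # Beta(2, 2) prior: weakly centered on 0.5
successes, trials = 14, 20  # a hypothetical study's results

# Conjugate update: Beta(a, b) prior + Binomial data
# -> Beta(a + successes, b + failures) posterior.
a_post = a + successes
b_post = b + (trials - successes)

prior_mean = a / (a + b)
post_mean = a_post / (a_post + b_post)
print(f"prior mean {prior_mean:.2f} -> posterior mean {post_mean:.2f}")
```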

comment by gwern · 2009-11-16T15:50:08.418Z

...beliefs must stay fixed until an impressive enough theorem or data analysis comes along that beliefs should change out of respect for that new display. It also won’t do to keep beliefs pretty much the same when each new study hardly adds much evidence – that wouldn’t offer enough respect to the new display.

If we believe that the sciences are systematically irrational, then isn't this the rational thing to do? To wait for convincing, irrefutable evidence, and after a certain point treat confirmatory evidence as adding nothing?

If scientists are herd-followers, affiliating both socially and in response to institutional pressures, then after X studies showing a link between HIV and AIDS, say, study X+1 adds nothing, because the researchers running it know exactly what they're supposed to get and have no incentive to show the opposite unless they have irrefutably strong HIV!=AIDS evidence.

For perfect Bayesians, even murky or weak evidence shifts one's beliefs; but in the real world, murky or weak evidence against common wisdom just makes you look ideologically driven, or like a young Turk who wants publicity (any publicity at all). Knowing this, scientists will avoid weak evidence that is unpopular, which means that only the ideologically driven and the attention-seekers will publish it, which in turn reinforces other scientists' avoidance of weak unpopular evidence, in a feedback loop. So only very strong evidence will break through the noise of irrationality.
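
As a rough illustration of the gap being described here, consider a minimal sketch (all numbers invented) contrasting a Bayesian who updates on every weak study with an updater who ignores any study below an impressiveness threshold: the weak studies individually move beliefs only slightly, but they compound, while the threshold updater never moves at all.

```python
import math

# Sketch: a stream of weak, concordant studies, each with likelihood
# ratio 1.2 in favor of some hypothesis. A Bayesian nudges beliefs on
# every study; a "wait for an impressive display" updater ignores
# anything below a strength threshold. All numbers are invented.

def posterior(prior, likelihood_ratios):
    """Update a prior probability through a sequence of likelihood ratios."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

weak_studies = [1.2] * 10  # ten weak results pointing the same way

# Bayesian: each study moves the posterior a little, and it compounds.
print(f"Bayesian:  {posterior(0.5, weak_studies):.2f}")  # ~0.86

# Threshold updater: no single study clears the bar, so nothing moves.
impressive = [lr for lr in weak_studies if lr >= 3.0]
print(f"Threshold: {posterior(0.5, impressive):.2f}")    # 0.50
```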

Replies from: RobinHanson
comment by RobinHanson · 2009-11-16T18:38:28.926Z

This is the standard "herding" hypothesis: that public behavior ignores private signals once public signals have become lopsided enough.
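
For readers unfamiliar with the model, here is a minimal sketch of the herding dynamic, in the spirit of the standard information-cascade setup (Bikhchandani, Hirshleifer, and Welch 1992). It is a simplified vote-counting version rather than a fully Bayesian treatment, and all parameters are invented for illustration.

```python
import random

# A simplified vote-counting cascade: each agent receives a private
# binary signal that matches the true state with probability p, sees
# all earlier agents' public choices, and follows the larger tally
# (breaking ties with the private signal). Once the public tally is
# lopsided by two, no private signal can flip a choice: everyone herds.
# (A fully Bayesian agent would discount choices made inside a cascade;
# this sketch deliberately keeps the naive version.)

def run_cascade(true_state=1, p=0.6, n_agents=30, seed=0):
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p else 1 - true_state
        # Net public evidence: +1 per prior choice of 1, -1 per 0.
        public = sum(1 if c == 1 else -1 for c in choices)
        net = public + (1 if signal == 1 else -1)
        choices.append(1 if net > 0 else 0 if net < 0 else signal)
    return choices

print(run_cascade())  # after an early lopsided run, choices lock in
```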

Replies from: gwern
comment by gwern · 2009-11-17T00:16:59.923Z

Alas, there is nothing new under the sun. I'm guessing the herding hypothesis also says that only very strong private signals can override the public ones. So, if this is an old hypothesis well known to you, why would you then lament the herding? If herding is the case, then not updating (much) after a certain point gives you better results than continuing to update, doesn't it? And if it does, then wouldn't that 'win' and be the rational thing to do given the circumstances?

(Alternate question: if not-updating is rational, why resort to social signalling explanations for the not-updating? Social signalling may explain how the herding starts and perpetuates itself, but there's no need to drag it in as an explanation for not-updating as well.)

Replies from: jimmy
comment by jimmy · 2009-11-17T19:28:04.273Z

This looks like the "Science vs Bayes" distinction to me.

Science works hugely better than random crackpottery, but is also very far from optimal.

If you can't trust yourself to update on evidence, then go with science. If you can (you're here, aren't you?) then updating will leave you better off.

You can always limit yourself to updating in only the most obvious cases that science misses, and do marginally better.

Replies from: gwern
comment by gwern · 2009-11-17T22:11:02.073Z

You can always limit yourself to updating in only the most obvious cases that science misses, and do marginally better.

No doubt this is what many scientists do: 'this is what I really think, but I'll admit it's not generally accepted.' But I'd put the emphasis on updating only in the obvious cases and otherwise trusting in science, because how many areas of science can one really know well enough to do better than the subject-area consensus?

comment by RobinHanson · 2009-11-16T01:03:04.829Z

Academic engineers can be useful, but so can social scientists, if they so choose. The point is that academics have other pressures besides being useful, and this can apply to engineers as well as social scientists. Non-academic engineers and economists must both be useful somehow to someone, but that is a different matter.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2009-11-16T02:47:37.239Z

Do you think the effect of the 'other pressures' academics feel is the same for all disciplines? Or are there other factors that increase or decrease that effect?

Replies from: RobinHanson
comment by RobinHanson · 2009-11-16T03:00:31.762Z

It is unlikely to be exactly the same, but it seems hard to measure the differences. Fields in which academics are more often hired not for their prestige but for their directly useful knowledge tend, I think, to be less prestigious fields, so I'd guess that might be one clue, though a weak one.