Extended Quote on the Institution of Academia
post by Ben Pace (Benito) · 2018-03-01T02:58:11.159Z · LW · GW · 23 comments
From the top-notch 80,000 Hours podcast, and their recent interview with Holden Karnofsky (Executive Director of the Open Philanthropy Project).
What follows is a short analysis of what academia does and doesn't do, followed by a few discussion points from me at the end. I really like this frame; I'll likely use it in conversation in the future.
Robert Wiblin: What things do you think you’ve learned, over the last 11 years of doing this kind of research, about in what situations you can trust expert consensus and in what cases you should think there’s a substantial chance that it’s quite mistaken?
Holden Karnofsky: Sure. I mean I think it’s hard to generalize about this. Sometimes I wish I would write down my model more explicitly. I thought it was cool that Eliezer Yudkowsky did that in his book, Inadequate Equilibria. I think one thing that I especially look for, in terms of when we’re doing philanthropy, is I’m especially interested in the role of academia and what academia is able to do. You could look at corporations, you can understand their incentives. You can look at governments, you can sort of understand their incentives. You can look at think-tanks, and a lot of them are just like … They’re aimed directly at governments, in a sense. You can sort of understand what’s going on there.
Academia is the default home for people who really spend all their time thinking about things that are intellectual, that could be important to the world, but that there’s no client who is like, “I need this now for this reason. I’m making you do it.” A lot of the times, when someone says, “Someone should, let’s say, work on AI alignment or work on AI strategy or, for example, evaluate the evidence base for bed nets and deworming, which is what GiveWell does … ” A lot of the time, my first question, when it’s not obvious where else it fits, is would this fit into academia?
This is something where my opinions and my views have evolved a lot, where I used to have this very simplified view: “Academia. That’s like this giant set of universities. There’s a whole ton of very smart intellectuals who can do everything. There’s a zillion fields. There’s a literature on everything, as has been written on Marginal Revolution, all that sort of thing.” I really never knew when to expect that something was going to be neglected and when it wasn’t, and it takes a giant literature review to figure out which is which.
I would say I’ve definitely evolved on that. Today, when I think about what academia does, I think it is really set up to push the frontier of knowledge, especially in the harder sciences. I would say the vast majority of what is going on in academia is people trying to do something novel, interesting, clever, creative, different, new, provocative, that really pushes the boundaries of knowledge forward in a new way. I think that’s obviously a really important and great thing. I’m really, incredibly glad we have institutions to do it.
I think there are a whole bunch of other activities that are intellectual, that are challenging, that take a lot of intellectual work and that are incredibly important and that are not that. They have nowhere else to live. No one else can do them. I’m especially interested, and my eyes especially light up, when I see an opportunity to … There’s an intellectual topic, it’s really important to the world but it’s not advancing the frontier of knowledge. It’s more figuring out something in a pragmatic way that is going to inform what decision makers should do, and also there’s no one decision maker asking for it as would be the case with governments or corporations.
To give examples of this, I mean I think GiveWell is the first place where I might have initially expected that development economics was going to tell us what the best charities are. Or, at least, tell us what the best interventions are. Tell us, out of bed nets, deworming, cash transfers, agricultural extension programs, and education improvement programs, which ones are helping the most people for the least money. There’s really very little work on this in academia.
A lot of times, there will be one study that tries to estimate the impact of deworming, but very few or no attempts to really replicate it. It’s much more valuable to academics to have a new insight, to show something new about the world, than to try and nail something down. It really got brought home to me recently when we were doing our Criminal Justice Reform work and we wanted to check ourselves. We wanted to check this basic assumption that it would be good to have less incarceration in the US.
David Roodman, who is basically the person that I consider the gold standard of a critical evidence reviewer, someone who can really dig into a complicated literature and come up with the answers, did what I think was a really wonderful and really fascinating paper, which is up on our website, where he looked for all the studies on the relationship between incarceration and crime, and what happens if you cut incarceration: do you expect crime to rise, to fall, to stay the same? He really picked them apart. What happened is he found a lot of the best, most prestigious studies, and in about half of them he found fatal flaws when he just tried to replicate them or redo their conclusions.
When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, and came out with a conclusion that was different from what you naively would have thought: his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It’s gotten rejected from a large number of journals by now [laughter]. I mean starting with the most prestigious ones and then going to the less prestigious.
Robert Wiblin: Why is that?
Holden Karnofsky: Because his paper, it’s really, I think, it’s incredibly well done. It’s incredibly important, but there’s nothing in some sense, in some kind of academic taste sense, there’s nothing new in there. He took a bunch of studies. He redid them. He found that they broke. He found new issues with them, and he found new conclusions. From a policy maker or philanthropist perspective, all very interesting stuff, but did we really find a new method for asserting causality? Did we really find a new insight about how the mind of a perpetrator works? No. We didn’t advance the frontiers of knowledge. We pulled together a bunch of knowledge that we already had, and we synthesized it. I think that’s a common theme is that, I think, our academic institutions were set up a while ago, and they were set up at a time when it seemed like the most valuable thing to do was just to search for the next big insight.
These days, they’ve been around for a while. We’ve got a lot of insights. We’ve got a lot of insights sitting around. We’ve got a lot of studies. I think a lot of the times what we need to do is take the information that’s already available, take the studies that already exist, and synthesize them critically and say, “What does this mean for what we should do? Where we should give money, what policy should be.”
I don’t think there’s any home in academia to do that. I think that creates a lot of the gaps. This also applies to AI timelines where it’s like there’s nothing particularly innovative, groundbreaking, knowledge frontier advancing, creative, clever about just… It’s a question that matters. When can we expect transformative AI and with what probability? It matters, but it’s not a work of frontier advancing intellectual creativity to try to answer it.
A very common theme in a lot of the work we advance is instead of pushing the frontiers of knowledge, take knowledge that’s already out there. Pull it together, critique it, synthesize it and decide what that means for what we should do. Especially, I think, there’s also very little in the way of institutions that are trying to anticipate big intellectual breakthroughs down the road, such as AI, such as other technologies that could change the world. Think about how they could make the world better or worse, and what we can do to prepare for them.
I think historically when academia was set up, we were in a world where it was really hard to predict what the next scientific breakthrough was going to be. It was really hard to predict how it would affect the world, but it usually turned out pretty well. I think for various reasons, the scientific landscape may be changing now where it’s … I think, in some ways, there are arguments it’s getting easier to see where things are headed. We know more about science. We know more about the ground rules. We know more about what cannot be done. We know more about what probably, eventually can be done.
I think it’s somewhat of a happy coincidence so far that most breakthroughs have been good. To say, I see a breakthrough on the horizon. Is that good or bad? How can we prepare for it? That’s another thing academia is really not set up to do. Academia is set up to get the breakthrough. That is a question I ask myself a lot: here’s an intellectual activity. Why can’t it be done in academia? These days, my answer is if it’s really primarily of interest to a very cosmopolitan philanthropist trying to help the whole future, and there’s no one client and it’s not frontier advancing, then I think that does make it pretty plausible to me that there’s no one doing it. We would love to change that, at least somewhat, by funding what we think is the most important work.
Robert Wiblin: Something that doesn’t quite fit with that is that you do see a lot of practical psychology and nutrition papers that are trying to answer questions that the public have. Usually done very poorly, and you can’t really trust the answers. But, it’s things like, you know, “Does chocolate prevent cancer?” Or, some nonsense … a small sample paper like that. That seems like it’s not pushing forward methodology, it’s just doing an application. How does that fit into this model?
Holden Karnofsky: Well, I mean, first up, it’s a generalization. So, I’m not gonna say it’s everything. But, I will also say, that stuff is very low prestige.
And, I think it tends … so first off, I mean, A: that work, it’s not the hot thing to work on, and for that reason, I think, correlated with that you see a lot of work that isn’t … it’s not very well funded, it’s not very well executed, it’s not very well done, it doesn’t tell you very much. The vast majority of nutrition studies out there are just … you know, you can look at even a sample report on carbs and obesity that Luke Muehlhauser did, it just … these studies are just … if someone had gone after them a little harder with the energy and the funding that we go after some of the fundamental stuff, they could have been a lot more informative.
And then, the other thing, that I think you will see even less of, is good critical evidence reviews. So, you’ll see a study … so, you’re right, you’ll see a study that’s, you know, “Does chocolate cause more disease?” Or whatever, and sometimes that study will use established methods, and it’s just another data-point. But, the part about taking what’s out there and synthesizing it all, and saying, “There’s a thousand studies, here are the ones that are worth looking at. Here are their strengths, here are their weaknesses.”
There are literature reviews, but I don’t think they’re a very prestigious thing to do, and I don’t think they’re done super well. And so, I think, for example, some of the stuff GiveWell does, it’s like they have to reinvent a lot of this stuff, and they have to do a lot of the critical evidence reviews ’cause they’re not already out there.
The most interesting parts of this to me were:
- Since reading Inadequate Equilibria, I've mostly thought of science through the lens of coordination failures; this new framing is markedly more positive, talking about the successes instead of the failures (Old: "Academia is the thing that fails to do X" vs New: "Academia is the thing that is good at Y, but only Y"). As well as helping me model academia more fruitfully, I honestly suspect that this framing will be more palatable to the people I present it to.
- To state it in my own words: this model of science says the institution is good - not at all kinds of intellectual work, but specifically at the subset that is 'discovering new ideas'. This is to be contrasted with synthesis of old ideas into policy recommendations, or replication of published work (for any practical purpose).
- For example within science it is useful to have more data about which assumptions are actually true in a given model, yet I imagine that in this frame, no individual researcher is incentivised to do anything but publish the next new idea, and so nobody does the replications either. (I know, predicting a replication crisis is very novel of me.)
- This equilibrium model suggests to me that we're living in a world where the individual who can pick up the most value is not the person coming up with new ideas, but the person who can best turn current knowledge into policy recommendations.
- That is, the 80th percentile person at discovering new ideas will not create as much value as the 50th percentile person at synthesising and understanding a broad swathe of present ideas.
- My favourite example of such a work is Scott's Marijuana: Much More Than You Wanted to Know, which finds that the term that should capture most of the variance in your model (of the effects of legalisation) is how much marijuana affects driving ability.
- Also in this model of science, we should distinguish 'value' from 'competitiveness within academia'; the latter is in fact the very thing you would be trading away in order to do this work.
Some questions for the comments:
- What is the main thing that this model doesn't account for / overcounts? That is, what is the big thing this model forgets that science can't do; alternatively, what is the big thing that this model says science can do, that it can't?
- Is the framing about the main place an intellectual can have outsized impact correct? That is, is the marginal researcher who does synthesis of existing knowledge in fact the most valuable, or is it some other kind of researcher?
23 comments
comment by Qiaochu_Yuan · 2018-03-01T19:01:03.630Z · LW(p) · GW(p)
The framing of academia as primarily attempting to optimize for novelty sounds right to me. This is the biggest beef I have with academic mathematics, where novelty never feels to me, personally, like the interesting thing to strive for. I'd much rather distill; that's the actual skill I've been practicing, and unfortunately it's both badly needed and basically not rewarded at all.
↑ comment by alkjash · 2018-03-01T22:08:22.157Z · LW(p) · GW(p)
I don't see this as an unnoticed problem in academia. Tenure helps with this problem, and prominent mathematicians usually spend years distilling decades of progress into books. If anything, there's an overabundance of new math books that very few people read. Similarly, survey papers are appreciated these days: there are dedicated journals for expository material, and they usually get an enormous number of citations.
It is true that graduate students are not taught or incentivized to do distilling work, but it's not obvious to me where the problem lies.
↑ comment by Qiaochu_Yuan · 2018-03-01T23:29:11.730Z · LW(p) · GW(p)
Well, for starters, I wish I'd been allowed to write a PhD thesis consisting of distillation instead of original research. As far as I know this was never an option and I'm annoyed about that.
↑ comment by Jayson_Virissimo · 2018-03-12T21:52:59.507Z · LW(p) · GW(p)
Many master's degrees have thesis and non-thesis tracks. What if PhD programs had a track for searchers (equivalent to currently existing PhDs) and separate tracks for replicators and distillers? They could even have different (but still positive) stereotypical virtues (like creativity, rigor, and clarity, respectively).
comment by Jan_Kulveit · 2018-03-01T23:44:43.142Z · LW(p) · GW(p)
- I think some nuance can be useful. Contemporary science is quite good at pushing boundaries in existing research directions, somewhat less good at creating new directions. (Once a field becomes larger than some critical size, it is OK.)
- Academia (not science itself) in fact does a lot of synthesizing, distillation and aggregation work. Usually in the form of creating courses and writing textbooks. Some of the inadequacies are
- Usually this happens at some later stage of researchers' careers, and people are not trained to do it
- Knowledge up to something like a "mainstream graduate-level course" is usually distilled and refactored into some user-friendly format somewhere. Beyond that, it gets much worse.
- New fields fare even worse than in science
- None of this is usually done with policy in mind
(Btw this model leads to an actionable suggestion - if you want more aggregation work done in a field by academia, cause it to be taught somewhere.)
I think one important thing which science and academic institutions do is provide spaces/jobs where people can have time to think about whatever they want, speak with smart people, and have open conversations. (From some anecdotal evidence I would guess this is more the case at some less prestigious places than at the top places where the daemons of competition rule over people without tenure with all their might.)
comment by habryka (habryka4) · 2018-03-05T21:57:45.222Z · LW(p) · GW(p)
Promoted to curated for the following reasons:
- Brings in an interesting outside perspective that I hadn't properly considered before
- Does a good job at extracting and distilling valuable insights from a very long audio-interview
- The comments have good discussion
- I think the question of understanding the institution of academia is a very important one, and one where I can imagine a lot of valuable progress being made on LW. And this post gives some good framing and contrast to existing discussions of the topic.
In general, I think that book reviews and excerpts from podcasts are a pretty good way to create valuable posts, with maybe a lower level of effort than writing things from scratch, and so I'd guess that the world would be better if more people did this.
comment by Said Achmiz (SaidAchmiz) · 2018-03-01T03:14:03.314Z · LW(p) · GW(p)
First, this is an excellent post, both in terms of the content proper, and also because I deeply appreciate transcriptions of interesting podcasts / videos / etc. (which I never listen to or watch).
Second, this:
… no individual researcher is incentivised to do anything but publish the next new idea, and so nobody does the replications either.
… is entirely consistent with my own (limited, but instructive) experiences in academia / research (in the field of HCI—where just 3% of all papers are replications, and where ‘novel’ is a sine qua non of publishability).
Is the framing about the main place an intellectual can have outsized impact correct? That is, is the marginal researcher who does synthesis of existing knowledge in fact the most valuable, or is it some other kind of researcher?
In my opinion, and in the field(s) with which I am familiar: yes, this framing is entirely correct. Synthesis is tremendously valuable, there is far too little of it, and that fact is unlikely to change anytime soon within academia itself. Systematizing, synthesizing, replicating, etc., absolutely is the royal road to outsized impact.
↑ comment by Ben Pace (Benito) · 2018-03-01T03:46:11.550Z · LW(p) · GW(p)
Thanks! I realise now that it appears I've taken credit for someone else's work - as you can see in the link at the top, the folks at 80,000 Hours actually transcribe their own podcasts.
(That said, I did sit and listen to this whole section with the transcript, and fixed a number of small errors.)
comment by cousin_it · 2018-03-01T08:42:51.277Z · LW(p) · GW(p)
This is mostly about altruistic goals, right? For self-interested goals I expect decision makers are already paying researchers the correct price for literature reviews, and it's just low. Or do we have evidence that it's irrationally low?
↑ comment by Kaj_Sotala · 2018-03-01T15:12:17.149Z · LW(p) · GW(p)
Can you give an example of self-interested goals in the sense that you mean here?
To me thinking about this in those terms makes me assume that "self-interested goals = caring about things that help you get re-elected", which would put the correct price for literature reviews around zero. And while "self-interested decision-makers rationally ignore scientific information as useless for them, except when cherry-picking results that fit their agenda" would fit your formulation of "the correct price for literature reviews is low", I'm not sure if that's the thing you had in mind.
↑ comment by Jayson_Virissimo · 2018-03-12T21:58:44.549Z · LW(p) · GW(p)
I'm not sure if it's what cousin_it had in mind, but here's an example: Rather than visiting a doctor again for the same bad advice on how to treat my plantar fasciitis, I paid this guy for (what is essentially) a literature review of the current state of the scientific evidence as to the relative effectiveness of available treatments.
↑ comment by Ben Pace (Benito) · 2018-03-01T14:16:59.849Z · LW(p) · GW(p)
(I feel likely to make a mistake in my reasoning here, but #BetterTriedAndFailedThanNotTriedAtAll)
Given the model where scientists are not trained in synthesis but generation, it doesn't seem clear (to me) that there's a standard training programme for this sort of work, nor qualifications, so I don't know that e.g. businesses would be in a good position to hire for it.
The model probably also predicts that solving it would require moving from the current Nash equilibrium of having neither good professors teaching this nor good students trying to learn it, to a situation where you simultaneously have both (because a training programme will not grow/sustain without both).
↑ comment by ChristianKl · 2018-03-03T05:41:55.079Z · LW(p) · GW(p)
Politicians seem to overvalue the services that political consultants provide to them, when the academic research literature suggests that spending money that way has far less impact than politicians and the general public believe.
As a result, the Democratic Party under Obama focused its resources very badly, lost a lot of seats, and then Hillary lost the election to Trump.
The Republicans did invest resources into the Tea Party, which seems to work better than TV ads over longer timeframes, but that also wasn't because of literature reviews suggesting they should put the money there.
comment by ChristianKl · 2018-03-03T10:32:09.299Z · LW(p) · GW(p)
If you look at the field of nutrition, you still have researchers measuring obesity through BMI, which is a crappy metric. The system seems to fail at developing better metrics. It feels to me like part of the problem is that most researchers focus on epistemology and ignore ontology as a topic to be concerned about.
Part of why the DSM-V is so messy is that psychologists don't take ontology seriously. Developing better ontology for a field seems to go poorly in large parts of academia.
When it comes to the word "novel", it's worth clarifying what it means. There are a lot of new insights that don't seem interesting to an existing academic discourse and thus have no place in it.
Sydney Brenner, who got a Nobel Prize for being instrumental in establishing molecular biology as a discipline, said in an interview that he thinks it would be very hard to do something similar in today's academia.
comment by MondSemmel · 2018-03-03T17:51:34.121Z · LW(p) · GW(p)
Thanks for this post - contrasting the models from EY and Holden seemed useful to me.
I was somewhat confused at one paragraph of yours:
"From reading Inadequate Equilibria, I mostly thought of science through the lens of coordination failures, and this framing was markedly more positive than the one I'd previously had ("Academia is the thing that fails to do X" vs "Academia is the thing that is good at Y, but only Y"). As well as helping me model academia more fruitfully, I honestly suspect that this framing will be more palatable [...]."
-> Throughout that paragraph, I was never sure whether by the two mentions of "this framing" you meant the one by Holden or by EY. After rereading it several times, I think you mean Holden's framing is more positive?
(Also, I'm not sure which part in the parentheses corresponds to which framing - is the 'fails to do X' framing EY's or Holden's?)
↑ comment by ESRogs · 2018-03-06T01:01:15.196Z · LW(p) · GW(p)
Not Ben Pace, but I'm pretty confident that this is what he meant:
and this framing was markedly more positive than the one I'd previously had
Holden's framing was more positive than EY's framing.
"Academia is the thing that fails to do X" vs "Academia is the thing that is good at Y, but only Y"
EY's framing: "Academia is the thing that fails to do X"
Holden's framing: "Academia is the thing that is good at Y, but only Y"
↑ comment by Ben Pace (Benito) · 2018-03-06T10:09:18.877Z · LW(p) · GW(p)
Yup, and thanks for speaking up MondSemmel - I'll try to clean up the OP [Edit: have made an edit].
comment by habryka (habryka4) · 2019-11-29T20:47:06.289Z · LW(p) · GW(p)
This post gave me a really concrete model of academia and its role in society, in a way that I've extensively built on since then, for a lot of my thinking on LessWrong but also for the broader problem of how to distill and combine knowledge for large groups of people.
comment by AnthonyC · 2018-03-30T15:07:46.111Z · LW(p) · GW(p)
Parts of this kind of work sound analogous to the kind of work consulting and market research and tech scouting firms do. I work at such a company with a technology focus (my own background is in materials science and physics). Basically my job is to review literature, talk to the people and companies inventing things, and distill it down to, "Here's what's happening, here's the timeline it's happening on, here's our best estimates as to how much impact it will have on each other thing we could think of, here are the assumptions driving it, and we've condensed/crystallized it into as few pages or paragraphs as possible but are happy to talk in more detail as needed." Our individual clients are companies, or investors, or governments, who don't have incentive enough to each do it all themselves, but collectively they are all willing to each spend a little on it getting done.
I'm not saying the same types of companies are well suited to the specific goals outlined in the OP. But I *do* think the skill sets involved overlap, and you might want to look at those kinds of companies for people who know how to go about answering these kinds of questions in ways that also get various stakeholders interested enough to contribute actual money. Once done, and once you reach a wide enough audience, such companies' brands and predictions also start to form anchoring points that everyone in a field at least recognizes and respects.
Similarly, the hard but still important work of filling in critical details, or filling in a common groundwork, for any intellectual or technological problem with eventual real-world impact, often happens not in universities but in consortia involving companies, governments, national labs, universities, and start-ups. It's a massive coordination challenge, though, to do it well.
comment by DanB · 2018-03-02T22:51:34.820Z · LW(p) · GW(p)
Holden is a smart guy, but he's also operating under a severe set of political constraints, since his organization depends so strongly on its ability to raise funds. So we shouldn't make too much of the fact that he thinks academia is pretty good - obviously he's going to say that.
↑ comment by ESRogs · 2018-03-03T04:17:34.423Z · LW(p) · GW(p)
since his organization depends so strongly on its ability to raise funds
This doesn't quite seem like an accurate description of the situation, given that his org is trying to give away billions of dollars. Don't disagree that it's in his interest to choose his words carefully though.