Why philosophy of science?

post by Richard_Ngo (ricraz) · 2020-11-07T11:10:02.273Z · LW · GW · 2 comments
During my last few years working as an AI researcher, I increasingly came to appreciate the distinction between what makes science successful and what makes scientists successful. Science works because it has distinct standards for what types of evidence it accepts, with empirical data strongly prioritised. But scientists spend a lot of their time following hunches which they may not even be able to articulate clearly, let alone in rigorous scientific terms - and throughout the history of science, this has often paid off. In other words, the types of evidence which are most useful in choosing which hypotheses to prioritise can differ greatly from the types of evidence which are typically associated with science. In particular, I’ll highlight two ways in which this happens.

The first is scientists thinking in terms of concepts which fall outside the dominant paradigm of their science. That might be because those concepts are too broad, or too philosophical, or too interdisciplinary. For example, machine learning researchers are often inspired by analogies to evolution, or beliefs about human cognition, or issues in philosophy of language - all of which are very hard to explore deeply in a conventional machine learning paper! Often such ideas are mentioned briefly in papers, perhaps in the motivation section - but there's not the freedom to analyse them with the level of detail and rigour that is required for making progress on tricky conceptual questions.

The second is that scientists often have strong visions for what their field could achieve, and long-term aspirations for their research. These ideas can make a big difference to which subfields or problems those researchers focus on. In the case of AI, some researchers aim to automate a wide range of tasks, or to understand intelligence, or to build safe AGI. Again, though, these aren't ideas which the institutions and processes of the field of AI are able to thoroughly discuss and evaluate - instead, they are shared and developed primarily in informal ways.

Now, I’m not advocating for these ideas to be treated the same as existing scientific research - I think norms about empiricism are very important to science’s success. But the current situation is far from ideal. As one example, Rich Sutton’s essay on the bitter lesson in AI was published on his blog, and then sparked a fragmented discussion on other blogs and personal facebook walls. Yet in my opinion this argument about AI, which draws on his many decades of experience in the field, is one of the most crucial ideas for the field to understand and evaluate properly. So I think we need venues for such discussions to occur in parallel with the process of doing research that conforms to standard publication norms.

One key reason I'm currently doing a PhD in philosophy is that I hope philosophy of science can provide one such venue for addressing important questions which can't be explored very well within scientific fields themselves. To be clear, I'm not claiming that this is the main focus of philosophy of science - there are many philosophical research questions which, to me and most scientists, seem misguided or confused. But the remit of philosophy of science is broad enough to allow investigations of a wide range of issues, while also rewarding thorough and rigorous analysis. So I'm excited about the field's potential to bring clarity and insight to the high-level questions scientists are most curious about, especially in AI. Even if this doesn't allow us to resolve those questions directly, I think it will at least help to tease out different conceptual possibilities, and thereby make an important contribution to scientific - and human - progress.

2 comments


comment by adamShimi · 2020-11-07T22:53:44.040Z · LW(p) · GW(p)

I'm slightly disappointed, because I thought you were going to write a long post on philosophy of science applied to AI. But this one is still a pretty good post.

On the content, I completely agree with you. Moreover, I think there is a role for philosophy of science that you don't mention, one that is actually important in AI: grounding the kind of knowledge produced by computer science.

I mean, you're using the word science here to talk about AI, but computer science, despite its name, is not a natural science in the classic sense of the term. Studying computation is not the same thing as studying lightning: the latter is an actual physical phenomenon, whereas the former is an abstraction. Despite some attempts, research in computer science doesn't follow the scientific method. Even in the empirical study of, say, neural networks, the objects of study are built and tweaked as we go along; they are not a pre-existing phenomenon.

But computation is not just an abstraction in the pure-mathematics sense of the word either. Most researchers in computer science I know (me included) think that our research, however abstract it might be, actually tells us something about the physical world. So computer science is not a natural science, in the sense that studying computation doesn't fall easily under the scientific method; but it's also not a domain of pure mathematics, because it claims to say something about the world we live in.

One way to think about it is that computer science studies which feats of engineering are possible, and which aren't. But I'm not completely satisfied with this, because it leaves out the sense in which computation, or learning in ML, seems to capture something fundamental happening in the physical world.

Why do I think this question is of similar importance to the ones you mention in your post? Because knowing what kind of knowledge a field of research produces, and what it can be useful for, is fundamental to using the fruits of that research. This matters even more for AI and AI safety, where knowing what the research we produce actually means might make the difference between having the guarantees we want and just fooling ourselves into thinking that we have them.

comment by AnthonyC · 2020-11-09T18:58:17.228Z · LW(p) · GW(p)

> But I'm not completely satisfied with this, because it leaves out the sense in which computation, or learning in ML, seems to capture something fundamental happening in the physical world.

> Why do I think this question is of similar importance to the ones you mention in your post? Because knowing what kind of knowledge a field of research produces, and what it can be useful for, is fundamental to using the fruits of that research.

It seems to me that computer science is ontologically prior to physics, but not by as much as mathematics is - in kinda the same way that statistical mechanics and thermodynamics are (but maybe a little further up the chain of abstraction). The laws of thermodynamics hold in a large class of possible universes with a wide range of possible physical laws, but very far from all possible universes with all possible laws. If computer scientists are studying something physical, maybe it is something about constraints on the general mathematical character of physical law in our universe; that seems to be how the implications cash out for quantum information theory, too.