Philosophy that can be "taken seriously by computer scientists"

post by lukeprog · 2011-12-27T02:39:13.616Z · LW · GW · Legacy · 15 comments

I've long held CMU's philosophy department in high regard. One of their leading lights, Clark Glymour, recently published a short manifesto, which Brian Leiter summed up as saying that "the measure of value for philosophy departments is whether they are taken seriously by computer scientists."

Selected quote from Glymour's manifesto:

Were I a university administrator facing a contracting budget, I would not look to eliminate biosciences or computer engineering. I would notice that the philosophers seem smart, but their writings are tediously incestuous and of no influence except among themselves, and I would conclude that my academy could do without such a department... But not if I found that my philosophy department retrieved a million dollars a year in grants and fellowships, and contained members whose work is cited and used in multiple subjects, and whose faculty taught the traditional subject well to the university’s undergraduates.

Also see the critique here. That said, I'd love to have Glymour working on FAI.

15 comments

Comments sorted by top scores.

comment by JonathanLivengood · 2011-12-29T01:51:53.624Z · LW(p) · GW(p)

Full disclosures below. *

I agree with much of Glymour's manifesto, but I think the passage quoted would have been better left on the cutting-room floor. One reason is given in the critique you link: lots of philosophy gets grants and citations and employment in diverse areas around the academy and elsewhere. Not all of it gets noticed in science or furthers a scientific project, even broadly construed. For example, John Hawthorne just won a multi-million dollar grant to do work in epistemology of religion, and a couple of years ago, Alfred Mele won a multi-million dollar grant to do more work on free will. I doubt that Glymour thinks either of these projects has the virtues of the work of his CMU colleagues. But by the "grant-winning" standard, administrators should love this sort of philosophy. By a sales or readership standard, administrators ought to be encouraging more pop-culture and philosophy schlock.

Another reason is given by Glymour in the same manifesto:

A real use of philosophy departments is to provide shelter for such thinkers [who are, at least initially, outsiders to the science of the day, people who will take up questions that may have been made invisible to scientists because of disciplinary blinkers], and in the long run they may be the salvation of philosophy as an academic discipline.

So, a good use for philosophy departments is to shelter iconoclastic thinkers who are not going to be either understood or appreciated by contemporary scientists. How are such people going to be successful grant-winners? I can see how they might successfully publish within philosophy, given a certain let-every-flower-bloom attitude in philosophy. And I can see how some philosophers might end up convincing some scientists to take their work seriously enough to fund it ... eventually. But surely, some of Glymour's iconoclasts will be missed or ignored in the grant-giving process. Better, I think, to have some places for people to think whatever they want to think and be supported in that thinking so that they do not have to panic about meeting the basic necessities of life. If that means having to put up with literary criticism, then so be it.

  • Disclosures. I did my dissertation under Peter Spirtes, and I've taken many enjoyable classes with Clark Glymour. I think Clark is an excellent person, and he is one of my philosophical heroes, although I don't think I do a very good job of emulating him.
comment by cousin_it · 2011-12-27T16:08:52.704Z · LW(p) · GW(p)

The manifesto has a nice paragraph where Glymour lists the contributions of many mathematical philosophers. This might be relevant to UDT:

Philosophers and statisticians alike want to posit probabilities over sentences, but how would that work with a language adequate to science and mathematics, say first order logic? Haim Gaifman told us, and worked out the implications for what is and what is not learnable.

comment by lukeprog · 2011-12-27T16:34:28.516Z · LW(p) · GW(p)

Yup. This is why I was so surprised in January 2011 that Less Wrong had never before mentioned formal philosophy, which is the branch of philosophy most relevant to the open research problems of Friendly AI. See, for example, Self-Reference and the Acyclity of Rational Choice or Reasoning with Bounded Resources and Assigning Probabilities to Arithmetical Statements.

comment by cousin_it · 2011-12-27T17:10:27.484Z · LW(p) · GW(p)

Thanks for the links. I just read those two papers and they don't seem to be saying anything new to me :-(

comment by JonathanLivengood · 2011-12-29T01:22:34.028Z · LW(p) · GW(p)

In your linked piece, you were talking about formal epistemology. Here you say "formal philosophy." Is that a typo, or do you think that formal epistemology exhausts formal philosophy? (I would hope not the latter, since lots of formal work gets done in philosophy outside epistemology!)

comment by lukeprog · 2011-12-29T04:25:17.434Z · LW(p) · GW(p)

Formal epistemology is a subfield within formal philosophy, probably the largest.

comment by JonathanLivengood · 2011-12-29T06:13:42.705Z · LW(p) · GW(p)

Larger than logic? Hmm ... maybe you're thinking about "formal philosophy" in a way that I am unfamiliar with.

comment by Will_Newsome · 2011-12-28T20:04:49.615Z · LW(p) · GW(p)

This is pretty much unrelated, but do you think maybe you could write a short post about the relevance of algorithmic probability for human rationality? There's this really common error 'round these parts where people say a hypothesis (e.g. God, psi, etc.) is a priori unlikely because it is a "complex" hypothesis according to the universal prior. Obviously the "universal prior" says no such thing; people are just taking whatever cached category of hypotheses they think are more probable for other, unmentioned reasons and then labeling that category "simple", which might have to do with coding theory but has nothing to do with algorithmic probability. Since this appeal to simplicity is one of the most common attempted argument stoppers, it might benefit the local sanity waterline to discourage this error. Fewer "priors", more evidence.
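[The distinction above can be sketched in a few lines: under a universal-prior-style weighting, a hypothesis's prior is set by the length of its shortest description in some fixed language, not by which intuitive category we file it under. The hypothesis names and bit lengths below are invented purely for illustration.]

```python
def universal_style_prior(description_lengths):
    """Weight each hypothesis by 2**-K (K = shortest description
    length in bits under a fixed language), then normalize.
    This mirrors the form of the universal prior; real Kolmogorov
    complexities are uncomputable, so the lengths are stipulated."""
    weights = {h: 2.0 ** -k for h, k in description_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical shortest-description lengths, in bits.
lengths = {"A": 10, "B": 12, "C": 20}
prior = universal_style_prior(lengths)
# Each extra bit of description length halves the prior weight,
# regardless of what label ("simple", "supernatural") we attach.
```

The point of the sketch: the ordering comes entirely from the stipulated description lengths, so calling a hypothesis "complex" without actually exhibiting a length argument does no work.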

ETA: I feel obliged to say that though algorithmic probability isn't that useful for describing humans' epistemic states, it's very useful for talking about FAI ideas; it's basically a tool for transforming indexical information about observations into logical information about programs and also proofs, thanks to the Curry-Howard isomorphism, which is pretty cool, among other reasons it's cool.

comment by cousin_it · 2011-12-28T20:43:06.055Z · LW(p) · GW(p)

I already have a post about that. Unfortunately I screwed up the terminology and was rightly called on it, but the point of the post is still valid.

comment by Will_Newsome · 2011-12-28T20:52:44.605Z · LW(p) · GW(p)

Thanks. I actually found your amendment more enlightening. Props again for your focus on the technical aspects of rationality, stuff like that is the saving grace of LW.

comment by torekp · 2011-12-28T03:19:56.208Z · LW(p) · GW(p)

As Luke recently pointed out,

As Bellman (1961) said, "the very construction of a precise mathematical statement of a verbal problem is itself a problem of major difficulty."

But verbal restatements of verbal problems often, even typically, precede and facilitate the construction of this golden mathematical trophy. These portions of philosophy, which are the bulk of it, might easily fail to impress the computer scientists. But without them, progress in formal philosophy would be slower.

comment by vallinder · 2011-12-27T18:23:36.768Z · LW(p) · GW(p)

For those interested, the CMU philosophy department organizes an annual summer school in logic and formal epistemology.

comment by Bruno_Coelho · 2011-12-28T01:10:41.883Z · LW(p) · GW(p)

It's interesting that some of the humanities -- and particularly areas of philosophy -- are constantly defending their research program, or the value of the discipline as a whole. Apparently, folks in other segments of academia want to see something useful. But it's not so sad: in some cases the dialogue can happen, for example in formal epistemology, with the attempt to mix Bayesianism with conceptual analysis, trying to formalize concepts like 'coherence'.

comment by Cthulhoo · 2011-12-27T09:19:06.512Z · LW(p) · GW(p)

the measure of value for philosophy departments is whether they are taken seriously by computer scientists

I would roughly generalize to "scientists". There is a need for people armed with both the tools of philosophy and of science to discuss the meaning of many discoveries of the 20th and 21st centuries: usually scientists are too narrowly focused and philosophers are not sufficiently well prepared. Nice to know that there are some exceptions (trusting you on this, I still have to go through the links).

comment by NCoppedge · 2012-02-13T18:36:54.372Z · LW(p) · GW(p)

My upcoming book, 1-Page-Classics, gives examples of a kind of "reduced" Bayesianism in the form of a one-pager called "Traditional Claims" and another called "Modal Realism."

The book might also be interesting for virtue ethics, in the form of abbreviations of the famous scroll "The Mandate of Heaven," Confucius' "Analects," and Lao Tzu's "Tao Te Ching."

I also abbreviate Epictetus' "Enchiridion" in a creative fashion, and "Republic of Plato" includes an excellent form of sophist criticism of that project (poetry, the ring of Gyges, etc.).