If the AI actually ends up with strong evidence for a scenario it assigned super-exponential improbability, the AI reconsiders its priors and the apparent strength of evidence rather than executing a blind Bayesian update, though this part is formally a tad underspecified.
I would love to have a conversation about this. Is the "tad" here hyperbole or do you actually have something mostly worked out that you just don't want to post? On a first reading (and admittedly without much serious thought -- it's been a long day), it seems to me that this is where the real heavy lifting has to be done. I'm always worried that I'm missing something, but I don't see how to evaluate the proposal without knowing how the super-updates are carried out.
Really interesting, though.
Ah, I see that I misread. Somehow I had it in my head that you were talking about the question on the philpapers survey specifically about scientific realism. Probably because I've been teaching the realism debate in my philosophy of science course the last couple of weeks.
I am, however, going to disagree that I've given a too strong characterization of scientific realism. I did (stupidly and accidentally) drop the phrase "... is true or approximately true" from the end of the second commitment, but with that in place, the scientific realist really is committed to our being able to uniquely determine by evidence which of several literal rivals we ought to believe to be true or approximately true. Weakening to "most cases" or "many cases" deflates scientific realism significantly. Even constructive empiricists are going to believe that many scientific theories are literally true, since many scientific theories do not say anything about unobservable entities.
Also, without the "in every case," it is really hard to make sense of the concern realists have about under-determination. If realists thought that sometimes they wouldn't have good reasons to believe some one theory to be true or approximately true, then they could reply to real-life under-determination arguments (as opposed to the toy examples sometimes offered) by saying, "Oh, this is an exceptional case."
Anyway, the kinds of anti-realist who oppose scientific realism almost never deny that tables exist. (Though maybe they should for reasons coming out of material object metaphysics.)
We do pretty well, actually (pdf). (Though I think this is a selection effect, not a positive effect of training.)
I'm guessing that you don't really know what anti-realism in philosophy of science looks like. I suspect that most of the non-specialist philosophers who responded also don't really know what the issues are, so this is hardly a knock against you. Scientific realism sounds like it should be right. But the issue is more complicated, I think.
Scientific realists commit to at least the following two theses:
(1) Semantic Realism. Read scientific theories literally. If one theory says that space-time is curved and there are no forces, while the other says that space-time is flat and there are primitive forces (so the two have exactly the same observational consequences in all cases), then the realist says that at most one of the two is true.
(2) Epistemic Realism. In every case, observation and experimentation can provide us with good epistemic (as opposed to pragmatic) reasons to believe what some single theory, read literally, says about the world.
Denying either of these leads to some form of anti-realism, broadly construed. Positivists, instrumentalists, and pragmatists deny (1), as Einstein seems to have done in at least two cases. Constructive empiricists deny (2) in order to keep a commitment to (1) while avoiding inflationary metaphysics. Structural realists deny one or both of these commitments, meaning that they are anti-realists in the sense of the question at stake.
Ah, gotcha.
Are the meetings word of mouth at this point, then? When is the next meeting planned?
I have had some interest, but I never managed to attend any of the previous meetups. I don't know if I will find time for it in the future.
That question raises a bunch of interpretive difficulties. You will find the expression sine qua non, which literally means "without which not," in some medieval writings about causation. For example, Aquinas rejects mere sine qua non causality as an adequate account of how the sacraments effect grace. In legal contexts today, that same expression denotes a simple counterfactual test for causation -- the "but for" test. One might try to interpret the phrase as meaning "indispensable" when Aquinas and other medievals use it and then deflate "indispensable" of its counterfactual content. However, if "indispensable" is supposed to lack counterfactual significance, then the same non-counterfactual reading could, I think, be taken with respect to that passage in Hume. I don't know if the idea shows up earlier. I wouldn't be surprised to find that it does.
I'll say it again: there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.
Criticism could come in the form of showing that the questions shouldn't be asked for one reason or another. Or criticism could come in the form of showing that the questions cannot be answered with the available tools. For example, if I ran into a bunch of people trying to trisect an arbitrary angle using compass and straight-edge, I might show them that their tools are inadequate for the task. In principle, I could do that without having any replacement procedure. And yet, it seems that I have helped them out.
Such criticism would have at least the following point. If people are engaged in a practice that cannot accomplish what they aim to accomplish, then they are wasting resources. Getting them to redirect their energies to other projects -- perhaps getting them to search for other ways to satisfy their original aims (ways that might possibly work) -- would put their resources to better use.
That is being generous to Hume, I think. The counterfactual account in Hume is an afterthought to the first of his two (incompatible) definitions of causation in the Enquiry:
Similar objects are always conjoined with similar. Of this we have experience. Suitably to this experience, therefore, we may define a cause to be an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words where, if the first object had not been, the second never had existed.
As far as I know, this is the only place where Hume offers a counterfactual account of causation, and in doing so, he confuses a counterfactual account with a regularity account. Not promising. Many, many people have tried to find a coherent theory of causation in Hume's writings: he's a regularity theorist, he's a projectivist, he's a skeptical realist, he's a counterfactual theorist, he's an interventionist, he's an inferentialist ... or so various interpreters say. On and on. I think all these attempts at interpreting Hume have been failures. There is no Humean theory to find because Hume didn't offer a coherent account of causation.
Wow! Thanks for the Good Thinking link. Now I won't have to scan it myself.
Yes, that's the letter!
It might help if you told us which of the thousands of varieties of Bayesianism you have in mind with your question. (I would link to I.J. Good's letter on the 46656 Varieties of Bayesians, but the best I could come up with was the citation in Google Scholar, which does not make the actual text available.)
In terms of pure (or mostly pure) criticisms of frequentist interpretations of probability, you might look at two papers by Alan Hajek: fifteen arguments against finite frequentism and fifteen arguments against hypothetical frequentism.
In terms of Bayesian statistics, you might take a look at a couple of papers by Dennis Lindley: an older paper on The Present Position in Bayesian Statistics and a newer one on The Philosophy of Statistics.
Lindley gives a personalist Bayesian account. If you want "objective Bayes," you might take a look at this paper by James Berger. (The link actually has a bunch of papers, some of them discussing Berger's paper, which is the first in the set.)
You might also find Bradley Efron's paper Why Isn't Everyone a Bayesian? useful. And on that note, I'll just say that the presupposition of your question (that Bayesianism is straightforwardly superior to frequentism in all or most all cases) is more fraught than you might think.
Oh, the under-specification! ;)
Seems to me that curing cancer swamps out everything else in the story. Supposing that World War 2 was entirely down to Hitler, the casualties came to about 60-80 million. By comparison, back-of-the-envelope calculations suggest that around 1.7 million people die from cancer each year* in the U.S., E.U., and Japan taken together. See the CDC numbers and the Destatis numbers (via Google's public data explorer) for the numbers that I used to form a baseline for the 1.7 million figure.
That means that within a generation or two, the cancer guy would have saved as many lives (to speak with the vulgar) as the Hitler guy would have killed. Plus, the cancer guy would have improved quality of life for a lot more people than that. Maybe we have to go another couple of generations to balance life years if the Hitler casualties are all young and the cancer savings are all old. But under the assumption that a solution to cancer is very unlikely without the cancer guy, the right decision seems clearly to be to steer the trolley left.
* Things get more complicated if we suppose that the Hitler guy will bring about a new world war and attempted genocide, which might involve full-on nuclear war, rather than a sort of repeat of real-Hitler's consequences. I am choosing to understand the Hitler guy as being responsible for 60 or 80 million deaths -- or make the number a bit larger, like 100 million, if you like.
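To put the back-of-the-envelope arithmetic in one place, here is a quick sketch using the rough figures quoted above (these are just the approximations from my comment, not precise statistics):

```python
# Rough break-even calculation using the figures quoted above.
ww2_deaths_low, ww2_deaths_high = 60e6, 80e6   # deaths attributed to the Hitler scenario
cancer_deaths_per_year = 1.7e6                 # U.S. + E.U. + Japan combined, per year

years_low = ww2_deaths_low / cancer_deaths_per_year
years_high = ww2_deaths_high / cancer_deaths_per_year
print(f"Break-even after roughly {years_low:.0f} to {years_high:.0f} years")
# Prints roughly 35 to 47 years -- i.e., within a generation or two.
```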
Interesting piece. I was a bit bemused by this, though:
In fact Plato wrote to Archimedes, scolding him about messing around with real levers and ropes when any gentleman would have stayed in his study or possibly, in Archimedes’ case, his bath.
Problematically for the story, Plato died around 347 BCE, and Archimedes wasn't born until 287 BCE -- sixty years later.
I'm not sure what you count as violence, but if you look at the history of the suffrage movement in Britain, you will find that while the movement started out as non-violent, it escalated to include campaigns of window-breaking, arson, and other destruction of property. (Women were also involved in many violent confrontations with police, but it looks like the police always initiated the violence. To what degree women responded in kind and whether that would make their movement violent is unclear to me.) The historians usually describe the vandalism campaigns as violent, militant, or both, though maybe you meant to restrict attention to violence against persons. Of course, the women agitating for the vote suffered much more violence than they inflicted.
When a certain episode of Pokemon contained a pattern of red and blue flashes capable of inducing epilepsy, 685 children were taken to hospitals, most of whom had seen the pattern not on the original Pokemon episode but on news reports showing the episode which had induced epilepsy.
At the very least, this needs a citation or two, since other accounts cast doubt on the story as presented. CSI's account of the incident, for example, includes the following:
At about 6:51, the flashing lights filled the screens. By 7:30, according to the Fire-Defense agency, 618 children had been taken to hospitals complaining of various symptoms.
News of the attacks shot through Japan, and it was the subject of media reports later that evening. During the coverage, several stations replayed the flashing sequence, whereupon even more children fell ill and sought medical attention. The number affected by this “second wave” is unknown.
The CSI account then goes on to argue that the large number of cases was due to mass hysteria.
Do you have worked out numbers (in terms of community participation, support dollars, increased real-world violence, etc.) comparing the effect of having the censorship policy and the effect of allowing discussions that would be censored by the proposed policy? The request for "Consequences we haven't considered" is hard to meet until we know with sufficient detail what exactly you have considered.
My gut thinks it is unlikely that having a censorship policy has a smaller negative public relations effect than having occasional discussions that violate the proposed policy. I know that I am personally much less okay with the proposed censorship policy than with having occasional posts and comments on LW that would violate the proposed policy.
What is the evidence that 2 is out? Suppose there are five available effective means to some end. If I take away one of them, doesn't that reduce the availability of effective means to that end? Is the idea supposed to be that the various means are all so widely available that overall availability of means to the relevant end is not affected by eliminating (or greatly reducing) availability of one of them? Seems contentious to me. Moreover, what you say after the claim that 2 is out seems rather to support the claim that 2 is basically correct: poison, bombs, and knives are either practically less effective for one reason or another (hard to use, hard to practice, less destructive -- in the case of knives) or practically less available for one reason or another (poisons not widely available).
A more interesting number for the gun control debate is the percentage of households with guns. That number in the U.S. has been declining (pdf), but it is still very high in comparison with other developed nations.
However, exact comparisons of gun ownership rates internationally are tricky. The data is often sparse or non-uniform in the way it is collected. The most consistent comparisons I could find -- and I'd love to see more recent data -- were from the 1989 and 1992 International Crime Surveys. The numbers are reported in this paper on gun ownership, homicide, and suicide (pdf). These data are old, but in 1989, about 48% of U.S. households had a firearm of some kind, compared with 29% of Canadian households. However, the numbers for handguns specifically were very different. In 1989, only 5% of Canadian households had a handgun, compared with 28% of U.S. households.
That's a good point. Looks like an oversight on my part. I was probably overly focused on the formal side that aims to describe normatively correct reasoning. (Even doing that, I missed some things, e.g. decision theory.) I hope to write up a more detailed, concrete, and positive proposal in the next couple of days. I will include at least one -- and probably two -- courses that look at failures of good reasoning in that recommendation.
I don't want to have a dispute about words. When I talk about the logic curriculum in my department, I have in mind the broader sense of the term. The entry-level course in logic already has some probability/statistics content. There isn't a sub-program in logic, like a minor or anything, with a structural component for anyone to fight about. I would like to see more courses dedicated to probability and induction from a philosophical perspective. But if I get that, then I'm not going to fight about the word "logic." I'd be happy to take a more generic label, like CMU's logic, computation, and methodology.
Because I see those things as part of logic. As I see it, logic as typically taught in mathematics and philosophy departments from 1950 on dropped at least half of what logic is supposed to be about. People like Church taught philosophers to think that logic is about having a formal, deductive calculus, not about the norms of reasoning. I think that's a mistake. So, in reforming the logic curriculum, I think one goal should be to restore something that has been lost: interest in norms of reasoning across the board.
I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:
(1) Blind luck plus selection based on fitness of some sort.
(2) Reasoning from other concepts, goals, and experience.
I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right include: the fact that causal perception (pdf) and causal agency attributions emerge very early in children; the fact that other mammal species, like rats (pdf), have simple causal concepts related to interventions; and the fact that some forms of causal cognition emerge very, very early even among more distant species, like chickens.
Since causal concepts arise so early in humans and are present in other species, there is current controversy (right in line with the thesis in your OP) as to whether causal concepts are innate. That is one reason why I prefer the Adam thought experiment to babies: it is unclear whether babies already have the causal concepts or have to learn them.
EDIT: Oops, left out a paper and screwed up some formatting. Some day, I really will master markdown language.
That seems like a very different question than, say, how humans actually came by their tendency to attribute causation. For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.
What you seem to be after is very different. It's more like Hume's story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?
Since we are now imagining a creature that has different faculties than an ordinary human (or at least, that seems likely, given how automatic causal perception in launching cases is and how humans seem to think about their own agency), I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? ...
When you ask (in your koan) how the process of attributing causation gets started, what exactly are you asking about? Are you asking how humans actually came by their tendency to attribute causation? Are you asking how an AI might do so? Are you asking about how causal attributions are ultimately justified? Or what?
What do you think about debates over which axioms or rules of inference to endorse? I'm thinking here about disputes between classical mathematicians and varieties of constructivist mathematicians, which sometimes show themselves in which proofs are counted as legitimate.
I am tempted to back up a level and say that there is little or no dispute about conditional claims: if you give me these axioms and these rules of inference, then these are the provable claims. The constructivist might say, "Yes, that's a perfectly good non-constructive proof, but only a constructive proof is worth having!" But then, in a lot of moral philosophy, you have the same sort of agreement. Given a normative moral theory and the relevant empirical facts, moral philosophers are unlikely to disagree about what actions are recommended. The controversy is at the level of which moral theory to endorse. At least, that's the way it looks to me.
Such a course doesn't yet exist, but if it sounds as helpful to you as it does to me, then you could of course work with CFAR and other interested parties to try to develop it.
I am interested. Should I contact Julia directly or is there something else I should do in order to get involved?
Also, since you mention Alexander's book, let me make a shameless plug here: Justin Sytsma and I just finished a draft of our own introduction to experimental philosophy, which is under contract with Broadview and should be in print in the next year or so.
Good point.
I didn't realize the link I gave was not viewable: apologies for that. Also, wow. That PHYS 123 "page" is really embarrassingly bad.
I was going to say that the problem from the instructor's point of view is deciding whether the student really has the necessary background, but Desrtopa is probably right that some sort of testing system could be set up.
In one sense, I agree that there shouldn't be any gating. It is overly paternalistic. Students should be allowed to risk taking advanced classes as long as they don't gripe about their failures later. On the other hand, the actual result that I see in my classes is that many -- and here I mean maybe as many as half -- of the students in upper-division courses are not prepared to do philosophy at that level. They don't know how to engage in discussion appropriately or productively; they don't know how to write clearly or criticize arguments effectively; etc. If they affected only themselves, I could put up with it. But they don't affect only themselves; they affect the other students as well.
The content is informal logic: discourse analysis, informal fallacies (like ad hominem, ad populum, etc.). Depending on who teaches it, there might be some simple syllogistic logic or some translation problems.
I like the idea of requiring logic along with the intro course. I'll keep that one in mind.
I strongly agree with your comment. What concrete steps would you take to fix the problem? Are there specific classes you would add or things you would emphasize in existing classes? Are there specific classes that you would remove or things you would de-emphasize in existing classes?
You might be right that I'm reading too much into what you've written. However, I suspect (especially given the other comments in this thread and the comments on the reddit thread) that the reading "Philosophy is overwhelmingly bad and should be killed with fire," is the one that readers are most likely to actually give to what you've written. I don't know whether there is a good way to both (a) make the points you want to make about improving philosophy education and (b) make the stronger reading unlikely.
I'm curious: if you couldn't have your whole mega-course (which seems more like the basis for a degree program than the basis for a single course, really), what one or two concrete course offerings would you want to see in every philosophy program? I ask because while I may not be able to change my whole department, I do have some freedom in which courses I teach and how I teach them. If you are planning to cover this in more detail in upcoming posts, feel free to ignore the question here.
Also, I did understand what you were up to with the Spirtes reference, I just thought it was funny. I tried to imagine what the world would have had to be like for me to have been surprised by finding out that Spirtes was the lead author on Causation, Prediction, and Search, and that made me smile.
The head of your dissertation committee was a co-author with Glymour on the work that Pearl built on with Causality.
I was, in fact, aware of that. ;)
In the grand scheme of things, I may have had an odd education. However, it's not like I'm the only student that Glymour, Spirtes, Machery, and many of my other teachers have had. Basically every student who went through Pitt HPS or CMU's Philosophy Department had the same or deeper exposure to psychology, cognitive science, neuroscience, causal Bayes nets, confirmation theory, etc. Either that, or they got an enormous helping of algebraic quantum field theory, gauge theory, and other philosophy of physics stuff.
You might argue that these are very unusual departments, and I am inclined to agree with you. But only weakly. If you look at Michigan or Rutgers, you find lots of people doing excellent work in decision theory, confirmation theory, philosophy of physics, philosophy of cognitive science, experimental philosophy, etc. A cluster of schools in the New York area -- all pretty highly ranked -- do the same things. So do schools in California, like Stanford, UC Irvine, and UCSD. My rough estimate is that 20-25% of all philosophical education at schools in Leiter's Top 25 is pretty similar to mine. Not a majority, but not a small chunk, either, given how much of philosophy is devoted to ethics. That is, of course, just an educated guess. I don't have a data-driven analysis of what philosophical training looks like, but then neither do you. Hence, I think we should be cautious about making sweeping claims about what philosophical training looks like. It might not look the way you think it looks, and from the inside, it doesn't seem to look the way you say it looks. Data are needed if we want to say anything with any kind of confidence.
Term logic is my only mention of Aristotle.
Your pre-1980s causation link goes to a subsection of the wiki on causality, which subsection is on Aristotle's theory of causation. The rest of the article is so ill-organized that I couldn't tell which things you meant to be pointing to. So, I defaulted to "Whatever the link first takes me to," which was Aristotle. Maybe you thought it went somewhere else or meant to be pointing to something else?
Anyway, I know I have a tendency only to criticize, where I should also be flagging agreement. I agree with a lot of what you're saying here and elsewhere. Don't forget that you have allies in establishment philosophy.
Provocative article. I agree that philosophers should be reading Pearl and Kahneman. I even agree that philosophers should spend more time with Pearl and Kahneman (and lots of other contemporary thinkers) than they do with Plato and Kant. But then, that pretty much describes my own graduate training in philosophy. And it describes the graduate training (at a very different school) received by many of the students in the department where I now teach. I recognize that my experience may be unusual, but I wonder if philosophy and philosophical training really are the way you think they are.
Bearing in mind that my own experiences may be quite unusual, I present some musings on the article nonetheless:
(1) You seem to think that philosophical training involves a lot of Aristotelian ideas (see your entries for "pre-1980 theories of causation" and "term logic"). In my philosophical education, including as an undergraduate, I took two courses that were explicitly concerned with Aristotle. Both of them were explicitly labeled as "history of philosophy" courses. Students are sometimes taught bits of Aristotelian (and Medieval) syllogistic, but those ideas are never, so far as I know, the main things taught in logic (as opposed to history) courses. In the freshman-level logic course that I teach, we build a natural deduction system up through first-order logic (with identity), plus a bit of simplified axiomatic set theory (extensionality, an axiom for the empty set instead of the axiom of comprehension, pairing, union, and power set), and a bit of probability theory for finite sample spaces (since I'm not allowed to assume that freshmen have had calculus). We cover Aristotle's logic in less than one lecture, as a note on categorical sentences when we get to first-order logic. And really, we only do that because it is useful to see that "Some Ss are Ps" is the negation of "No Ss are Ps," before thinking about how to solve probability problems like finding the probability of at least one six in three tosses of a fair die. Critical thinking courses are almost always service courses directed at non-philosophers.
(2) You seem to think that philosophers do a lot of conceptual analysis, rather than empirical work. In my own philosophy education, I was told that conceptual analysis does not work and that with perhaps the exception of Tarski's analysis of logical consequence, there have been no successful conceptual analyses of philosophically interesting concepts. Moreover, I had several classes -- classes where the concern was with how people think (either in general or about specific things) -- where we paid attention to contemporary psychology, cognitive science, and neuroscience. In fact, restricting attention to material assigned in philosophy classes I have taken, you would find more Kahneman and Tversky than you would Plato or Kant. And you would also find a lot of other psychologists and cognitive scientists, including Gopnik, Cheng, Penn, Povinelli, Sloman, Wolff, Marr, Gibson, Damasio, and so on and so forth. Graduate students in my department are generally distrustful of their own intuitions and look for empirical ways to get at concepts (when they even care about concepts). For example, one excellent student in my department, Zach Horne, has been thinking a bit about the analysis of knowledge (which is by no means the central problem in contemporary epistemology), but he's attacking the problem via experiments involving semantic integration. And I've done my own experimental work on the analysis of knowledge, though the experiments were not as clever.
(3) You seem to think that philosophy before 1980 (why that date??) is not sufficiently connected to actual science to be worth reading, and that this is mostly what philosophers read. Both are, I think, incorrect claims.
With respect to the first claim, there is lots of philosophical work before 1980 that is both closely engaged with contemporaneous science and amazingly useful to read. Take a look at Carnap's article on "Testability and Meaning," or his book on The Logical Foundations of Probability. Read through Reichenbach's book on The Direction of Time. These books definitely repay close reading. All of Russell's work was written before 1980 -- since he died in 1970! Wittgenstein's later work is enormously useful for preventing unnecessary disputes about words, but it was written before 1980. This shouldn't be surprising. After all, lots of scientific, mathematical, and statistical work from before 1980 is well worth reading today. Lots of the heuristics and biases literature from the '70s is still great to read. Savage's Foundations of Statistics is definitely worth reading today. As is lots of material from de Finetti, Good, Turing, Wright, Neyman, Simon, and many others. Feynman's The Character of Physical Law was a lecture series delivered in 1964. Is it past its expiration date? It's not the place to go for cutting-edge physics, but I would highly recommend it as reading for an undergraduate. I might assign a chunk of it in my undergraduate philosophy of science course next semester. (Unless you convince me it's a really, really bad idea.) Why think that philosophical work ages worse than scientific work?
With respect to the second claim, you might be right with respect to undergraduate education. On the other hand, undergraduate physics education isn't a whole lot better (if any), is it? But with respect to graduate training, it seems to me that if one is interested in contemporary problems, rather than caring about the history of ideas, one reads primarily contemporary philosophers. In a typical philosophy course on causation, I would guess you read more of David Lewis than anyone. But that's not so bad, since Lewis' ideas are very closely connected to Pearl on the one hand and the dominant approaches to causal inference in statistics on the other. The syllabus and reading lists for the graduate seminar on causation that I am just wrapping up teaching are here, in case you want to see the way I approach teaching the topic. I'll just note that in my smallish seminar (about eight people -- six enrolled for credit) two people are writing on decision theory, two are writing on how to use causal Bayes nets to do counterfactual reasoning, and one is writing on the contextual unanimity requirement in probabilistic accounts of causation. Only one person is doing what might be considered an historical project.
Rather than giving a very artificial cut-off date, it seems to me we ought to be reading good philosophy from whenever it comes. Sometimes, that will mean reading old-but-good work from Bacon or Boole or (yes) Kant or Peirce or Carnap. And that is okay.
(4) You seem to endorse Glymour's recommendation that philosophy departments be judged based on the external funding they pull in. On the other hand, you say there should be less philosophical work (or training at least) on free will. As I pointed out the first time you mentioned Glymour's manifesto, there is more than a little tension here, since work on free will (which you and I and probably Glymour don't care about) does get external funding. (This is also a bit odd, since it typically isn't the way funding of university departments works in the humanities anyway, where most funding is tied to teaching rather than to research and where most salaries are pathetically small in comparison with STEM counterparts.) Where I really agree with Glymour is in thinking that philosophy departments ought to be shelter for iconoclasts. But in that case, philosophy should be understood to be the discipline that houses the weirdos. We should then keep a lookout for good ideas coming from philosophy, since those rare gems are often worth quite a lot, but we also shouldn't panic when the discipline looks like it's run by a bunch of weirdos. In fact, I think this is pretty close to being exactly what contemporary philosophy actually is as a discipline.
I'm sure I could say a lot more, but this comment is already excessively long. Perhaps the take-away should be this. Set aside the question of how philosophy is taught now. I am receptive to teaching philosophy in a better way. I want the best minds to be studying and doing philosophy. (And if I can't get that, then I would at least like the best minds to see that there is value in doing philosophy even if they decide to spend their effort elsewhere.) If I can pull in the best people by learning and teaching more artificial intelligence or statistics or whatever, I'm game. I teach a lot of that now, but even if I didn't, I hope I would be more interested in inspiring people to learn and think and push civilization forward than in business as usual.
EDIT: I guess markdown language didn't like my numbering scheme. (I really wish we had a preview window for comments.)
Even that's not quite right. There is a tie for 5th place between Harvard and Pitt. The fact that Harvard is listed before Pitt appears to be due to lexicographical order.
That's an interesting point. How precise do you think we have to be with respect to feedbacks in the climate system if we are interested in an existential risk question? And do you have other uncertainties in mind or just uncertainties about feedbacks?
The first thing I thought on reading your reply was that insofar as the evidence supports positive feedbacks, the evidence also supports the claim that there is existential risk from climate change. But then I thought maybe we need to know more about how far away the next equilibrium is -- assuming there is one. If we are in or might reach a region where temperature feedback is net positive and we run away to a new equilibrium, how far away will the equilibrium be? Is that the sort of uncertainty you had in mind?
I really don't understand the row for climate change. What exactly is meant by "inference" in the data column? I don't know what you want to count as data, but it seems to me that the data with respect to climate change include increasingly good direct measurements of temperature and greenhouse gas concentrations over the last hundred years or so, whatever goes into the basis of relevant physical and chemical theories (like theories of heat transfer, cloud formation, solar dynamics, and so forth), and measurements of proxies for temperature and greenhouse gas concentrations in the distant past (maybe this is what "inference" is supposed to mean?).
I also don't understand the "?" under probability distribution. Are the probability distributions at stake here distributions over credences? If so, then they can be estimated for most any scientist, at least. Are the distributions over frequencies? Then frequencies of what? I suspect we could estimate distributions for lots of climate related things, like severe storms or droughts or record high temperatures. I would be somewhat surprised if such distributions have not already been estimated by climate scientists. Is the issue about calibration? Then the answer seems to be a qualified yes. Groups like the IPCC give probabilistic statements based on their climate models. The climate models could be checked at least on past predictions, e.g. by looking at what the models from 2000 predicted for the period 2001-2011. We might not get a very good sense of how well calibrated the models are, but if the average temperature for each month, say, is a separate datum, then we could check the models by seeing how many of the months fall into the claimed 95% confidence bands, for example. (And just putting down confidence bands in the models should tell you that the climate scientists think that the distribution can be estimated for some sense of probability.)
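For what it's worth, here is a minimal sketch of the kind of calibration check I have in mind; the observations and bands are made-up placeholder numbers, not real climate data:

```python
# Count how many observations fall inside a model's claimed 95% bands.
# (observed_value, band_low, band_high) for each month -- hypothetical numbers.
records = [
    (14.2, 13.8, 14.6),
    (14.5, 13.9, 14.7),
    (14.9, 14.0, 14.8),  # falls outside its band
    (14.3, 13.9, 14.7),
]

inside = sum(1 for obs, lo, hi in records if lo <= obs <= hi)
print(f"{inside}/{len(records)} observations inside the claimed 95% bands")
# With a long enough record, a well-calibrated model should have roughly 95% of
# the monthly observations landing inside their bands.
```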
Yeah, I still think you're talking past one another. Wasserman's point is that something's being a 95% confidence interval deductively entails that it has the relevant kind of frequentist coverage. That can no more fail to be true than 2+2 can stop being 4. The null, then, ought simply to be that these really are 95% confidence intervals, and the data tell against that null by undermining one of its logical consequences. The data might be excellent evidence that these aren't 95% confidence intervals. Of course, figuring out exactly why they aren't is another matter. Did the physicists screw up? Were their sampling assumptions wrong? I would guess that there is a failure of independence somewhere in the example, but again, I haven't read the paper carefully or really looked at the data.
Anyway, I still don't see what's wrong with Wasserman's reply. If they don't have 95% coverage, then they aren't 95% confidence intervals.
So, is your point that we often don't know when a purportedly 95% confidence interval really is one? Or that we don't know when the assumptions are satisfied for using confidence intervals? Those seem like reasonable complaints. I wonder what Wasserman would have to say about those objections.
I suspect you're talking past one another, but maybe I'm missing something. I skimmed the paper you linked and intend to come back to it in a few weeks, when I am less busy, but based on skimming, I would expect the frequentist to say something like, "You're showing me a finite collection of 95% confidence intervals for which it is not the case that 95% of them cover the truth, but the claim is that in the long run, 95% of them will cover the truth. And the claim about the long run is a mathematical fact."
I can see having worries that this doesn't tell us anything about how confidence intervals perform in the short run. But that doesn't invalidate the point Wasserman is making, does it? (Serious question: I'm not sure I understand your point, but I would like to.)
The point depends on differences between confidence intervals and credible intervals.
Roughly, frequentist confidence intervals, but not Bayesian credible intervals, come with the following coverage guarantee: if you repeat the sampling and analysis procedure over and over, then in the long run the confidence intervals produced cover the truth at a rate given by the confidence level. If I set a 95% confidence level, then in the limit, 95% of the intervals I generate will cover the truth.
Bayesian credible intervals, on the other hand, tell us what we believe (or should believe) the truth is given the data. A 95% credible interval contains 95% of the probability in the posterior distribution (and usually is centered around a point estimate). As Gelman points out, Bayesians can also get a kind of frequentist-style coverage by averaging over the prior. But in Wasserman's cartoon, the target is a hard-core personalist who thinks that probabilities just are degrees of belief. No averaging is done, because the credible intervals are just supposed to represent the beliefs of that particular individual. In such a case, we have no guarantee that the credible interval covers the truth even occasionally, even in the long run.
Take a look here for several good explanations of the difference between confidence intervals and credible intervals that are much more detailed than my comment here.
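If it helps, here is a minimal simulation of the coverage property described above; the true mean, sample size, and number of repetitions are arbitrary choices of mine, and I use the known-variance interval just to keep the sketch short:

```python
import random
import statistics

TRUE_MEAN = 10.0
SIGMA = 2.0
N = 30          # sample size per experiment
REPS = 10_000   # number of repeated experiments
Z = 1.96        # normal critical value for a 95% confidence level

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.mean(sample)
    half_width = Z * SIGMA / N ** 0.5   # known-sigma interval, for simplicity
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

print(covered / REPS)  # close to 0.95, as the coverage guarantee says
```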
I don't see how this applies to ciphergoth's example. In the example under consideration, the person offering you the bet cannot make money, and the person offered the bet cannot lose money. The question is, "For which of two events would you like to be paid some set amount of money, say $5, in case it occurs?" One of the events is that a fair coin flip comes up heads. The other is an ordinary one-off occurrence, like the election of Obama in 2012 or the sun exploding tomorrow.
The goal is to elicit the degree of belief that the person has in the one-off event. If the person takes the one-off event when given a choice like this, then we want to say (or de Finetti wanted to say, anyway) that the person's prior is greater than 1/2. If the person says, "I don't care, let me flip a coin," like ciphergoth's interlocutor did, then we want to say that the person has a prior equal to 1/2. There are still lots of problems, since (among other things) in the usual personalist story, degrees of belief have to be infinitely precise -- corresponding to a single real number -- and it is not clear that when a person says, "Oh, just flip a coin," the person has a degree of belief equal to 1/2, as opposed to an interval-valued degree of belief centered on 1/2 or something like that.
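To make the elicitation logic explicit, here is a tiny sketch of the inference from the person's choice to a constraint on their degree of belief, assuming they simply maximize expected dollar payoff (the $5 figure is the one from the example above):

```python
PAYOFF = 5.0  # dollars paid if the chosen event occurs

def implied_credence(choice):
    """Constraint on the degree of belief p in the one-off event, given which
    ticket the person prefers, assuming expected-payoff maximization:
    p * PAYOFF (one-off event) versus 0.5 * PAYOFF (heads on a fair coin)."""
    if choice == "one-off event":
        return "p > 1/2"
    if choice == "coin flip":
        return "p < 1/2"
    return "p = 1/2"  # indifferent, like ciphergoth's interlocutor

print(implied_credence("indifferent"))  # p = 1/2
```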
But anyway, I don't see how your point makes contact with ciphergoth's.
The point applies well to evidentialists but not so well to personalists. If I am a personalist Bayesian -- the kind of Bayesian for which all of the nice coherence results apply -- then my priors just are my actual degrees of belief prior to conducting whatever experiment is at stake. If I do my elicitation correctly, then there is just no sense to saying that my prior is bullshit, regardless of whether it is calibrated well against whatever data someone else happens to think is relevant. Personalists simply don't accept any such calibration constraint.
Excluding a research report that has a correctly elicited prior smacks of prejudice, especially in research areas that are scientifically or politically controversial. Imagine a global warming skeptic rejecting a paper because its author reports having a high prior for AGW! Although, I can see reasons to allow this sort of thing, e.g. "You say you have a prior of 1 that creationism is true? BWAHAHAHAHA!"
One might try to avoid the problems by reporting Bayes factors as opposed to full posteriors or by using reference priors accepted by the relevant community or something like that. But it is not as straightforward as it might at first appear how to both make use of background information and avoid idiosyncratic craziness in a Bayesian framework. Certainly the mathematical machinery is vulnerable to misuse.
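As a toy illustration of the Bayes factor route (the hypotheses and data here are invented, and real cases would typically compare composite hypotheses rather than two point values):

```python
from math import comb

def binomial_likelihood(k, n, p):
    """Probability of exactly k heads in n flips if the heads probability is p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 8, 10  # observed: 8 heads in 10 flips
bf = binomial_likelihood(k, n, 0.7) / binomial_likelihood(k, n, 0.5)
print(f"Bayes factor for p=0.7 over p=0.5: {bf:.2f}")
# The Bayes factor reports what the data say about the rival hypotheses without
# committing anyone to a particular prior over them.
```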
I just want to know why he's only betting $50.
You could be right, but I am skeptical. I would like to see evidence -- preferably in the form of bibliometric analysis -- that practicing scientists who use frequentist statistical techniques (a) don't make use of background information, and (b) publish more successfully than comparable scientists who do make use of background information.
That depends heavily on what "the method" picks out. If you mean the machinery of a null hypothesis significance test against a fixed-for-all-time significance level of 0.05, then I agree: that method doesn't promote good practice. But if we're talking about frequentism, then identifying the method with null hypothesis significance testing looks like attacking a straw man.
Fair? No. Funny? Yes!
The main thing that jumps out at me is that the strip plays on a caricature of frequentists as unable or unwilling to use background information. (Yes, the strip also caricatures Bayesians as ultimately concerned with betting, which isn't always true either, but the frequentist is clearly the butt of the joke.) Anyway, Deborah Mayo has been picking on the misconception about frequentists for a while now: see here and here, for examples. I read Mayo as saying, roughly, that of course frequentists make use of background information, they just don't do it by writing down precise numbers that are supposed to represent either their prior degree of belief in the hypothesis to be tested or a neutral, reference prior (or so-called "uninformative" prior) that is supposed to capture the prior degree of evidential support or some such for the hypothesis to be tested.
Rather than consulting Wikipedia, the SEP article on consequentialism is probably the best place to start for an overview.