Philosophy Needs to Trust Your Rationality Even Though It Shouldn't

post by lukeprog · 2012-11-29T21:00:29.400Z · LW · GW · Legacy · 169 comments


Part of the sequence: Rationality and Philosophy

Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.

Thomas Kelly

The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Jason Brennan

 

After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 (in percent) on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.

Sometimes they are even divided on psychological questions that psychologists have already answered: philosophers are split evenly on whether it's possible to make a moral judgment without being motivated to abide by it, even though we already know this is possible for some people with damage to the brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.

In its dependence on masses of evidence and definitive experiments, science doesn't trust your rationality:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn't available. Or maybe you can't afford to wait a decade for definitive experiments to be done. Thus, "if you would rather not waste ten years trying to prove the wrong theory," or if you'd like to get the right answer without overwhelming evidence, "you'll need to [tackle] the vastly more difficult problem: listening to evidence that doesn't shout in your ear."

This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
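To make that parenthetical concrete, here is a minimal sketch of Bayes' rule in odds form (hypothetical Python; the numbers are invented for illustration): when the likelihood ratio is small, the prior has to carry most of the work.

    # Sketch: posterior odds = prior odds * likelihood ratio (Bayes' rule in
    # odds form). With a weak likelihood ratio (quiet evidence), only a
    # well-calibrated prior and accurate updating reach a confident answer.

    def posterior(prior, likelihood_ratio):
        """Return P(H | E) given P(H) and P(E | H) / P(E | not-H)."""
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Evidence that "shouts in your ear": LR = 1000 rescues even a poor prior.
    print(posterior(0.05, 1000))   # ~0.98
    # Quiet evidence: LR = 2 barely moves a poor prior...
    print(posterior(0.05, 2))      # ~0.10
    # ...so the prior itself has to carry more of the work.
    print(posterior(0.40, 2))      # ~0.57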

Because it tackles so many questions that can't be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn't: we generally are as "stupid and self-deceiving" as science assumes we are. We're "predictably irrational" and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one's rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn't seem so. I don't see much Kahneman & Tversky in philosophy syllabi — just lightweight "critical thinking" classes and lists of informal fallacies. But even classes in human bias might not improve things much, due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don't like. What's really needed is regular training in habits of genuine curiosity, in mitigating motivated cognition, and so on.

(Imagine a world in which Frank Jackson's famous reversal on the knowledge argument wasn't news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)

We might also ask: How well do philosophers perform on standard tests of rationality, such as the Cognitive Reflection Test (CRT; Frederick 2005)? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)

A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is, however, noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees of a LessWrong.com meetup (2.69).

Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).

People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one's happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3

 

Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant

Previous post: Intuitions Aren't Shared That Way

 

 

Notes

1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don't know exactly which version of motivational internalism is defended by each survey participant who said they "accept" or "lean toward" motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).

2 Mathematicians reach even stronger consensus than physicists, but they don't appeal to what is usually thought of as "mountains of evidence." What's going on there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician's premises consist in axioms and in theorems already strongly proven, whereas a philosopher's premises consist in substantive claims about the world for which the evidence given is often very weak (e.g., that philosopher's own intuitions).

3 Bostrom (2000); Yudkowsky (2008); Muehlhauser (2011).

169 comments


comment by IlyaShpitser · 2012-11-29T21:07:20.785Z · LW(p) · GW(p)

A minor (but important) nitpick:

[Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Science sets up experiments not just because it does not trust you, but because even if you were a perfect Bayesian, you could not determine cause-effect relationships just from using Bayes' theorem a lot.

Replies from: lukeprog, Eliezer_Yudkowsky, thomblake
comment by lukeprog · 2012-11-30T03:59:00.635Z · LW(p) · GW(p)

Sure. A good clarification.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-11-30T18:33:40.000Z · LW(p) · GW(p)

Right! Besides just Bayes's Theorem, you'd also need Occam's Razor as a simplicity prior over causal structures. And, to drive the probability of a causal structure high enough, confidence that you'd observed in sufficient detail to drive down the probability of extra confounding or intervening variables.

Since the latter part is sometimes difficult, though not theoretically impossible, to achieve in fields like medicine, a randomized experiment, in which you trust that your random numbers will probably have the Markov condition relative to other background variables, can more quickly give you confidence about some directions on causal arrows when the combination of effect size and sample size is large enough. Naturally, all of this is a mere special case of Bayesian reasoning on possible causal structures, where (1) you start out very confident that some random numbers are conditionally independent of all their non-descendants in the graph, and (2) you start out very confident that your randomized experimental procedure causally connects to a single descendant node in that graph (the independent variable).
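As a toy numerical rendering of that last sentence (hypothetical Python; the two structures, their parameters, and the sample size are all invented for illustration): treat two candidate causal structures as ordinary hypotheses and update on data from a randomized intervention.

    import random

    random.seed(0)

    # Two hypotheses about the causal structure behind treatment X, outcome Y:
    #   H1 ("X -> Y"):   P(Y=1 | do(X=1)) = 0.8,  P(Y=1 | do(X=0)) = 0.2
    #   H0 ("no arrow"): Y ignores X;       P(Y=1 | do(X)) = 0.5
    # Because X is set by a random number generator, X is independent of any
    # confounders, so P(y | x) under each hypothesis is just its do() prediction.

    def p_y_given_x(h, x, y):
        p1 = {("H1", 1): 0.8, ("H1", 0): 0.2,
              ("H0", 1): 0.5, ("H0", 0): 0.5}[(h, x)]
        return p1 if y == 1 else 1 - p1

    # Simulate a world where H1 is true and run a randomized experiment.
    data = []
    for _ in range(50):
        x = random.randint(0, 1)                  # coin-flip assignment
        y = 1 if random.random() < (0.8 if x else 0.2) else 0
        data.append((x, y))

    post = {"H1": 0.5, "H0": 0.5}                 # uniform prior
    for x, y in data:
        for h in post:
            post[h] *= p_y_given_x(h, x, y)
    z = sum(post.values())
    print({h: p / z for h, p in post.items()})    # H1 dominates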

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-11-30T22:07:28.932Z · LW(p) · GW(p)

(a) You don't need to observe confounders to learn structure from data. In fact, sometimes you don't need any standard conditional independence at all. (Luke gave me the impression SI wasn't very interested in that point -- maybe it should be).

(b) Occam's razor / faithfulness gives you enough to learn the structure of statistical models, not causal ones. You need additional assumptions to equate the statistical models you learn with causal models. Bayesian networks are not causal models. Causality is not about conditional independence; it is about counterfactual invariance, that is, causality expresses what changes or stays the same after a hypothetical 'wiggle.'

There is no guarantee that even given Occam's razor and faithfulness being true that the graph you obtain is such that if I wiggle a parent, the child will change. To verify your causal assumptions, you have to run an experiment, or no scientist will believe your graph is causal. This is what real causal discovery papers do, for example:

http://www.sciencemag.org/content/308/5721/523.abstract

Here they learned a protein signaling network, but then implemented an experiment where they changed the protein level of a parent via an RNA molecule, and verified that the child changed but the parent's parent did not.


I am sure you can set up a Bayesian story for this entire enterprise, if you wanted. But, firstly, this Bayesian story would not be expressed purely in probability theory but in a language that can express counterfactual invariance and talk about experiments (for example, the language of potential outcomes or do(.)). And secondly, giving something a Bayesian story is sort of equivalent to re-expressing some complicated program as a vi macro. Could be done (vi is Turing-complete!) but why? People don't write practical code in vi macros.

Replies from: Eliezer_Yudkowsky, lukeprog, pengvado
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-01T01:01:02.365Z · LW(p) · GW(p)

This sounds like we're talking past each other somehow. Your point (a) is not clear to me - I was saying that to learn a sufficiently high-probability causal model from non-intervention data, you need to have observed the data in sufficient detail to rule out confounders except at some low probability (via simplicity priors, which otherwise can't drive down the probability of an untestable invisible confounder by all that far). This can certainly be done in principle, e.g. if you put the system under a microscope with a higher resolution than the system, and verified there were only X kinds of stuff in it and no others.

Your point (b) sounds just plain wrong to me. If you have a simplicity prior over causal models, and you can derive testable probable predictions from causal models, then you can do Bayesian updating and get a posterior over causal models. Substituting the word "flammable fizzbins" for "causal models" in the preceding sentence will produce another true sentence. I think you mean something different by "Bayesian" and "Occam's Razor" than I do.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-01T06:10:06.737Z · LW(p) · GW(p)

By (a) I mean that you can sometimes get the true graph exactly even without having to observe confounders. Actually this was sort of known already (see the FCI algorithm, or even the IC* algorithm in Pearl's book), but we can do a lot better than that. For example, if we have the true graph:

a -> b -> c -> d, with a <- u1 -> c and a <- u2 -> d, where we do not observe u1, u2, and u1, u2 are very complicated, then we can figure out the true graph exactly by independence-type techniques without having to observe u1 and u2. Note: the marginal distribution p(a,b,c,d) that came from this graph has no conditional independences at all (checkable by d-separation on a,b,c,d), so typical techniques fail.
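A sketch of how one might machine-check that "no conditional independences" claim (hypothetical Python; it assumes networkx's d-separation helper, named d_separated in networkx 2.4+ and is_d_separator in newer releases):

    from itertools import combinations
    import networkx as nx

    # The true graph from the example: a -> b -> c -> d, plus hidden
    # confounders u1 (of a and c) and u2 (of a and d).
    g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"),
                    ("u1", "a"), ("u1", "c"),
                    ("u2", "a"), ("u2", "d")])

    observed = ["a", "b", "c", "d"]

    # Check every pair of observed variables against every conditioning set
    # drawn from the remaining observed variables: none are d-separated, so
    # the marginal p(a,b,c,d) exhibits no conditional independences.
    for x, y in combinations(observed, 2):
        rest = [v for v in observed if v not in (x, y)]
        for k in range(len(rest) + 1):
            for z in combinations(rest, k):
                assert not nx.d_separated(g, {x}, {y}, set(z))
    print("no conditional independences among a, b, c, d")

For instance, conditioning on {a, c} opens the collider path b -> c <- u1 -> a <- u2 -> d, which is exactly the path raised further down this thread.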


(b) is I guess "a subtle issue" -- but my point is about careful language use and keeping causal and statistical issues clear and separate.

A "Bayesian network" (or "belief network" -- I don't like the word Bayesian here because it is confusing the issue, you can use frequentist techniques with belief networks if you wanted, in fact a lot of folks do) is a joint distribution that factorizes as a DAG. That's it. Nothing about causality. If there is a joint density representing a causal process where a is a direct cause of b is a direct cause of c, then this joint density will factorize with respect to both

a -> b -> c

and

a <- b <- c

but only the former graph is causal, the latter is not. Both graphs form a "Bayesian network" with the joint density (since the density factorizes with respect to both graphs), but only one graph is a causal graph. If you want to talk about causal models, in addition to saying that there is a Markov factorization you also need to say something else -- something that makes parents into direct causes. Usually people say something like:

for every x, p(x | pa(x)) = p(x | do(pa(x))), or mention the g-formula, or the truncated factorization of do(.), or "the causal Markov condition."

But this is something that (a) you need to say explicitly, and (b) involves language beyond standard probability theory because there is a do(.), and (c) is controversial to some people. What is do(.)? It refers to a hypothetical experiment/intervention.
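To see the factorization point concretely, here is a hypothetical Python check (all the toy distributions are invented) that a joint density built along the causal chain a -> b -> c also factorizes along a <- b <- c, so the Markov factorization alone cannot single out the causal direction:

    import itertools, random

    random.seed(1)

    # Build a joint p(a, b, c) that factorizes along the chain a -> b -> c.
    pa  = {0: 0.3, 1: 0.7}
    pba = {(b, a): random.random() for b in (0, 1) for a in (0, 1)}
    for a in (0, 1):                              # normalize p(b | a)
        s = pba[(0, a)] + pba[(1, a)]
        for b in (0, 1):
            pba[(b, a)] /= s
    pcb = {(0, 0): 0.6, (1, 0): 0.4, (0, 1): 0.1, (1, 1): 0.9}  # p(c | b)

    joint = {(a, b, c): pa[a] * pba[(b, a)] * pcb[(c, b)]
             for a, b, c in itertools.product((0, 1), repeat=3)}

    def marg(f):                                  # marginalize the joint
        out = {}
        for (a, b, c), p in joint.items():
            out[f(a, b, c)] = out.get(f(a, b, c), 0.0) + p
        return out

    pc  = marg(lambda a, b, c: c)
    pbc = marg(lambda a, b, c: (b, c))
    pab = marg(lambda a, b, c: (a, b))
    pb  = marg(lambda a, b, c: b)

    # The same joint also factorizes along the anti-causal chain a <- b <- c.
    for a, b, c in joint:
        reverse = pc[c] * (pbc[(b, c)] / pc[c]) * (pab[(a, b)] / pb[b])
        assert abs(joint[(a, b, c)] - reverse) < 1e-12
    print("p(a)p(b|a)p(c|b) == p(c)p(b|c)p(a|b) for every (a, b, c)")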


If all you are learning is a graph that gives you a Markov factorization you have no business making claims about interventions -- interventions are a separate magisterium. You can assume that the unknown graph from which the data came is causal -- but you need to say this explicitly, this assumption will be controversial to some people, and by making that assumption you are I think committing yourself to the use of interventionist/potential outcome language (just to describe what it means for a data generating graph to be causal).

I have no problems with you doing Bayesian updating and getting posteriors over causal models -- I just wanted to get more precision on what a causal model is. A causal model is not a density factorizing with respect to a DAG -- that's a statistical model. A causal model makes assertions that relate hypothetical experiments like p(x | do(pa(x))) with observed data like p(x | pa(x)). So your Bayesian updating is operating in a world that contains more than just probability theory (which is a theory of standard joint densities, without the mention of do(.) or hypothetical experiments). You can in fact augment probability theory with a logical description of interventions, see for example this paper:

http://www.jair.org/papers/paper648.html


If your notion of causal model does not relate do(.) to observed data, then I don't know what you mean by a causal model. It's certainly not what I mean by it.

Replies from: Eliezer_Yudkowsky, Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-01T19:29:20.006Z · LW(p) · GW(p)

Well, this is very rapidly getting us into complex territory that future decision-theory posts will hopefully explore, but a very brief answer would be that I am unwilling to define anything fundamental in terms of do() operations because our universe does not contain any do() operations, and counterfactuals are not allowed to be part of our fundamental ontology because nothing counterfactual actually exists and no counterfactual universes are ever observed. There are quarks and electrons, or rather amplitude distributions over joint quark and lepton fields; but there is no do() in physics.

Causality seems to exist, in the sense that the universe seems completely causally structured - there is causality in physics. On a microscopic level where no "experiments" ever take place and there are no uncertainties, the microfuture is still related to the micropast with a neighborhood-structure whose laws would yield a continuous analogue of D-separation if we became uncertain of any variables.

Counterfactuals are human hypothetical constructs built on top of high-level models of this actually-existing causality. Experiments do not perform actual interventions and access alternate counterfactual universes hanging alongside our own, they just connect hopefully-Markov random numbers into a particular causal arrow.

Another way of saying this is that a high-level causal model is more powerful than a high-level statistical model because it can induct and describe switches, as causal processes, which behave as though switching arrows around, and yields predictions for this new case even when the settings of the switches haven't been observed before. This is a fancypants way of saying that a causal model lets you throw a bunch of rocks at trees, and then predict what happens when you throw rocks at a window for the first time.

Replies from: Wei_Dai, IlyaShpitser, thomblake
comment by Wei Dai (Wei_Dai) · 2012-12-01T20:32:02.654Z · LW(p) · GW(p)

As an additional data point, I also still do not have a very good understanding of your ideas about causality (although I did note earlier that it seems rather different from Pearl's (which are similar to Ilya's)). I also note that nobody else seems to have a good understanding of your ideas, at least not enough to try to build upon them either here on LW or on the decision theory mailing list or try to explain them to me when I asked.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-01T21:17:11.753Z · LW(p) · GW(p)

Interesting. Sorry to bother you further, but can I ask you to quote a particular sentence or paragraph above that seems unclear? Or was the above clear, but it implies other questions that aren't clear, or the motivations aren't clear?

Replies from: Benja, Wei_Dai
comment by Benya (Benja) · 2012-12-02T00:46:38.569Z · LW(p) · GW(p)

As a third data point, I used to be very confused about your ideas about causality, but your recent writing has helped a lot. To make embarrassingly clear how very wrong I've been able to be, some years ago when you'd told us about TDT but not given details, I thought you had a fully worked-out and justified theory about how a decision agent could use causal graphs to model its uncertainty about the output of platonic computations, and use do() on its own output to compute the utility of different courses of action, and I got very frustrated when I simply couldn't figure out how to fill in the details of that...

...hmm. (I should probably clarify: when I say "use causal graphs to reason about", I don't mean in the 'trivial' sense you are actually using where the platonic computations cause other things but are themselves uncaused in the model; I mean some sort of system where different computations and/or logical facts about computations form a non-degenerate graph, and where do() severs one node somewhere in the middle of that graph from its parents.) "And", I was going to say, "when you finally did tell us more, I had a strong oh moment when you said that you still weren't able to give a completely satisfying theory/justification, but were reasonably satisfied with the version you had. But I still continued to think that my picture of what you had been trying to do had been correct, only you didn't have a fully worked-out theory of it, either." The actual quote that turned into this memory of things seems to be,

Note that this does not solve the remaining open problems in TDT (though Nesov and Dai may have solved one such problem with their updateless decision theory). Also, although this theory goes into much more detail about how to compute its counterfactuals than classical CDT, there are still some visible incompletenesses when it comes to generating causal graphs that include the uncertain results of computations, computations dependent on other computations, computations uncertainly correlated to other computations, computations that reason abstractly about other computations without simulating them exactly, and so on.

But there's also this:

The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.

And later:

Those of you who've read the quantum mechanics sequence can extrapolate from past experience that I'm not bluffing.

Huh. In retrospect I can see how this matches my current understanding of what you're doing, but comparing this to what I wrote in the first paragraph above (before searching for that post), it's actually surprisingly nonobvious where the difference is between what you wrote back then and what I wrote just now to explain the way in which I had horribly misunderstood you...

Anyway. As for what you wrote in the great-grandparent, I had to read it slowly, but most of it makes perfect sense to me; the last paragraph I'm not quite as sure about, but there too I think I understand what you mean.

There is, however, one major point on which I currently feel confused. You seem to be saying that causal reasoning should be seen as a very fundamental principle of epistemology, and on your list of open problems, you have "Better formalize hybrid of causal and mathematical inference." But it seems to me that if you just do inference about logical uncertainty, and the mathematical object you happen to be interested in is a cellular automaton or the PDE giving the time evolution of some field theory, then your probability distribution over the state at different times will necessarily happen to factor in such a way that it can be represented as a causal model. So why treat causality as something fundamental in your epistemology, and then require deep thinking about how to integrate it with the rest of your reasoning system, rather than treating it as an efficient way to compress some probability distributions, which then just automatically happens to apply to the mathematical objects representing our actual physics? (At this point, I ask this question not as a criticism, but simply to illustrate my current confusion.)

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-02T04:33:22.732Z · LW(p) · GW(p)

So why treat causality as something fundamental in your epistemology, and then require deep thinking about how to integrate it with the rest of your reasoning system, rather than treating it as an efficient way to compress some probability distributions, which then just automatically happens to apply to the mathematical objects representing our actual physics?

Because causality is not about efficiently encoding anything. A causal process a -> b -> c is equally efficiently encoded via c -> b -> a.

But it seems to me that if you just do inference about logical uncertainty, and the mathematical object you happen to be interested in is a cellular automaton or the PDE giving the time evolution of some field theory, then your probability distribution over the state at different times will necessarily happen to factor in such a way that it can be represented as a causal model.

This is not true, for lots of reasons, one of them having to do with "observational equivalence." A given causal graph has many different graphs with which it agrees on all observable constraints. All these other graphs are not causal. The 3 node chain above is one example.

Replies from: Benja
comment by Benya (Benja) · 2012-12-02T08:28:15.909Z · LW(p) · GW(p)

Sorry, I understand the technical point about causal graphs you are referring to, but I do not understand the argument you're trying to make with it in this context.

Suppose it's the year 2100, and we have figured out the true underlying laws of physics, and it turns out that we run on a cellular automaton, and we have some very large and energy-intensive instruments that allow us to set up experiments where we can precisely set up the states of individual primitive cells. Now we want to use probabilistic reasoning to examine the time evolution of a cluster of such cells if we have only probabilistic information about the boundary conditions. Since this is a completely ordinary cellular automaton, we can describe it using a causal model, where the state of a cell at time t+1 is caused by its own state and the state of its neighbours at time t.

In this case, causality is really fundamentally there in the laws of physics (in a discrete analog of what we suspect for our actual laws of physics). And though you can't reach in from the outside of the universe, it's possible to imagine scenarios where you could do the equivalent of do() on some of the cells in your experiment, though it wouldn't really be done by acausally changing what happens in the universe -- one way to imagine it is that your experiment runs only in a two-dimensional slice surrounded by a "vacuum" of cells in a "zero" state, and you can reach in through that vacuum to change one of the cells in the two-dimensional grid.

But when it comes to how to model this inside a computer, it seems that you can reach all the conclusions you need by "ordinary" probabilistic reasoning: For example, you could start with say a uniform joint probability distribution over the state of all cells in your experiment at all times; then you condition on the fact that they fulfill the laws of physics, i.e. the time evolution rule of the cellular automaton; then you condition again on what you know about the boundary conditions, e.g. the fact that your experimental apparatus reaches in through the third dimension at some point to change the state of some cells. It's extraordinarily inefficient to represent the joint distribution as a giant look-up table of probabilities, but I do not see what inferences you want but are going to lose by doing the calculations that way.

(All of this holds even if the true laws happen to be deterministic in only one direction in time, so that in your experiment you can distinguish a -> b -> c from c -> b -> a by reaching in through the third dimension at time b.)
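A toy version of that enumerate-and-condition procedure, in hypothetical Python (the automaton rule, grid size, and observation are invented for illustration):

    from itertools import product

    # "Start uniform over all histories, condition on the physics, condition
    # on observations," for a 5-cell, 3-step world running Rule 90 (each cell
    # becomes the XOR of its two neighbours; edges are fixed at 0).

    WIDTH, STEPS = 5, 3

    def step(row):
        padded = (0,) + row + (0,)
        return tuple(padded[i - 1] ^ padded[i + 1] for i in range(1, WIDTH + 1))

    # Uniform prior over initial rows == uniform prior over lawful histories.
    histories = []
    for init in product((0, 1), repeat=WIDTH):
        h = [init]
        for _ in range(STEPS - 1):
            h.append(step(h[-1]))
        histories.append(h)

    # Observation: at time 2 the middle cell reads 1.
    consistent = [h for h in histories if h[2][2] == 1]

    # Posterior over an unobserved cell (time 0, leftmost), by brute counting.
    p = sum(h[0][0] for h in consistent) / len(consistent)
    print(f"P(cell[0,0] = 1 | observation) = {p:.3f}")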

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-03T21:33:09.648Z · LW(p) · GW(p)

It depends on granularity. If you are talking about your game of life world on the level of the rules of the game, that is equivalent to talking about our Universe on the level of the universal wave function. In both cases there are no more agents with actuators and no more do(.), as a result. That is, it's not that your factorization will be causal, it's that there is no causality.

But if you are taking a more granular view of your game of life world, similar to the macroscopic view of our Universe, where there are agents that can push and prod their environment, then suddenly talking about do(.) becomes useful for getting things done (just like it is useful to talk about addition or derivatives). On this macroscopic level, there is causality, but then your statement about all factorizations being causal is false (due to obvious examples involving reversing causal chains, for example).

comment by Wei Dai (Wei_Dai) · 2012-12-01T23:19:27.133Z · LW(p) · GW(p)

On second thought, the main problem may not be lack of clarity but that your ideas about causality are too speculative and people either lack confidence that your research program (try to reduce Pearl's do()-based causality to lower-level "causality in physics") is the right one, or do not see how to proceed.

Both apply for me, but the former is perhaps more relevant at this point. Basically I'm not sure that "do()-based causality" will actually end up playing a role in the ultimate "correct" decision theory (I guess if there is a lack of clarity, it's over why you think that it will), and in the meantime there are other problems that definitely need to be solved and also seem more approachable.

(To explain why I think "do()-based causality" may not end up playing a role, it seems plausible that in an AI or at least decision theory (I wanted to say theoretical decision theory but that seems redundant :), cognition about "high-level causality" just ends up being handled as a special case by a more general algorithm, similar to how an AI programmed to maximize expected utility wouldn't specifically need to be hand-coded with natural language processing if it was running on a sufficiently powerful computer.)

ETA: BTW, can you comment on whether my understanding in this comment was correct, and whether it still applies to Eliezer_2012?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-02T01:15:53.954Z · LW(p) · GW(p)

You realize I'm arguing against do()-based causality? If not, I was very much unclearer than I thought.

I have never tried to reduce causal arrows to similarity; Barbour does, I don't. I take causality to be, or be the epistemic conjugate of, something physical and real which was involved in manufacturing this oddly-well-modeled-by-causality universe that we actually live in. They are presently primitive in my model; I have not yet reduced them, except in the obvious sense that they are also formal mathematical relations between points, i.e., causal relations are a special case of logical relations (and yet we still live in a causal universe rather than a merely logical one). I do indeed reduce consciousness to computation and computation to causality, though there's a step here involving magical reality-fluid about which I am still confused - I have no idea why or what it means for a causal process to be more or less real, either as a result of having more or less Born measure, being instantiated in many places, or for any other reason.

Replies from: Wei_Dai, IlyaShpitser
comment by Wei Dai (Wei_Dai) · 2012-12-02T02:24:28.012Z · LW(p) · GW(p)

You realize I'm arguing against do()-based causality? If not, I was very much unclearer than I thought.

Maybe it's just me not updating fast enough. My impression is that when you talked about causality prior to today, you usually mentioned Pearl and never said you disagreed with him on anything, so I assumed you wanted to keep his do()-based causality and just add a layer below it. Were you always against do()-based causality or did you change your mind at some point?

I have never tried to reduce causal arrows to similarity; Barbour does, I don't.

Hmm, re-reading Timeless Causality, I don't see how I could have learned that the idea belongs to Barbour and that you disagree with him. It sure sounds like it was your idea.

causal relations are a special case of logical relations (and yet we still live in a causal universe rather than a merely logical one)

Why should we care about causality as decision theorists, if we have decision theories that can deal with logical universes in general, and causal relations are just a special case of logical relations?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-03T03:42:21.807Z · LW(p) · GW(p)

Hmm, re-reading Timeless Causality, I don't see how I could have learned that the idea belongs to Barbour and that you disagree with him. It sure sounds like it was your idea.

This sounds like a high-priority problem, but actually I don't see any reference to reduction-to-similarity in Timeless Causality, although there's a lot in Barbour's book about it. What do you mean by "mind reduces to computation which reduces to causal arrows which reduces to some sort of similarity relationship between configurations"? Unless this is just in the sense that causal mechanisms are logical relations?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-03T12:14:58.448Z · LW(p) · GW(p)

I interpreted this paragraph as suggesting that causality reduces to similarity, but given your latest clarifications, I guess what you actually had in mind was that causality tends to produce similarity, and so we can infer causality from similarity.

When two regions of spacetime are timelike separated, we cannot deduce any direction of causality from similarities between them; they could be similar because one is cause and one is effect, or vice versa. But when two regions of spacetime are spacelike separated, and far enough apart that they have no common causal ancestry assuming one direction of physical causality, but would have common causal ancestry assuming a different direction of physical causality, then similarity between them... is at least highly suggestive.

Previously, I thought you considered causality to be a higher-level concept rather than a primitive one, similar to "sound waves" or "speech" as opposed to, say, "particle movements". That sort of made sense, except that I didn't know why you wanted to make causality an integral part of decision theory. Now you're saying that you consider causality to be primitive and a special kind of logical relation, which actually makes less sense to me, and still doesn't explain why you want to make causality an integral part of decision theory. It makes less sense because if we consider the laws of physics as logical relations, they don't have a direction. As you said, "Time-symmetrical laws of physics didn't seem to leave room for asymmetrical causality." I don't see how you get around this problem if you take causality to be primitive. But the bigger problem is that (at the risk of repeating myself too many times) I don't understand your motivation for studying causality, because if I did I'd probably spend more time thinking about it myself and understand your ideas about it better.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-03T17:59:57.388Z · LW(p) · GW(p)

I'm trying to think like reality. If causality isn't a special kind of logic, why is everything in the known universe made out of (a continuous analogue of) causality instead of logic in general? Why not Time-Turners or a zillion other possibilities?

Replies from: Wei_Dai, DaFranker
comment by Wei Dai (Wei_Dai) · 2012-12-03T19:41:06.270Z · LW(p) · GW(p)

If causality isn't a special kind of logic, why is everything in the known universe made out of (a continuous analogue of) causality instead of logic in general?

Wait, if causality is a special kind of logic, how does that help answer the question? Don't we still have to answer why the universe is made of this kind of logic instead of some other?

Why not Time-Turners or a zillion other possibilities?

I don't understand how lack of Time-Turners makes you think causality is a special kind of logic or why you want to incorporate causality into decision theory (which is still my bigger question). Similar questions could be asked about other features of the universe:

  • Why does the universe have 3 spatial dimensions instead of a zillion other possibilities?
  • Why don't the laws of physics allow information to be destroyed (i.e., why do they never map two different states at time t to the same state at time t+1)?

But we're not concerned about these questions at the level of decision theory, since it seems possible to have a decision theory that works with an arbitrary number of dimensions, and with both kinds of laws of physics. Similarly, I don't see why we can't have a "causality-agnostic" decision theory that works in universes both with and without Time-Turners.

comment by DaFranker · 2012-12-03T18:11:14.486Z · LW(p) · GW(p)

I think the point was more about whether causality should be thought of as a fundamental part of the rules, like this, or whether it's more useful to think of causality as an abstraction that (ahem, excuse the term) "emerges" from the fundamentals when we try to identify patterns in said fundamentals.

Somewhat akin to how "meaning" exists in a computer program despite none of the bits fundamentally meaning anything, I think. My thoughts are becoming more and more confused as I type, though, which makes me wish I had an environment suitable to better concentration.

comment by IlyaShpitser · 2012-12-15T22:45:22.802Z · LW(p) · GW(p)

You realize I'm arguing against do()-based causality?

Ok, I would like to state for the record that I no longer understand what you mean when you say "factor something as a causal graph" (which may well mean no one else on this site understands either). Basically everything you ever wrote on the subject of causality or causal graphs (other than exposition of standard material) is now a complete mystery to me. In particular, I don't understand what sorts of graphs are in your paper on Newcomb's problem, or why those graphs justify any sort of conclusion about Newcomb's problem.

Graph models are overloaded, there are lots of different models that all have the same graph. You have to explain what you mean if you use graphs.

comment by IlyaShpitser · 2012-12-01T23:08:39.596Z · LW(p) · GW(p)

I would be interested in reading about this. A few points:

(a) I agree that causality is a "useful fiction" (like real numbers or derivatives).

(b) If you are going to be writing posts about "causal diagrams" you need to be clear about what you mean. Usually by causal diagrams people mean Pearl's stuff, or closely related stuff (agnostic causal models, minimal causal models, etc.) All these models are defined via either do(.) or stronger notation. If you do not mean that by causal diagrams, that's fine! But please explain what you do mean to avoid confusing people. You have a paper on TDT that seems to use causal diagrams. Which ones did you mean in there?

edit: I should say that if your project has "defining actual cause" as a special case, it's probably a black hole from which no one returns (it's the analytic philosophy version of the P/NP problem).

edit 2: I think the derivation of "do(.)" ought to be not dissimilar to the derivation of "+", if you worry about induction problems. "+" is a mathematical fiction very useful for representing regularities with handling objects, "do(.)" is a mathematical fiction very useful for representing regularities involved with algorithms with actuators running around.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-02T01:11:45.703Z · LW(p) · GW(p)

If causality is a useful fiction, it's conjugate to some useful nonfiction; I should like to know what the latter is.

I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it. Even if do() - as shorthand for describing experimental procedures involving switches on arrows - does happen to be a procedure you can perform on those diagrams, that's a consequence of the definition, it is not actually part of the representation of the actual causal model. You can write out causal models, and they give predictions - this suffices to define them as hypotheses.

More importantly: How can you possibly make the truth-condition be a correspondence to counterfactual universes that don't actually exist? That's the point of my whole epistemology sequence - truth-conditions get defined relative to some combination of physical reality that actually exists, and valid logical consequences pinned down by axioms. So yes, I would definitely derive do() rather than have it being primitive, and I wouldn't ever talk about the truth-condition of causal models relative to a do() out there in the environment - we talk about the truth-condition of causal models relative to quarks and electrons and quantum fields, to reality.

I'm a bit worried (from some of his comments about causal decision theory) that Pearl may actually believe in free will, or did when he wrote the first edition of Causality. In reality nothing is without parents, nothing is physically uncaused - that's the other problem with do().

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-02T04:38:37.492Z · LW(p) · GW(p)

I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it.

Well, the author is dead, they say.

There are actually two separate causal models in Pearl's book: "causal Bayesian networks" (chapter 1), and "functional models" aka "non-parametric structural equation models" (chapter 7). These models are not the same, in fact functional models are a lot stronger logically (that is they make many more assumptions).

The first is defined via do(.), you can check the definition. The second can be defined either via a set of functions, or via a set of axioms. The two definitions are, I believe, equivalent. The axiomatic approach is valuable in statistics, where we often cannot exhibit the functions that make up the model, and must resort to enumerating assumptions. If you want to take the axiomatic approach you need a language stronger than do(.). In particular you need to be able to express counterfactual statements of the form "I have a headache. Would I have a headache had I taken an aspirin one hour ago?" Pearl's model in chapter 7 actually makes assumptions about counterfactuals like that. If you think talking about counterfactual worlds that don't actually exist is dubious, then you join a large chorus of folks who are critical of Pearl's functional models.

If you want to learn more about different kinds of causal models people look at, and the criticisms of models that make assumptions on counterfactuals, the following is a good read:

http://events.iq.harvard.edu/events/sites/iq.harvard.edu.events/files/wp100.pdf


Some folks claim that a model is not causal unless it assumes consistency, which is an axiom stating that if for a person u, we intervene on X and set it to a value x that naturally occurs in u, then for any Y in u, the value of Y given that intervention is equal to the value of Y in that same person had we not intervened on X at all. Or, concisely:

Y(x,u) = Y(u), if X(u) = x

or even more concisely:

Y(X) = Y

This assumption is actually counterfactual. Without this assumption it's not possible to do causal inference.
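A minimal simulation of the axiom (hypothetical Python; the potential outcomes and treatment mechanism are invented): consistency is what equates each unit's recorded outcome with one of its potential outcomes, and hence the observable E[Y | X=1] with the counterfactual quantity E[Y(1) | X=1].

    import random

    random.seed(2)

    # Each unit u carries both potential outcomes, Y(x=0, u) and Y(x=1, u),
    # plus a "natural" treatment choice X(u) that depends on the same hidden
    # trait as the outcomes (so treatment is confounded).
    units = []
    for _ in range(10_000):
        u = random.random()                # hidden background trait
        y0 = int(u > 0.7)                  # Y(x=0, u)
        y1 = int(u > 0.4)                  # Y(x=1, u)
        x = int(random.random() < u)       # natural treatment X(u)
        units.append((x, y0, y1))

    # Consistency: the outcome we actually record is Y(u) = Y(X(u), u).
    recorded = [(x, y1 if x else y0) for x, y0, y1 in units]

    # The axiom equates an observable quantity with a counterfactual one:
    # E[Y | X=1], estimable from data, equals E[Y(1) | X=1].
    mean = lambda vals: sum(vals) / len(vals)
    lhs = mean([y for x, y in recorded if x == 1])
    rhs = mean([y1 for x, _, y1 in units if x == 1])
    print(lhs == rhs)   # True by construction; with real data it is assumed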

comment by thomblake · 2012-12-03T17:36:29.460Z · LW(p) · GW(p)

Reading this whole thread, I'm interested to know what your thoughts on causality are. Do you have existing posts on the subject that I should re-read? I was under the impression you pretty much agreed with Pearl, but now that seems not to be the case.

By the way, Pearl certainly wasn't arguing from a "free will" perspective - rather, I think he'd agree with "there is no do() in physics" but disagree that "there is causality in physics".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-01T21:06:00.426Z · LW(p) · GW(p)

a -> b -> c -> d, with a <- u1 -> c, and a <- u2 -> d, where we do not observe u1,u2, and u1,u2 are very complicated, then we can figure out the true graph exactly by independence type techniques without having to observe u1 and u2. Note: the marginal distribution p(a,b,c,d) that came from this graph has no conditional independences at all (checkable by d-separation on a,b,c,d), so typical techniques fail.

Irrelevant question: Isn't (b || d) | a, c?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-01T22:42:26.609Z · LW(p) · GW(p)

No, because b -> c <-> a <-> d is an open path if you condition on c and a.

Replies from: Eliezer_Yudkowsky
comment by lukeprog · 2012-12-02T01:31:49.099Z · LW(p) · GW(p)

Luke gave me the impression SI wasn't very interested in that point

How? I find myself very interested in this point, just not enough to schedule a lecture about it in the next month, since we have a lot of other things going on, and we're out of town, and so on.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-02T02:24:48.044Z · LW(p) · GW(p)

Fair enough, retracted. Sorry!

comment by pengvado · 2012-11-30T23:55:38.963Z · LW(p) · GW(p)

On your account, how do you learn causal models from observing someone else perform an experiment? That doesn't involve any interventions or counterfactuals. You only see what actually happens, in a system that includes a scientist.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-01T00:11:33.644Z · LW(p) · GW(p)

That depends what you mean by an "experiment." If you divide a set of patients into a control group and a test group, and then have the test group smoke a pack of cigarettes per day, that is an "experiment" to me, one that is represented by an intervention (because we are forcing the test group to smoke regardless of what they would naturally want to do).

Observing that the test group is much more likely to develop cancer would lead me to conclude that the graph

smoking -> cancer

is a causal graph rather than merely a statistical graph.


If we do not perform the above experiment due to ethical reasons, but instead use observational data on smokers, we have to worry about confounders, like Fisher did. We also have to worry, because we are implicitly linking that data with counterfactual situations (what would have happened if those guys we observed were forced to smoke). This linking isn't "free," there are assumptions operating in the background. Assumptions expressed in a language that can talk about counterfactual situations.
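A minimal simulation of that contrast (hypothetical Python; the genotype confounder and all the rates are invented): the observational conditional p(cancer | smoking) overstates the interventional p(cancer | do(smoking)) because, left to themselves, smokers are disproportionately genotype carriers.

    import random

    random.seed(3)

    # Hidden confounder: a genotype that raises both the urge to smoke and
    # the cancer risk (Fisher's worry). Compare the observational conditional
    # with the interventional quantity estimated by forcing a random group.

    def cancer(smokes, genotype):
        base = 0.05 + 0.30 * genotype                   # genotype drives risk...
        return random.random() < base + 0.10 * smokes   # ...smoking adds a little

    # Observational regime: genotype also drives smoking.
    obs = []
    for _ in range(100_000):
        g = random.random() < 0.5
        s = random.random() < (0.8 if g else 0.1)
        obs.append((s, cancer(s, g)))

    mean = lambda rows: sum(c for _, c in rows) / len(rows)
    p_obs = mean([r for r in obs if r[0]])      # p(cancer | smoking)

    # Experimental regime: assignment by coin flip, i.e. do(smoking).
    exp = []
    for _ in range(100_000):
        g = random.random() < 0.5
        s = random.random() < 0.5               # forced, independent of genotype
        exp.append((s, cancer(s, g)))
    p_do = mean([r for r in exp if r[0]])       # p(cancer | do(smoking))

    print(f"p(cancer | smoke) = {p_obs:.3f}   p(cancer | do(smoke)) = {p_do:.3f}")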

comment by thomblake · 2012-11-29T21:46:36.616Z · LW(p) · GW(p)

I'm so glad you post here.

comment by nigerweiss · 2012-11-29T20:32:12.765Z · LW(p) · GW(p)

Another extremely serious problem is that there is next to no particularly effective effort in philosophical academia to disregard confused questions, and to move away from naive linguistic realism. The number of philosophical questions of the form 'is x y' that can be resolved by 'depends on your definition of x and y' is deeply depressing. There does not seem to be a strong understanding of how important it is to remember that not all words correspond to natural, or even (in some cases) meaningful categories.

Replies from: Mitchell_Porter, RobbBB, bryjnar, Peterdjones
comment by Mitchell_Porter · 2012-11-29T21:01:39.668Z · LW(p) · GW(p)

Please list as many examples of these questions as you can muster. (I mean questions, seriously discussed by philosophers, which you claim can be resolved in this way.)

Replies from: nigerweiss, Rune
comment by nigerweiss · 2012-11-29T21:48:29.601Z · LW(p) · GW(p)

Any discussion of what art is. Any discussion of whether or not the universe is real. Any conversation about whether machines can truly be intelligent. More specifically, the ship of Theseus thought experiment and the related sorites paradox are entirely definitional, as is Edmund Gettier's problem of knowledge. The (appallingly bad, by the way) swamp man argument by Donald Davidson hinges entirely on the belief that words actually refer to things. Shades of this pop up in Searle's Chinese room and other bad thought experiments.

I could go on, but that would require me to actually go out and start reading philosophy papers, and goodness knows I hate that.

Replies from: Bugmaster, RobbBB, siodine, Mitchell_Porter
comment by Bugmaster · 2012-11-30T04:08:27.722Z · LW(p) · GW(p)

Your examples include:

(1) Any discussion of what art is.
(2) Any discussion of whether or not the universe is real.
(3) Any conversation about whether machines can truly be intelligent.

I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, "how long is this stick ?". Depending on your definition, the answer may be "this many meters long", "depends on which reference frame you're using", "the concept of a fixed length makes no sense at this scale and temperature", or "it's not a stick, it's a cube". That doesn't mean that the question is inherently confused, only that you and your interlocutor have a communication problem.

That said, I believe that questions (1) and (3) are, in fact, questions about humans. They can be rephrased as "what causes humans to interpret an object or a performance as art", and "what kind of things do humans consider to be intelligent". The answers to these questions would be complex, involving multi-modal distributions with fuzzy boundaries, etc., but that still does not necessarily imply that the questions are confused.

Which is not to say that confused questions don't exist, or that modern philosophical academia isn't riddled with them; all I'm saying is that your examples are not convincing.

Replies from: JackV, BerryPick6, nigerweiss
comment by JackV · 2012-11-30T11:37:31.703Z · LW(p) · GW(p)

I agree that the answers to these questions depend on definitions

I think he meant that those questions depend ONLY on definitions.

As in, there's a lot of interesting real-world knowledge that goes into getting a submarine to propel itself; but now that we know all that, people asking "can a submarine swim" are only deciding "should the English word 'swim' apply to the motion of a submarine, which is somewhat like the motion of swimming, but not entirely". That example sounds stupid, but people waste a lot of time on the similar case of "think" instead of "swim".

Replies from: Bugmaster
comment by Bugmaster · 2012-11-30T16:59:51.550Z · LW(p) · GW(p)

Ok, that's a good point; inserting the word "only" in there does make a huge difference.

I also agree with BerryPick6 on this sub-thread.

comment by BerryPick6 · 2012-11-30T12:03:33.709Z · LW(p) · GW(p)

"What causes humans to interpret an object or a performance as art" and "What is art?" may be seen as two entirely different questions to certain philosophers. I'm skeptical that people who frequent this site would make such a distinction, but we aren't talking about LWers here.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T12:19:22.692Z · LW(p) · GW(p)

People who frequent this site already make parallel distinctions about more LW-friendly topics. For instance, the point of the Art of Rationality is that there is a right way to do thinking and persuading, which is not to say that Reason "just is" whatever happens to persuade or convince people, since people can be persuaded by bad arguments. If that can be made to work, then "it's hanging in a gallery, but it isn't art" can be made to work.

ETA:

That said, I believe that questions (1) and (3) are, in fact, questions about humans.

Rationality is about humans, in a sense, too. The moral is that being "about humans" doesn't imply that the search for norms, real meanings, or genuine/pseudo distinctions is fruitless.

Replies from: Bugmaster
comment by Bugmaster · 2012-11-30T16:57:59.112Z · LW(p) · GW(p)

Agreed, but my point was that questions about humans are questions about the Universe (since humans are part of it), and therefore they can be answerable and meaningful. Thus, you could indeed come up with an answer that sounds something like, "it's hanging in a gallery, but our model predicts that it's only 12.5% art".

But I agree with BerryPick6 when he says that not all philosophers make that distinction.

comment by nigerweiss · 2012-11-30T08:46:38.347Z · LW(p) · GW(p)

I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, "how long is this stick ?"

There's a key distinction that I feel you may be glossing over here. In the case of the stick question, there is an extremely high probability that you and the person you're talking to, though you may not be using exactly the same definitions, are using definitions that are closely enough entangled with observable features of the world to be broadly isomorphic.

In other words, there is a good chance that, without either of you adjusting your definitions, you and the neurotypical human you're talking to will be able to come up with some answer that both of you find satisfying, and that will allow you to meaningfully predict future experiences.

With the three examples I raised, this isn't the case. There are a host of different definitions, which are not closely entangled with simple, observable features of the world. As such, even if you and the person you're talking to have similar life experiences, there is no guarantee that you will come to the same conclusions, because your definitions are likely to be personal, and the outcome of the question depends heavily upon those definitions.

Furthermore, in the three cases I mentioned, unlike the stick, if you hold a given position, it's not at all clear what evidence could persuade you to change your mind, for many possible (and common!) positions. This is a telltale sign of a confused question.

Replies from: Bugmaster
comment by Bugmaster · 2012-11-30T17:03:58.777Z · LW(p) · GW(p)

There are a host of different definitions, which are not closely entangled with simple, observable features of the world.

I believe that at least two of those definitions could be something like, "what kinds of humans would consider this art ?", or "will machines ever pass the Turing test". These questions are about human actions which express human thoughts, and are indeed observable features of the world. I do agree that there are many other, more personal definitions that are of little use.

comment by Rob Bensinger (RobbBB) · 2012-11-30T01:29:28.249Z · LW(p) · GW(p)

I think we need a clearer idea of what we mean by a 'bad' thought experiment. Sometimes thought experiments are good precisely because they make us recognize (sometimes deliberately) that one of the concepts we imported into the experiment is unworkable. Searle's Chinese room is a good example of this, since it (and a class of similar thought experiments) helps show that our intuitive conceptions of the mental are, on a physicalist account, defective in a variety of ways. The right response is to analyze and revise the problem concepts. The right response is not to simply pretend that the thought experiment was never proposed; the results of thought experiments are data, even if they're only data about our own imaginative faculties.

comment by siodine · 2012-11-29T22:08:14.121Z · LW(p) · GW(p)

My first thought was "every philosophical thought experiment ever", and to my surprise Wikipedia says there aren't that many thought experiments in philosophy (although they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-30T01:24:42.847Z · LW(p) · GW(p)

What's your objection to the violinist thought experiment? If you're a utilitarian, perhaps you don't think the waters here are very deep. It's certainly a useful way of deflating and short-circuiting certain other intuitions that block scientific and medicinal progress in much of the developed world, though.

Replies from: siodine
comment by siodine · 2012-11-30T16:07:12.357Z · LW(p) · GW(p)

From SEP:

Judith Thomson provided one of the most striking and effective thought experiments in the moral realm (see Thomson, 1971). Her example is aimed at a popular anti-abortion argument that goes something like this: The foetus is an innocent person with a right to life. Abortion results in the death of a foetus. Therefore, abortion is morally wrong. In her thought experiment we are asked to imagine a famous violinist falling into a coma. The society of music lovers determines from medical records that you and you alone can save the violinist's life by being hooked up to him for nine months. The music lovers break into your home while you are asleep and hook the unconscious (and unknowing, hence innocent) violinist to you. You may want to unhook him, but you are then faced with this argument put forward by the music lovers: The violinist is an innocent person with a right to life. Unhooking him will result in his death. Therefore, unhooking him is morally wrong.

However, the argument, even though it has the same structure as the anti-abortion argument, does not seem convincing in this case. You would be very generous to remain attached and in bed for nine months, but you are not morally obliged to do so.

The thought experiment depends on your intuitions or your definition of moral obligations and wrongness, but the experiment doesn't make these distinctions. It just pretends that everyone has the same intuition, and that as such the experiment should remain analogous regardless (probably because Judith didn't think anyone else could have different intuitions), and so then you have all these other philosophers and people arguing about these minutiae and adding on further qualifications and modifications, to the point that they may as well be talking about actual abortion.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-30T20:10:48.429Z · LW(p) · GW(p)

The thought experiment functions as an informal reductio ad absurdum of the argument 'Fetuses are people. Therefore abortion is immoral.' or 'Fetuses are conscious. Therefore abortion is immoral.' That's all it's doing. If you didn't find the arguments compelling in the first place, then the reductio won't be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of the violinist thought experiment is that you don't need to question the anti-abortionist's premises in order to undermine the most common anti-abortion arguments; they yield consequences all on their own that most anti-abortionists would find unacceptable.

That is the dialectical significance of the above argument. It has nothing to do with assuming that everyone found the original anti-abortion argument plausible. An initially implausible argument that's sufficiently popular may still be worth analyzing and refuting.

comment by Mitchell_Porter · 2012-11-30T00:26:58.860Z · LW(p) · GW(p)

I am unimpressed by your examples.

Can we first agree that some questions are not dissolved by observing that meanings are conventional? If I run up to you and say "My house is on fire, what should I do?", and you tell me "The answer depends, in part, on what you mean by 'house' and 'fire'...", that will not save my possessions from destruction.

If I take your preceding comment at face value, then you are telling me

  • there is nothing to think about in pondering the nature of art, it's just a matter of definition
  • there is nothing to think about regarding whether the universe exists, it's just a matter of definition
  • there's no question of whether artificial intelligence is the same thing as natural intelligence, it's just a matter of definition

and that there's no "house-on-fire" real issue lurking anywhere behind these topics. Is that really what you think?

Replies from: nigerweiss
comment by nigerweiss · 2012-11-30T00:45:16.099Z · LW(p) · GW(p)

Well, I'm sorry. Please fill out a conversational complaint form and put it in the box, and an HR representative will mail you a more detailed survey in six to eight weeks.

I agree entirely that meaningful questions exist, and made no claim to the contrary. I do not believe, however, that as an institution, modern philosophy is particularly good at identifying those questions.

In response to your questions,

  • Yes, absolutely.

  • Yes, mostly. There are different kinds of existence, but the answer you get out will depend entirely on your definitions.

  • Yes, mostly. There are different kinds of possible artificial intelligence, but the question of whether machines can -truly- be intelligent depends exclusively upon your definition of intelligence.

As a general rule, if you can't imagine any piece of experimental evidence settling a question, it's probably a definitional one.

Replies from: Mitchell_Porter, John_Maxwell_IV
comment by Mitchell_Porter · 2012-11-30T03:44:51.260Z · LW(p) · GW(p)

The true natures of art, existence, and intelligence are all substantial topics - highly substantial! In each case, as with the physical house on fire, there is an object of inquiry independent of the name we give it.

With respect to art - think of the analogous question concerning science. Would you be so quick to claim that whether something is science is purely a matter of definition?

With respect to existence - whether the universe is real - we can distinguish possibilities such as: there really is a universe containing billions of light-years of galaxies full of stars; there is a brain in a vat being fed illusory stimuli, with the real world actually being quite unlike the world described by known physics and astronomy; and even solipsistic metaphysical idealism - there is no matter at all, just a perceiving consciousness having experiences.

If I ponder whether the universe is real, I am trying to choose between these and other options. Since I know that the universe appears to be there, I also know that any viable scenario must contain "apparent universe" as an entity. To insist that the reality of the universe is just a matter of definition, you must say that "apparent universe" in all its forms is potentially worthy of the name "actual universe". That's certainly not true to what I would mean by "real". If I ask whether the Andromeda galaxy is real, I mean whether there really is a vast tract of space populated with trillions of stars, etc. A data structure providing a small part of the cosmic backdrop in a simulated experience would not count.

With respect to intelligence - I think the root of the problem here is that you think you already know what intelligence in humans is - that it is fundamentally just computation - and that the boundary between smart computation and dumb computation is obviously arbitrary. It's like thinking of a cloud as "water vapor". Water vapor can congregate on a continuum of scales from invisibly small to kilometers in size, and a cloud is just a fuzzy naive category employed by humans for the water vapor they can see in the sky.

Intelligence, so the argument goes, is similarly a fuzzy naive category employed by humans for the computation they can see in human behavior. There would be some truth to that analysis of the concept... except that, in the longer run, we may find ourselves wanting to say that certain highly specific refinements of the original concept are the only reasonable ways of making it precise. Intelligence implies something like sophisticated insight; so it can't apply to anything too simple (like a thermostat), and it can't apply to algorithms that work through brute force.

And then there is the whole question of consciousness and its role in human intelligence. We may end up wishing to say that there is a fundamental distinction between conscious intelligence - sophisticated cognition which employs genuine insight, i.e. conscious insight, conscious awareness of salient facts and relations - and unconscious intelligence - where the "insight" is really a matter of computational efficiency. The topic of intelligence is the one where I would come closest to endorsing your semantic relativism, but that's only because in this case, the "independent object of inquiry" appears to include heterogeneous phenomena (e.g. sophisticated conscious cognition, sophisticated unconscious cognition, sophisticated general problem-solving algorithms), and how we end up designating those phenomena once we obtain a mature understanding of their nature, might be somewhat contingent after all.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-30T09:07:11.056Z · LW(p) · GW(p)

As a general rule, if you can't imagine any piece of experimental evidence settling a question, it's probably a definitional one.

So what's the difference between philosophy and science then?

Replies from: nigerweiss
comment by nigerweiss · 2012-11-30T21:25:15.890Z · LW(p) · GW(p)

Err... science deals with questions you can settle with evidence? I'm not sure what you're getting at here.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-30T21:27:50.173Z · LW(p) · GW(p)

How does your use of the label "philosophical" fit in with your uses of the categories "definitional" and "can be settled by experimental evidence"?

comment by Rune · 2012-11-30T00:52:25.767Z · LW(p) · GW(p)

I once met a philosophy professor who was at the time thinking about the problem "Are electrons real?" I asked her what her findings had shown thus far, and she said she thinks they're not real. I then asked her to give me examples of things that are real. She said she doesn't know any examples of such things.

comment by Rob Bensinger (RobbBB) · 2012-11-29T23:01:24.374Z · LW(p) · GW(p)

Not only are pretty much all contemporary philosophers attentive to this fact, but there's an active philosophical literature about the naturalness of some terms as opposed to others, and about how one can reasonably distinguish natural kinds from non-natural ones. Particularly interesting is some of the recent work in metaphilosophy and in particular metametaphysics, which examines whether (or when) ontological disputes are substantive, what is the function of philosophical disputes, when one can be justified in believing a metaphysical doctrine, etc. (Note: This field is not merely awesome because it has a hilarious name.)

Don't confuse disagreements about which natural kinds exist, and hence about which disputes are substantive, with disagreements about whether there's a distinction between substantive and non-substantive disputes at all.

comment by bryjnar · 2012-11-29T23:18:12.556Z · LW(p) · GW(p)

I strongly disagree. Almost every question in philosophy that I've ever studied has some camp of philosophers who reject the question as ill-posed, or want to dissolve it, or some such. Wittgensteinians sometimes take that attitude towards every question. Such philosophers are often not discussed as much as those who propose "big answers", but there's no question that they exist and that any philosopher working in the field is well aware of them.

Also, there's a selection effect: people who think question X isn't a proper question tend not to spend their careers publishing on question X!

Replies from: siodine, nigerweiss
comment by siodine · 2012-11-29T23:55:06.326Z · LW(p) · GW(p)

I agree, but the problems remain and the arguments flourish.

comment by nigerweiss · 2012-11-30T00:47:02.437Z · LW(p) · GW(p)

Sure, there are absolutely philosophers who aren't talking about absolute nonsense. But as an industry, philosophy has a miserably bad signal-to-noise ratio.

Replies from: bryjnar
comment by bryjnar · 2012-11-30T02:52:19.159Z · LW(p) · GW(p)

I'd mostly agree, but the particular criticism that you levelled isn't very well-founded. Questioning the way we use language and the way that philosophical questions are put is not the unheard-of idea that you portray it as. In fact, it's pretty standard. It's just not necessarily the stuff that people choose to put into most "Intro to the Philosophy of X" textbooks, since there's usually more discussion to be had if the question is well-posed!

comment by Peterdjones · 2012-11-30T00:17:16.214Z · LW(p) · GW(p)

Please name some contemporary philosophers who are naive linguistic realists.

comment by Rob Bensinger (RobbBB) · 2012-11-29T22:27:59.412Z · LW(p) · GW(p)

Your previous post was good, but this one seems to be eliding a few too many issues. If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — is actually asserting about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned. The survey you cite is also obviously unhelpful, in that the questions on that survey were chosen because they're controversial. Most philosophical questions are not very controversial, but for that very reason you don't hear much about them. If we hand-picked all the foundational questions physicists disagreed about and conducted a popularity poll, should we really be surprised to find that the poll results were divided?

(It's also worth noting that some of the things being measured by the poll are attitudinal and linguistic variation between different philosophical schools and programs, not just doctrinal disagreements. Why should we expect ethicists and philosophers of mathematics to completely agree in methodology and terminology, when we do not expect the same from physicists and biologists?)

There are three reasons philosophers disagree about foundational issues:

(1) Almost everyone disagrees, at least tacitly, about foundational issues. Foundational issues are hard, and our ordinary methods of acquiring truth and resolving disagreements often short-circuit when we arrive at them. Scientific realism is controversial among scientists. Platonism is controversial among mathematicians. Moral realism is controversial among politicians and voters. Philosophers disagree about these matters for the same basic reasons that everyone else does; the only difference is that philosophers do not follow the social conventions the rest of us follow, which dictate bracketing and ignoring foundational disagreements as much as possible. In other words...

(2) ... philosophy is about foundational disagreement. There is no one worldly content or subject matter or methodology shared between all the things we call 'philosophy.' The only thing we really use to distinguish philosophers from non-philosophers is how foundational and controversial the things they talk about are. When you put all the deep controversies in a box and call that box Philosophy, you should not be surprised upon opening the box to see that it is clogged with disagreement.

(3) Inasmuch as philosophical issues are settled, they stop getting talked about. So there's an obvious selection bias effect. Philosophical progress occurs; but that progress gets immediately imported into our political systems, our terminological choices and conceptual distinctions, our scientific theories and practices, our logical and mathematical toolboxes. And then it stops being philosophy.

That said, I agree with a lot of your criticisms of a lot of philosophers' practices. They need more cognitive science and experimentalism. Desperately. But we should be a lot more careful and sophisticated in making this criticism, because most philosophers these days (even the most metaphysically promiscuous) do not endorse the claim 'our naive, unreflective intuitions automatically pick out the truth,' and because we risk alienating the Useful Philosophers when we make our target of attack simply Philosophy, rather than a more carefully constructed group.

LessWrong: Start tabooing the word 'philosophy.' See how it goes.

Replies from: Vladimir_Nesov, Viliam_Bur, Pablo_Stafforini
comment by Vladimir_Nesov · 2012-11-29T22:37:32.101Z · LW(p) · GW(p)

If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — is actually asserting about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned.

A major problem with modern physics is that there are almost no known phenomena that work in a way that disagrees with how modern physics predicts they should work (in principle; there are lots of inferential/computational difficulties). What physics asserts about the world, to the best of anyone's knowledge, coincides with what's known about most of the world in all detail. The physicists have to build billion-dollar monstrosities like the LHC just to get their hands on something they don't already thoroughly understand. This doesn't resemble the situation with philosophy in the slightest.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-29T22:51:15.851Z · LW(p) · GW(p)

You're speaking in very general terms, and you're not directly answering my question, which was 'what is quantum mechanics asserting about the world?' I take it that what you're asserting amounts to just "It all adds up to normality." But that doesn't answer questions concerning the correct interpretation of quantum mechanics. "x + y + z ... = normality." That's a great sentiment, but I'm asking about what physics' "x" and "y" and "z" are, not questioning whether the equation itself holds.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-11-29T23:02:01.957Z · LW(p) · GW(p)

you're not directly answering my question, which was 'what is quantum mechanics asserting about the world?'

I'm pointing out that, in particular, it's asserting all those things that we know about the world. That's a lot, and the fact that there is consensus and not much arguing about this shouldn't make this achievement seem a trivial detail. This seems like a significant distinction from philosophy, one that makes simple analogies between these disciplines extremely suspect.

(I agree that I'm not engaging with the main points of your comment; I'm focusing only on this particular aside.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-29T23:07:26.069Z · LW(p) · GW(p)

So your response to my pointing out that physicists too disagree about basic things is to point out that physicists don't disagree about everything. In particular, they agree that the world around us exists.

Uh... good for them? Philosophers too have been known to harbor a strong suspicion that there is a world, and that it harbors things like chairs and egg timers and volcanoes. Physicists aren't special in that respect. (In particular, see the philosophical literature on Moorean facts.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-11-29T23:11:43.226Z · LW(p) · GW(p)

physicists don't disagree about everything. In particular, they agree that the world around us exists. ... Philosophers too have been known to harbor a strong suspicion that there is a world

Physicists agree about almost everything. In particular, they agree about all the specific details of how the world works that are relevant (in principle) to most things that have ever been observed (this is a lot more detail than "the world exists").

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-29T23:25:04.034Z · LW(p) · GW(p)

They agree about the most useful formalisms for modeling and predicting observations. But 'formalism' and 'observation' are not themselves concepts of physics; they are to be analyzed away in the endgame. My request is not for you to assert (or deny) that physicists have very detailed formalisms, or very useful ones; it is for you to consider how much agreement there is about the territory ultimately corresponding to these formalisms.

A simple example is the disagreement about which many-worlds-style interpretation is best; about whether many-worlds-style interpretations are the best interpretations at all; and about whether, if they are the best, they're best enough to dominate the probability space. Since the final truth-conditions and referents of all our macro- and micro-physical discourse depend on this interpretation, one cannot duck the question 'what are chairs?' or 'what are electrons?' simply by noting 'chairs are something or other that's real and fits our model.' It's true, but it's not the question under dispute. I said physicists disagree about many things; I never said that physicists fail to agree about anything, so changing the topic to the latter risks confusing the issue.

Replies from: prase
comment by prase · 2012-11-30T19:24:57.895Z · LW(p) · GW(p)

You are basically saying that physicists disagree about philosophical questions.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-30T19:38:10.038Z · LW(p) · GW(p)

Is the truth of many-worlds theory, or of non-standard models, a purely 'philosophical' matter? If so, then sure. But that's just a matter of how we choose to use the word 'philosophy;' it doesn't change the fact that these are issues physicists, specifically, care and disagree about. To dismiss any foundational issue physicists disagree about as for that very reason 'philosophical' is merely to reaffirm my earlier point. Remember, my point was that we tend to befuddle ourselves by classifying issues as 'philosophical' because they seem intractable and general, then acting surprised when all the topics we've classified in this way are, well, intractable and general.

It's fine if you think that humanity should collectively and universally give up on every topic that has ever seemed intractable. But you can make that point much more clearly in those simple words than by bringing in definitions of 'philosophy.'

Replies from: Desrtopa, prase
comment by Desrtopa · 2012-12-01T16:41:38.779Z · LW(p) · GW(p)

It seems that the matters you're arguing that scientists disagree on are all ones where we cannot, at least by means anyone's come up with yet, discriminate between options by use of empiricism.

The questions they disagree on may or may not be "philosophical," depending on how you define your terms, but they're questions that scientists are not currently able to resolve by doing science to them.

The observation that scientists disagree on matters that they cannot resolve with science doesn't detract from the argument that the process of science is useful for building consensuses. If anything it supports it, since we can see that scientists do not tend to converge on consensuses on questions they aren't able to address with science.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-01T19:38:58.332Z · LW(p) · GW(p)

The observation that scientists disagree on matters that they cannot resolve with science doesn't detract from the argument that the process of science is useful for building consensuses.

Agreed. It's not that scientists universally distrust human rationality, while philosophers universally trust it. Both groups regularly subject their own reasoning faculties to tests and to distrust. (And both also need to rely at least somewhat on human reasoning, since one can only fairly conclude that a kind of reasoning is flawed by reasoning one's way toward that conclusion. Even purely 'empirical' or 'factual' questions require some amount of interpretive work.)

The reason philosophers seem to disagree more than scientists is very simple, and it's the same reason physicists trying to expand the Standard Model disagree more than physicists working within the Standard Model: Because there's a lack of intersubjectively accessible data. Without such data for calibration, different theoretical physicists' inferences, intuitions, and pattern-matching faculties in general will get relatively diverse results, even if their methodologies are quite commendable.

comment by prase · 2012-12-01T18:25:07.201Z · LW(p) · GW(p)

I think you are reading too much into my comment. It totally wasn't about what humanity should collectively give up on, or even what anybody should. And I agree that philosophy is effectively defined as a collection of problems which are not yet understood enough to be even investigated by standard scientific methods.

I was only pointing out (perhaps not very clearly, but I hadn't time for a lengthier comment) that the core of physics is formalisms and modelling and predictions (and perhaps engineering issues, since experimental apparatuses today are often more complex than the phenomena they are used to observe). That is, almost all knowledge needed to be a physicist is the ordinary "non-philosophical" knowledge that everybody agrees upon, and almost all talks at physics conferences are about formalism and observations, while the questions you label "foundational" are given a relatively small amount of attention. It may seem that asking "what is the true nature of the electron" is a question of physics, since it is about electrons, but actually most physicists would find the question uninteresting and/or confused, while the same question might sound truly interesting to a philosopher. (And that isn't due to lack of agreement on the correct answer, but more likely because physicists prefer more specific / less vague questions than philosophers do.)

One can get a false impression about that, since the most famous physicists tend to talk significantly more about philosophical questions than the average physicist does. But if Feynman speaks about the interpretation of quantum mechanics, that's not proof that the interpretation of quantum mechanics is an extremely important question of physics (because otherwise a Nobel laureate wouldn't talk about it); it's rather proof that Feynman had really high status and could get away with giving a talk on a less-than-usually rigorous topic (and it is much easier to make an interesting lecture from philosophical stuff than from more technical stuff).

Of course, my point is partly about definitions - not so much the definition of philosophy but rather the definition of physics - but once we are comparing two disciplines, having common definitions of those disciplines is unavoidable.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-01T20:47:42.304Z · LW(p) · GW(p)

I don't think we disagree all that much; and I meant 'you' to be a hypothetical interlocutor, not prase. All I want to reiterate is that the line between physics and philosophy-of-physics can be quite fuzzy. The 'measurement problem' is perhaps the pre-eminent problem in 'philosophy of physics,' but it's not some neoscholastic mumbo-jumbo of the form "what is the true nature of the electron?". Rather, it's a straightforward physics problem that happens to have turned out to be especially intractable. Specifically, it is the problem that these three propositions form an inconsistent triad, given our Born-probabilistic observations:

  • (1) Wave-function descriptions specify all the properties of physical systems.
  • (2) The wave function evolves solely in accord with the Schrödinger equation.
  • (3) Measurements have definite outcomes.

De-Broglie-style interpretations ('hidden variables') reject (1), von-Neumann-style interpretations ('objective collapse') reject (2), and Everett-style interpretations ('many worlds') reject (3). So far, there doesn't seem to be anything 'unphysical' or 'unphysicsy' about any of these views. What's made them 'philosophical' is simply that the problem is especially difficult, and the prospects for solving it to everyone's satisfaction, by ordinary physicsy methods, seem especially dim. So, if that makes it philosophy, OK. But problems of this sort divide philosophers because they're hard, not because philosophers 'trust their own rationality' more than physicists do.
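
As a rough formal gloss - a sketch in standard textbook notation, not anything taken from the comment itself - the triad can be written:

    % (1): the wave function |psi> is a complete description of the system.
    % (2): it evolves only unitarily, via the Schrodinger equation:
    \[ i\hbar \, \frac{d}{dt} \lvert \psi(t) \rangle = \hat{H} \lvert \psi(t) \rangle \]
    % (3): each measurement has a single definite outcome $k$, observed
    % with Born probability:
    \[ \Pr(k) = \lVert \hat{P}_k \lvert \psi \rangle \rVert^{2} \]
    % Purely linear evolution never singles out one $k$, so (1)-(3)
    % cannot all hold at once.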

Replies from: prase
comment by prase · 2012-12-02T17:03:59.850Z · LW(p) · GW(p)

I find it a bit tricky to formulate problems in propositions like your (1)-(3) and insist that at least one must be rejected because of mutual inconsistency. The problem is that the meaning of the propositions is not precise. What exactly does "all properties of physical systems" denote? Is it "the maximum information about the system that can be obtained in principle" (subproblem: what does "in principle" mean?), or is it "information sufficient to predict all events in which the system is involved, if there is no uncertainty external to the system", or is it something else?

We know that the conditions under which we prepare the system can be summarised in a wave function, and we know how to calculate the frequencies of measurement outcomes, given a specific wave function. We know that knowledge of the wave function doesn't let us predict the measurements with certainty. We even know, thanks to Bell's inequalities and the experimental results, that if there is some unknown property of the system which determines the measurement outcome prior to the actual measurement, then this property must be non-local. We know that the evolution of systems under observation isn't described by the Schrödinger equation alone. All this is pretty uncontroversial.
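
(The Bell result mentioned here has a standard quantitative form - the CHSH inequality - sketched below in textbook notation rather than anything from the comment. For any local hidden-variable theory, the correlations E at detector settings a, a' and b, b' obey a bound that quantum mechanics predicts, and experiment confirms, is violated.)

    % Local hidden variables imply:
    \[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad \lvert S \rvert \le 2 \]
    % while quantum mechanics allows violations up to the Tsirelson bound:
    \[ \lvert S \rvert_{\mathrm{max}} = 2\sqrt{2} \]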

Now the interpretations tend to use different words to describe the same amount of knowledge. Instead of saying that we can get unpredictably different outcomes from a measurement on a system with some given wave function, one may say that the outcome is always the same but our consciousness splits and each part is aligned only with a portion of the outcome, or one may say that the outcome is not "definite" (whatever that means). This verbal play is the unphysicsy thing about the given propositions.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-02T18:22:57.691Z · LW(p) · GW(p)

What exactly does "all properties of physical systems" denote? Is it "maximum information about the system that can be obtained in principle"

You seem to be trying to explain something rather clear with something less clear. The sentence in question is simply affirming that the wave function captures everything that is true of the system; thus (if you accept this view) there are no hidden variables determining the seemingly probabilistic outcomes of trying to measure non-observables. There's nothing mysterious about asserting that there's a hidden cause in this case, any more than science in general is going Mystical when it hypothesizes unobserved causes for patterns in our data.

To say that the outcome is not "definite" is to say that it is false that a particular measurement outcome (like 'spin up'), and not an alternative outcome (like 'spin down'), obtains. "Definite" sounds vague here because the very idea of "many worlds" is extremely vague and hard to pin down. One way to think of it is that the statistical properties of quantum mechanics are an epiphenomenon of a vastly larger, unobserved reality (the wave function itself) that continues merrily on its way after the observation.

Where's the 'verbal play'?

Replies from: prase
comment by prase · 2012-12-02T19:36:56.600Z · LW(p) · GW(p)

Say there are no hidden variables and the evolution is probabilistic. Does the wave function then capture everything that is true of the system? It seems to me that it doesn't: it is true that the system will be measured spin up in the next measurement, but the wave function is equally compatible with spin down. Yet you seem to assert that if I don't believe in hidden variables, then the wave function does capture everything that is true of the system. Thus I don't find it "rather clear". Neither does "epiphenomenon of a vastly larger reality" seem clarifying to me even a little bit.
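
(A one-line qubit illustration of this point, in standard notation rather than prase's own: the same wave function assigns nonzero probability to both outcomes, so it cannot by itself entail either one.)

    \[ \lvert \psi \rangle = \alpha \lvert \uparrow \rangle + \beta \lvert \downarrow \rangle,
       \qquad \Pr(\uparrow) = \lvert \alpha \rvert^{2}, \quad
       \Pr(\downarrow) = \lvert \beta \rvert^{2} = 1 - \lvert \alpha \rvert^{2} \]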

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-02T20:35:11.906Z · LW(p) · GW(p)

Say there are no hidden variables and the evolution is probabilistic. Does then the wave function capture everything that is true of the system?

At a given time, yes. But over time, the way a wave function changes may (a) be determined entirely by the Schrödinger equation, or (b) be determined by a mixture of the Schrödinger equation and intermittent 'collapses.' Given (a), the apparently probabilistic distribution of observations is somehow mistaken, and we get a many-worlds-type interpretation. Given (b), the probabilities are preserved but the universe suddenly operates by two completely different causal orders, and we get an 'objective collapse' interpretation. These are the two options if the wave function captures all the variables.
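
Schematically (a standard gloss, not RobbBB's own notation), the two options differ only in the update rule applied to the wave function:

    % (a) No collapse: the state always evolves unitarily,
    \[ \lvert \psi(t) \rangle = e^{-i \hat{H} t / \hbar} \, \lvert \psi(0) \rangle \]
    % (b) Collapse: unitary evolution is interrupted at measurement, when
    % the state jumps to the (renormalised) projection onto the observed
    % outcome $k$:
    \[ \lvert \psi \rangle \;\mapsto\;
       \frac{\hat{P}_k \lvert \psi \rangle}{\lVert \hat{P}_k \lvert \psi \rangle \rVert} \]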

Replies from: prase
comment by prase · 2012-12-02T22:12:24.118Z · LW(p) · GW(p)

I am now interested in a clarification of "everything that is true of the system". I have an electron whose spin I am going to measure five minutes from now. Does the proposition "the spin will be measured up" belong to "everything that is true about the electron"? Presume that the spin will indeed be measured up (or that I will perceive the world in which it was up, or whatever formulation suits you best). To me it appears to be a true proposition, but there may be philosophical arguments to the contrary (the problem of future contingents comes to mind).

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-12-02T22:43:33.307Z · LW(p) · GW(p)

Physics-inclined people tend to be 4-dimensionalists, so I don't think they'll object to describing wave functions in terms that account for them at all times. Even indeterminists (i.e., collapse theorists) can accept that we can talk about what will be true of electrons in the future, though we can't even in principle know some of those facts in advance.

Does the proposition "the spin will be measured up" belong to "everything that is true about the electron"?

de Broglie sez: "Yes, that belongs to everything that is true (about the electron's wave function). But at least one truth about the electron (its position at any given time) is not accounted for in the wave function. (This explains why the Schrödinger equation, although a complete description of how wave functions change, is not a complete description of how physical systems change.)"

von Neumann sez: "Yes. And the wave function encompasses all these truths. But there is no linear dynamical equation relating all the time-slices of the wave function. There are more free-floating brute facts within wave functions than we might have expected."

Everett sez: "Yes... well, sort of. The formalism for 'the spin will be measured up' is a component of a truth. But it would be more accurate and objective to say something like 'the spin will be measured up and down' (assuming it was in a prior superposition). Thus the wave function encompasses all the truths, and evolves linearly over time in accord with the Schrödinger equation. Win-win!"

comment by Viliam_Bur · 2012-11-30T11:55:01.296Z · LW(p) · GW(p)

Inasmuch as philosophical issues are settled, they stop getting talked about.

Why exactly? I mean, there is no controversy in mathematics about whether 2+2=4, and yet we continue teaching this knowledge in schools. Uncontroversial, yet necessary to be taught, because humans don't get it automatically, and because it is necessary for more complicated calculations.

Why exactly don't philosophers do an equivalent of this? Is it because once a topic has been settled at a philosophical conference, the next generations of humans are automatically born with this knowledge? Or because the answer is published so widely that it becomes more widely known than the knowledge that 2+2=4? Or what?

Start tabooing the word 'philosophy.' See how it goes.

First approximation: Pretended ability to make specific conclusions concerning ill-defined but high-status topics. :(

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2012-11-30T19:23:47.169Z · LW(p) · GW(p)

I mean, there is no controversy in mathematics about whether 2+2=4, and yet we continue teaching this knowledge in schools.

Yes, and we continue teaching modus ponens and proof by reductio in philosophy classrooms. (Not to mention historical facts about philosophy.) Here we're changing the subject from 'do issues keep getting talked about equally after they're settled?' to 'do useful facts get taught in class?' The philosopher certainly has plenty of simple equations to appeal to. But the mathematician also has foundational controversies, both settled and open.

Pretended ability to make specific conclusions concerning ill-defined but high-status topics. :(

So if I pretend to be able to make specific conclusions about capital in macroeconomics, I'm doing philosophy?

comment by Pablo (Pablo_Stafforini) · 2012-11-30T06:34:04.910Z · LW(p) · GW(p)

Most philosophical questions are not very controversial, but for that very reason you don't hear much about them.

Really? Can you name a few philosophical questions whose answers are uncontroversial?

comment by kip1981 · 2012-11-29T20:42:43.702Z · LW(p) · GW(p)

Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.

First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. The fact that cognitive illusions (like visual illusions) are persistent, and the fact that philosophy problems are persistent, is not a coincidence. Philosophy problems cluster around those that involve cognitive illusions (positive outcome bias, the just-world phenomenon, the Lake Wobegon effect, the fundamental attribution error, etc.). I see this in my favorite topic area (the free will problem), but I believe that it likely applies broadly across philosophy.

Second, semantic ambiguity creates persistent problems if not identified and fixed. The solutions to several of Hilbert's 23 problems are "no answer - problem statement is not well defined." That approach is unsexy and emotionally dissatisfying (all of this work, yet we get no answer!). Perhaps for that reason, philosophers (but not mathematicians) seem completely incapable of doing it. On only the rarest occasions do philosophers suggest that some term ("good", "morality", "rationalism", "free will", "soul", "knowledge") might not possess a definition that is precise enough to do the work that we ask of it. In fact, as with CB, philosophy problems tend to cluster around problems that persist because of SA. (If the problems didn't persist, they might be considered trivial or boring.)

Replies from: Peterdjones, BerryPick6
comment by Peterdjones · 2012-11-30T00:24:01.715Z · LW(p) · GW(p)

On only the rarest occasions do philosophers suggest that some term ("good", "morality," "rationalism", "free will", "soul", "knowledge") might not possess a definition that is precise enough to do the work that we ask of it.

And they never expend any effort in establishing clear meanings for such terms. Oh wait... they expend far too much effort arguing about definitions... no, too little... no, too much.

OK: the problem with philosophers is that they are contradictory.

Replies from: khafra, Bruno_Coelho
comment by khafra · 2012-11-30T18:14:52.148Z · LW(p) · GW(p)

And they never expend any effort in establishing clear meanings for such terms. Oh wait....they expend far too much effort arguing about definitions

If philosophers were strongly biased toward climbing the ladder of abstraction instead of descending it, they could expend a great deal of effort, flailing uselessly about definitions.

comment by Bruno_Coelho · 2012-12-02T16:09:20.868Z · LW(p) · GW(p)

What sort of people do you have in mind? The generalization apparently considers academic philosophers in their current state, but not past people. Sure, someone without a strong science background will miss the point, focusing on the words. But arguing "by definitions" is not something done exclusively by philosophers.

comment by BerryPick6 · 2012-11-30T18:20:16.954Z · LW(p) · GW(p)

On only the rarest occasions do philosophers suggest that some term ("good", "morality," "rationalism", "free will", "soul", "knowledge") might not possess a definition that is precise enough to do the work that we ask of it.

At least when it comes to the concepts "Good," "Morality," and "Free Will," I'm familiar with some fairly prominent suggestions that they are in dire need of redefinition, and with other attempts to narrow or eliminate discussions of such loose ideas altogether.

comment by [deleted] · 2012-12-02T00:55:09.123Z · LW(p) · GW(p)

We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT?...

Your presentation here seems misleading to me. You imply that philosophers are merely average scorers on the CRT relative to the rest of the (similarly educated) population.

This claim is misleading for several reasons: 1) The study from which you get the philosophers' score reports a mean score for people who have had some graduate-level philosophical training. This is a set that will overlap with many of the other groups you mention. While it will include all professional philosophers, I don't think a majority of the set will be professional philosophers. Graduate-level logic or political philosophy courses, etc., are pretty standard in graduate educations across the board.

2) Frederick takes scores from a variety of different schools, trying to capture people, evidently, who are undergraduates, graduate students, or faculty. Frederick comes up with a mean score of 1.24 for respondents who are members of a university. In contrast, Livengood (from which you get the philosophers' mean score) gets mean scores of 0.65 and 0.82 for people with undergraduate or graduate/professional education respectively. If these two studies were using similar tests and methodologies, we should expect these scores to converge more. It seems likely that the Frederick study is not using comparable methodology or controls, making the straight comparison of scores misleading.

3) The Livengood study actually argues that people with some philosophical training tend to do significantly better than the rest of the population on the CRT, even when one controls for education. You do not mention this. You really ought to. Especially since, unlike the Frederick study, the Livengood study is the only one you cite which uses a methodology relevant to the question you're asking.
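
(For context on the scores being compared - a sketch assuming Frederick's original bat-and-ball item: the CRT is three questions of this kind, so the cited means are averages on a 0-3 scale.)

    % "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than
    % the ball. How much does the ball cost?" Intuition says ten cents;
    % solving for the ball's price $b$:
    \[ b + (b + 1.00) = 1.10 \;\Longrightarrow\; 2b = 0.10 \;\Longrightarrow\; b = 0.05 \]
    % gives the reflective answer of five cents.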

comment by Wei Dai (Wei_Dai) · 2012-11-30T12:02:21.433Z · LW(p) · GW(p)

I'm not sure that more rationality in philosophy would help enough as far as FAI is concerned. I expect that if philosophers became more rational, they would mainly just become more uncertain about various philosophical positions, rather than reach many useful (for building FAI) consensuses.

If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers. For example, Turing, Church, and others' work on understanding the nature of computation, von Neumann and Morgenstern's decision theory, Tegmark's Ultimate Ensemble, and algorithmic information theory / Solomonoff Induction. (Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?) Based on this, I think appropriate background knowledge and raw intellectual firepower (most of the smartest humans probably go into math/science instead of philosophy) are perhaps more important than rationality for making philosophical progress.

Replies from: Peterdjones, None, BerryPick6, Peterdjones
comment by Peterdjones · 2012-11-30T13:01:19.817Z · LW(p) · GW(p)

(Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?)

  • Quine's attack on apriority and analyticity.
  • Kuhn's and Popper's philosophy of science.
  • Rawls' and Nozick's political philosophy.
  • Kripke's new metaphysical necessity.

ETA:

  • Austin's speech act theory
  • Ryle's critique of Cartesianism
  • HOT theory (various)
  • Tarski's Convention T
  • Gettier's counterexamples
  • Parfit on personal identity
  • Parfit on ethics
  • Wittgenstein's PLA

Replies from: Wei_Dai, TimS, JoshuaZ, BerryPick6
comment by Wei Dai (Wei_Dai) · 2012-11-30T18:08:31.313Z · LW(p) · GW(p)

I'm only familiar with about a third of these (not counting Tarski, who I agree with JoshuaZ is more of a mathematician than a philosopher), but the ones that I am familiar with do not seem as interesting/impressive/fruitful/useful as the advances I mentioned in the grandparent comment. If you could pick one or two on your list for me to study in more detail, which would you suggest?

Replies from: BerryPick6, Peterdjones
comment by BerryPick6 · 2012-11-30T18:12:26.197Z · LW(p) · GW(p)

I know you aren't asking me, but my choices to answer this question would be Popper's philosophy of science, Rawls' and Nozick's political philosophy, and Quine.

comment by Peterdjones · 2012-11-30T19:00:39.604Z · LW(p) · GW(p)

Interesting to whom? Fruitful for what?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-11-30T21:18:03.087Z · LW(p) · GW(p)

Interesting to whom? Fruitful for what?

According to my own philosophical interests, which as it turned out (i.e., apparently by coincidence) also seems well aligned with what's useful for building FAI. I guess one thing that might be causing us to talk a bit past each other is that I read the opening post as talking about philosophy in the context of building FAI (since I know that's what the author is really interested in), but you may be seeing it as talking about philosophy in general (and looking at the post again I notice that it doesn't actually mention Friendly AI at all except by linking to a post about it).

Anyway, if you think any of the examples you gave might be especially interesting to someone like me, please let me know. Or, if you want, tell me which is most interesting to you and why.

comment by TimS · 2012-11-30T21:20:54.851Z · LW(p) · GW(p)

Kuhn's' and Popper's philosophy of science.

Made me laugh for a second seeing those two on the same line, because Popper (falsifiability) and Kuhn (The Structure of Scientific Revolutions) are not particularly related.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T23:05:28.393Z · LW(p) · GW(p)

Not at all. I should probably have put them on separate lines.

comment by JoshuaZ · 2012-11-30T17:38:45.901Z · LW(p) · GW(p)

Most of your examples seem valid but this one is strongly questionable:

Tarski's convention T

This example doesn't work. Tarski was a professional mathematician. There was a lot of interplay at the time between math and philosophy, but it seems he was closer to the math end of things. He did at times apply for philosophy positions, but for the vast majority of his life he was doing work as a mathematician. He was a mathematician/logician when he was at the Institute for Advanced Study, and he spent most of his professional career as a professor at Berkeley in the math department. Moreover, while he did publish some papers in philosophy proper, he was in general a very prolific writer, and the majority of his work (like his work on quantifier elimination in the real numbers, or the Banach-Tarski paradox) is unambiguously mathematical.

Similarly, the people who studied under him are all thought of as mathematicians (like Julia Robinson) or mathematician-philosophers (Feferman), with most in the first category.

Overall, Tarski was much closer to being a professional mathematician whose work sometimes touched on philosophy than a professional philosopher who sometimes did math.

comment by BerryPick6 · 2012-11-30T14:14:25.514Z · LW(p) · GW(p)
  • Mackie's Argument from Queerness
  • Hare and Ayer's work on Expressivism
  • Goodman's New Riddle of Induction
  • Wittgenstein
  • Frankfurt on Free Will
  • The Quine-Putnam indispensability thesis
  • Causal Theory of Reference
comment by [deleted] · 2012-11-30T12:49:55.454Z · LW(p) · GW(p)

(Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?)

I think the canonical example would be Thomas Metzinger's model of the first-person perspective.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-01T12:27:18.219Z · LW(p) · GW(p)

I think the canonical example would be Thomas Metzinger's model of the first-person perspective.

Wouldn't there be at least one reference to his book in the SEP if that were true?

Replies from: gwern, None
comment by gwern · 2012-12-01T16:49:07.305Z · LW(p) · GW(p)

http://plato.stanford.edu/search/searcher.py?query=metzinger ?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-01T20:11:39.841Z · LW(p) · GW(p)

Yeah, I did the same search, but none of those results reference his main work, the book that paper-machine cited (or any other papers/books that, judging from the titles, are about his main ideas).

Replies from: gwern
comment by gwern · 2012-12-01T20:37:22.035Z · LW(p) · GW(p)

They're still citations to his body of work, which is all on pretty much the same topic. The SEP is good, but it is just an encyclopedia, after all, and Being No One is a very challenging book (I still haven't read it because it's too hard for me). A general citation search would be more useful; I see 647 citations to it in Google Scholar. (I don't know of a citation engine specializing in philosophy - PhilPapers shows a fair bit of activity related to Metzinger, but doesn't give me how many philosophy papers cite it, much less how many in philosophy of mind.)

Replies from: Kawoomba, Wei_Dai, BerryPick6
comment by Kawoomba · 2012-12-01T23:20:19.947Z · LW(p) · GW(p)

Being No One is a very challenging book

This lecture he gives about the very same topic is much more accessible.

Replies from: fubarobfusco, NancyLebovitz
comment by fubarobfusco · 2012-12-02T02:15:07.061Z · LW(p) · GW(p)

Thank you for posting this.

comment by NancyLebovitz · 2012-12-02T05:34:35.563Z · LW(p) · GW(p)

He suggests that the reason we don't have awareness that our sensory experiences are created by a detailed internal process is that it wasn't evolutionarily worthwhile. However, we're currently in an environment where at least our emotional experiences are more and more likely to be hacked by other people who aren't necessarily on our side, which means that self-awareness is becoming more valuable. At this point, the evolution is more likely to be memetic (parents teaching their children to notice what's going on in advertisements) than physiological, though it's also plausible that some people find it innately easier to track what is going on with their emotions than others.

Has anyone read The Book of Not Knowing by Peter Ralston? I've only read about half of it, but it looks like it's heading into the same territory.

comment by Wei Dai (Wei_Dai) · 2012-12-01T20:57:38.111Z · LW(p) · GW(p)

I didn't even try to read the book, but went through a bunch of review papers (which of course all try to summarize the main ideas of the book) and feel like I got a general understanding that way. I wanted to see how his ideas compare to his peers (so as to judge how much of an advance they are upon the state of the art), and that's when I found the SEP lacking any discussion of them (which still seems fairly damning to me).

comment by BerryPick6 · 2012-12-01T20:50:34.727Z · LW(p) · GW(p)

Being No One is a very challenging book (I still haven't read it because it's too hard for me).

Apparently, his follow-up book "The Ego Tunnel" deals with mostly the same stuff and is not as impenetrable. Have you read it? I'd be interested in hearing your thoughts on it.

Replies from: gwern
comment by gwern · 2012-12-01T21:16:10.275Z · LW(p) · GW(p)

Ironically, my problem with that book was that it was too easy and simple.

comment by [deleted] · 2012-12-01T16:07:56.564Z · LW(p) · GW(p)

No idea why this would be true.

(For example, despite S. S. Abhyankar being a reasonably well-known mathematician, there is only one reference to him in the MacTutor History of Mathematics archive.)

comment by BerryPick6 · 2012-11-30T13:33:42.036Z · LW(p) · GW(p)

Nick Bostrom?

Replies from: Wei_Dai, Peterdjones
comment by Wei Dai (Wei_Dai) · 2012-12-01T00:08:25.888Z · LW(p) · GW(p)

I think Nick is actually an example of how rationality isn't that useful for making philosophical progress. I'm a bit reluctant to say this (for obvious social reasons, which I'm judging to be outweighed by the strategic importance of this issue) but his work (PhD thesis) on anthropic reasoning wasn't actually very good. I know that at least one SI Research Associate agrees with my assessment.

ETA: I should qualify this by saying that while his proposed solution wasn't very good (which you can also infer from the fact that nobody ever talks about or builds upon it around here, despite strong interest in the topic), he did come up with arguments/considerations/thought experiments, such as the Presumptuous Philosopher, that we still discuss.

Replies from: BerryPick6, cousin_it
comment by BerryPick6 · 2012-12-01T00:11:13.759Z · LW(p) · GW(p)

I'll freely admit that I haven't actually read any of his work, and I was mainly making the comment due to the generally fanboyish response he gets 'round these parts. I found your comment very interesting, and may investigate further.

comment by cousin_it · 2012-12-02T04:04:26.278Z · LW(p) · GW(p)

I know that at least one SI Research Associate agrees with my assessment.

Just in case this refers to me: I agree with your assessment of Bostrom's thesis, but I'm no longer an SI research associate :-)

comment by Peterdjones · 2012-11-30T13:42:44.365Z · LW(p) · GW(p)

As an example of what?

Replies from: BerryPick6
comment by BerryPick6 · 2012-11-30T13:48:14.344Z · LW(p) · GW(p)

A straight-up philosopher who is useful to FAI (well, more to X-risk, but it's probably still applicable). Obviously, your examples are the ones that immediately occurred to me, so I didn't want to repeat them.

comment by Peterdjones · 2012-11-30T13:11:15.478Z · LW(p) · GW(p)

For example, Turing, Church, and others' work on understanding the nature of computation,

Why does that count as phil?

von Neumann and Morgenstern's decision theory,

or that?

and algorithmic information theory / Solomonoff Induction.

or that?

Tegmark's Ultimate Ensemble,

OK. That resembles modal realism, which is definitely philosophy, although it is routinely condemned here as bad philosophy.

Replies from: IlyaShpitser, Wei_Dai
comment by IlyaShpitser · 2012-11-30T20:39:13.049Z · LW(p) · GW(p)

Look, everything counts as phil: (http://en.wikipedia.org/wiki/Natural_philosophy). Philosophy gets credit for launching science in the 19th century.

Philosophers were the first to invent the AI effect, apparently (http://en.wikipedia.org/wiki/AI_effect).

If you want to look at interesting advances in philosophy, read the stuff by the CMU causality gang (Spirtes/Scheines/Glymour, philosophy department, also Kelly). Of course you will probably say that is not really philosophy but theoretical statistics or something. Pearl's stuff can be considered philosophy too (certainly his stuff on actual cause is cited a lot in phil papers).

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T23:13:27.436Z · LW(p) · GW(p)

Look, everything counts as phil: Old science may also have counted as phil. in the days when they weren't distinct. However, WD's examples were of contemporary developments that seem to be considered not-phil by contemporary philosophers.

certainly his stuff on actual cause is cited a lot in phil papers

Science in general is cited quite a lot. But there is a difference between phils. discussing phil. and phils. discussing non-phil as something that can be philosophised about, if only in tone and presentation.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2012-12-01T07:41:53.398Z · LW(p) · GW(p)

Your quoting is confusing.

comment by Wei Dai (Wei_Dai) · 2012-11-30T16:33:35.099Z · LW(p) · GW(p)

Why does that count as phil?

Perhaps a more relevant question, in the context of the OP, is whether those problems are representative of the types of foundational (as opposed to engineering, logistical, strategic, etc.) problems that need to be solved in order to build an FAI.

But we could talk about "philosophy" as well, since, to be honest, I'm not sure why some topics count as "philosophy" and others don't. It seems to me that my list of advances does fall under Wikipedia's description of philosophy as "the study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind, and language." Do you disagree, or have an alternative definition?

Replies from: Richard_Kennaway, DaFranker, Peterdjones
comment by Richard_Kennaway · 2012-11-30T17:04:49.451Z · LW(p) · GW(p)

It seems to me that my list of advances does fall under Wikipedia's description of philosophy

I agree. But there are also some systematic differences between what the people you cited did and what (other) philosophers do.

  • The former didn't merely study fundamental problems, they solved them.

  • They did stuff that now exists and can be studied independently of the original works. You don't have to read a single word of Turing to understand Turing machines and their importance. You need not study Solomonoff to understand Solomonoff induction.

  • Their works are generally not shelved with philosophy in libraries. Are they studied in undergraduate courses on philosophy?

Replies from: novalis, BerryPick6
comment by novalis · 2012-11-30T18:23:21.887Z · LW(p) · GW(p)

Turing's work on AI (and Searle's response) was discussed in my undergrad intro phil course. But that is not quite the same thing.

comment by BerryPick6 · 2012-11-30T17:07:42.626Z · LW(p) · GW(p)

Their works are generally not shelved with philosophy in libraries. Are they studied in undergraduate courses on philosophy?

Not in my undergraduate program, at least.

comment by DaFranker · 2012-11-30T16:52:36.297Z · LW(p) · GW(p)

I think the criticism is indeed pointed towards the scientific "field" of Philosophy, AKA people working in Philosophy Departments or similar.

I doubt many here are targeting the activity of philosophy, or the people who would identify as "philosophers", but rather specifically Philosophy academics with a specialization in Philosophy, who work in a Philosophy Department and produce Philosophy papers to be published in a Journal of Philosophical Writings (and possibly give the occasional Philosophy class or seminar, depending on the local supply of TAs).

IME, a large fraction of real, practicing philosophers are actively publishing papers on arXiv or equivalent.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T17:17:27.702Z · LW(p) · GW(p)

I think the criticism is indeed pointed towards the scientific "field" of Philosophy

Did you mean academic field?

I doubt many here are targeting the activity of philosophy, or the people who would identify as "philosophers", but rather specifically Philosophy academics with a specialization in Philosophy, who work in a Philosophy Department and produce Philosophy papers to be published in a Journal of Philosophical Writings (and possibly give the occasional Philosophy class or seminar, depending on the local supply of TAs).

You mean professional phil. bad, amateur phil. good. Or not so much amateur phil. as the sort of sciencey-philly cross-disciplinary stuff that EY and Robin and Bostrom and Tegmark do. Maybe. But actually some of it is quite bad, for reasons which are evident if you know phil.

Replies from: DaFranker
comment by DaFranker · 2012-11-30T17:30:18.836Z · LW(p) · GW(p)

Did you mean academic field?

Yes, my bad.

You mean professional phi. bad, amateur phil good.

A good professional study of philosophy itself is to me indistinguishable from someone doing metaresearch, i.e. figuring out how to make the standards of the scientific method even better and the techniques of all scientists more efficient. IME, this is not what the majority of academics working in Philosophy Departments are doing.

OTOH, good applied philosophy, i.e. the sort of stuff you do once you've studied the result of the above metaresearch, is basically just doing science. In other words, doing research in any field that is not about how to do research.

So yes, in a sense, most academics categorized as "professional phil" are less good than most academics categorized as "amateur phil" who mainly work in other disciplines. The latter are also almost exclusively "sciencey-philly cross-disciplinary".

I'm guessing we both agree that amateur philosophers who are neither academics nor scientists are less likely to produce meaningful research than any of the above, and yet that is pretty much the stereotype most people (in the general North American population) assign to "philosophers". Then again, the exclusion of "scientists" from that category feels like begging the question.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T17:47:34.990Z · LW(p) · GW(p)

So yes, in a sense, most academics categorized as "professional phil" are less good than most academics categorized as "amateur phil" who mainly work in other disciplines

Is the "so" meant to imply that that follows from the forefgoing? I don't see how it does.

comment by Peterdjones · 2012-11-30T17:08:59.897Z · LW(p) · GW(p)

I was responding to the sentence: "If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers."

...which does not mention "advances in philosophy useful to FAI".

Do you disagree, or have an alternative definition?

None of them have been much discussed by phils. (except possibly Bostrom, the Diana Hsieh of LessWrongism).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-11-30T17:37:15.155Z · LW(p) · GW(p)

None of them have been much discussed by phils.

Theory of computation is obviously used by the computational theory of mind, as well as by philosophy of language and of mathematics and logic. Decision theorists are commonly employed by philosophy departments, and all current decision theories descend from vNM's. AIT actually doesn't seem to be much discussed by philosophers (a search found only a couple of references in the SEP, and even the entry on "simplicity" only gives a brief mention of it), which is a bit surprising. (Oh, there's a more substantial discussion in the entry for "information".)

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T19:11:36.937Z · LW(p) · GW(p)

Theory of computation is obviously used by the computational theory of mind

Surely that is the other way round. Early computer theorists just wanted to solve mathematical problems mechanically.

Theory of computation is obviously used by the computational theory of mind

What is your point? His day job was physicist.

comment by bryjnar · 2012-11-29T23:51:18.709Z · LW(p) · GW(p)

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex...

Huh?

Examples like that are the bread and butter of discussions about motivational internalism: precisely the argument that tends to get made is that because it's not motivating, it's not a real moral judgement. You may think that's stupid in other ways, but it's not that philosophers are ignorant of what psychology tells us; some of them just disagree about how to interpret it.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-30T09:03:04.147Z · LW(p) · GW(p)

Are some philosophical questions questions about reality? If so, what does it take for a question about reality to count as "philosophical" as opposed to "scientific"? Are these just empirical clusters?

And if it's not a fact about reality, what does it mean to get it right?

Replies from: ygert
comment by ygert · 2012-12-01T19:24:58.094Z · LW(p) · GW(p)

I think the point is not to classify questions as philosophical or not, but rather to look at the people trying to solve them. This post is talking about how the people called "philosophers" have not been effective at solving these problems, and as such should change their approach. In fact, a large part of the Sequences is an attempt to solve questions which you might think of as "philosophical" and which have in the past been worked on by philosophers. But what this post says is that the correct way to approach these (or any other) problems is to look at them in a rational way (as EY did in writing the Sequences) and not in the way most people (specifically the class of people known as "philosophers") have tried to solve them in the past.

comment by timtyler · 2012-11-30T01:41:55.844Z · LW(p) · GW(p)

Luke quoted:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

That's a pretty irritatingly wrong quote. Of course the scientific method is social for reasons other than the stupidity and self-deception of scientists. For example, the scientists doing the cigarette-company-funded science probably weren't stupid or self-deceiving. Other scientists doubted their science for a different set of reasons.

comment by Scott Alexander (Yvain) · 2012-12-01T20:13:46.992Z · LW(p) · GW(p)

A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton Undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees to a LessWrong.com meetup group (2.69).

I found this by far the most interesting part of this (very good) post. I am surprised I had to learn it hidden inside a mostly unrelated essay. I would certainly like to hear more about this test.

comment by Decius · 2012-12-10T02:10:51.643Z · LW(p) · GW(p)

What would evidence of deontology / consequentialism / virtue ethics, empiricism vs. rationalism, or physicalism vs. non-physicalism look like?

comment by Peterdjones · 2012-11-30T00:02:26.708Z · LW(p) · GW(p)

[But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Philosophy hasn't been very successful at finding the truth about the kind of questions philosophy typically considers. What's better... at answering those kinds of questions? You can only condemn philosophy for having worse methods than science, based on results, if they are both applied to the same problems.

comment by [deleted] · 2012-12-01T23:32:02.563Z · LW(p) · GW(p)

Sometimes, they are even divided on psychological questions that psychologists have already answered...

I think you've misunderstood the debate: philosophers are arguing in this case over whether or not moral judgements are intrinsically motivating. If they are, then the brain-damaged people you make reference to are (according to moral judgement internalists) not really making moral judgements. They're just mouthing the words.

This is just to say that psychology has answered a certain question, but not the question that philosophers debating this point are concerned about.

Replies from: Manfred, Qiaochu_Yuan
comment by Manfred · 2012-12-02T07:26:37.749Z · LW(p) · GW(p)

This pattern-matches an awful lot to "if a tree falls in a forest..."

Replies from: None
comment by [deleted] · 2012-12-02T16:32:20.824Z · LW(p) · GW(p)

Yeah, but at a sufficiently low resolution (such as my description), lots of stuff pattern-matches, so: http://plato.stanford.edu/entries/moral-motivation/#MorJudMot

I'm not saying the philosophical debate is interesting or important (or that it's not), but the claim that psychologists have settled the question relies on an equivocation on 'moral judgement': in the psychological study, giving an answer to a moral question which comports with answers given by healthy people is a sufficient condition for moral judgement. For philosophers, it is neither necessary nor sufficient. Clearly, they are not talking about the same thing.

comment by Qiaochu_Yuan · 2012-12-02T00:24:52.882Z · LW(p) · GW(p)

How do I know whether anyone is making moral judgments as opposed to mouthing the words?

Replies from: None
comment by [deleted] · 2012-12-02T01:33:20.276Z · LW(p) · GW(p)

That sounds like an interesting question! If you'll forgive me answering your question with another, do you think that this is the kind of question psychology can answer, and if so, what kind of evidential result would help answer it?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-02T06:39:27.424Z · LW(p) · GW(p)

Well, I was hoping you would answer with at least a definition of what constitutes a moral judgment. A tentative definition might come from the following procedure: ask a wide selection of people to make what would colloquially be referred to as moral judgments and see what parts of their brains light up. If there's a common light-up pattern to basic moral judgments about things like murder, then we might call that neurological event a moral judgment. Part of this light-up pattern might be missing in the brain-damaged people.

Replies from: None
comment by [deleted] · 2012-12-02T16:41:14.051Z · LW(p) · GW(p)

Well, I was hoping you would answer with at least a definition of what constitutes a moral judgment.

But that's the philosophical debate!

As to your definition, notice the following problem: suppose you get a healthy person answering a moral question. Regions A and B of their brain light up. Now you go to the brain-damaged person, and in response to the same moral question only region A lights up. You also notice that the healthy person is motivated to act on the moral judgement, while the brain-damaged person is not. So you conclude that B has something to do with motivation.

So do you define a moral judgement as 'the lighting up of A and B' or just 'the lighting up of A'? Notice that nothing about the result you've observed seems to answer or even address that question. You can presuppose that it's A, or both A and B, but then you've assumed an answer to the philosophical debate. There's a big difference between assuming an answer, and answering.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-02T21:00:44.548Z · LW(p) · GW(p)

Neither. You taboo "moral judgment." From there, as far as I can tell, the question is dissolved.

Replies from: None
comment by [deleted] · 2012-12-02T22:32:10.907Z · LW(p) · GW(p)

Okay, good idea, let's taboo moral judgement. So your definition from the great grandparent was (I'm paraphrasing) "the activity of the brain in response to what are colloquially referred to as moral judgements." What should we replace 'moral judgement' with in this definition?

I assume it's clear that we can't replace it with 'the activity of the brain...'

(ETA: For the record, if tabooing in this way is your strategy, I think you're with me in rejecting Luke's claim that psychology has settled the externalism vs. internalism question. At the very best, psychology has rejected the question, not solved it. But much more likely, since philosophers probably won't taboo 'moral judgement' the way you have (i.e. in terms of brain states), psychology is simply discussing a different topic.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-02T22:59:26.683Z · LW(p) · GW(p)

"...in response to questions about whether it is right to kill people in various situations, or take things from people in various situations, or more generally to impose one's will on another person in a way that would have had significance in the ancestral environment." (This is based on my own intuition that people process judgments about ancestral-environment-type things like murder differently from the way people process judgments about non-ancestral-environment-type things like copyright law. I could be wrong about this.)

How would a philosopher taboo "moral judgment"?

Replies from: None
comment by [deleted] · 2012-12-02T23:40:10.950Z · LW(p) · GW(p)

That's fine, but it doesn't address the problem I described in the great great grandparent of this reply. Either you mean the brain activity of a healthy person, or the brain activity common to healthy and brain-damaged people. Even if philosophers intend to be discussing brain processes (which, in almost every case, they do not) then you've assumed an answer, not given one.

But in any case, this way of tabooing 'moral judgement' makes it very clear that the question the psychologist is discussing is not the question the philosopher is discussing.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-03T00:26:23.177Z · LW(p) · GW(p)

In that case I don't understand the question the philosopher is discussing. Can you explain it to me without using the phrase "moral judgment"?

Replies from: None
comment by [deleted] · 2012-12-03T04:25:07.872Z · LW(p) · GW(p)

Well, this isn't something I'm an expert in. Most of my knowledge of the topic comes from this SEP article, which I would in any case just be summarizing if I tried to explain the debate. The article is much clearer than I'm likely to be. So you're probably just better off reading that, especially the intro and section 3: http://plato.stanford.edu/entries/moral-motivation/

That article uses the phrase 'moral judgement' of course, but anyway I think tabooing the term (rather than explaining and then using it) is probably counterproductive.

I'd of course be happy to discuss the article.

comment by CCC · 2012-11-30T09:28:35.212Z · LW(p) · GW(p)

According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics,

???

I am confused. I lean towards virtue ethics, and I can certainly see the appeal of consequentialism; but as I understand it, deontology is simply "follow the rules", right?

I fail to see the appeal of that as a basis for ethics. (As a basis for avoiding confrontation, yes, but not as a basis for deciding what is right or wrong). It doesn't seem to stand up well on inspection (who makes the rules? Surely they can't be decided deontologically?)

So... what am I missing? Why is deontology more favoured than either of the other two options?

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T09:53:01.682Z · LW(p) · GW(p)

Deontology doesn't mean "follow any rules" or "follow given rules" or "be law abiding". A deontologist can reject purported moral rules, just as a virtue theorist does not have to accept that copulating with as many women as possible is "manly virtue", just as a value theorist does not have to value blind patriotism. Etc.

ETA:

Surely they can't be decided deontologically?

Meta-ethical systems usually don't supply their own methodology. Deontologists usually work out rules based on some specific deontological meta-rule or "maxim", such as "follow only that rule which one would wish to be universal law". Deontologies may vary according to the selection of maxim.

Replies from: BerryPick6, CCC
comment by BerryPick6 · 2012-11-30T10:55:46.518Z · LW(p) · GW(p)

Further, many philosophers think that Meta-Ethics and Normative Ethics can have sort of a "hard barrier" between them, so that one's meta-ethical view may have no impact at all upon one's acceptance of Deontology or Deontological systems.

EDIT: For the record, I think this is pretty ridiculous, but it's worth noting that people believe it.

comment by CCC · 2012-12-02T08:40:13.033Z · LW(p) · GW(p)

Meta-ethical systems usually don't supply their own methodology. Deontologists usually work out rules based on some specific deontological meta-rule or "maxim", such as "follow only that rule which one would wish to be universal law". Deontologies may vary according to the selection of maxim.

Ah, thank you. This was the point that I was missing: that the choice of maxim to follow may be via some non-deontological method.

Now it makes sense. Many thanks.

comment by Vaniver · 2012-11-30T06:18:49.516Z · LW(p) · GW(p)

(As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)

It seems to me that rationality is more about updating the correct amount, which is primarily calculating the likelihood ratio correctly. Most of the examples of philosophical errors you've discussed come from not calculating that ratio correctly, not from starting out with a bizarre prior.
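(A minimal numerical sketch of that parenthetical, using the standard odds form of Bayes' theorem; the helper function and the numbers below are illustrative assumptions, not anything from the post or this thread:)

```python
# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio,
# where likelihood_ratio = P(evidence | H) / P(evidence | not-H).

def posterior(prior_prob, likelihood_ratio):
    """Return P(H | evidence) from a prior P(H) and a likelihood ratio."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Compare a well-calibrated prior (0.50) with a poor one (0.10)
# as the evidence gets stronger.
for lr in (1.1, 2.0, 10.0, 100.0):
    print(f"LR={lr:>5}: prior 0.50 -> {posterior(0.50, lr):.3f}, "
          f"prior 0.10 -> {posterior(0.10, lr):.3f}")
```

With a likelihood ratio near 1 the posteriors barely move from the two priors, so the quality of the prior dominates; at a ratio of 100 both priors get pulled to nearly the same answer, so the evidence dominates.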

For example, consider Yvain and the Case of the Visual Imagination:

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane."

This looks like having the same prior as many other people; the rationality was in actually running the experiment and calculating the likelihood ratio, which was able to overcome the extreme prior. You could say that Galton only considered this because he had a non-extreme prior, and that if people trusted their intuitions less and had more curious agnosticism, their beliefs would converge faster. But it seems to me that the curiosity (i.e. looking for evidence that favors one hypothesis over another) is more important than the agnosticism: the goal is not "I could be wrong" but "I could be wrong if X."

comment by Ben Pace (Benito) · 2012-11-29T22:59:32.397Z · LW(p) · GW(p)

Just to point out: the links in your 3rd footnote all go to the same page. Enjoyed the post. Perhaps a case study of a big philosophy problem fully dissolved here?

Replies from: lukeprog
comment by lukeprog · 2012-11-30T18:45:12.729Z · LW(p) · GW(p)

Fixed, thanks.

comment by crazy88 · 2012-12-04T00:20:00.571Z · LW(p) · GW(p)

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

This isn't an area about which I know very much, but my understanding is that very few philosophers actually hold to a version of internalism which is disproven by these sorts of cases (even fewer than you might expect, because the people who do hold such a view tend to get commented on more often; "look how empirical evidence disproves this philosophical view" is a popular paper-writing strategy, so people hunt for a target and then attack it, even if that target is not a good representation of the general perspective). As I said, not my area of expertise, so I'm happy to be proven wrong on this.

I know you mention this sort of issue in the footnote, but I think the post still runs the risk of being misleading, making it seem that philosophers en masse hold a view that they (AFAIK) don't. This is particularly likely to happen because you cite a survey of philosophers in the same breath.

In general, I find that academic philosophy is far less bad than people on LW seem to think it is, in large part because of a tendency on LW to focus on fringe views instead of mainstream views amongst philosophers, and to misinterpret the meaning of words that philosophers use in a technical sense.

comment by aaronsw · 2012-12-01T14:12:32.345Z · LW(p) · GW(p)

Typo: But or many philosophical problems

comment by DanArmak · 2012-11-30T18:37:07.903Z · LW(p) · GW(p)

they're split 25-24-18 on deontology / consequentialism / virtue ethics,

Does that mean they're all moral realists? Otherwise it's like being split on the "true" human skin color.

Replies from: BerryPick6
comment by BerryPick6 · 2012-11-30T18:39:41.269Z · LW(p) · GW(p)

There's a separate question for Moral Realism vs. Moral Anti-Realism. It's an often-accepted position among philosophers that one can hold Normative Ethical positions totally removed from one's Meta-Ethics, which may account for some of the confusion.

comment by Shmi (shminux) · 2012-11-29T21:22:36.162Z · LW(p) · GW(p)

So, your account basically implies that philosophy is less reliable than astrology, but not as useful? Then why even bother talking to the philosophical types to begin with?

Replies from: Peterdjones
comment by Peterdjones · 2012-11-30T00:36:54.227Z · LW(p) · GW(p)

Because no one has better approaches to those questions.

comment by roland · 2012-11-30T00:27:51.154Z · LW(p) · GW(p)

sophistication effect

The name of this bias is the bias blind spot.

Replies from: lukeprog
comment by lukeprog · 2012-11-30T04:02:29.582Z · LW(p) · GW(p)

That's part of it. The sophistication effect specifically calls out the fact that due to the bias blind spot, sophisticated arguers have more ammunition with which to avoid noticing their own biases, and to see biases in others.